Handbook of Research on Public Information Technology
Volume I

G. David Garson, North Carolina State University, USA
Mehdi Khosrow-Pour, Information Resources Management Association, USA
Information Science Reference
Hershey • New York
Acquisitions Editor: Kristin Klinger
Development Editor: Kristin Roth
Senior Managing Editor: Jennifer Neidig
Managing Editor: Sara Reed
Copy Editor: Katie Smalley, Shanelle Ramelb
Typesetter: Michael Brehm, Larissa Vinci
Cover Design: Lisa Tosheff
Printed at: Yurchak Printing Inc.
Published in the United States of America by
Information Science Reference (an imprint of IGI Global)
701 E. Chocolate Avenue, Suite 200
Hershey PA 17033
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.igi-global.com

and in the United Kingdom by
Information Science Reference (an imprint of IGI Global)
3 Henrietta Street
Covent Garden
London WC2E 8LU
Tel: 44 20 7240 0856
Fax: 44 20 7379 0609
Web site: http://www.eurospanonline.com

Copyright © 2008 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher.

Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data

Handbook of research on public information technology / David G. Garson and Mehdi Khosrow-Pour, editors.
p. cm.
Summary: "This book compiles estimable research on the global trend toward the rapidly increasing use of information technology in the public sector, discussing such issues as e-government and e-commerce; project management and information technology evaluation; system design and data processing; security and protection; and privacy, access, and ethics of public information technology"--Provided by publisher.
ISBN-13: 978-1-59904-857-4 (hbk.)
ISBN-13: 978-1-59904-858-1 (e-book)
1. Internet in public administration. I. Garson, David G. II. Khosrowpour, Mehdi, 1951-
JF1525.A8H363 2008
352.3'802854678--dc22

British Cataloguing in Publication Data
A Cataloguing in Publication record for this book is available from the British Library.

All work contributed to this book set is original material. The views expressed in this book are those of the authors, but not necessarily of the publisher.
If a library purchased a print copy of this publication, please go to http://www.igi-global.com/reference/assets/IGR-eAccess-agreement.pdf for information on activating the library's complimentary electronic access to this publication.
Editorial Advisory Board
Annie Becker, Florida Institute of Technology, USA
George Ditsa, University of Wollongong, Australia
Yair Levy, Nova Southeastern University, USA
Mahesh S. Raisinghani, TWU, USA
Edward Szewczak, Canisius College, USA
Table of Contents
Foreword .................... xxxv
Preface .................... xxxvii
Acknowledgment .................... xliii
Volume I

Section I E-Government and E-Commerce
Chapter I
Key Issues in E-Government and Public Administration / Rhoda C. Joseph and David P. Kitlan .... 1

Chapter II
Government Web Sites as Public Forums / Pearson Liddell, Jr., Robert S. Moore, Melissa Moore, William D. Eshee, and Gloria J. Liddell .... 12

Chapter III
Limitations of Evolutionary Approaches to E-Government / Rodrigo Sandoval-Almazán and J. Ramon Gil-Garcia .... 22

Chapter IV
Issues and Trends in Internet-Based Citizen Participation / Stephen K. Aikins .... 31

Chapter V
Public Sector Participation in Open Communities / Andrea B. Baker, J. Ramon Gil-Garcia, Donna Canestraro, Jim Costello, and Derek Werthmuller .... 41
Chapter VI
Community Informatics / Larry Stillman and Randy Stoecker .... 50

Chapter VII
Public Wireless Internet / Dale Nesbary .... 61

Chapter VIII
The Current State and Future of E-Participation Research / Chee Wei Phang and Atreyi Kankanhalli .... 70

Chapter IX
Blogging / David C. Wyld .... 81

Chapter X
E-Government and SMEs / Ron Craig .... 94

Chapter XI
EU E-Business and Innovation Policies for SMEs / Anne Wiggins .... 105

Chapter XII
Exploitation of Public Sector Information in Europe / Ioannis P. Chochliouros, Anastasia S. Spiliopoulou, and Stergios P. Chochliouros .... 118

Chapter XIII
Information Technology Among U.S. Local Governments / Donald F. Norris .... 132

Chapter XIV
Public Sector Human Resources Information Systems / Christopher G. Reddick .... 145

Chapter XV
Digital Libraries / Micah Altman .... 152

Chapter XVI
An Exploratory Study of the E-Government Services in Greece / Dimitrios K. Kardaras and Eleutherios A. Papathanassiou .... 162

Chapter XVII
e-Government's Barriers and Opportunities in Greece / Giorgos Laskaridis, Konstantinos Markellos, Penelope Markellou, Angeliki Panayiotaki, and Athanasios Tsakalidis .... 175

Chapter XVIII
E-Lections in New Zealand Local Government / Alex Dunayev and John Paynter .... 192
Chapter XIX
E-Census 2006 in New Zealand / John Paynter and Gabrielle Peko .... 201

Chapter XX
Security Challenges in Distributed Web Based Transactions: An Overview on the Italian Employment Information System / Mirko Cesarini, Mariagrazia Fugini, Mario Mezzanzanica, and Krysnaia Nanini .... 209

Chapter XXI
Interactive Personalized Catalogue for M-Commerce / Sheng-Uei Guan and Yuan Sherng Tay .... 218

Chapter XXII
Trust Based E-Commerce Decisions / Vesile Evrim and Dennis McLeod .... 229

Chapter XXIII
Using Partial Least Squares in Digital Government Research / J. Ramon Gil-Garcia .... 239
Section II Privacy, Access, Ethics, and Theory
Chapter XXIV
Privacy Issues in Public Web Sites / Eleutherios A. Papathanassiou and Xenia J. Mamakou .... 256

Chapter XXV
A Framework for Accessible and Usable Web Applications / Lourdes Moreno, Elena Castro, Dolores Cuadra, and Paloma Martinez .... 265

Chapter XXVI
Intelligent User-Centric Access to Public Information / Giovanni Maria Sacco .... 274

Chapter XXVII
Open Access to Scholarly Publications and Public Policies / Jean-Philippe Rennard .... 284

Chapter XXVIII
The Digital Divide and Social Equity / Alfred P. Rovai and Emery M. Petchauer .... 294

Chapter XXIX
Africa and the Challenges of Bridging the Digital Divide / Esharenana E. Adomi .... 303

Chapter XXX
Research Ethics in E-Public Administration / Carlos Nunes Silva .... 314
Chapter XXXI
Medical Ethical and Policy Issues Arising from RIA / Jimmie L. Joseph and David P. Cook .... 323

Chapter XXXII
Social Capital and the Gendering of Differential IT Use / Lia Bryant and Iolanda Principe .... 333

Chapter XXXIII
Technology Diffusion in Public Administration / Eugene J. Akers .... 339

Chapter XXXIV
Institutional Theory and E-Government Research / Shahidul Hassan and J. Ramon Gil-Garcia .... 349

Chapter XXXV
Structuration Theory and Government IT / J. Ramon Gil-Garcia and Shahidul Hassan .... 361
Section III Security and Protection
Chapter XXXVI
Intelligence and Security Informatics / Jimmie L. Joseph .... 378

Chapter XXXVII
Practical Measures for Securing Government Networks / Stephen K. Aikins .... 386

Chapter XXXVIII
Digital Convergence and Cybersecurity Policy / Anthony W. Buenger, Jr. .... 395

Chapter XXXIX
Bioterrorism Response and IT Strategies / David A. Bray .... 406

Chapter XL
Federal Public-Key Infrastructure / Ludwig Slusky and Parviz Partow-Navid .... 413

Chapter XLI
Radio Frequency Identification (RFID) Technology / David C. Wyld .... 425

Chapter XLII
Roaming-Agent Protection for E-Commerce / Sheng-Uei Guan .... 441

Chapter XLIII
Integrity Protection of Mobile Agent Data / Sheng-Uei Guan .... 453
Volume II

Chapter XLIV
The Role of Data Mining in Intrusion Detection Technology / Amalia Agathou and Theodoros Tzouramanis .... 463
Section IV System Design and Data Processing
Chapter XLV
System Dynamics to Understand Public Information Technology / Luis Felipe Luna-Reyes .... 476

Chapter XLVI
Government Innovation Through Knowledge Management / Luis Felipe Luna-Reyes .... 493

Chapter XLVII
A Framework for Knowledge Management in E-Government / Kostas Metaxiotis .... 508

Chapter XLVIII
Web Application Classification: A Maintenance/Evolution Perspective / Hsiang-Jui Kung and Hui-Lien Tung .... 520

Chapter XLIX
Web Services and Service-Oriented Architectures / Bruce J. Neubauer .... 531

Chapter L
The Strategic Determinants of Shared Services / Anton Joha and Marijn Janssen .... 544

Chapter LI
Data Mining in Public Administration / John Wang, Xiaohua Hu, and Dan Zhu .... 556

Chapter LII
Categorization of Data Clustering Techniques / Baoying Wang, Imad Rahal, and Richard Leipold .... 568

Chapter LIII
Statistical Dissemination Systems and the Web / Giuseppe Sindoni and Leonardo Tininini .... 578

Chapter LIV
Text Mining / Antonina Durfee .... 592
Chapter LV
Statistical Data and Metadata Quality Assessment / Maria Vardaki and Haralambos Papageorgiou .... 604

Chapter LVI
Probability Association Approach in Automatic Image Annotation / Feng Xu and Yu-Jin Zhang .... 615

Chapter LVII
Online Analytical Processing and Data-Cube Technologies / Lixin Fu and Wenchen Hu .... 627
Section V Project Management and IT Evaluation
Chapter LVIII
Managing People and Information in Complex Organizations / Kalu N. Kalu .... 638

Chapter LIX
Human-Factors Design for Public Information Technology / Vincent E. Lasnik .... 650

Chapter LX
An Overview of IT Outsourcing in Public Sector Agencies / Anne C. Rouse .... 662

Chapter LXI
E-Health, Local Governance, and Public-Private Partnering in Ontario / Jeffrey Roy .... 672

Chapter LXII
Implementing a Sound Public Information Security Program / Stephen K. Aikins .... 689

Chapter LXIII
Evaluation of E-Government Web Sites / Michael Middleton .... 699

Chapter LXIV
IT Evaluation Issues in Australian Public-Sector Organizations / Chad Lin .... 711

Chapter LXV
Performance and Accountability in E-Budgeting Projects / Gabriel Puron-Cid and J. Ramon Gil-Garcia .... 722

Chapter LXVI
A Model for Reengineering IT Job Classes in State Government / Craig P. Orgeron .... 735
Section VI Selected Readings
Chapter LXVII
Developing a Generic Framework for E-Government / Gerald Grant and Derek Chau .... 748

Chapter LXVIII
A Web Query System for Heterogeneous Government Data / Nancy Wiegand, Isabel F. Cruz, Naijun Zhou, and William Sunna .... 775

Chapter LXIX
Digital Government Worldwide: An E-Government Assessment of Municipal Web Sites / James Melitski, Marc Holzer, Seang-Tae Kim, Chan-Gun Kim, and Seung-Yong Rho .... 790

Chapter LXX
User Help and Service Navigation Features in Government Web Sites / Genie N.L. Stowers .... 805

Chapter LXXI
An Empirical Study on the Migration to OpenOffice.org in a Public Administration / B. Rossi, M. Scotto, A. Sillitti, and G. Succi .... 818

Chapter LXXII
Organisational Challenges of Implementing E-Business in the Public Services: The Case of Britain's National Mapping Agency / Francesca Andreescu .... 833

Chapter LXXIII
Public Administrators' Acceptance of the Practice of Digital Democracy: A Model Explaining the Utilization of Online Policy Forums in South Korea / Chan-Gon Kim and Marc Holzer .... 854

Chapter LXXIV
E-Mexico: Collaborative Structures in Mexican Public Administration / Luis F. Luna-Reyes, J. Ramon Gil-Garcia, and Cinthia Betiny Cruz .... 873

Chapter LXXV
The Impact of the Internet on Political Activism: Evidence from Europe / Pippa Norris .... 889

Chapter LXXVI
Adoption and Implementation of IT in Developing Nations: Experiences from Two Public Sector Enterprises in India / Monideepa Tarafdar and Sanjiv D. Vaidya .... 905
Detailed Table of Contents
Foreword .................... xxxv
Preface .................... xxxvii
Acknowledgment .................... xliii
Volume I
Section I E-Government and E-Commerce
Chapter I
Key Issues in E-Government and Public Administration / Rhoda C. Joseph and David P. Kitlan .... 1

This chapter examines the impact of e-government on public administration from both the constituent and service perspectives. The chapter presents a holistic view of both the challenges and the advantages of implementing e-government in the area of public administration.

Chapter II
Government Web Sites as Public Forums / Pearson Liddell, Jr., Robert S. Moore, Melissa Moore, William D. Eshee, and Gloria J. Liddell .... 12

In countries around the globe, the public availability of information through technologies such as the Internet has increased the average citizen's ability to access documents, resources, and solutions with unprecedented ease. As a result, governments must adapt their systems and Internet-based or electronic communication to offer the most relevant services to their citizenry. In this chapter, we employ a legal perspective to examine the ramifications of public information strategies that allow firms to have hyperlinks embedded within the content of public information systems. This perspective allows the public information manager to make informed decisions when developing government portal strategies.
Chapter III
Limitations of Evolutionary Approaches to E-Government / Rodrigo Sandoval-Almazán and J. Ramon Gil-Garcia .... 22

This chapter examines the advancement of e-government, primarily through the use of information and communication technologies (ICT). State and local governments are using ICTs to create Web sites and portals, which provide information about government agencies and, in some cases, electronic transactions such as tax payment systems, online communities, job search, licensing, and vehicle registration, among others. It is through these innovations that government systems are reaching a higher level of sophistication.

Chapter IV
Issues and Trends in Internet-Based Citizen Participation / Stephen K. Aikins .... 31

This chapter reviews the opportunities and challenges of Internet-based citizen participation, notes the trends found in some of the empirical studies, and attempts to explain why the Internet has fallen short of its putative potential to bring citizens closer to their governments. The use of Internet technology to further citizen participation is believed to hold great promise for enhancing democratic governance by allowing citizens to access public information and interact with government officials.

Chapter V
Public Sector Participation in Open Communities / Andrea B. Baker, J. Ramon Gil-Garcia, Donna Canestraro, Jim Costello, and Derek Werthmuller .... 41

This chapter examines the advantages and challenges associated with open source software, particularly for public sector organizations. As new Internet-based products, services, and resources are developed, private companies and government agencies are exploring the use of open standards and open source software for their daily operations. The advantages discussed include interoperability, reusability of code, and data longevity. The challenges discussed include technical training and support services, as well as participation in online development communities and how such participation is constrained by the current legal framework and personnel practices.

Chapter VI
Community Informatics / Larry Stillman and Randy Stoecker .... 50

Researchers and practitioners use a wide range of terms when they discuss community involvement with information and communications technologies (ICT). Common (English-language) terms include 'community networks,' 'community computing,' 'community information networks,' 'civic networking,' 'community technology,' 'community computer networks,' 'online neighborhood network,' 'virtual community,' 'online community,' 'community e-business,' and, most recently, 'community informatics.' Since at least the late 1990s, the term 'community informatics' has come into use among many academic researchers as an overarching label for the academic study of projects and initiatives that deliberately engage community groups and organizations with ICTs. However, community informatics has not yet achieved a stable set of findings or core questions that are commonly used to conduct research.
Chapter VII
Public Wireless Internet / Dale Nesbary .... 61

There is a growing controversy over whether government should be in the business of providing wireless broadband Internet. Public sector entities, particularly counties and cities, are developing the physical and intellectual infrastructure designed to provide wireless broadband Internet to their residents. Opponents of government entry into the wireless broadband market argue that existing private broadband vendors are fully capable of providing wireless Internet in an efficient manner. Supporters argue that government is uniquely capable of building and supporting, at least initially, wireless broadband at a lower cost and in a more pluralistic and efficient manner than private vendors have done thus far.

Chapter VIII
The Current State and Future of E-Participation Research / Chee Wei Phang and Atreyi Kankanhalli .... 70

In this chapter, the authors use the term "e-participation" initiatives to refer to governments' use of ICT to engage citizens in democratic processes. The term "e-participation" is chosen because it is sufficiently general to encompass all such efforts by governments. Instances of e-participation initiatives can be found globally, such as Denmark's Nordpol.dk, the United States' Regulations.gov, and Singapore's Government Consultation Portal. The past decade has witnessed an increasing trend of information and communication technologies (ICT) exploitation by governments around the world to enhance citizen participation. This is reflected in the emergence of a plethora of terms associated with the phenomenon, such as e-consultation or online consultation, online rule-making, online deliberation, online public engagement, and e-participation.

Chapter IX
Blogging / David C. Wyld .... 81

This chapter focuses on applications in blogging. A blog can be defined simply in the following manner: "A blog is an easy-to-use content management tool. When you 'blog,' you are instantly adding new content to your site via a Web interface. No technical or programming skills are necessary" (Weil, 2004, n.p.). In a nutshell, a blog is a "do-it-yourself" Web site. Gone are the days (of, say, 2003) when one would have to be knowledgeable in HTML or XML programming or make use of complex, and often expensive, Web creation software to create or update a Web site. With a blog, your Web site can be constantly added to and updated, without having to do anything more than type (or cut and paste) into a text box. Through posting links, you can link your blog to any other site on the Web. You can even add audio and visual material to your blog site by uploading it, much as you would add an attachment to an e-mail. Others who find your site of interest can use RSS (really simple syndication) or sign up for e-mail alerts to be notified when you post or add material to your blog.

Chapter X
E-Government and SMEs / Ron Craig .... 94

This chapter looks at a particular focus of e-government, that of support for business in general and SMEs (small and medium-sized enterprises) in particular. While this is only one segment of PIT (public
information technology), it is an important one. The chapter starts with an overview of the importance of SMEs to regional and national economies, showing why governments encourage their start-up, survival, and growth. Following this, an overview of e-G2B (e-government to business) initiatives around the world is provided, with particular attention directed to the SME perspective.

Chapter XI
EU E-Business and Innovation Policies for SMEs / Anne Wiggins .... 105

This chapter explores the academic and government bodies of literature related to EU SME (small and medium-sized enterprise) e-business and policy initiatives. Definitions of SMEs are explained, the unique characteristics of SMEs and entrepreneurs are outlined, and the case is made that there is a clear need for more comprehensive research on SMEs in the EU.

Chapter XII
Exploitation of Public Sector Information in Europe / Ioannis P. Chochliouros, Anastasia S. Spiliopoulou, and Stergios P. Chochliouros .... 118

This chapter focuses on the challenges affecting public sector information (PSI) in the European markets. The gradual "penetration" of an innovative, digitally-oriented information society, in the scope of the actual convergence among telecommunications, broadcasting, and information technology, creates primary opportunities for the access and exploitation of PSI in the context of a fully competitive and liberalized European electronic communications market. There are now significant challenges on the scene for improving mutual communication between the public sector and private companies, thus creating chances for exploiting new opportunities to the benefit of the broader European market(s). However, the nonexistence of an appropriate legal framework governing the conditions and/or terms for the commercial use of PSI constitutes a serious drawback for any serious attempt at evolution, and for an effective development of a European e-communications market.

Chapter XIII
Information Technology Among U.S. Local Governments / Donald F. Norris .... 132

The purpose of this chapter is to provide an overview of the adoption, uses, and impacts of information technology (IT), including electronic government, among local governments in the United States. In the 1950s, these governments began to adopt IT for a variety of purposes and functions, and they continue to do so today. Since the mid-1970s, a small but prolific group of scholars has conducted a large body of research on various aspects of IT and local government. The chapter is based on that research and on the author's own studies of this subject (regarding e-government, see also Norris, 2006). Given the constraint of space, this chapter can only highlight aspects of this important topic. Readers who wish to delve more deeply into the subject of information technology and local government may wish to avail themselves of the works found in the bibliography, as well as references from other, related works which can be found through those works.
Chapter XIV
Public Sector Human Resources Information Systems / Christopher G. Reddick .... 145

This chapter examines the impacts of human resources information systems (HRIS) on the operations, relationships, and transformations of local government organizations. A human resources information system is any technology that is used to attract, hire, retain, and maintain talent, support workforce administration, and optimize workforce management. Examples include computers, the Internet (Web and e-mail), or other technological means of acquiring, storing, manipulating, analyzing, retrieving, and distributing pertinent information regarding human resources (HR).

Chapter XV
Digital Libraries / Micah Altman .... 152

This chapter presents an overview of the history, advantages, disadvantages, and design principles relating to digital libraries, and highlights important controversies and trends. Digital libraries are collections of digital content and services selected by a curator for use by a particular user community. In the last decade, digital libraries have rapidly become ubiquitous because they offer convenience, expanded access, and search capabilities not present in traditional libraries. This has greatly altered how library users find and access information, and has put pressure on traditional libraries to take on new roles. However, information professionals have raised compelling concerns regarding the sizeable gaps in the holdings of digital libraries, the preservation of existing holdings, and sustainable economic models.

Chapter XVI
An Exploratory Study of the E-Government Services in Greece / Dimitrios K. Kardaras and Eleutherios A. Papathanassiou .... 162

The goal of this chapter is to evaluate e-government services in Greece with a set of carefully chosen criteria, in a manner that can be used for evaluating e-government services worldwide. The impact of "e-business" on the public sector is the main source of the government's transformation towards "e-government," which refers to the public sector's efforts to use information and communication technologies to deliver government services and information to the public. E-government allows citizens to interact more directly with the government, transforming multiple operational and bureaucratic procedures and employing a customer-centric approach to service delivery; it allows intra-governmental communication; and it offers numerous possibilities for using the Internet and other Web-based technologies to extend online government services (Gant, 2002).

Chapter XVII
e-Government's Barriers and Opportunities in Greece / Giorgos Laskaridis, Konstantinos Markellos, Penelope Markellou, Angeliki Panayiotaki, and Athanasios Tsakalidis .... 175

This chapter presents the e-government efforts in Greece. Its aim is to point out the necessity of designing and implementing efficient e-government applications. The vision of an electronically modernized Greek public administration will be realized if a series of key strategic aspects is considered, as well
as international best practices and experiences. Moreover, the chapter demonstrates the opportunities that arise and the key challenges.

Chapter XVIII
E-Lections in New Zealand Local Government / Alex Dunayev and John Paynter .... 192

Worldwide, governments are investing in initiatives to open access to information, resources, communication, and services via channels typically used for electronic commerce. Government agencies are usually the leaders in communication technology, which is commonly developed primarily for military use and later adopted by the general public. Since its inception, the Internet has gained widespread usage, prompting governments to provide online services to the public. The broad category for this type of information and services provision is called "e-government"; it is the general description of a way to provide better access to government information and services. This chapter presents New Zealand's e-government strategy, in which the Internet will be used to improve the quality of services and provide citizens with greater opportunities to participate in the democratic process.

Chapter XIX
E-Census 2006 in New Zealand / John Paynter and Gabrielle Peko .... 201

This chapter explores the use of e-census technologies in New Zealand. In New Zealand, the census is held every five years. A snapshot is taken on the chosen day, and from that the number of people and housing units (houses, flats, apartments) are counted. Everyone in the country on that day is asked to fill in census forms. For the 2006 census, an option was introduced to complete the forms on the Internet. Other initiatives included sending text messages about this process, amongst other things, to the enumerators (collectors) whose job it is to collate the information in the field.
The use of information technology, primarily via the Internet, offers the opportunity to distribute information and deliver services on a very large scale. Chapter XX Security Challenges in Distributed Web Based Transactions: An Overview on the Italian Employment Information System / Mirko Cesarini, Mariagrazia Fugini, Mario Mezzanzanica, and Krysnaia Nanini ..................................................................................... 209 This chapter examines the objectives and challenges of the Italian plan of e-government, within which the Italian employment information system is conceived. During the last few years, public administrations have modernized public service delivery. In particular, this modernization involves service digitalization and automation, thanks to the massive introduction of information and communication technologies into public offices. This paved the way for internal and external organizational and technological changes, since a new approach is required to leverage the new technologies. Moreover, Internet technologies began to play an important role in public service delivery, and many transactions are Web-based nowadays. Consequently, several governments in Europe, and others all over the world, started their own plans of e-government with the goal of increasing the quantity and quality of the services offered via the Internet to their customers (citizens, enterprises, and for-profit and non-profit organizations).
Chapter XXI Interactive Personalized Catalogue for M-Commerce / Sheng-Uei Guan and Yuan Sherng Tay ..... 218 M-commerce possesses two distinctive characteristics that distinguish it from traditional e-commerce: the mobile setting and the small form factor of mobile devices. Of these, the size of a mobile device will remain largely unchanged due to the tradeoff between size and portability. Small screen size and limited input capabilities pose a great challenge for developers to conceptualize user interfaces that have good usability while working within the size constraints of the device. In response to the limited screen size of mobile devices, there has been an unspoken consensus that certain tools must be made available to aid users in coping with the relatively large volume of information. Recommender systems have been proposed to narrow down choices before presenting them to the user (Feldman, 2000). The authors propose a product catalogue where browsing is directed by an integrated recommender system. Chapter XXII Trust Based E-Commerce Decisions / Vesile Evrim and Dennis McLeod ........................................ 229 Over time, with the increase in human-computer interaction, trust has become one of the most challenging topics in computer science. Today, trust-based e-commerce decisions are becoming more valuable as Internet services are increasingly used in business-to-consumer e-commerce applications. E-commerce provides a new way of shopping for customers by offering more choices and transforming economic activity into digital media. It also provides an opportunity for businesses to extend their sales to a larger community. However, the success of achieving higher profits and improved services is based on better communication. As in the real world, a critical understanding of users’ behavior in cyberspace cannot be achieved without analysis of the factors affecting purchase decisions (Limayem, Khalifa, & Frini, 2000).
Having many options in an environment that lacks face-to-face interaction forces users to make trust-aware decisions in order to protect their privacy and satisfy expectations such as quality of service. Chapter XXIII Using Partial Least Squares in Digital Government Research / J. Ramon Gil-Garcia .................... 239 This chapter examines how to use partial least squares (PLS) and argues that this technique could help to incorporate more realistic assumptions and better measurements into digital government research. It does so through a commented example of a digital government research study (Gil-García, 2005b). It is important to clarify that the intention is not to suggest that every research project should use PLS, but to encourage scholars and practitioners to seriously consider this technique as an alternative when designing and carrying out their research. PLS is a structural equation modeling (SEM) technique similar to covariance-based SEM as implemented in LISREL, EQS, or AMOS. Therefore, PLS can simultaneously test the measurement model (relationships between indicators and their corresponding constructs) and the structural model (relationships between constructs).
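The core of the PLS idea can be illustrated numerically. The sketch below is not full PLS path modeling as used in SEM, only the simplest single-component, single-response case (PLS1): a latent score t = Xw is extracted so that it has maximal covariance with an outcome y (the measurement side), and y is then regressed on t (a one-path structural side). All data and names are hypothetical.

```python
"""Minimal sketch of the first PLS component (PLS1, single response).

Illustrative only: real PLS path modeling estimates several latent
variables and their structural relations; here one latent score is
extracted from centered data. Pure Python, no dependencies.
"""

def column_means(X):
    n = len(X)
    return [sum(row[j] for row in X) / n for j in range(len(X[0]))]

def center_matrix(X):
    mu = column_means(X)
    return [[x - m for x, m in zip(row, mu)] for row in X]

def pls1_first_component(X, y):
    """Return (w, t, c): weight vector, latent scores, inner coefficient."""
    Xc = center_matrix(X)
    ybar = sum(y) / len(y)
    yc = [v - ybar for v in y]
    # w is proportional to X'y: the direction of maximal covariance with y
    w = [sum(Xc[i][j] * yc[i] for i in range(len(yc))) for j in range(len(Xc[0]))]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # latent scores t = Xc w (the "measurement model" side)
    t = [sum(xij * wj for xij, wj in zip(row, w)) for row in Xc]
    # inner regression of y on t (the "structural model" side)
    c = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)
    return w, t, c
```

With orthogonal, equally scaled indicators the single component recovers an exactly linear outcome; real survey data would need several components and bootstrap-based significance tests.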
Section II Privacy, Access, Ethics, and Theory Chapter XXIV Privacy Issues in Public Web Sites / Eleutherios A. Papathanassiou and Xenia J. Mamakou ........ 256 The advent of the Internet has altered the way that individuals find information and has changed how they engage with many organizations, such as government, health care, and commercial enterprises. The emergence of the World Wide Web has also resulted in a significant increase in the electronic collection and processing of individuals’ information, which has led to consumer concerns about privacy. Many studies have reported customers’ worries about the possible misuse of their personal data during their transactions on the Internet (Earp & Baumer, 2003; Furnell & Karweni, 1999), and efforts have been made to measure individuals’ concerns about organizational information privacy practices (Smith, Milberg & Burke, 1996). Information privacy, which “concerns an individual’s control over the processing—i.e., the acquisition, disclosure, and use—of personal information” (Kang, 1998), has been reported as one of the most important “ethical issues of the information age” (Mason, 1986). Chapter XXV A Framework for Accessible and Usable Web Applications / Lourdes Moreno, Elena Castro, Dolores Cuadra, and Paloma Martinez ............................................................................................ 265 The growth of the Internet makes its use feasible for an increasing number of people around the world. This chapter examines several approaches introduced to create universal access for all types of users, independent of their capabilities. Nowadays, disabled people have several problems using the Web in the same way as non-disabled people, yet the use of this technology is a right for everybody, all the more so in public administration, where many services must be available to all users in a correct way.
Universal access may be obtained through the integration of usability and accessibility concepts into the software engineering discipline. These design methodologies allow every user, whether or not they have disabilities, to participate in all phases of Web application development. Chapter XXVI Intelligent User-Centric Access to Public Information / Giovanni Maria Sacco ............................. 274 The quantity and diversity of information available from public government sources is now quite large. Governments, especially local ones, are using the Web to provide a number of services that are mainly informative and aim at improving the quality of life of citizens and at promoting the local community, for example job placement services, tourist information, and so on. Finally, government e-services available to citizens represent one of the most frequent and critical points of contact between public administrations and citizens. In addition to common services such as ID cards and permits, e-services represent the only practical way of providing incentives and support to specific classes of citizens. The key problem is that information must be findable (Morville, 2002). Easy and effective user-centric access to complex information is therefore one of the most critical functionalities of e-government. Since the goal is
end-user interactive access, a holistic approach, in which modeling, interface, and interaction issues are considered together, must be used and will be discussed in this chapter. Chapter XXVII Open Access to Scholarly Publications and Public Policies / Jean-Philippe Rennard ..................... 284 “If I have seen further it is by standing upon the shoulders of giants.” The famous statement of Sir Isaac Newton demonstrates that the progress of science relies on the dissemination of discoveries and scientific knowledge. Even though scientific progress is not strictly cumulative (Kuhn, 1970), information sharing is at the heart of this progress. Nowadays, scientific knowledge is mainly spread through scholarly journals, that is, highly specialized journals where quality control and certification are achieved through peer review. The first section of this chapter will present the specificity of the current economic model of scientific publications. The second section will introduce the open access movement and its emerging economic model. The third section will show the growing involvement of governments in that movement. Chapter XXVIII The Digital Divide and Social Equity / Alfred P. Rovai and Emery M. Petchauer .......................... 294 As the Internet becomes increasingly central to living in today’s society, it becomes important that certain groups are not systematically excluded. This chapter examines the digital divide with an emphasis on critical perspectives that recognize power, racism, and social stratification, and the challenges faced by public officials in promoting information technology policies and programs that support social equality. Chapter XXIX Africa and the Challenges of Bridging the Digital Divide / Esharenana E. Adomi .........................
303 In this chapter, efforts are made to define the digital divide, establish Africa’s status on the global digital map, and enumerate the causes of its low level of ICT development. The provision of communication services in developing regions (like Africa) is an essential aspect of enhancing and facilitating economic and social development (Yavwa & Kritzinger, 2001). There is thus the need for African countries to make concerted efforts to ensure that ICTs are provided adequately and consistently to close the divide and reap the benefits of economic and social development. Chapter XXX Research Ethics in E-Public Administration / Carlos Nunes Silva ................................................... 314 The purpose of this chapter is to discuss professional ethical issues in research activities conducted in e-public administration, most of which are common to the private and non-profit sectors. It offers an overview of key ethical issues in this field and identifies ethical challenges raised by the application of information and communications technologies (ICT) in public administration research activities. The evidence available shows that ICT poses new ethical challenges but does not radically change the nature of the ethical problems characteristic of paper-based and face-to-face public administration.
Chapter XXXI Medical Ethical and Policy Issues Arising from RIA / Jimmie L. Joseph and David P. Cook ......... 323 New technologies can lead to social upheaval and ethical dilemmas that go unrecognized at the time of their introduction. Medical care technology has advanced rapidly over the course of the past two decades and has frequently been accompanied by unforeseen consequences for individuals, the medical profession, and government budgets, with concomitant implications for society and public policy (Magner, 1992; Marti-Ibanez, 1962). Advances in information technology (IT) during the last decade and a half are now impacting the medical profession, and the delivery of medical advances, in ways that will shape public policy debates for the foreseeable future. The World Wide Web makes information that was once the exclusive domain of medical professionals available to average citizens, who are increasingly demanding medical treatments from the leading edge of medical technology. Chapter XXXII Social Capital and the Gendering of Differential IT Use / Lia Bryant and Iolanda Principe .......... 333 Public information technology, as a term, implicitly suggests universal access by citizens to information through the use of technology. The concepts of social capital and the digital divide intersect in access to public information technology. Social inclusion or exclusion occurs as a consequence of the ways in which societies are stratified according to race, gender, (dis)ability, ethnicity, and class. This chapter focuses especially on one aspect of stratification, gender, and theorizes the gendering of differential access to and use of information technologies. An understanding of gendered participation in public information technology, within the policy contexts of electronic government and social inclusion, is important for informing public information technology policy, and for service planning and delivery premised on the notion of universal access.
Chapter XXXIII Technology Diffusion in Public Administration / Eugene J. Akers .................................................. 339 This chapter examines the diffusion of information technology in the public sector and considers the appropriateness of applying diffusion theory in the combined context of information technology and public policy innovation. The ability to understand the salient aspects of innovations as perceived by the members of a social system is essential to the success of planned change. Chapter XXXIV Institutional Theory and E-Government Research / Shahidul Hassan and J. Ramon Gil-Garcia ... 349 This chapter provides a brief overview of institutional theory in various disciplinary traditions, with an emphasis on institutional theory in sociology. The authors identify various patterns in the use of institutional theory in information systems and e-government research. They also discuss future trends in e-government based on institutional theory. Additionally, based on their analysis of the current state of the art, the authors suggest some research directions for using institutional theory in future e-government research.
Chapter XXXV Structuration Theory and Government IT / J. Ramon Gil-Garcia and Shahidul Hassan................. 361 This chapter presents several examples of how structuration theory has been applied to study IT in both public and private sector organizations. The authors highlight the usefulness of this perspective for understanding incremental and radical change in organizational and inter-organizational settings. The chapter highlights the characteristics of the ensemble view of IT in organizations and provides a brief overview of structuration theory. Also presented are four influential models that apply structuration theory to information systems research. Additionally, the chapter argues that previous models have mainly explained incremental change within organizational settings, and that an important future trend for public information technology research should be to understand radical change and inter-organizational relationships. Section III Security and Protection Chapter XXXVI Intelligence and Security Informatics / Jimmie L. Joseph ................................................................ 378 Intelligence and security informatics (ISI) is the application of information systems (IS), databases, and data coding schemes to issues of intelligence gathering, security, and law enforcement. This chapter examines the differences between ISI and other disciplines of informatics. ISI differs from other disciplines because of the critical role played by the general public in data gathering and information dissemination. Three major differences make ISI unique in terms of data collection and dissemination: (1) data source reliability, (2) the need to determine which data are relevant, and (3) the need to disseminate findings to the general public without knowing in advance the appropriate individuals or institutions needing to be informed.
Chapter XXXVII Practical Measures for Securing Government Networks / Stephen K. Aikins .................................. 386 Governments have an obligation to manage their information security risks by securing mission-critical internal resources, such as financial records and sensitive taxpayer information, on their networks. Consequently, public sector information security officers are faced with the challenge of containing damage from compromised systems, preventing internally and Internet-launched attacks, providing systems for logging and intrusion detection, and building frameworks for administrators to securely manage government networks (Oxenhandler, 2003). This chapter discusses some of the cost-effective measures needed to address information security vulnerabilities and related threats.
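The logging and intrusion-detection measures mentioned above can start from something as simple as flagging repeated failed logins from one source. A minimal sketch follows; the log format and threshold are hypothetical, and a production network would rely on an established IDS rather than a script like this.

```python
"""Minimal sketch of a log-based intrusion-detection heuristic.

Illustrative only: the chapter does not prescribe an implementation.
The "LOGIN_FAIL" record format and the threshold below are assumptions.
"""
from collections import Counter

FAILED_LOGIN_THRESHOLD = 3  # alert after this many failures per source IP

def suspicious_sources(log_lines, threshold=FAILED_LOGIN_THRESHOLD):
    """Return source IPs with at least `threshold` failed-login entries."""
    failures = Counter()
    for line in log_lines:
        # hypothetical record format: "<timestamp> LOGIN_FAIL <ip> <user>"
        parts = line.split()
        if len(parts) >= 3 and parts[1] == "LOGIN_FAIL":
            failures[parts[2]] += 1
    return sorted(ip for ip, count in failures.items() if count >= threshold)
```

A real deployment would add time windows, whitelists, and automated response, but the pattern (aggregate events per source, alert past a threshold) is the same.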
Chapter XXXVIII Digital Convergence and Cybersecurity Policy / Anthony W. Buenger, Jr........................................ 395 The purpose of this chapter is to explain how digital convergence is affecting the public sector and why a cybersecurity policy that includes the active involvement of both the public and private sectors is needed. Digital convergence constitutes the full realization of the information age and provides the foundation to link cultural, personal, business, governmental, and economic affairs into a rapidly expanding global digital world called cyberspace. However, this linking of people around the globe is challenging the government to actively work with private industry to ensure that its critical infrastructures and associated information are adequately protected. Chapter XXXIX Bioterrorism Response and IT Strategies / David A. Bray ............................................................... 406 This chapter examines how public health information technology (IT) can aid public health preparedness in terms of bioterrorism preparedness and associated emergency response. Most analyses of possible future bioterrorism events predict they may be similar to the anthrax events of 2001: a limited population of individuals may experience morbidity or mortality, but the concern, panic, and worry stirred up by the threat will catch the attention of the entire nation. If public health IT is to help with bioterrorism preparedness, it needs not only to address mitigation of civilian illnesses and deaths, but also to help manage individual and societal fears springing from the real or threatened occurrence of such an event. Chapter XL Federal Public-Key Infrastructure / Ludwig Slusky and Parviz Partow-Navid ................................ 413 All branches of the federal government are required to migrate their business practices to a paperless operation.
Privacy and information security (InfoSec) are critical for the protection of information shared over networks, internally between U.S. government agencies and externally with non-federal organizations (businesses; state, local, and foreign governments; academia; etc.) or individuals. This chapter will examine public key infrastructure (PKI), the simplest and most widely used architecture for secure data exchange over insecure networks. It integrates computer hardware and software, cryptography, information and network security, and policies and procedures to facilitate trust in distributed electronic transactions and mitigate the associated risks. Chapter XLI Radio Frequency Identification (RFID) Technology / David C. Wyld .............................................. 425 We are in the midst of what may become one of the true technological transformations of our time. RFID (radio frequency identification) is by no means a “new” technology. This chapter examines several dimensions of RFID, which is fundamentally based on the study of electromagnetic waves and radio, pioneered in the nineteenth-century work of Faraday, Maxwell, and Marconi. The idea of using radio frequencies to reflect waves from objects dates back as far as 1886 to experiments conducted by Hertz. Radar was invented in 1922, and its practical applications date back to World War II, when the British
used the IFF (identify friend or foe) system to distinguish friendly aircraft from enemy aircraft (Landt, 2001). Stockman (1948) laid out the basic concepts for RFID. However, it would take decades of development before RFID technology would become a reality. Since 2000, significant improvements in functionality, decreases in both size and cost, and agreements on communication standards have combined to make RFID technology viable for commercial and governmental purposes. Today, RFID is positioned as an alternative to the ubiquitous bar code for identifying objects. Chapter XLII Roaming-Agent Protection for E-Commerce / Sheng-Uei Guan ..................................................... 441 There has been a lot of research done in the area of intelligent agents. Unfortunately, there is no standardization among the various proposals, resulting in vastly different agent systems. Efforts have been made to standardize some aspects of agent systems so that different systems can interoperate with each other. This chapter will examine some of the leading standards in agent representation, including KQML and Agent Tcl, their security vulnerabilities, and the application of safety protocols. It will examine these standards in the context of e-commerce and m-commerce, discuss efforts to protect transactions through SAFE (secure roaming agent for e-commerce), and offer a look at interoperability in roaming systems. Chapter XLIII Integrity Protection of Mobile Agent Data / Sheng-Uei Guan ......................................................... 453 This chapter discusses security and integrity issues facing agent technology. Various frameworks are discussed, including SAFER, or secure agent fabrication, evolution and roaming, a mobile agent framework that is specially designed for the purpose of electronic commerce (Guan & Hua, 2003; Guan et al., 2004; Zhu et al., 2000).
By building strong and efficient security mechanisms, SAFER aims to provide a trustworthy framework for mobile agents to assist users in conducting mobile or electronic commerce transactions. Agent integrity is another area crucial to the success of agent technology (Wang et al., 2002). Despite the various attempts in the literature, there is no satisfactory solution to the problem of data integrity so far. Common weaknesses of current schemes include vulnerability to the revisit attack, in which an agent visits two or more collaborating malicious hosts during one roaming session, and illegal modification (deletion/insertion) of agent data. The agent monitoring protocol (AMP) (Chionh et al., 2001), an earlier proposal under SAFER to address agent data integrity, is examined; it does address some of the weaknesses in the current literature.
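A common building block behind such data-integrity schemes is a hash chain over the entries each host appends to the roaming agent, so that later deletion or modification of any entry invalidates every subsequent digest. The sketch below shows only this generic idea, not the AMP protocol itself; in a real scheme each host would also sign its entry, and, as noted above, colluding hosts (the revisit attack) can defeat a bare chain, which is exactly why stronger protocols are needed.

```python
"""Minimal sketch of hash-chained mobile-agent data.

Illustrative only: not the AMP protocol from the chapter, just the
generic hash-chain mechanism many integrity schemes build on.
"""
import hashlib

def chain_entry(prev_digest: str, host_id: str, payload: str) -> str:
    """Digest covering the previous digest, the host, and its payload."""
    data = f"{prev_digest}|{host_id}|{payload}".encode()
    return hashlib.sha256(data).hexdigest()

def build_chain(entries):
    """entries: list of (host_id, payload) pairs; returns list of digests."""
    digests, prev = [], "GENESIS"
    for host_id, payload in entries:
        prev = chain_entry(prev, host_id, payload)
        digests.append(prev)
    return digests

def verify_chain(entries, digests) -> bool:
    """Recompute the chain; any edit, insertion, or deletion changes it."""
    return digests == build_chain(entries)
```

The chain binds each host's contribution to everything before it, which is what makes after-the-fact tampering by a single host detectable by the agent's owner.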
Volume II Chapter XLIV The Role of Data Mining in Intrusion Detection Technology / Amalia Agathou and Theodoros Tzouramanis .................................................................................................................... 463 This chapter examines several important contributions and improvements that data mining has introduced to the field of IDS (intrusion detection system) technology. Over the past few years, the Internet has
changed computing as we know it. The more possibilities and opportunities develop, the more systems are subject to attack by intruders. Thus, the big question is how to recognize and handle subversion attempts. One answer is to prevent subversion itself by building a completely secure system. However, the complete prevention of breaches of security does not yet appear achievable. Therefore, intrusion attempts need to be detected as soon as possible (preferably in real time) so that appropriate action can be taken to repair the damage. This is what an IDS does. IDSs monitor and analyze the events occurring in a computer system in order to detect signs of security problems. However, intrusion detection technology has not yet reached perfection. Section IV System Design and Data Processing Chapter XLV System Dynamics to Understand Public Information Technology / Luis Felipe Luna-Reyes .......... 476 This chapter presents system dynamics as a method for gaining a better understanding of mismatches between information technologies and the organizations that use them in the public sector. The method has already been used successfully in the planning and evaluation of both public and private IT applications (Madachy & Tarbet, 2000; Abdel-Hamid & Madnick, 1991; Wolstenholme, 2003; Wolstenholme, Henderson, & Gavine, 1993). The method makes it possible to understand the interactions among technologies and organizations as a continuous process of organizational change (March, 1981), in which it is possible to find brief periods of rapid change. However, even those periods of rapid change are conceptualized as the result of endogenous and continuous local adaptations (Hutchins, 1991), where technology enables, rather than causes, change (Orlikowski, 2000). The chapter also presents the basic principles and tools of system dynamics and continues with an example of their application in the analysis of an IT project in the public sector.
The chapter ends with a brief description of future trends in modeling and simulation as well as a brief conclusion. Chapter XLVI Government Innovation Through Knowledge Management / Luis Felipe Luna-Reyes..................... 493 The purpose of this chapter is to discuss the process involved in managing knowledge, considering critical factors in the process. The chapter is organized in four different but conceptually interrelated sections. In the first, the author describes some of the main concepts of knowledge and knowledge management. The second section describes the knowledge management process itself, and the third briefly discusses the impact of the four critical factors identified by Arthur Andersen and Company on the main stages of that process. The last sections of the chapter describe future trends and present conclusions. Chapter XLVII A Framework for Knowledge Management in E-Government / Kostas Metaxiotis ......................... 508 While most prior research studies have investigated the possible application of KM in the public sector, none has focused on the application of KM in e-government; this chapter addresses that gap. In
this chapter, the author, recognizing the importance of e-government and KM to the public administration sector, continues his previous research on the application of KM in e-government (Metaxiotis & Psarras, 2005), discusses key issues, and presents a framework for the application of KM in e-government as a basis for future research. Chapter XLVIII Web Application Classification: A Maintenance/Evolution Perspective / Hsiang-Jui Kung, and Hui-Lien Tung ............................................................................................................................ 520 This chapter examines the three layers of Web applications (conceptual, presentation, and navigation) and their two perspectives (designer and viewer). Software evolution is “the dynamic behavior of programming systems as they are maintained and enhanced over their life times” (Belady & Lehman, 1976). Web application evolution is of increasing importance as more Web systems are in production. Many companies use the Web to communicate with the external world as well as within their organizations and to carry out their business processes more effectively. Web technologies have also been adopted by organizations in the public sector, and many state agencies provide their services via the Web. This study investigates the management of e-government applications at a U.S. state technology agency (STA). Chapter XLIX Web Services and Service-Oriented Architectures / Bruce J. Neubauer .......................................... 531 A review of the development of information systems can help in understanding the potential significance of Web services and service-oriented architecture (SOA) in the public sector. SOA involves the convergent design of information systems and organizational workflows at the level of services.
The purpose of this chapter is to suggest a strategy for mapping the design of service-oriented architectures onto the complex patterns of governance, including combinations of federalism, regionalism, and the outsourcing of functions from government agencies to nonprofit organizations. This involves the modeling of workflows and the identification of opportunities for the sharing of services among agencies and nonprofits. Chapter L The Strategic Determinants of Shared Services / Anton Joha and Marijn Janssen ......................... 544 The goal of the research presented in this chapter is to analyze the strategic determinants influencing decision-making on using and implementing shared services. The structure of this chapter is as follows. In the following section we discuss the historical and theoretical background of shared services. In the section thereafter we provide an overview of the strategic determinants influencing the shared-services decision. Next, both future trends and future research directions are presented and finally, in section six, conclusions are drawn. Chapter LI Data Mining in Public Administration / John Wang, Xiaohua Hu, and Dan Zhu ............................ 556 This chapter examines the application of data mining within public organizations. In general, data mining is a data analytical technique that assists businesses in learning about and understanding their customers so that
decisions and strategies can be implemented most accurately and effectively to maximize profitability. Data mining is not general data analysis, but a comprehensive technique that requires analytical skills, information construction, and professional knowledge. Businesses now face global competition and are forced to deal with an enormous amount of data. The vast amounts of data and the increasing technological ability to store them have also facilitated data mining. To gain competitive advantage, businesses now commonly adopt data mining. Organizations use data mining as a tool to forecast customer behavior, reduce fraud and waste, and assist in medical research. Chapter LII Categorization of Data Clustering Techniques / Baoying Wang, Imad Rahal, and Richard Leipold................................................................................................................................. 568 This chapter examines data clustering, a discovery process that partitions a data set into groups (clusters) such that data points within the same group have high similarity while being very dissimilar to points in other groups (Han, 2001). The ultimate goal of data clustering is to discover “natural” groupings in a set of patterns, points, or objects, without prior knowledge of any class labels. In fact, in the machine learning literature, data clustering is typically regarded as a form of unsupervised learning, as opposed to supervised learning: in unsupervised learning or clustering, there is no training function as in supervised learning. There are many applications of data clustering including, but not limited to, pattern recognition, data analysis, data compression, image processing, understanding genomic data, and market-basket research. Chapter LIII Statistical Dissemination Systems and the Web / Sindoni Giuseppe and Tininini Leonardo ...........
578 This chapter reviews the main concepts underlying multidimensional (data warehouse) modeling and navigation. It also illustrates some peculiarities of statistical data that make the implementation of a statistical data warehouse (that is, a statistical dissemination system enabling the user to perform multidimensional navigation) a challenging issue in many respects. Finally, it analyzes the main characteristics of some of the most important systems for the dissemination of statistical data on the Web, distinguishing two main approaches: the former based on free navigation over specific subcubes, the latter on constrained navigation over a single data cube. Chapter LIV Text Mining / Antonina Durfee ......................................................................................................... 592 Massive quantities of information continue accumulating, at about 1.5 billion gigabytes per year, in numerous repositories held at news agencies, libraries, corporate intranets, PCs, and the Web. A large portion of all available information exists in the form of text. Researchers, analysts, editors, venture capitalists, lawyers, help desk specialists, and even students face text analysis challenges. This chapter explores text mining tools, which aim at discovering knowledge from textual databases by isolating key bits of information from large amounts of text and identifying relationships among documents. Text mining technology is used for plagiarism and authorship attribution, text summarization and retrieval,
and deception detection. Chapter LV Statistical Data and Metadata Quality Assessment / Maria Vardaki and Haralambos Papageorgiou ............................................................................................................... 604 This chapter aims to summarize some of the latest efforts to assess the quality of statistical results in national public administrations and international organizations, in order to meet demands for comparable, high-quality, and reliable statistics used for economic and policy-monitoring purposes. Topics covered include quality criteria proposed by national and international organizations, metadata requirements for quality reporting, and transformations that should be integrated into the workflow process of public administrations’ information systems for automatic manipulation of both data and metadata, thus minimizing errors and assuring the quality of results. Chapter LVI Probability Association Approach in Automatic Image Annotation / Feng Xu and Yu-Jin Zhang ..................................................................................................................................... 615 Automatic image annotation is derived from manual annotation for content-based image retrieval (CBIR). Since the semantic gap degrades the results of image search, text descriptions are considered as well. Ideally, the text and the visual features cooperate to drive more effective search. Text labels, as high-level features, and visual features, as low-level features, are complementary for image content description. Automatic image annotation has therefore become an important research issue in image retrieval. In this chapter, some approaches to automatic image annotation are reviewed and one typical approach is described in detail. Keyword-based image retrieval is then introduced. The general applications of automatic image annotation are summarized and illustrated with figure examples. 
Chapter LVII Online Analytical Processing and Data-Cube Technologies / Lixin Fu and Wenchen Hu................ 627 This chapter examines the applications of online analytical processing and data cube technologies in the public sector. Since the late 1980s and early 1990s, database technologies have evolved to a new level of application, online analytical processing (OLAP), in which executive management can make quick and effective strategic decisions based on knowledge derived from queries against large amounts of stored data. Some OLAP systems are also regarded as decision support systems or executive information systems. Traditional, well-established online transactional processing systems, such as relational database management systems, mainly deal with mission-critical daily transactions. Two cases are examined within this chapter. One concerns data analysis for student retention; the other concerns analysis of NSF grant awards. One may want to know the number of awards grouped by school, by discipline, by region, by amount, by date, and so on, and by any arbitrary combination of these dimensions.
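The "grouped by any arbitrary combination of dimensions" capability described in that abstract is essentially what a data cube precomputes. As a rough sketch only (using invented award records, not data from the chapter), the following Python fragment enumerates every subset of a set of dimensions and counts records for each grouping, mimicking a CUBE-style aggregation:

```python
from itertools import combinations
from collections import Counter

# Toy award records; all field values here are invented for illustration.
awards = [
    {"school": "State U", "discipline": "CS", "region": "Southeast"},
    {"school": "State U", "discipline": "Biology", "region": "Southeast"},
    {"school": "Tech U", "discipline": "CS", "region": "Midwest"},
]

def cube_counts(records, dimensions):
    """Count records grouped by every subset of the given dimensions,
    which is roughly what an OLAP CUBE operator precomputes."""
    counts = {}
    for r in range(len(dimensions) + 1):
        for dims in combinations(dimensions, r):
            # Group key = the record's values on the chosen dimensions.
            counts[dims] = dict(
                Counter(tuple(rec[d] for d in dims) for rec in records)
            )
    return counts

result = cube_counts(awards, ["school", "discipline", "region"])
print(result[("school",)])  # awards per school
print(result[()])           # grand total (empty grouping)
```

A real OLAP engine would, of course, push this work into the database (e.g., via SQL `GROUP BY` with cube or rollup extensions) rather than iterating in application code; the sketch only illustrates the combinatorial grouping idea.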
Section V Project Management and IT Evaluation Chapter LVIII Managing People and Information in Complex Organizations / Kalu N. Kalu ................................ 638 Information technology affects organizations and society itself, as it redefines work content, reorganizes leadership styles and cultures, reshuffles power hierarchies, and spawns a series of both man-designed and spontaneous adaptations. Information technology often necessitates a new division of labor that creates policy problems and a loss of accountability. Organizational leadership, especially in the public sector, urgently requires a theoretical as well as a practical reevaluation to cope with the structural and functional changes within work and administrative organizations. This chapter seeks to elucidate three leadership models in the context of IT-induced changes in organizational forms and processes: the networked, organic, and gatekeeper leadership models. Chapter LIX Human-Factors Design for Public Information Technology / Vincent E. Lasnik ............................. 650 This chapter examines the realm of human-factors design for public information technology in the rapidly evolving postmodern “knowledge age” of the 21st century, with special focus on how new research and development in human cognition, perception, and performance capabilities is changing the design function for IT systems and products. Many “one size fits all” IT designs are neither adaptive nor adaptable, promulgating a top-down technological imperialism that penetrates every aspect of their use. The communication, collaboration, and interaction infrastructure of IT organizations thus remains acutely challenged by enduring problems of usability, learnability, accessibility, and adaptability. 
As the function and form of products undergo increasingly rigorous scrutiny, one important design goal is emerging as a paramount priority: improving the usability of products, tools, and systems for all stakeholders across the enterprise. It is therefore important to briefly describe emerging human-factors design knowledge and practices applicable to organizations that invent, incubate, innovate, prototype, and drive the creation and application of public IT. The findings here suggest the most effective strategies for managing and augmenting user-centered design endeavors across a wide array of public IT products and organizations. Chapter LX An Overview of IT Outsourcing in Public Sector Agencies / Anne C. Rouse .................................. 662 This chapter examines the outsourcing of services by governments as a result of public sector reforms. Outsourcing has been argued to lead to cost savings, “improved discipline,” better services, access to scarce skills, and the capacity for managers to focus more time on the “core business” of their organizations (Domberger, 1998). Government outsourcing initiatives have encompassed a range of services, but given the large sums of money invested in IT assets, outsourcing of IT services (IT outsourcing, or ITO) has been a major initiative for many agencies. Case studies have reported ITO successes and failures (e.g., Currie & Willcocks, 1998; Rouse & Corbitt, 2003a; Willcocks & Kern, 1998; Willcocks & Lacity,
2001; Willcocks & Currie, 1997), but much of the evidence presented to public sector decision makers to justify this reform is anecdotal and unsystematic, and when investigated in depth it does not necessarily support widespread conclusions. Chapter LXI E-Health, Local Governance, and Public-Private Partnering in Ontario / Jeffrey Roy ..................... 672 The purpose of this chapter is to undertake a critical examination of the emergence of e-health in the Canadian province of Ontario. More than solely a technological challenge, the emergence and pursuit of e-health denote a complex governance transformation, both within the province’s public sector and in terms of public-private partnering. The Ontario challenge is complicated by the absence of formal regional mechanisms devoted to health care, a deficiency that has precipitated the creation of local health integration networks to foster e-health strategies on a sub-provincial basis, as well as by ongoing difficulties in managing public information technologies. With respect to public-private partnering, a greater regionalization of decision-making and spending authorities, within transparent and locally accountable governance forums, could provide incentives for the private sector to work more directly sub-provincially, enjoying greater degrees of freedom for collaboration via more manageable contracting arrangements. Chapter LXII Implementing a Sound Public Information Security Program / Stephen K. Aikins .......................... 689 This chapter sheds light on the policy guidelines and standards needed for safeguarding an agency’s information resources. The evolving nature of information security threats such as cybercrime, as well as the need to ensure the confidentiality and privacy of citizen information and to protect critical infrastructure, calls for effective information security management in the public sector. 
E-government applications have made it easier for citizens to conduct business online with government agencies, although citizens’ trust in the ability of governments to keep that information private is low. Considering the amount of citizen information held by governments at all levels, and the steps needed to address potential homeland security and information technology (IT)-related threats to critical infrastructure, the need for effective means of safeguarding public agency data has become an issue of paramount importance. In addition, ensuring the integrity and availability of public information resources is crucial to many government operations. Chapter LXIII Evaluation of E-Government Web Sites / Michael Middleton.......................................................... 699 The intent of this chapter is to provide an overview of different approaches to Web site evaluation, in order to suggest further application and development of evaluation instruments. In recent times, the popularity of the Internet has led to e-government practices being widely recognized as an important option for service to the general public. In response, various tiers of government, from national to local, have sought opportunities to engage the public through Web sites. Many governments now provide some level of access to government through Web interfaces, for example, access to resources such as publications and government data. In some cases services are provided that may be executed online; for example, users may provide personal information for licensing or make payments. There continues to be a diversity of implementation quality and levels for such services.
Chapter LXIV IT Evaluation Issues in Australian Public-Sector Organizations / Chad Lin .....................................711 The main objective of this chapter is to identify evaluation issues that are critical in the implementation of IT projects by public sector organizations. A key contribution of the chapter is to identify and examine evaluation issues and other key factors faced by public sector organizations undertaking IT projects. The key issues presented in this chapter are of interest to senior public sector executives concerned with making decisions about IT investments and realizing IT benefits. Chapter LXV Performance and Accountability in E-Budgeting Projects / Gabriel Puron-Cid and J. Ramon Gil-Garcia ......................................................................................................................... 722 Based on the analysis of three federal initiatives, this chapter argues that because ICT is deeply embedded in government institutional and organizational environments, the tensions between performance and accountability are also reflected in the goals, features, and functionality of e-budgeting projects (see Terms and Definitions). Further, the prevalence of accountability for finance and fairness (accountability bias) already identified in the literature (Behn, 2001) is also reflected in the formal goals, general characteristics, and technical capabilities of e-budgeting systems. The cases thus support the general hypothesis that information technologies do not necessarily have the power to transform government radically, at least not in the case of e-budgeting initiatives. Chapter LXVI A Model for Reengineering IT Job Classes in State Government / Craig P. Orgeron ..................... 
735 The ubiquitous nature of information technology at all levels of government, and the core requirement to recruit and retain qualified technology professionals, call for an expansion of the body of research; such research can provide invaluable insight into success and failure in public sector information technology human resource practices. The intent of the research within this chapter is to utilize DeMers’ (2002) seven-pronged approach to critically examine Mississippi state government agencies, with the expected result of assessing the effectiveness and efficiency of the IT personnel classification system. This leading-edge and highly effective IT personnel classification system, designed specifically to improve IT recruitment and retention, was implemented by the state of Mississippi in partnership with the Hay Group, an internationally known human resource consultancy. Section VI Selected Readings Chapter LXVII Developing a Generic Framework for E-Government / Gerald Grant and Derek Chau ................. 748 Originally published in the Journal of Global Information Management, Vol. 13, No. 1, this article addresses the following key question: given the wide variety of visions, strategic agendas, and contexts
of applications, how may we assess, categorize, classify, compare, and discuss the e-government efforts of various government administrations? In answering this question the authors propose a generic e-government framework that allows for the identification of e-government strategic agendas and key application initiatives that transcend country-specific requirements. In developing the framework, a number of requirements are first outlined. The framework is then proposed and described, and illustrated using brief case studies from three countries. Findings and limitations are discussed. Chapter LXVIII A Web Query System for Heterogeneous Government Data / Nancy Wiegand, Isabel F. Cruz, Naijun Zhou, and William Sunna ...................................................................................................... 775 Originally published in the International Journal of Electronic Government Research, Vol. 1, No. 2, this article describes a Web-based query system for semantically heterogeneous government-produced data. Geospatial Web-based information systems and portals are currently being developed by various levels of government along with the GIS community. Typically, these sites provide data discovery and download capabilities but do not include the ability to pose DBMS-type queries. The authors extend work in schema integration by focusing on resolving semantics at the value level in addition to the schema, or attribute, level. They illustrate their method using land use data, but the method can be used to query across other heterogeneous sets of values. Their work starts from an XML Web-based DBMS and adds functionality to accommodate heterogeneous data across jurisdictions. Their ontology and query rewrite systems use mappings to enable querying across distributed heterogeneous data. 
Chapter LXIX Digital Government Worldwide: An E-Government Assessment of Municipal Web Sites Throughout the World / James Melitski, Marc Holzer, Seang-Tae Kim, Chan-Gun Kim, and Seung-Yong Rho ................................................................................................................................ 790 Originally published in the International Journal of Electronic Government Research, Vol. 1, No. 1, this article evaluates the current practice of digital government in large municipalities worldwide. The study assesses 84 cities from around the world using a five-stage e-government framework. The authors’ research and methodology go beyond previous research by utilizing 92 measures that were translated into the native language of each city; the assessment of each municipal Web site was conducted by a native speaker of the municipality’s language between June and October of 2003. The authors review the relevant e-government literature on evaluating Web sites in the United States and internationally, and discuss their sample selection, methodology, theoretical framework, findings, and recommendations. Their results indicate that Seoul, Hong Kong, Singapore, New York, and Shanghai are the top five large cities in providing digital government opportunities to citizens online. In addition, the authors’ research suggests a difference in digital government capabilities between the 30 developed nations belonging to the Organization for Economic Co-operation and Development (OECD) and less-developed (non-OECD) nations.
Chapter LXX User Help and Service Navigation Features in Government Web Sites / Genie N.L. Stowers ......... 805 Originally published in the International Journal of Electronic Government Research, Vol. 2, Issue 4, this article examines the user help and service navigation features in government Web sites and compares them across levels of government. These features are critical to ensuring that users unfamiliar with government are able to access e-government services and information successfully and easily. The research finds clear patterns in the use of similar help and navigation features across governments, leading to the conclusion that these features are diffusing through the public sector Web development field. The chapter concludes that Web developers should work to overcome a second digital divide, one of a lack of knowledge of Web site organization and government structure, and that they need to actively assist users in finding information. Chapter LXXI An Empirical Study on the Migration to OpenOffice.org in a Public Administration / B. Rossi, M. Scotto, A. Sillitti, and G. Succi ..................................................................................... 818 Originally published in the International Journal of Information Technology and Web Engineering, Vol. 1, Issue 3, this article reports the results of a migration to open source software (OSS) in a public administration. The migration focused on the office automation field and, in particular, on the OpenOffice.org suite. The authors analyzed the transition to OSS considering qualitative and quantitative data collected with the aid of different tools. All the data have always been considered from the point of view of the different stakeholders involved: IT managers, IT technicians, and users. The results of the project have been largely satisfactory. 
However, the results cannot be generalized, owing to constraints such as the environment considered and the parallel use of the old solution. Nevertheless, the authors think that the data collected can be a valuable aid to managers wishing to evaluate a possible transition to OSS. Chapter LXXII Organisational Challenges of Implementing E-Business in the Public Services: The Case of Britain’s National Mapping Agency / Francesca Andreescu ............................................................ 833 Originally published in the International Journal of E-Business Research, Vol. 2, Issue 4, this article explores the processes of strategic and organizational transformation engendered by e-business implementation in a commercialized British public sector organization within the geographic information industry. Recognized as a leading participant in that industry, within which it is forging partnerships with key private sector companies, the organization has enthusiastically grasped e-business as an all-embracing phenomenon and implemented a new strategy that transformed the way it does business. The case analysis illustrates the challenges and constraints that the organization faces in implementing e-business strategies in practice.
Chapter LXXIII Public Administrators’ Acceptance of the Practice of Digital Democracy: A Model Explaining the Utilization of Online Policy Forums in South Korea / Chan-Gon Kim and Marc Holzer ............... 854 The Internet provides a new digital opportunity for realizing democracy in public administration, and this study raises a central question: What factors determine public officials’ acceptance of the practice of digital democracy on government Web sites? The authors focus on online policy forums among the many practices of digital democracy. To gauge public officials’ behavioral intentions to use online policy forums on government Web sites, they examined individual and organizational factors, as well as system characteristics. They administered a survey questionnaire to Korean public officials and analyzed a total of 895 responses. Path analysis indicates that three causal variables are important in predicting public officials’ intentions to use online policy forums: perceived usefulness, attitudes toward citizen participation, and information quality. In this article, originally published in the International Journal of Electronic Government Research, Vol. 2, Issue 2, the authors discuss the implications of this study for the practice and theory of digital democracy. Chapter LXXIV E-Mexico: Collaborative Structures in Mexican Public Administration / Luis F. Luna-Reyes, J. Ramon Gil-Garcia, and Cinthia Betiny Cruz ................................................................................ 873 After six years of challenges and learning in pushing forward the e-government agenda in Mexico, the presidential succession brought an opportunity to assess current progress, recognize the main unsolved problems, and plan the vision for the future of e-government in Mexico. This case, originally published in the International Journal of Cases on Electronic Commerce, Vol. 
3, Issue 2, provides a rich description of the e-Mexico system, including its main objectives and goals, governance structures, IT infrastructure, collaboration processes, main results, and current challenges. Some background information about Mexico is also provided at the beginning of the case. Playing the role of a consultant working for the new Mexican CIO, the reader is asked to evaluate the current situation and help in the design of a work plan, including a proposal for organizing the ICT function, the main strategic objectives, and some specific lines of action for the next six years. Chapter LXXV The Impact of the Internet on Political Activism: Evidence from Europe / Pippa Norris ............... 889 The core issue for this study concerns less the social than the political consequences of the rise of knowledge societies, in particular the capacity of the Internet for strengthening democratic participation and civic engagement linking citizens and government. To consider these issues, this article, originally published in the International Journal of Electronic Government Research, Vol. 1, No. 1, is separated into four parts. Part I summarizes debates about the impact of the Internet on the public sphere. Part II summarizes the sources of survey data and the key measures of political activism used in this study, drawing upon the 19-nation European Social Survey, 2002. Part III examines the evidence for the relationship between use of the Internet and indicators of civic engagement. The conclusion in Part IV summarizes the results and considers the broader implications for governance and democracy.
Chapter LXXVI Adoption and Implementation of IT in Developing Nations: Experiences from Two Public Sector Enterprises in India / Monideepa Tarafdar and Sanjiv D. Vaidya ............................................................................................................................... 905 Originally published in the Journal of Cases on Information Technology, Vol. 7, No. 1, this case describes issues in IT adoption at two large public sector organizations in India. Along with illustrating the significance of top management drive and end-user buy-in, it particularly highlights the role of middle management in managing the IT adoption process at different levels in these large organizations.
Foreword
The breadth and speed of change in information technology in the public sector are breathtaking. Every day new software and hardware technologies are developed. Each of these new technologies has significant consequences for society and governments. For example, as online databases grow in number, size, and accessibility over the Internet, the need to provide robust security becomes more important. Likewise, new ethical questions and dilemmas arise as a result of these developments, such as what balance should exist between rights of access and privacy. Different governments treat this balance in different ways. Thus, there is a close interconnection between technical and social developments in information technology. The Internet, especially with respect to governmental e-commerce, depends on the development of trust between citizens and governments. Without that trust, participation in governmental e-commerce would be low and its impact muted. Similarly, e-government technologies are aimed at providing governments with increased capacity for more effective service, but this increased effectiveness can be achieved only if organizations change their processes, and these changes also depend upon the actions of public managers, not just the technology. Thus, there is a need to keep up with research concerning the technical, social, and managerial impacts of technologies. All governmental organizations have been affected by these new technologies, ranging from small local governments to the largest nation-states. To what extent are the transformations due to technological change similar across organizations that differ in size and culture? These are questions that researchers are beginning to address. In short, the sheer size and speed of change in public information technology make it extremely difficult even for those who have specialized in this area to keep abreast of these developments. 
There is a need for works that bring together these diverse strands of research on digital government. This volume provides a service to researchers in the field. It covers a broad range of research including hardware, software, social, managerial, ethical, and political issues of public information technology. Its coverage is international in scope, including the United States, Europe, and emerging countries. The breadth of coverage ensures that the book contains material relevant to a wide variety of researchers. The diversity of research is striking. For example, it includes chapters on radio frequency identification technology, service-oriented architecture, the bridging of the digital divide in Africa, and blogging. To summarize, this book will provide readers with an excellent perspective concerning the state of research on digital government. Bruce Rocheleau Northern Illinois University, USA
Bruce Rocheleau is a professor of political science in the Department of Public Administration at Northern Illinois University in DeKalb, Illinois. In the area of public sector information management, he is currently pursuing empirical research concerning the use of computers and related information technologies, including studies of information management, geographic information systems, the impacts of networks, and the organizational problems involved in sharing information and resources. In the area of policy analysis and evaluation, his research includes the study of welfare, mental health, and aging policies, including studies of the implementation and impact of programs in these areas. He is the author of numerous publications on governmental information management. He recently published the book Public Management Information Systems (Idea Group Inc., 2006) and edited the book Case Studies in Digital Government (IGI Global, 2007).
Preface
This volume brings together a wide range of research on the past, present, and future of the international trend toward greater and greater use of information technology in the public sector. Rather than survey the content of these research contributions, which are adequately described in their respective abstracts, it seems better to devote this preface to considering the broad context of public sector information systems. Toward that end, what follows discusses three over-arching questions which arise time and again in this literature: (1) How and whether the vast international investments in e-government now occurring are justified in terms of the economic development and related advances for which e-government is purportedly a critical part of the infrastructure; (2) How and whether information technology will be a force for centralizing government, for decentralization, or for some synergistic new combination of trends affecting the powers that be; and (3) How and whether the unprecedented levels of participation potentially enabled by the Internet age will translate into political participation and social capital, energizing social development on a global scale.
E-Government as Investment in Economic Development
Investment in information technology in the private sector and in e-government in the public sector is often seen as the path to economic expansion. From the promotion of federal e-government by the National Performance Review in the 1990s to the promotion of community wide-area networks at the local level in the 2000s, the e-government business model has been promoted as a critical economic development policy in the United States. The argument that economic development would be promoted has led a number of communities to find ways to establish wide-area networks for their downtowns or even for their entire jurisdictions, sometimes free for citizens. Cities with WiFi initiatives in 2006 included Anaheim, CA; Arlington, VA; Minneapolis, MN; Pasadena, CA; Philadelphia; Portland, OR; San Francisco; and Tempe, AZ. This was just one aspect of the worldwide movement toward “smart communities,” which integrate information technology throughout civic infrastructure. The same economic development logic has been promoted at the state level as well. In December 2006, Virginia Governor Timothy M. Kaine announced he would submit a $1.6 million budget amendment to extend broadband capacity on the state’s eastern shore, in addition to the $1.4 million already appropriated. Kaine said, “Providing access to reliable, high speed broadband is an essential public investment, attracting high tech industries and strengthening economic development.” In this context it is something of a contrast to note that in the first five years of the Bush administration’s e-government agenda, Congress funded only $13 million of the original $100 million goal. Over the same period, Congress became increasingly skeptical of the value of e-government funding. The Congressional appropriations committees instituted some of the most restrictive language to date in their FY 2007 appropriations. 
As a result, almost all of the 25 showcase “Quicksilver” e-government initiatives announced by the Bush administration in 2001 have been delayed or affected by the low level of budgeting, forcing the devotion of the FY 2008 budget not to new initiatives but to trying to finish ones started long ago. While the Office of Management and Budget (OMB) views the problem as one of educating uninformed members of Congress to get them “on board,” Congress tends to see the problem as OMB’s seeming inability to document the value of e-government investments. The American e-government funding model of pass-the-hat agency self-funding combined with user fees is rationalized as one which creates federal departmental “ownership” of e-government projects, but while few if any departments speak against the OMB/Bush administration strategy, it is clearly a strategy which has not forged the
sort of strong political alliances among stakeholders which underpin Congressional funding of other programs that get higher priority. Congress has never fully bought into the e-government program, and support for it is weak on the Hill. In the FY 2006 appropriations bill for transportation, treasury, housing and urban development, judiciary, and other related agencies, Congress required that the OMB justify e-government expenditures and request renewal funds. As a result, in January 2006, the OMB submitted the mandated report to Congress, justifying the cost of the 25 e-government and five line-of-business consolidation projects, which together would cost over $192 million in FY 2006, squeezed from existing agency budgets. In essence, in lean budgetary times, the OMB was forcing agencies to spend on non-priority items for which Congress had not specifically appropriated money, such as requiring the National Park Service to spend $1.5 million on e-government when the NPS was scrimping on basic park operations. OMB’s contrary view was that its e-government guidance was simply helping agencies get the biggest bang for the buck. All of this suggests one thing: public investment in information technology is controversial. It is at once a great hope, perhaps the great hope, for economic and governmental transformation, and an endeavor where cost overruns, delayed implementation, and outright failure are commonplace. If the nature of public information systems were better understood, success would be more likely and political support more forthcoming. This handbook of research on the nature of public information systems seeks to make a small contribution to that much-needed understanding.
Information Technology, Centralization, and Decentralization

While advocates of the virtual state have often cited the advantages of networks over hierarchy in flexibly adapting to change, assembling and re-assembling to meet one or another ad hoc challenge, studies of actual organizational responses to emergencies, such as the World Trade Center attacks of 2001, strongly suggest that network effectiveness depends on pre-existing social capital and trust expressed through pre-existing strong networks. Organizational hierarchies play a critical role both in building an organizational culture oriented toward the use of networks for coordinated response, and in building the networks themselves prior to and in the absence of emergency demands. That is, the hierarchy-versus-networks dichotomy is false. Rather, the two exist in synergy in effective organizations. Perhaps the leading example of centralization in recent years has been the push of the OMB to replace departmental IT systems with enterprise-wide lines-of-business systems in financial management, human resource management, and many other areas. For FY 2008, the OMB instructed departments that their budget proposals would have to demonstrate implementation of the administration’s lines-of-business consolidation initiative. At the state level, consolidation and centralization are approached with almost religious zeal. Major recent IT consolidation efforts have occurred in California, Michigan, and New York, for example. Some IT analysts and leaders have argued that centralization trends amount to over-centralization and urge a pendulum swing in the opposite direction. A chief executive of the Gartner Group, a leading IT consulting firm, for instance, recently argued that CIOs need to relinquish some control and responsibilities to end users, and that the centralized concentration of IT funding ironically leaves little for the original goal of business transformation or for needed investment in the human aspects of IT.
With recentralization, end users again face the frustration of dealing with rigid central IT departments and turn to alternatives such as new consumer Internet, communications, and database technologies outside CIO control.
An interesting illustration occurred when the U.S. military implemented MCS2 (electronically networked maneuver control systems) in the Gulf War. An expected result was centralization of information in the hands of top-ranking officers. What was less expected was an increase in micromanagement, with many commanders unwilling to use the new information systems to unleash the abilities of lower-level staff to make decisions. The desire not to relinquish the decision-making power made available by this centralization reinforced and enhanced existing bureaucratic structures, in spite of the vision of some that these systems would allow flexible, informed, decentralized decision making in the field. Control systems can be programmed to reflect the political priorities of those in power. As systems design is not normally covered in the press, this comes to light only occasionally, but the practice is routine. A recent example was the attempt of the Bush administration to embed anti-union features both in the proposed merit-based personnel system of the Defense Department’s National Security Personnel System (NSPS) and in a similar human resources system of the Department of Homeland Security. Anti-labor aspects of both were ruled illegal in federal court decisions in 2006 and 2005, respectively. The NSPS software system was found, for instance, to fail to ensure that employees could bargain collectively, to lack the congressionally required third-party review of labor relations decisions, and to lack a fair appeals process for employees. Normally, however, embedding political controls in ostensibly neutral software goes unchallenged and unnoticed by the public at large. Even in the NSPS case, the Department of Defense did not accept the court ruling, but continued development of the system even as it was contested in the courts.
The Hope That Information Technology Will Build Social Capital

There is evidence that Internet access does indeed improve the ability of citizens to interact with their government, though most such use is information seeking rather than actual participatory transactions. Empirical analysis of the 2000 presidential elections revealed that the Internet did show promise of bringing new individuals into the political process. Numerous writers have speculated that the age of the Internet would lead to a more participatory citizenry, whose experiences in electronic participation would build social capital and energize social, political, and economic development of all types. In traditional forms of political participation, community participation in politics has been found to correlate with socio-economic status, being older, and having lived in the community longer. Data do not show this to be the case for online political participants. Contrary to the predictions of social capital theory, recent findings show that engagement in non-political, social groups in the community is not correlated with online political participation. We may well ask whether online communities are destined to play a major political role in the future. Again, empirical case studies related to this question lead to the conclusion that although the public administration literature has cited the importance of online communities as a vehicle for the delivery of public goods, actual experience suggests that cybercommunities tend to have weak governance structures, undermining accountability and legitimacy. This two-volume set is separated into six sections: (1) e-government and e-commerce, (2) privacy, access, ethics, and theory, (3) security and protection, (4) system design and data processing, (5) project management and IT evaluation, and (6) selected readings.
Each chapter within these sections is divided into eight segments: (1) an introduction providing the historical perspective of the subject matter, (2)
a background providing discussions supporting the author’s view as well as the views of others, (3) a segment devoted to the primary information regarding the subject matter, (4) a future trends segment describing emerging trends in the field and offering insight into the future of the topic from the perspective of published research as well as future research opportunities within the domain of the topic, (5) a conclusion providing discussion of the overall coverage of the topic, (6) a future research directions segment that acts as a supplement, discussing the managerial and more technical aspects of the subject matter, (7) a references and further reading section, and (8) a complete list of terms and definitions to familiarize readers with the subject matter’s terminology. The first section, titled “E-Government and E-Commerce,” covers the rise of e-government and its series of stages, from one-way information dissemination to two-way interactions to two-way transactions, culminating in cross-agency integration of e-services. The transition from the first to the second stage has not proved a difficult obstacle for most jurisdictions, which implement interactions such as feedback forms and e-mail. However, the transition to Stage 3, the transaction stage, has proved more difficult. Although many examples of e-transactions exist (e.g., paying taxes online), mass adoption of e-transactions by the public has proved elusive. Even more difficult has been overcoming department-centric business models of government and replacing them with integrated cross-agency models. The chapters within this section examine these issues as well as others that exemplify the progressive movement toward electronic government, which, while problematic, is still in the process of fulfilling its potential.
The second section, titled “Privacy, Access, Ethics, and Theory,” covers the potential of Internet technology to bring about democratic transparency in the way government conducts its business. However, even a transparent government must support individual privacy rights. Privacy is a growing issue because people have good reason to believe that data collected on them for one purpose may be appropriated and used for altogether different purposes than the original ones about which they were informed. In theoretical terms, some seeking to understand these issues have turned to structuration theory, a variant of institutional theory growing out of the work of Anthony Giddens. Giddens held that individual actions both shape and are constrained by social structures. In addition to structuration theory, the institutionalist perspective, more specifically Fountain’s theory of technology enactment, is discussed throughout the section. The third section, titled “Security and Protection,” covers several security threats, including massive data theft, cyber-terrorism, and the use of malware. In the United States, information technology security rose to first place in budget priority after the attacks on the World Trade Center and has remained a top priority to the present day. Although there are currently over 217,000 known threats, including identity theft, imposter Web sites, and file sharing, this section covers some of the most prevalent in the public sector. The fourth section, titled “System Design and Data Processing,” delves into topics such as service-oriented architectures, enterprise resource planning systems, statistical data and statistical dissemination systems, and data cube technologies, amongst many others. Enterprise resource planning systems in particular have often been the result of systems architecture planning in the U.S. and worldwide.
After a checkered start in the private sector in the late 1980s and early 1990s, such systems were widely adopted in the public sector in the late 1990s, and by the 2000s the transition from agency-specific to enterprise-wide software was the primary reform thrust of the U.S. Office of Management and Budget as well as of many states and localities. Efforts were made to unify financial, human resources, payroll, procurement, and other departmental software systems into single jurisdiction-wide systems. The fifth section, titled “Project Management and IT Evaluation,” examines the relationship between information technology and project management. Information technology frequently succeeds or fails
on the strength or weakness of project management. The United States is seeing an increase in project management due to its emphasis in the Bush administration’s FY 2007 budget. Project management is often tied to enforcing IT enterprise architecture, which reflects IT policies at the national level and those of state chief information officers at the state level. Evaluation, another topic covered within this section, is coequal in importance to project management. Project management may be more critical for short-term IT success, but in the long run the success of IT initiatives requires that they be proven to work in a cost-effective manner, hence the critical importance of evaluation. Concluding the Handbook of Research on Public Information Technology is a “Selected Readings” section of 10 refereed journal articles offering additional insight into the realm of information technology in the public sector. These articles come highly recommended and introduce innovative applications, trends, and technologies within this fast-growing area of information science and technology.
Summary

A work much more famous than this volume could dream to be starts with the phrase, “it was the best of times, it was the worst of times,” and goes on to contrast two cities, one cloaked in tradition and one in revolutionary upheaval. It is interesting to note that the vision of digital cities and global communities draws on such contrasts, comparing often-hierarchical traditional patterns of work and governance with the revolutionary potential of information technology as a liberating force, perhaps not for “liberty, equality, fraternity,” but at least for participation, transparency, and empowerment. In this preface your editor has tried to suggest that empirical research can throw much-needed light on traditional and revolutionary perspectives alike. There are so many important issues attached to this subject, many explored by scholarly contributions in this volume, dealing both with pragmatic implementation and with conceptual design, that they cannot be enumerated here. What is certain, however, is something close to the heart of, and bringing a smile to the lips of, every academic: more research is needed! Toward this end, this volume seeks to make a small contribution.

G. David Garson
North Carolina State University, USA
Acknowledgment
Editing an important, comprehensive publication of this magnitude requires significant assistance and valuable scholarly content from many researchers, academicians, professionals, and practitioners from all over the world. In the area of content, we would like to extend our kindest thanks to those colleagues from all areas of the globe who contributed their expertise to this handbook and patiently collaborated with us and many demanding reviewers to improve the overall quality of their contributions. Our sincere gratitude and admiration are also extended to the reviewers who have made significant contributions toward the overall excellence of this publication. Special thanks go to the members of the editorial advisory board of this publication, including Professors Annie Becker, George Ditsa, Yair Levy, Mahesh Raisinghani, and Edward Szewczak, for their valuable contributions and support. In the editorial area, we would like to express our thanks to the editorial team at IGI Global (formerly “Idea Group Inc.”), Ms. Jennifer Neidig, production senior managing editor, Ms. Sara Reed, production managing editor, Ms. Diane Huskinson, production assistant managing editor, Ms. Kristin Roth, managing development editor, Ms. Kristin Klinger, managing acquisition editor, and Ms. Lindsay Johnston, assistant acquisitions editor, for their constant attention to detail, tireless efforts during the review and development process, and for keeping this project on schedule. Special thanks also go to our graphic artist, Ms. Lisa Tosheff. Finally, our warmest appreciation is extended to our spouses for their continuous support, wisdom, encouragement, understanding, patience, love, and for keeping up with our long hours at work.
Volume I
Section I
E-Government and E-Commerce
The rise of e-government may be described as a series of stages, from one-way information dissemination to two-way interactions to two-way transactions, culminating in cross-agency integration of e-services. The transition from the first to the second stage has not proved a difficult obstacle for most jurisdictions, which implement interactions such as feedback forms and e-mail. However, the transition to Stage 3, the transaction stage, has proved more difficult. Although many examples of e-transactions exist (e.g., paying taxes online), mass adoption of e-transactions by the public has proved elusive. Even more difficult has been overcoming department-centric business models of government and replacing them with integrated cross-agency models. In the United States, in the first five years of the Bush administration’s e-government agenda, Congress funded only $13 million of the original $100 million goal. Over the same period, Congress became increasingly skeptical of the value of e-government funding. The Congressional appropriations committees instituted some of the most restrictive language to date in their fiscal year (FY) 2007 appropriations. As a result, almost all of the 25 showcase “Quicksilver” e-government initiatives announced by the Bush administration in 2001 have been delayed or affected by the low level of budgeting. At this writing, the president’s FY 2008 budget called for no new e-government initiatives at all, instead concentrating on consolidating the 25 e-government and nine Line of Business enterprise systems projects. This ebbing of e-government momentum and defensive emphasis on consolidation reflected the political reality of an increasingly skeptical Congress wanting bottom-line evidence of the promised payoff for e-government investment. At the same time, however, the United States was quickly becoming a nation in which most cities can claim to be “digital places,” at least in terms of e-government.
The Center for Digital Government and the National League of Cities produced the “Digital Cities Survey” finding that between 2003 and
2005, the share of cities linking all municipal departments and supporting online transactions by the public had grown dramatically from 57% to 84%. For instance, the capacity to apply online for city jobs grew from 25% to 48% of all cities in this period, and the capacity to apply online for building permits expanded from 17% to 51%. This was far from being an exclusively American phenomenon. By the mid-2000s the digital-city movement was commonplace worldwide, with most large cities having some sort of digital initiative, some being merely forms of e-government and others much more expansive in nature. A notable example was China’s “Digital Beijing” initiative, begun in 1999 to build a state-of-the-art information infrastructure for that nation’s capital. By the time of the 2008 Olympic Summer Games in Beijing, the Digital Beijing initiative is expected to support 100% of key government services and 80% of business transactions. Numerous other examples, some detailed in the essays that follow, show that the movement toward electronic government, while problematic, is still in the process of fulfilling its potential. G. David Garson, September 2007
Chapter I
Key Issues in E-Government and Public Administration

Rhoda C. Joseph, School of Business Administration, USA
David P. Kitlan, School of Business Administration, USA
Introduction

Information and communication technologies (ICTs) are key elements supporting the growth of e-government initiatives. Public administration refers to the products and procedures that the government implements to interact with its constituents: citizens, businesses, employees, and other governments. To address the needs of these different constituents, a wide variety of government services is necessary. This chapter examines the impact of e-government on public administration from both the constituent and service perspectives. The chapter presents a holistic view of both the challenges and the advantages of implementing e-government in the area of public administration. The discussion in this chapter proceeds as follows. Section 2 provides an overview of e-government. This section presents a classification of e-governments, explains how a typical e-government initiative develops, and looks at the wide variety of functions involved in public administration. Section 3 combines the areas of e-government and public administration and
examines how closely and critically intertwined they are. The main advantages and challenges of implementing e-government projects to support public administration are also presented. Section 4 documents the future potential of e-government in public administration and discusses key issues such as e-voting and global access. Lastly, the chapter concludes with a summary of the ideas presented and some key terms and definitions.
Background

This chapter examines the intersection between e-government and public administration. Each of these two areas represents a rich body of literature. In this section we first define e-government and its position in a global context. We then discuss the functions and goals of public administration. There is significant overlap between the goals of e-government and those of the public administration function. Firstly, e-government refers to the use of electronic media (such as the Internet, intranets, and hand-held devices) by governments to interact with
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
their constituents. “E-government can be viewed as the process for creating (adding) public value with the use of ICT” (Capati-Caruso, 2006). E-government projects occur at many levels throughout the world. Countries such as Canada, Singapore, and the United States are leading the way as innovators in e-government, with nations such as Brazil, South Africa, and Italy taking important steps to increase their e-government infrastructure (Hunter & Jupp, 2001). E-government is a global phenomenon that is poised to see more growth in the future. From a historical perspective, electronic commerce (e-commerce) provides a referential platform for the development of e-government. E-commerce provides an electronic option for buyers and sellers to come together. The positive impacts of e-commerce include reduced search costs and improved price discovery (Bakos, 1998). Many e-government tasks are routine and noncommercial; however, some of the benefits and challenges evident in the e-commerce domain also occur in the e-government domain. Comparisons between e-commerce and e-government must be made cautiously. Even though both use Web-based technologies and involve sharing information between two or more entities, significant differences persist. E-government deals with sensitive information (such as social services and taxes) that should not be made available to third-party private for-profit businesses. The explicit goals of the two applications also conflict: e-commerce is used to drive revenue, while e-government seeks to increase information sharing and task efficiencies. E-government initiatives are classified based on the group that interacts with the government. Government-to-government (G2G) initiatives refer to governments interacting with other governments. One example is a local municipality interacting with the state government for the payment or receipt of taxes.
The movement of information from a lower level to a higher level of government is called vertical integration and is one of the more advanced characterizations of e-government. G2G also occurs horizontally where one department interacts with another equally significant branch
of government. For example, there are projects that involve interaction between the department of transportation and the department of education (e.g., transit passes for school students). Government-to-business (G2B) initiatives refer to communications and transactions facilitated by electronic means between a government and a representative business. A large part of the interaction between a government and for-profit businesses is through the collection of taxes and bids on government contracts. In the non-profit domain, the dissemination of grant requests and proposals represents a more typical type of interaction. In either case, these are typical examples of the activities supported in the G2B domain. Employees are the core of effective governance. Government-to-employee (G2E) initiatives cover the human resource management component of the relationship between the government and its employees. The three main benefits to be derived from the implementation of these types of projects are improved strategic planning, cost reduction, and service improvements between management and employees (Ruël, Bondarouk, & Looise, 2004). The tasks covered in the G2E domain range from online recruitment, training, and testing to self-service systems where employees can modify their health plans, retirement plans, and even federal withholdings. The final group, and potentially the most critical one, is government-to-citizen (G2C). This refers to the government’s interaction with the citizenry. A recent 22-country study indicated that governments around the globe recognize that a customer-centric focus is critical for e-government success (Hunter & Jupp, 2001). In areas with low Internet penetration, G2C might be the only focus of e-government projects because of the limited access. The term citizens broadly refers to all individuals who interact with the government.
G2C represents all electronic communications and transactions that occur between a government and one or more of its citizens. The individual referred to as “citizen” can be a foreign national, a student or a resident, and is typically involved with unique interactions with the government. Governments
tend to focus strongly on this category, largely because one of the founding principles of government and governance is to serve the people. E-government growth can be explained by considering an evolutionary pathway. One model describes four stages for the growth of e-government (Layne & Lee, 2001):

• Stage 1: Cataloguing. Online presentation of information.
• Stage 2: Transaction. Limited forms and services available online.
• Stage 3: Vertical integration. Top-down links of different systems.
• Stage 4: Horizontal integration. Links across different functional units.
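The four-stage progression above can be sketched as an ordered enumeration. This is a hypothetical illustration only; the names `EGovStage` and `has_reached` are not from the chapter, and (as the text notes) real projects do not always move through the stages linearly:

```python
from enum import IntEnum

class EGovStage(IntEnum):
    """Layne and Lee's (2001) four growth stages, in order."""
    CATALOGUING = 1             # online presentation of information
    TRANSACTION = 2             # limited forms and services available online
    VERTICAL_INTEGRATION = 3    # top-down links of different systems
    HORIZONTAL_INTEGRATION = 4  # links across different functional units

def has_reached(current: EGovStage, target: EGovStage) -> bool:
    """Under a strictly linear reading, a project at `current` has
    already passed through every stage up to and including `target`."""
    return current >= target

# A vertically integrated project supports transactions...
assert has_reached(EGovStage.VERTICAL_INTEGRATION, EGovStage.TRANSACTION)
# ...but has not necessarily achieved horizontal integration.
assert not has_reached(EGovStage.VERTICAL_INTEGRATION, EGovStage.HORIZONTAL_INTEGRATION)
```

Using `IntEnum` makes the ordering of stages directly comparable, which matches the model's evolutionary framing.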
However, not all e-government projects follow all four stages. There can be multiple iterations in development, as well as an end to further growth beyond a specific stage. E-government is a tool that facilitates improved delivery of products and services to all participants that interact with the government. Secondly, as mentioned earlier, public administration refers to the products and procedures that the government implements to interact with its constituents: citizens, businesses, employees, and other governments. “Public administration is the use of managerial, political, and legal theories and processes to fulfill legislative, executive, and judicial mandates for the provision of governmental regulatory and service functions” (Rosenbloom & Kravchuk, 2005). This is a very broad definition, and it encompasses many different theories and applications. This chapter focuses mainly on the managerial aspects of public administration, due to their relevance to the implementation of e-government. Of course there are always political and legal undertones; however, an examination of those issues is beyond the scope of the current discussion. In general, public administration involves providing service to citizens and to the public. This implies an orientation towards providing solutions to problems faced by individual citizens, groups of citizens, and society as a whole. Thus,
public administration can also be considered the study and implementation of policy, the main goal of which is to identify public interests and to develop and implement adequate means of satisfying them. Government services can be grouped as follows: human services, community services, justice services, transportation services, land resources, business services, financial services, and others (Bakry, 2004). In addition to providing traditional services to the public, such as those related to health care and transportation, public administration involves a variety of administrative activities. These include, for example, making strategic decisions; coordinating, controlling, and regulating activities; issuing permits and licenses; and providing documents and other information. Another way of considering the many functions of public administration is to view them from the standpoint of the constituent or beneficiary of the service. For example, government provides services to citizens, businesses, employees, and other government entities. The following section considers these constituents in more detail. Issues in e-government have a direct and critical impact on the administration of public services. Discussions of e-government initiatives in the public administration domain are relevant for the following reasons:

1. Government agencies share information with the public in the Web domain.
2. Both internal and external governmental transactions are executed through electronic channels.
3. The Web is a critical mediator between a government and its constituents.
E-government consists of four main constituents, and public administration encompasses many different functions. Table 1 illustrates e-government constituents and some examples of areas of interaction that occur with public administration. Some or all of these transactions can be conducted in a Web-based environment.
Table 1. Interaction of e-government and public administration (rows: e-government constituents; columns: public administration functions)

|             | Human Services     | Community Services       | Transportation         | Justice             | Land Resources     | Financial Services   |
| Citizens    | Consumer safety    | Post offices             | Driver licenses        | Law enforcement     | National parks     | College scholarships |
| Businesses  | Safety standards   | Worker training          | Regulate trucking      | Control cyber-crime | Water conservation | Loans and grants     |
| Employees   | Evaluate standards | Support community groups | Provide transportation | Report violations   | Execute transfers  | Payroll processing   |
| Governments | Military bases     | Flood recovery           | Regulate trade         | Public safety       | Land transfers     | Budget creation      |
Advantages of E-Government in Public Administration

There are numerous potential benefits related to the implementation of e-government (Fahnbulleh, 2005), such as:

• Lower overall administrative costs to government
• More efficient government operations
• A stronger and closer relationship between citizens and government
• Easier access to government for all
• An improved level of service to citizens
• Greater access to decision making
• Empowerment of citizens
• More transparency in government, with greater accountability
The main advantages presented by e-government to public administration can be summarized as follows: paper reduction, transaction efficiency, and improved governance. All three advantages are described further in the following sections. We also note that in some specific instances these advantages can have unexpected consequences.
Paper Reduction

Government offices and agencies are notorious for the amount of paper they use for routine transactions. To combat this problem, in 1998 the U.S. federal government passed the Government Paperwork Elimination Act (GPEA), which required the use of electronic means instead of paper, when possible, for all official business, with a deadline for implementation of October 2003. The GPEA plays a key role in supporting the growth of e-government (Fletcher, 2002). In effect, the GPEA made it a government mandate to move functions online, instead of leaving the decision to voluntarily adopt Web-based information technology to individual agencies.

The use of electronic media for public administration services reduces many of the problems associated with paper-based methods of data collection. The most common problems addressed are the loss of paper documents, the destruction of data, and inconsistent data entry. The move toward paper reduction is not a panacea in itself, but it provides a platform on which various constituents can interact with the government in an online environment. Some argue that without a paper trail there can be a lack of accountability, as well as no back-up mechanism if the electronic system fails.
Transaction Efficiency

Renewing a driver's license, issuing a new building permit, and collecting taxes are all typical daily transactions between a government and its constituents. The constituents in these examples can individually or collectively involve a citizen, an employee, a business or another government agency. The use of electronic technology instead of a paper-based system usually results in increased efficiency. How is efficiency measured? One classic and simple metric is time: if a transaction takes less time to complete using medium A instead of medium B, then medium A is more efficient. The use of e-commerce improves economic efficiency and provides a platform for sustaining growth (Bakos, 1998). E-commerce has been an important precursor for developments in the area of e-government, and many of the benefits harnessed from e-commerce implementation are evident in the e-government domain. Savings of time and money are two of the most important factors in predicting potential usage of e-government services (Gilbert, Balestrini, & Littleboy, 2004). In the area of public administration, all sectors can benefit from reduced costs and time efficiencies. Understandably, these benefits will not be achieved overnight; however, there is great potential for increased transaction efficiency in the future.
Improved Governance

Governance refers to the systems, methods and procedures that define how a government operates. Improved governance and increased accountability are possible through the inclusion of societal participants in the major activities of governments (Ackerman, 2004). E-government provides the tools for increasing transparency in public administration. Information in the hands of citizens makes them more connected to government and aware of internal processes that may previously have been perceived as a black box. When citizens are able to access information, forms and reports, and execute transactions themselves in real time, they achieve greater ownership of the process. The Internet provides the platform for active participation in government, as well as an avenue for activism and lobbying that can affect the political process (Marche & McNiven, 2003). Participation by a large number of interest groups can improve overall governance in a particular region or country. In a democratic society, increased transparency, as it relates to public policy, is generally perceived as a positive outcome. However, in more restricted societies, control of the Internet and Web-based activities is used as a tool for maintaining government control and affecting governance. Improved governance is thus an advantage of e-government that is constrained by the societal and political norms of the region in which it is implemented.

Challenges of E-Government in Public Administration

There are numerous potential barriers related to the implementation of e-government. A recent paper identified the following as key barriers (Fahnbulleh, 2005):

• Concerns about inadequate security and privacy of data
• Unequal access to computer technology by citizens
• High initial costs of setting up an e-government solution
• Resistance to change
Different frameworks identify these challenges in different ways. In this chapter, we are specifically interested in the challenges and barriers as they pertain to effective public administration. The main challenges identified are trust, resistance to change, the digital divide, cost, and privacy and security concerns. Even though we discuss each of these separately, they do not exist in isolation; one challenge can affect one or more
of the other categories. For example, resistance to change might be influenced by a lack of trust, or the digital divide can be further widened by inadequate funding. In the following sections we explore these challenges in more detail.
Trust

Trust can be defined along two dimensions: as an assessment of a current situation, or as an innate personality trait or predisposition (Driscoll, 1978). Trust varies with the individual and the situation, and issues pertaining to government can stimulate strong feelings of trust or mistrust in different constituents. The implementation of public administration functions via e-government requires the presence of two levels of trust. First, users must be confident, comfortable and trusting of the tool or technology with which they will interact. The second level pertains to trust of the government itself. If constituents have limited trust in either the technology or the government, their use of e-government systems is hampered.

Trust is an important recurring theme in user decision making. More specifically, trust has been examined in the context of electronic commerce (Jarvenpaa, Tractinsky, & Vitale, 2000; Koufaris & Hampton-Sosa, 2004) and is a significant factor affecting an individual's purchase decision. By extension, an individual constituent who has not previously established trust in the e-commerce domain can transfer that lack of trust to other areas, such as e-government. Recently, confidential information on military veterans was compromised when a computer containing their personal information was lost. This type of incident can erode trust and user confidence in government systems. Trust and financial security are two critical factors limiting the adoption of e-government services (Gilbert et al., 2004). It is thus important to maintain effective security mechanisms in the e-government domain to promote and protect consumer trust and confidence.
Resistance to Change

Innovation diffusion theory states that over time an innovation will diffuse through a population, and that the rate of adoption will vary between those who adopt early ("early adopters") and those who adopt the innovation much later ("laggards") (Rogers, 1995). The varying rates of adoption indicate that some users are more resistant to accepting the innovation, which in this case is e-government. This resistance to change explains much of the hesitation on the part of constituents in moving from a paper-based to a Web-based system for interacting with government. Income, age, and education are all contributing factors that can result in resistance to the use of e-government initiatives. Further, innate personal characteristics, such as dogmatism, can increase an individual's resistance to change. The greater the preference for maintaining the status quo, the greater the likelihood that resistance to new methods of operation will persist. Long-term employees may be particularly susceptible to this problem, since they may have completed tasks the same way for many years.

Citizens, employees and businesses can all have biases with respect to how transactions should be processed. However, government entities and public policy administrators cannot ignore the changes that result from the implementation of information and communication technology (ICT). In the early 1990s, Freeman (1993) identified the important role that ICT would play in shaping public policy, and cautioned both rich and poor governments against neglecting its significance. Education about the value of the new systems is one step toward reducing some of the existing resistance. It can also be particularly useful for a champion, such as a leader or manager, to buy into the new system at an early stage in the adoption process.
The champion might be an employee whom others respect, or a business known for setting trends in the industry.
Digital Divide

The digital divide refers to the separation between individuals, communities, and businesses that have access to information technology and those that do not. Social, economic, infrastructural and ethno-linguistic indicators provide explanations for the presence of the digital divide (Bagchi, 2005). Further, the presence of the digital divide indicates that a community might not be fully equipped with the tools or knowledge to benefit from the implementation of e-government projects. Many non-profit and community-based organizations (CBOs) provide valuable public services to various communities. Some of these organizations work very closely with government agencies via grant requests and information sharing. With the use of e-government for the dissemination of critical information, many of the smaller agencies that lack the necessary infrastructure may not reap the benefits available through the Web. In fact, many of the smaller CBOs succumb to the "organizational divide," lacking the means to remain informed and current with new technology (Kirschenbaum, Kunamneni, & Servon, 2002). This limited access to information can impede an organization's willingness to support the adoption of new e-government projects in the public domain.

Economic poverty is closely related to limited information technology resources (Servon, 2002). An individual living at or below the poverty level is less likely to have a personal computer at home and may need to rely on work or public facilities (such as public libraries) for access to e-government and other online services. Limited availability of the necessary information technology infrastructure can serve as a great deterrent to the adoption of any Web-based initiative. As the digital divide narrows, broader adoption of e-government in the public domain becomes possible.

Cost

Cost is generally a prohibitive factor in the implementation of information technology, particularly in the public sector, where other projects and initiatives might have a higher priority than e-government. Elected officials responsible for the allocation of funds may also be unwilling to promote projects whose returns are not always visible in the short term and whose utilization is not guaranteed. Typical costs associated with e-government projects include hardware, software, testing, training, migration to the new system, and maintenance. In 2004, the United Kingdom and Singapore spent 1 percent and 0.8 percent, respectively, of their gross domestic product (GDP) on e-government. Other nations are spending even less because of other economic, social and political obligations. In the near future, cost will continue to be a significant challenge to extensive and comprehensive implementation of e-government projects for public administration.
Privacy and Security

Three basic levels of access exist for e-government stakeholders: no access to a Web service; limited access to a Web service; or full access to a Web service. When sensitive personal data are involved, however, the formulation of the security access policy becomes a much more complex process with legal considerations (Wong, Tam, & Cheng, 2006). With the implementation of e-government projects, effective measures must be taken to protect sensitive personal information. A lack of clear security standards and protocols can limit the development of projects that contain sensitive information, such as income or medical history. Further, users must be confident that the Web sites they visit and the transactions they complete are safeguarded against theft, fraud and unauthorized access. A one-size-fits-all model cannot serve all constituents.
Future Trends

The last decade represented an era of strong growth in the domain of e-government. However, issues related to security continue to be a challenge. As more information is collected, additional vulnerabilities are exposed. Indeed, security is poised to be a recurring problem as growth in this domain continues. Great strides have been made in recent years, and new practices and procedures are in development. One of the future bridges between e-government and the implementation of public policy is electronic voting (e-voting). Even though the concept of e-voting still faces many concerns and challenges (Stone, 2006), it will remain on the public policy agenda for many years to come.

Besides security and e-voting, universal access to government services will also be an important issue in the future. In particular, individuals with special needs, seniors and persons with disabilities will receive strong focus. Full e-government access for persons with disabilities has yet to be achieved (Jaeger, 2004). As the world's population ages, senior citizens will present unique needs and will request services that must be delivered in a manner readily accessible to them. The issues presented above represent only a small portion of the future issues that will be relevant within the e-government and public administration landscape. The list is by no means exhaustive. As work in this area continues, new challenges will arise. Similarly, both tangible and intangible benefits that were not previously expected will be realized.
Conclusion

The use of information technology is clearly one of the major potential solutions in the effort to achieve improved governance. The use of e-government services to improve public administration functions will continue to have a strong impact on the operation of federal, state and local governments. E-government can also serve as a catalyst for the
radical redesign of governmental organizations and agencies. Numerous benefits and challenges can result from these changes. Change is inevitable, and the movement to use technology to improve public administration services has been launched. It is thus critical that all necessary steps be taken to make these new ventures a success. This will require continued cooperation and support from all constituents: citizens, employees, businesses and government agencies. Whether e-government in the public sector will flourish, or whether implementation barriers will retard its evolution, will be determined in time. Ultimately, efforts will continue to propel growth, and effective assessment measures must be available to determine the level of success.
Future Research Directions

This section contains specific research directions, highlighting both managerial and technical aspects of e-government. In addition to the previously mentioned trends of security, e-voting, and universal access to government services, there are numerous other potential future areas of research related to e-government and the public sector. Most of these will either directly or indirectly involve the use of ICTs to address the challenges of improved governance. Broad research directions include the technologies supporting e-governance, as well as e-democracy, privacy, and the socio-economic impact of ICTs on public administration.

E-governance involves an increased reliance on shared access to, and transmission of, stored knowledge in the form of digital data by public institutions. Further research can explore the use of technologies such as intelligent agents, virtual learning, global databases, and networked digital libraries, as well as the impact of new media on ICT processes in public administration.

A subset of e-governance, known as e-democracy, involves providing services that enable democratic communication and civic participation among constituents through the use of shared digital networks. For example, research could examine the extent to which ICTs can improve democratic participation, as well as the development of best practices for such participation.

The online provision of public information and services involves issues of privacy and raises questions concerning the protection of personal information. Although technological progress that helps to protect privacy is ongoing, much still needs to be accomplished to understand the impact of ICTs on privacy and to safeguard the growing repositories of data required by an increasing reliance on ICTs. Research is needed to identify options for further development and to assess the legal, social and political consequences of inadequate privacy.

Finally, the socio-economic impacts of ICTs on public administration are potentially rich areas of research. Further understanding of the activities and relationships among citizens, organizations, public entities and other participants in the public administration arena will be needed to ensure that effective processes and systems are developed in the future. The opportunities for further exploration in these and other areas related to e-governance appear to be numerous and rich with potential.

The area of e-government provides a rich platform for both quantitative and qualitative research methods. To further develop the topics suggested above, both researchers and practitioners can employ surveys, interviews and case studies to develop explanations and make predictions about the impact of e-government in the area of public administration.
References

Ackerman, J. (2004). Co-governance for accountability: Beyond "exit" and "voice." World Development, 32(3), 447-463.

Bagchi, K. (2005). Factors contributing to global digital divide: Some empirical results. Journal of Global Information Technology Management, 8(3), 47-65.
Bakos, J. Y. (1998). The emerging role of electronic marketplaces on the Internet. Communications of the ACM, 41(8), 35-42.

Bakry, S. H. (2004). Development of e-government: A STOPE view. International Journal of Network Management, 14(5), 339-350.

Capati-Caruso, A. (2006). E-government cost and financing. Knowledge Management Branch, Division for Public Administration and Development Management, Department of Economic and Social Affairs, United Nations (UNDESA/DPADM/KMB). Retrieved October 17, 2006, from http://unpan1.un.org/intradoc/groups/public/documents/UN/UNPAN023430.pdf

Driscoll, J. W. (1978). Trust and participation in organizational decision making as predictors of satisfaction. Academy of Management Journal, 21(1), 44-57.

Fahnbulleh, N. (2005). The future of electronic government. Futurics, 29(1/2), 7-12.

Fletcher, P. D. (2002). The Government Paperwork Elimination Act: Operating instructions for an electronic government. International Journal of Public Administration, 25(5), 723-736.

Freeman, C. (1993). Technical change and future trends in the world economy. Futures, 25(6), 621-635.

Gilbert, D., Balestrini, P., & Littleboy, D. (2004). Barriers and benefits in the adoption of e-government. The International Journal of Public Sector Management, 17(4/5), 286-301.

Hunter, D. R., & Jupp, V. (2001). Rhetoric vs reality: Closing the gap. Accenture.

Jaeger, P. T. (2004). The social impact of an accessible e-democracy. Journal of Disability Policy Studies, 15(1), 19-26.

Jarvenpaa, S. L., Tractinsky, N., & Vitale, M. (2000). Consumer trust in an Internet store. Information Technology and Management, 1(1-2), 45-71.
Kirschenbaum, J., Kunamneni, R., & Servon, L. (2002). The organizational divide. In L. Servon (Ed.), Bridging the digital divide: Technology, community and public policy. Malden, MA: Blackwell Publishing.

Koufaris, M., & Hampton-Sosa, W. (2004). The development of initial trust in an online company by new customers. Information & Management, 41(3), 377-397.

Layne, K., & Lee, J. (2001). Developing fully functional e-government: A four stage model. Government Information Quarterly, 18(2), 122-136.

Marche, S., & McNiven, J. D. (2003). E-government and e-governance: The future isn't what it used to be. Canadian Journal of Administrative Sciences, 20(1), 74-86.

Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: The Free Press.

Rosenbloom, D. H., & Kravchuk, R. S. (2005). Public administration: Understanding management, politics, and law in the public sector (6th ed.). McGraw-Hill.

Ruël, H., Bondarouk, T., & Looise, J. K. (2004). E-HRM: Innovation or irritation. An explorative empirical study in five large companies on Web-based HRM. Management Revue, 15(3), 364-380.

Servon, L. (2002). Bridging the digital divide: Technology, community and public policy. Malden, MA: Blackwell Publishing.

Stone, A. (2006, June 13). Holes found in e-voting. USA Today, p. A12.

Wong, K. F., Tam, M. K. W., & Cheng, C. H. (2006). E-government: A Web services framework. Journal of Information Privacy & Security, 2(2), 30-50.
Further Reading

Edmiston, K. D. (2003). State and local e-government: Prospects and challenges. American Review of Public Administration, 33(1), 20-45.

Galindo, F. (2004). Electronic government from the legal point of view: Methods. International Review of Law, Computers & Technology, 18(1), 7.

Hazlett, S.-A., & Hill, F. (2003). E-government: The realities of using IT to transform the public sector. Managing Service Quality, 13(6), 445.

Holden, S. H., Norris, D. F., & Fletcher, P. D. (2003). Electronic government at the local level: Progress to date and future issues. Public Performance & Management Review, 26(4), 325.

Horn, S. P. (2003). Taxation of e-commerce. Journal of American Academy of Business, Cambridge, 2(2), 329-338.

Ke, W., & Wei, K. K. (2004). Successful e-government in Singapore. Communications of the ACM, 47(6), 95-99.

Komito, L. (2005). E-participation and governance: Widening the net. Electronic Journal of E-Government, 3(1), 39-48.

Meijer, A. J. (2003). Transparent government: Parliamentary and legal accountability in an information age. Information Polity, 8(1/2), 67-78.

Melitski, J. (2003). Capacity and e-government performance: An analysis based on early adopters of Internet technologies in New Jersey. Public Performance & Management Review, 26(4), 376.

Melitski, J. (2005). Digital government worldwide: An e-government assessment of municipal Web sites. International Journal of Electronic Government Research, 1(1), 1-19.

Prattipati, S. N. (2003). Adoption of e-governance: Differences between countries in the use of online government services. Journal of American Academy of Business, Cambridge, 3(1/2), 386-391.
Reddick, C. G. (2005). Citizen interaction with e-government: From the streets to servers? Government Information Quarterly, 22(1), 38-57.

Steyaert, J. C. (2004). Measuring the performance of electronic government services. Information & Management, 41(3), 369-375.

Tan, C. W., & Pan, S. L. (2003). Managing e-transformation in the public sector: An e-government study of the Inland Revenue Authority of Singapore (IRAS). European Journal of Information Systems, 12(4), 269-281.

Treiblmaier, H., Pinterits, A., & Floh, A. (2004). Antecedents of the adoption of e-payment services in the public sector. Paper presented at the International Conference on Information Systems, Washington, DC.

Virginia de Vasconcellos, M., & das Graças Rua, M. (2005). Impacts of Internet use on public administration: A case study of the Brazilian tax administration. Electronic Journal of E-Government, 3(1), 49-58.

Xue, S. (2004). Web usage statistics and Web site evaluation: A case study of a government publications library Web site. Online Information Review, 28(3), 180.

Ya Nia, A., & Tat-Kei Ho, A. (2005). Challenges in e-government development: Lessons from two information kiosk projects. Government Information Quarterly, 22(1).
Terms and Definitions

E-Government: Government functions and services administered to citizens, businesses, employees and other government agencies via the use of the Internet. The four main categories are: government to citizens (G2C); government to businesses (G2B); government to employees (G2E); and government to government (G2G).

Early Adopters: The population of users that are among the first to purchase or use a new technology.

Laggards/Late Adopters: The population of users that adopt an innovation at a much later time.

Innovation: A product, process or idea that is perceived as novel to the user or audience.

Internet: Public network of computers, including servers and client machines.

Intranet: Private network of computers supporting a business or organization.

Public Administration: The products and procedures that the government implements to interact with its constituents: citizens, businesses, employees and other governments.
Chapter II
Government Web Sites as Public Forums Pearson Liddell, Jr. Mississippi State University, USA Robert S. Moore Mississippi State University, USA Melissa Moore Mississippi State University, USA William D. Eshee Mississippi State University, USA Gloria J. Liddell Mississippi State University, USA
Introduction

In countries around the globe, the public availability of information through technologies such as the Internet has increased the average citizen's ability to access documents, resources and solutions with unprecedented ease. Governments are increasingly faced with the task of balancing their information technology capabilities and the electronically based solutions demanded by citizens (Brewer, Neubauer, & Geiselhart, 2006; Streib & Navano, 2006). For example, Thomas and Streib (2003) note that citizen-initiated contacts made through Web interfaces are fundamentally different from those made through traditional interfaces in that the interaction
is often easier and quicker than traditional contacts. However, they also note that with the lack of personal interaction, citizens make decisions based only on the information presented on the Web site, which may or may not draw on entirely objective information sources. Public managers, who are the gatekeepers of the informational content of communication systems, have to be especially cognizant of which resources to provide on publicly owned systems. At times, online solutions provided by the government may involve referring citizens to for-profit firms through hyperlinks on government-owned Web sites (Sellitto & Burgess, 2005). The inclusion of one firm over another on government
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Web sites has been noted to be a real concern for public managers (Menzel, 1998). In this article, we employ a legal perspective to examine the ramifications of public information strategies that allow firms to have hyperlinks embedded within the content of public information systems. This perspective allows the public information manager to make informed decisions when developing government portal strategies.
Background

The Internet has proven to be an invaluable communication medium. Individuals and businesses desiring a product, service or information use the Web to find prices, specifications, recommendations and peer ratings. Unlike traditional communication media, such as television or radio broadcasts, in which content is provided in sequence, the Internet allows individuals to choose only the Web pages that interest them at a given moment. Businesses, especially those that sell products and services related to a site's content (Moore et al., 2005), are often interested in attracting that site's visitors and often have the opportunity to purchase access to this group from the Web site's owner. The business can pay the Web site owner to place an ad or an embedded hyperlink on the Web site that, with a click, will take the interested customer directly to the business's Web site. The end result is that the linked business benefits from having a hyperlink on the Web site. Alternatively, if the Web site owner does not want to deal with a particular business, the owner simply does not provide a hyperlink.

However, what if the Web site owner is a governmental entity? For example, in an effort to encourage online submission of tax returns, the United States Internal Revenue Service has set up a page listing the names of, and hyperlinks to, private firms that will assist individuals in electronically preparing their taxes (IRS, 2006). Similarly, the City of Chicago has hyperlinks to external sites such as Chicago City Search and Metromix, which are
for-profit commercial ventures (cityofchicago.org, 2007). These Web site hyperlinks represent the government's de facto decision not to promote alternative businesses. Additionally, having a link on these government-sponsored sites may have a positive branding effect on the hyperlinked company, resulting in an individual believing that, because it is on a government Web site, the hyperlinked firm is sanctioned by the government. If a governmental entity can arbitrarily grant or deny access to a Web site, then that governmental entity has the ability to greatly affect not only political debate but also marketplace ideas and access by controlling who will participate in the debate. But governments must have some control over whom and what is allowed on their Web sites. So, how is this control maintained without running afoul of a business's right to access? The U.S. government's Web portal FirstGov.gov has set out to clarify its position on linking from its site by instituting the following policy:

In rare instances, FirstGov.gov links to Web sites that are not government-owned or government-sponsored if these Web sites provide government information and/or services in a way that is not available on an official government Web site. FirstGov.gov provides these non-government Web sites as a public service only. The U.S. government, including the U.S. General Services Administration (the primary sponsoring federal agency of FirstGov.gov), neither endorses nor guarantees in any way the external organizations, services, advice, or products included in these Web site links. Furthermore, the U.S. government neither controls nor guarantees the accuracy, relevance, timeliness or completeness of the information contained in non-government Web site links. (See Disclaimer of Endorsement for more information on this topic.)
(USA.gov, 2006) Clearly, the government treats its Web sites as property. Government property, for First Amendment purposes, has been placed in three broad categories, or forums: the traditional public forum, the designated public forum, and the nonpublic forum. The significance of the characterization of the forum lies in the evidentiary standard applied to each forum (Liddell et al., 2004). The strict scrutiny test is applied to the traditional public forum and the designated public forum, while the reasonableness test is applied to the nonpublic forum. Under the strict scrutiny test, the government has the onerous burden of showing that it has a compelling interest in limiting access to the government property and that it limits access only to the extent necessary to accomplish that compelling interest. Under the reasonableness test, by contrast, the government merely has to show that its action has a rational relationship to the furtherance of an expressed public policy, a much lower legal standard. The traditional public forum is the most difficult status to attain because it is the most narrowly defined and the most vigorously protected. A traditional public forum is a place with a longstanding history of public assembly for debate. Such places have historically been streets and parks, which were used for public assemblies and the dissemination of information even before the founding of the United States. One of the keys to the determination of traditional public forum status is the reciprocal exchange of ideas. A problem with obtaining traditional public forum status is that the U.S. Supreme Court has required strict compliance with the requirement that the forum be a place where, for a significant time, divergent opinions and ideas have been freely exchanged. This, of course, places meaningful obstacles between the Internet and traditional public forum status because of the Internet’s recent creation. Indeed, the U.S. Supreme Court has determined that Internet access in public libraries does not meet the criteria for a traditional public forum. Yet the unique nature of the Internet may allow it one day to transcend this time limitation.
Even where a traditional public forum is not found, a designated public forum may be. Where the government intends to create a forum for the express purpose of the reciprocal exchange of ideas, it has created a designated public forum. This kind of forum cannot be created accidentally; it must be created by a manifested intent. Thus, in Widmar v. Vincent (1981), a state university that had an explicit policy permitting registered student groups to use its facilities could not refuse the use of those facilities to a religious student organization. However, in Arkansas Educational Television v. Forbes (1998), a public television station could refuse to allow a candidate to participate in a televised political debate because the station’s policy did not allow unfettered participation. In other words, there was no manifest intent to open the forum to all of the candidates. If a forum is neither a traditional public forum nor a designated public forum, then it is automatically classified as a nonpublic forum; this is essentially a default category. This categorization of forums may seem fairly straightforward, but things get muddled in application. Specifically, it often becomes difficult to determine what the actual forum is. How the determination of the relevant forum can affect the outcome of a case is demonstrated clearly in a series of cases known as the Klan Adopt-A-Highway cases (Montgomery, 1999). In those cases the Ku Klux Klan applied to take part in various state highway clean-up programs, to the apparent consternation of the states. Under the same basic facts, the courts held that there was a traditional public forum, a designated public forum, or a nonpublic forum depending on whether the relevant forum was held to be the highway, the highway right-of-way, or the adopt-a-highway program. In this regard, the selection of the relevant forum can make the forum analysis itself virtually irrelevant. Further, the selected relevant forum continues to affect the evidentiary standard once the type of forum is decided. For instance, in Cornelius v.
NAACP Legal Defense and Education Fund (1985), a case deciding whether certain nonprofit advocacy groups would be allowed to participate in a federal fundraising campaign called the Combined Federal Campaign (CFC), a determination had to be made as to whether the forum in question was the federal workplace, from which the donations were solicited, or the CFC, to which the donations went. The court chose the CFC as the relevant
forum, as opposed to the federal workplace, and determined that the CFC was a nonpublic forum and that, therefore, the reasonableness test applied. Choosing the CFC as the relevant forum narrowed the purpose of the forum from the broad purpose of the federal workplace to the streamlined purpose of the CFC, and thus made it easier to justify the reasonableness of denying participation to the nonprofit advocacy groups.
The Web Site as Government-Controlled Property

At all levels of government, many agencies now operate Web sites as a normal part of their operations. Control of who is able to participate on each of these government Web sites is important because controlling participation allows a government to control the message and purpose of the Web site. However, the government’s control is not absolute, because the Web site is a place where citizens congregate to obtain information and where the government solicits comments through the ubiquitous “contact us” options provided on many pages (e.g., the U.S. Department of Agriculture’s front page at www.usda.gov). Online, the congregation is virtual, but the ideas are real. What happens when the government seeks to limit a citizen’s right to use the government Web site as a platform for expressing ideas that run contrary to the government’s? In The Putnam Pit v. City of Cookeville (2000), the Sixth Circuit Court of Appeals addressed this issue. Geoffrey Davidian, the publisher of an online newspaper, The Putnam Pit, which specialized in uncovering the perceived wrongdoings of the city of Cookeville and its elected and appointed officials, wanted the city to place a hyperlink on the city’s Web site that would allow users to go directly from the city’s Web site to that of The Putnam Pit. Naturally, the city was averse to supplying its most vocal critic with a platform for his criticisms and refused to grant Davidian permission to have a hyperlink from the city’s Web site. Davidian then sued.
The Sixth Circuit first determined that the relevant forum was the city of Cookeville’s Web site and not the Internet as a whole. The court reasoned that Davidian sought access to the specific Web site of the city and not generally to the Internet; indeed, he was already on the Internet. The court then said that a Web site is not a longstanding place that has been used to facilitate the exchange of ideas and held that the city’s Web site was not a traditional public forum. R. J. Conrod (2001) believes that the court erred in holding that the city’s Web site was not a traditional public forum, given the uniqueness of the Internet and its extraordinary communicative powers. Conrod makes several persuasive arguments for recognizing Web sites as traditional public forums, including the facts that a Web site is a “place” and that it allows speech through hyperlinks. However, Conrod failed to take into consideration the issue of the relevant forum as discussed by Montgomery (1999). When the Putnam Pit court chose the Web site, as opposed to the Internet as a whole, as the relevant forum, it virtually negated the impact of the Internet’s communicative power, because the focus of the activity became access to the individual Web site and not access to the Internet. The courts have historically chosen very narrow relevant forums, which makes it highly unlikely that the Internet will become a traditional public forum until courts have to consider government-owned blogs or other types of Web sites that are set up specifically for public discourse (Liddell et al., 2006). In fact, in a more recent case, U.S. v. American Library Association (2003), the U.S. Supreme Court reinforced its relevant forum reasoning by holding that the Internet in public libraries is not a public forum because libraries are not historically places of public discourse but places to acquire resources. Next, the Putnam Pit court dealt with the issue of the Web site as a designated public forum.
The court found two factors that militated against the Web site being characterized as a designated public forum. First, the city was selective as to who could get a hyperlink from the site and, second, the purposes of Davidian did not coincide with the purposes of the city’s Web site. The court then held that the city’s Web site was a nonpublic forum.
Once the Web site was classified as a nonpublic forum, the reasonableness test was used to determine whether it was appropriate for the city to limit Davidian’s access. Because the relevant forum was the Web site, reasonableness was based on the purpose of the Web site, as opposed to the purpose of the Internet as a whole. The court concluded that Davidian’s purposes did not align with the purpose of the city’s site, which was to give the public information about the general activities of the city, and that it was therefore appropriate for the city to limit Davidian’s access. Unfortunately for the city, the court’s review did not stop at the forum analysis but went on to consider whether the city was using lack of access to the nonpublic forum to squelch Davidian’s voice in the marketplace of ideas simply because the city did not like his ideas. The government could limit Davidian’s access to the Web site, even though it disagreed with him, if the limitation furthered a legitimate governmental purpose. For example, the Supreme Court has allowed the limitation of funds to promote abortions because the government could legitimately prefer childbirth to abortion. In The Putnam Pit, however, the court was concerned that there were no enunciated standards for when a speaker would be denied a hyperlink from the Web site. This absence of standards gave city officials unbridled discretion to limit access to the Web site. Further, without standards it is easy to conjure reasons and rationales for denying access after the deed is completed. The court therefore found that viewpoint discrimination existed, and the case was allowed to proceed to trial.
Future Trends

The issue of whether to allow for-profit entities a presence on government Web sites is important for information managers to consider in portal design. In the near future, three trends are likely to become important: acknowledgement that the presence of a firm on a government Web site allows the firm to reap the benefits of the government’s credibility, the potential liability associated with
third-party links, and the development of concrete, objective linking policies. Whether the government likes it or not, a hyperlink serves as something of an endorsement from a highly credible source: the United States government. Source credibility has been shown to influence decision making in many studies in communications, advertising, and consumer behavior based on the seminal work of Hovland, Janis, and Kelley (1953). The basic theory suggests that the effectiveness of a message is affected by the expertise and trustworthiness of the source. In our earlier examples of the city of Chicago and the U.S. Internal Revenue Service, we see government agencies promoting specific commercial enterprises, to the exclusion of others, through embedded hyperlinks or comments in a forum. In a recent article, Goldsmith, Lafferty, and Newell (2000) found that corporate credibility was highly correlated with positive attitudes and purchase intentions. By extension, we would suggest that as long as some individuals regard the government as a credible source, firms associated with it through hyperlinks will be afforded positive attitudes and intentions. In essence, a hyperlinked firm may benefit from an association similar to that signaled by brands (e.g., Erdem, Swait & Valenzuela, 2006): being associated with a popular brand (the government) signals the credibility of the hyperlinked firm over other, non-hyperlinked firms. Additionally, a recent trend in Internet communications has been the introduction of blogs, or Web site diaries. Blogs allow individuals or organizations to report on day-to-day activities to an interested audience. A number of government agencies and elected officials have chosen to use this type of forum as an additional means of communication with constituencies. Elected officials, such as U.S. Senator Obama (chosen for illustrative purposes only), use blogs as a means of allowing their readers to learn their thoughts on key issues.
In a post to the blog on February 28, 2006, regarding his views toward the auto industry (Obama, 2006), the Senator includes a hyperlink to a cross-linked article at the Boston Globe. If one selects the Boston Globe link, an article titled “Salvaging the auto industry,” by Barack Obama and Jay Inslee, appears (Boston Globe, 2006). Also on the Boston Globe Web page are advertisements. The mere existence of the link allows viewers of his government site to be influenced by the decision to include it and to follow it to the Boston Globe, an advertisement-supported Web site that benefits financially from the hyperlink. Governments must come to understand that they possess something that all businesses crave: brand-name recognition and credibility. Once a company is placed on a government site, that company may be given an aura of competence and reliability, even though the government may place disclaimers on its sites regarding those companies’ competence and reliability (see FirstGov.gov’s linking policy noted previously). Therefore, a link on a government entity’s Web site, such as the city of Chicago’s, may be coveted because it potentially marks the company as “government trusted” even if it is not. Because a link may have this powerful and valuable branding effect, governments must protect themselves from the liability of wrongfully excluding a company from their Web sites. Court cases have made it clear that one of the best ways for a government entity to limit free speech problems is to ensure that its Web site is a nonpublic forum. To assure treatment as a nonpublic forum, a government should have a formal written policy that asserts its right to limit use of the Web site; the policy should also identify the Web site as a nonpublic forum. Such a formal written policy would make explicit the factor in Forbes that the court determined by implication, that is, that the forum is not open to the public without limitation. The policy should describe the selection criteria (who, what, when, and how) and specify that the criteria pertain to the specific Web site.
For example, the IRS.gov linking policy includes both what is considered acceptable and a detailed procedure carried out before a link is allowed:

•	The site clearly relates to and complements existing information, products, and services on IRS.gov.
•	The site contains relevant and useful content that will benefit our customers.
•	The site clearly states and reflects its purpose, and provides overall quality and authoritative value to the IRS.gov audience.
•	The site contains accurate and timely information.
•	The site provides information at no cost. We do not link to sites whose primary purpose is to sell products or services, except as part of an approved agreement with IRS (e-file Partners, for example).
•	The site has an excellent overall quality and professional image.
•	The site is “user-friendly” and easy to navigate.
•	The site is a credible source for information. The site must be free of typos and errors so that they do not detract from its credibility.
•	The site does not exhibit hate, bias, or discrimination.
•	The site does not contain misleading or unsubstantiated claims, or conflict with the mission of the IRS.
Conclusion

Governments possess a tremendous marketing and advertising tool in Web site branding. Businesses recognize this branding effect and may want to capitalize on it. Governments must control access to their sites, because otherwise they may lose control of the message and purpose of the site. Many times the intents of businesses and governments will clash. Governments now have the upper hand in controlling access; however, the Internet grows more powerful every day as an information dissemination device and as an opinion-influencing device, a place of public discourse. If the courts come to perceive a government Web site as a place of unfettered public discourse, they may reclassify the forum as a traditional public forum and swing access in favor of businesses. For the public information manager this could change the definition of the forum for the Web site and bring into question the
decision to allow external for-profit firms to have links on government Web sites. However, the lack of a historical record regarding hyperlink strategies that would refute or substantiate the claims of the courts (Liddell et al., 2004) presents a serious difficulty for forum analysis with respect to government Web sites. Currently, the courts have failed to consider factors such as the government’s ability to be perceived as lending credibility to hyperlinked firms, and the classification of government-sponsored blogs on the Internet as places of public discourse (Liddell et al., 2006). As future cases incorporate these forms of data on citizen-government communications, courts may find that the Internet is truly a new and different animal in terms of forum. Such data will give teeth to the arguments of Conrod (2001) for classifying the Internet, and in particular certain government Web sites, as public forums. However, the Internet is like any other new system, concept, or device: it must prove what it is in a historical context and not simply through conjecture.
Future Research Directions

For researchers interested in examining the impact of government portal design decisions in the context of allowing links to third-party interests, we would suggest a research agenda that systematically assesses: (1) What is happening right now? (2) How is it being enabled or hindered? (3) What is the potential impact on other stakeholders, such as citizens? and (4) What is the impact in the international environment? To examine what is happening right now, a case study approach could be undertaken to examine the impact on organizations such as charities, local businesses, and other non-government sites that have links on government Web sites. This form of in-depth analysis could reveal the power the government has over citizens’ selection of links and their subsequent actions on the third-party site. Issues such as whether visitors arriving through government links, versus other means of reaching the site, are more valuable in terms of transactions, donations,
or time spent on the site could assist in measuring the value of these links. Additionally, such an analysis could shed light on the power government officials possess if and when they choose to eliminate the hyperlinks. Another potential research area lies in understanding the process by which different government entities determine how a third party can place a link and how many third parties may have links on government pages. For example, some states have sections of government-maintained Web sites devoted to smaller local manufacturers. Is there a stated procedure for being listed on such a site, or is it left to the subjective decision of one or more individuals? What is the appeal process if a firm is deemed unfit to be listed? The development of best practices or normative guidelines and procedures may eliminate future legal issues. The third area addresses the potential impact on important stakeholders. What impact does being associated with a government Web site have on the average citizen? Even though some Web sites, such as www.irs.gov, may inform visitors that they are leaving a government site, do all citizens understand what this means? Are there vulnerable groups that simply attach a government-sponsored label to the third-party links? If so, then the government is implicitly giving firms that have links an unfair advantage in the marketplace. Alternatively, how can a firm that has a link on a government Web site use that information? Is it allowed to post on its own Web site something on the order of “…as seen on irs.gov…our company provides…”? Clearly, this form of promotion can influence citizen perceptions of the firm and lends credibility to it. The fourth area of potential impact is the international environment. Here, answers to questions such as the following can assist in maintaining and enhancing a consistent message across nations: (1) Have U.S. governmental portal designs had any legal or economic effect on international companies? (2) How are foreign governments handling these design issues, and what effects are they having on the various stakeholders? (3) Is there a difference between developed and less developed countries in how these design issues are managed? (4) Can these design issues be used to give citizens an international competitive advantage? and (5) How would design issues be viewed under GATT and WTO rules? In summary, there is a constant interplay between a government’s need to promote economic entities and its desire to be fair. Research aimed at increasing our understanding of the implications of government actions in the electronic marketplace can help bring about a more equitable and objective competitive environment.
References

Arkansas Educational Television v. Forbes, 523 U.S. 666 (1998).

Boston Globe. (2006). Salvaging the auto industry. Retrieved October 16, 2006, from www.boston.com/news/globe/editorial_opinion/oped/articles/2006/02/08/salvaging_the_auto_industry

Brewer, G.A., Neubauer, B.J., & Geiselhart, K. (2006). Designing and implementing e-government systems: Critical implications for public administration and democracy. Administration & Society, 38(4), 472-499.

Cityofchicago.org. (2007). Exploring Chicago: Shopping & dining. Retrieved October 29, 2007, from http://egov.cityofchicago.org/city/webportal/home.do

Conrod, R.J. (2001). Linking public Web sites to the public forum (Note). Virginia Law Review, 87, 1007-1044.

Cornelius v. NAACP Legal Defense and Education Fund, 473 U.S. 788 (1985).

Erdem, T., Swait, J., & Valenzuela, A. (2006). Brands as signals: A cross-country validation study. Journal of Marketing, 70(1), 34-49.

Goldsmith, R.E., Lafferty, B.A., & Newell, S.J. (2000). The impact of corporate credibility and celebrity credibility on consumer reaction to advertisements and brands. Journal of Advertising, 29(3), 43-54.

Hovland, C.I., Janis, I.L., & Kelley, H.H. (1953). Communication and persuasion. New Haven: Yale University Press.

IRS. (2006). Internal Revenue Service Web site. Retrieved October 17, 2006, from http://www.irs.gov/

Liddell, P. Jr., Eshee, W.D., Moore, M., Moore, R., & Liddell, G.J. (2004). This little piggy stayed home: Accessibility of governmentally controlled Internet marketplaces. Albany Law Journal of Science and Technology, 15(1), 31-72.

Liddell, P. Jr., Moore, R., Eshee, W.D., Moore, M., & Liddell, G.J. (2006). Government-owned Web sites and free enterprise: First Amendment implications. Journal of Internet Law, 10(4), 1, 14-19.

Menzel, D.C. (1998). www.ethics.gov: Issues and challenges facing public managers. Public Administration Review, 58(5), 445-452.

Montgomery, S.S. (1999). When the Klan adopts-a-highway: The weaknesses of the public forum doctrine exposed. Washington University Law Quarterly, 77(Summer), 557-583.

Moore, R.S., Stammerjohan, C., & Coulter, R. (2005). The effects of ad-Web site congruity and execution cues on attention and attitudes. Journal of Advertising, 34(2), 77-90.

Obama, B. (2006). Salvaging the auto industry. Retrieved October 17, 2006, from http://obama.senate.gov/blog/060208-salvaging_the_auto_industry/

The Putnam Pit v. City of Cookeville, 221 F.3d 834 (6th Cir. 2000).

Sellitto, C., & Burgess, S. (2005). A government-funded Internet portal as a promoter of regional cluster relationships: A case study from the Australian wine industry. Environment & Planning C: Government & Policy, 23(6), 851-866.

Streib, G., & Navano, T. (2006). Citizen demand for interactive e-government: The case of Georgia consumer services. American Review of Public Administration, 36(3), 288-300.
Thomas, C.J., & Streib, G. (2003). The new face of government: Citizen-initiated contacts in the era of e-government. Journal of Public Administration Research & Theory, 13(1), 83-101.

USA.gov. (2006). Linking policy: USA.gov. Retrieved May 9, 2006, from http://www.firstgov.gov/About/Linking_Policy.shtml

U.S. v. American Library Association, 539 U.S. 194 (2003).

Widmar v. Vincent, 454 U.S. 263 (1981).
Further Reading

Beltramini, R.F., & Stafford, E.R. (1993). Comprehension and perceived believability of seals of approval information in advertising. Journal of Advertising, 22(3), 3-13.

Brueckner, A. (2005). E-government II: Best practices for digital government. Bulletin of the American Society for Information Science & Technology, 31(3), 16.

Cohen, J.E. (2007). Cyberspace as/and space. Columbia Law Review, 107(January), 210-256.

Coleman, S. (2006). Digital voices and analogue citizenship: Bridging the gap between young people and the democratic process. Public Policy Research, 13(4), 257-261.

Daughdrill, B.E. (2001). Poking along in the fast lane on the information super highway: Territorial-based jurisprudence in a technological world. Mercer Law Review, 52(Summer), 1217-1240.

De Jong, M., & Lentz, L. (2006). Scenario evaluation of municipal Web sites: Development and use of an expert-focused evaluation tool. Government Information Quarterly, 23(2), 191-206.

Den Bleyker, K.C. (2007). The First Amendment vs. operational security: Where should the milblogging balance lie? Fordham Intellectual Property, Media and Entertainment Journal, 17(Winter), 401-442.
Deutsche, N.T. (2006). Professor Nimmer meets Professor Schauer (and others): An analysis of “definitional balancing” as a methodology for determining the “visible boundaries of the First Amendment.” Akron Law Review, 39, 483-539.

Dolan, M.J. (2004). The special purpose forum and endorsement relationships: New extensions of government speech. Hastings Constitutional Law Quarterly, 31(Winter), 71-139.

Eschenfelder, K.R., & Miller, C.A. (2007). Examining the role of Web site information in facilitating different citizen-government relationships: A case study of state chronic wasting disease Web sites. Government Information Quarterly, 24(1), 64-88.

Finch, J., & Quackenboss, C. (2001). Media- and vehicle-source effects in Internet communications. Marketing Management Journal, 11(1), 114-123.

Fischer, R.J. (2003). “What’s in a name?”: An attempt to resolve the “analytic ambiguity” of the designated and limited public fora. Dickinson Law Review, 107(Winter), 639-674.

Gengatharen, D., Standing, C., & Burn, J. (2005). Government supported community portal regional emarketplaces for SMEs: Evidence to support a staged approach. Electronic Markets, 15(4), 405-417.

Horst, M., Kuttschreuter, M., & Gutteling, J.M. (2007). Perceived usefulness, personal experiences, risk perception and trust as determinants of adoption of e-government services in the Netherlands. Computers in Human Behavior, 23(4), 1838-1852.

Hunter, D. (2003). Cyberspace as place and the tragedy of the digital anticommons. California Law Review, 91(March), 439-519.

Jordana, J., Fernández, X., Sancho, D., & Welp, Y. (2005). Which Internet policy? Assessing regional initiatives in Spain. Information Society, 21(5), 341-351.

Kent, R.J. (2002). The effects of media-source cues in ad recall tests. Journal of Current Issues & Research in Advertising, 24(1), 1-9.
Langford, J., & Roy, J. (2006). E-government and public-private partnerships in Canada: When failure is no longer an option. International Journal of Electronic Business, 4(2), 118-135.

Schesser, S.D. (2006). A new domain for public speech: Opening public spaces online. California Law Review, 94(6), 1791-1825.

Sellitto, C., & Burgess, S. (2005). A government-funded Internet portal as a promoter of regional cluster relationships: A case study from the Australian wine industry. Environment & Planning C: Government & Policy, 23(6), 851-866.

Stieglitz, E.J. (2007). Anonymity on the Internet: How does it work, who needs it, and what are its policy implications? Cardozo Arts & Entertainment Law Journal, 24, 1395-1417.

Strowel, A., & Ide, N. (2001). Liability with regard to hyperlinks. Columbia-VLA Journal of Law & the Arts, 24(Summer), 403-448.

Vincent, C.B. (2006). Cybersmear II: Blogging and the corporate rematch against John Doe version 2.006. Delaware Journal of Corporate Law, 31, 987-1009.

Volkmer, C.J. (2002). Hyperlinks to and from commercial Web sites. Computer Law Review & Technology Journal, 7(Fall), 65-79.

Vragov, R., & Kumar, N. (2006). Electronic markets for the allocation, financing and distribution of public goods. Electronic Markets, 16(4), 274-281.

Weber, G.S. (2004). Needling the thread: A moderator’s guide to freedom of speech limitations on government sponsored web-based threaded discussions. Computer Law Review & Technology Journal, 7(Winter), 323-377.

Woodall, W.B. (2004). Fixing the faulty forum framework: Changing the way courts analyze free speech cases. First Amendment Law Review, 2(Spring), 295-394.

Wright, S. (2006). Government-run online discussion fora: Moderation, censorship and the shadow of control. British Journal of Politics & International Relations, 8(4), 550-568.

Zick, T. (2006). Space, place, and speech: The expressive topography. George Washington Law Review, 74(April), 439-505.

Zick, T. (2007). Clouds, cameras, and computers: The First Amendment and networked public places. Florida Law Review, 59(January), 1-69.

Terms and Definitions

Designated Public Forum: A government-owned place that the government has purposefully opened for discourse by some specific class.

First Amendment: The U.S. Constitution amendment that guarantees, among other rights, freedom of speech.

Forum: Any place for public discourse.

Nonpublic Forum: A government-owned place that the government has not opened for discourse by some specific class.

Reasonableness Test: The standard of proof under which the government has only to show a rational relationship to the furtherance of an expressed public policy in order to limit speech in a forum.

Relevant Forum: The characterization of a forum as a specific place.

Source Credibility: The notion that people are more likely to be persuaded when the source of a message is perceived to be credible.

Strict Scrutiny Test: The standard of proof under which the government must show a compelling reason to limit speech in a forum.

Traditional Public Forum: A government-owned place that has by tradition and long standing been a place of public discourse.
Chapter III
Limitations of Evolutionary Approaches to E-Government Rodrigo Sandoval-Almazán Universidad Autónoma del Estado de México, Mexico J. Ramon Gil-Garcia Centro de Investigación y Docencia Económicas, Mexico
introduction Information and communication technologies (ICTs) have the potential to improve the quality of the overall citizen experience when interacting with government, including information and services (Bourquard, 2003; Dawes, Pardo, & DiCaterino, 1999; Garson, 2004; Gartner, 2000; Grönlund, 2001). State and local governments are increasingly using ICTs in their operational tasks as well as their provision of public services (Holden, Norris & Fletcher, 2003; Moon, 2002; West, 2005). Many of these governments have created Websites and portals, which provide information about the government agencies and, in some cases, electronic transactions such as tax payment systems, online communities, job search, licensing, and vehicle registration, among others. Through the incorporation of these new features and applications, technological and organizational sophistication have been systematically added to e-government initiatives throughout the last few years (Holden, Norris & Fletcher, 2003; Moon, 2002; West, 2005). However, there are few sys-
tematic approaches to evaluate the quality and impact of these initiatives. One of the most frequently used approaches to understand and evaluate e-government is based on the construction of evolutionary stages, which are assumed to be independent and consecutive (Gil-Garcia & Martinez-Moyano, 2005; Moon, 2002; Sandoval-Almazán & Gil-García, 2006). This evolutionary approach allows an assessment and comparison among e-government initiatives, including Web portals. However, it has important limitations. This study assesses the functionality of Mexican state portals in two consecutive years (2005 and 2006) and discusses some of these limitations. Based on a questionnaire containing qualitative and quantitative items, this research is guided by three interrelated hypotheses. First, e-government stages are not mutually independent and a single Web portal can present characteristics of multiple stages. Second, e-government stages are not necessarily consecutive and therefore, some portals can present only characteristics of an early stage (i.e., information) and an advanced stage (i.e., political participation). Finally, the linear
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
progression assumed in evolutionary models is problematic when applied to some specific realities. For political, managerial or technical reasons a portal can be considered in an advanced stage one year and be considered in an early stage the next year. This chapter is organized in seven sections, including this introduction. Section 2 provides an overview of evolutionary approaches to e-government and states some of their most important characteristics and assumptions. Section 3 describes the research method used in this study. Section 4 presents the main findings and discusses the three main hypotheses. Section 5 proposes some future trends within this topic. Finally, section 6 provides some final comments and section 7 suggests areas for future research.
Background: Evolutionary Approaches to E-Government

Electronic government is not a well-defined concept, and scholars and practitioners have suggested a great number of definitions (Gil-Garcia & Luna-Reyes, 2006; Prins, 2001; Schelin, 2003). For this study, electronic government is the use of information and communication technologies (ICTs) to improve and facilitate citizens' relationships with government through democratic procedures, cost-effective transactions, and efficient regulations. Understanding and evaluating e-government is at least as difficult as defining it with precision while taking into consideration all its important aspects. In recent years, different approaches to understanding and evaluating e-government have been proposed (for reviews, see Gil-Garcia & Luna-Reyes, 2006; Schelin, 2003). One of the most frequently used approaches is the evolutionary perspective, which defines stages and analyzes e-government initiatives according to the characteristics and technical features found in these stages (e.g., presence, information, integration). The evolutionary approach is useful for Web portal evaluation because it attempts to measure the degree of innovation and provides clear guidance for the development
and improvement of government Web sites (Sandoval-Almazán & Gil-García, 2006). This approach maintains the assumption that there is an evolution towards electronic government (Gil-Garcia & Martinez-Moyano, 2005; Layne & Lee, 2001). Some authors contend that each one of the stages is already electronic government. Others delimit in which phases a government can be considered electronic. After reviewing different ways to present the stages of e-government, Sandoval-Almazán & Gil-García (2006) present the following model as a summary of previous theoretical developments (see Table 1).
Research Method

Based on the main stages and characteristics of the evolutionary approach, we developed a questionnaire and applied it to the Mexican state portals. The Mexican state portals were observed and analyzed in January of 2005 and 2006. The features and content of the portals were measured using 38 questions related to the different stages. The survey was applied in a 30-45 minute session for each of the 32 Mexican state portals, including the federal district. The other part of the survey, which is not reported in this chapter, contains 12 qualitative measures with a multiple-choice answer system.
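The per-stage scoring described above can be sketched in code. This is an illustrative sketch only, not the authors' actual instrument: the question-to-stage mapping, the `answers` data structure, and the function name `stage_scores` are assumptions for demonstration; the chapter reports only that 38 questions were spread across the stages.

```python
# Illustrative sketch (not the authors' instrument): aggregate binary
# survey answers into per-stage feature counts. Each survey question
# is assumed to belong to exactly one stage.

STAGES = ("information", "interaction", "transaction",
          "integration", "political_participation")

def stage_scores(answers):
    """Count the features observed for each stage of a portal.

    answers: dict mapping (stage, question_id) -> bool,
    one entry per survey question applied to the portal.
    """
    scores = {stage: 0 for stage in STAGES}
    for (stage, _question_id), present in answers.items():
        if present:
            scores[stage] += 1
    return scores
```

A portal observed to have, say, two information features and one interaction feature would score 2 and 1 on those stages and 0 on the rest.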
Understanding Some Limitations of Evolutionary Approaches to E-Government: The Mexican State Portals, 2005-2006

This section presents the results of the analysis of the Mexican state portals in 2005 and 2006. First, it highlights the main findings of applying the evolutionary approach to the evaluation of the portals. Then, based on these findings, a discussion of the limitations of the evolutionary approach is presented, following the three initial hypotheses of this study.
Table 1. Evolutionary approaches to e-government (Source: Adapted from Sandoval-Almazán & Gil-García, 2006)

Presence
  Additional technological and organizational sophistication: Limited government information; few Web pages developed by single agencies; static information about government structure and services.
  References: Gil-Garcia & Martinez-Moyano, 2005; UN & ASPA, 2002.

Information
  Additional technological and organizational sophistication: Greater number of Web pages; statewide portal as the entry point with links to most of the state pages; more dynamic information (frequent updates).
  References: Gil-Garcia & Martinez-Moyano, 2005; Holden, Norris & Fletcher, 2003; Hiller & Bélanger, 2001; Layne & Lee, 2001; Moon, 2002; UN & ASPA, 2002.

Interaction
  Additional technological and organizational sophistication: Forms that can be downloaded; two-way communication through electronic mail; use of search engines; use of chats, forums, or other forms of interactive communication (service related); some customization (citizen profiles, use of passwords).
  References: Gil-Garcia & Martinez-Moyano, 2005; Hiller & Bélanger, 2001; Moon, 2002; UN & ASPA, 2002.

Transaction
  Additional technological and organizational sophistication: Online services (secure and completely online), including electronic payments (e.g., credit cards); more customization (use of passwords, citizen profiles, etc.); portal organized according to people's needs instead of government structures.
  References: Gil-Garcia & Martinez-Moyano, 2005; Hiller & Bélanger, 2001; Holden, Norris & Fletcher, 2003; Layne & Lee, 2001; Moon, 2002; UN & ASPA, 2002.

Integration
  Additional technological and organizational sophistication: Services portal with a single point of checkout (multiple agencies, same function, different levels of government); services portal with a single point of checkout (multiple agencies, different functions, same level of government); services portal with a single point of checkout (multiple agencies, different functions, different levels of government).
  References: Gil-Garcia & Martinez-Moyano, 2005; Hiller & Bélanger, 2001; Holden, Norris & Fletcher, 2003; Layne & Lee, 2001; Moon, 2002; UN & ASPA, 2002.

Political Participation
  Additional technological and organizational sophistication: Electronic vote; online participation.
  References: Hiller & Bélanger, 2001; Moon, 2002.
For each stage, we created a scale from 0 to 6 points and classified the states according to the number of characteristics and features that were found. We assigned the label "High Level" to state portals that obtained 5-6 points in a stage, "Medium Level" to state portals that obtained 3-4 points, and "Low Level" to state portals that obtained 1-2 points. The label "No Points" was assigned to portals that did not obtain any points in a stage (see Table 4).
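The level classification just described is a simple threshold rule, sketched below. The cut-offs follow the scale reported in the study (5-6 high, 3-4 medium, 1-2 low, 0 no points); the function name is ours, for illustration.

```python
def classify_level(points):
    """Map a portal's 0-6 stage score to the study's level labels.

    Thresholds follow the scale reported in the chapter:
    5-6 -> High Level, 3-4 -> Medium Level,
    1-2 -> Low Level, 0 -> No Points.
    """
    if points >= 5:
        return "High Level"
    if points >= 3:
        return "Medium Level"
    if points >= 1:
        return "Low Level"
    return "No Points"
```

Applying this rule independently per stage is what allows a single portal to sit at different levels in different stages, which is central to the chapter's argument.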
Main Findings

From 2005 to 2006 there are no great differences. Most of the Mexican portals have characteristics of the information stage. No states were at the "high level" in advanced stages such as transaction or political participation, and there was only one at the "high level" in interaction and in integration in 2006 (see Table 2).
Table 3 shows the number of portals classified at the "medium level" in each stage. There is no difference between 2005 and 2006 in the information stage. There is an increase in interaction, integration, and political participation. Finally, there was a slight decrease in the number of portals at the "medium level" in transaction (from 10 to 9). Table 4 summarizes the results for all levels and stages. The information stage is relatively stable, with about the same number of states at each level for 2005 and 2006. There is a slight improvement in the interaction stage, with more state portals at the "medium level" in 2006 than in 2005. There are no state portals at the "high level" of the transaction stage in either year, and there is an important increase in the number of state portals at the "low level" from 2005 to 2006. There is a slight decrease in the number of portals at the "high level" and "medium level" of the integration stage from 2005 to
Table 2. Evolutionary approach results for Mexican state portals (high level)

E-Government Stage/Component    2005    2006
Information                       15      18
Interaction                        1       1
Transaction                        0       0
Integration                        2       1
Political Participation            0       0
Table 3. Evolutionary approach results for Mexican state portals (medium level)

E-Government Stage/Component    2005    2006
Information                        5       5
Interaction                       11      16
Transaction                       10       9
Integration                       17      19
Political Participation            0       4
2006. Finally, there is an increase in the number of portals at the “medium level” of the political participation stage from 2005 to 2006, but no state portal is at the “high level” in this stage either in 2005 or in 2006.
E-Government Stages are not Mutually Independent

The first hypothesis of this study was that e-government stages were not mutually independent. This study shows evidence that several of the state portals have characteristics of more than one stage, and in fact the stages may be thought of as components of electronic government instead. For example, the cases of Sonora, Sinaloa, and Querétaro present characteristics and features of several stages simultaneously. For instance, Sonora is in the interaction stage in general, but is also at the low level of political participation and at the medium level of integration and transaction, sharing characteristics of all these stages in the same year. In fact, most state portals shared characteristics of more than one stage in 2005 and/or in 2006, and some of them can be classified
at the low level in early stages and at higher levels in advanced stages.
E-Government Stages are not Necessarily Consecutive

The second hypothesis of this study was that e-government stages are not necessarily consecutive. In this study, some state portals present characteristics of advanced stages without presenting characteristics of the early stages. For example, the portal of Baja California Norte had characteristics of transaction, integration, and political participation, but did not have characteristics of interaction. Similarly, Guanajuato is strong in interaction, but does not present a consecutive or linear progression in the other components. This shows that the "stages" are not necessarily consecutive, and governments can make decisions about where they want to focus their efforts. Again, it may be more useful to conceptualize e-government stages as interrelated components of e-government, which are not necessarily mutually exclusive or consecutive.
The Linear Progression Assumed in Evolutionary Models is Problematic

Finally, the third hypothesis of this study was that the linear progression assumed in evolutionary models is problematic when applied to some specific realities. For political, managerial, or technical reasons, some state portals can lose characteristics of certain stages and be classified as being in an advanced stage one year and in an early stage the next one. In this study, several states lost points in one or several of the stages: Chiapas in integration; Estado de México and Tabasco in transaction and integration; Puebla and Colima in information. These five states went from high level to medium level or from medium level to low level. As mentioned earlier, some of the reasons might be related to the political environment in the state. For example, in Chiapas and Estado de México there was a change in administration and a new governor took office between our first and second evaluations.
Table 4. Mexican state portals by stage/component (2005 & 2006)

Information
  2005 — High Level: Baja California Norte, Chiapas, Puebla, Colima, Estado de México, Hidalgo, Jalisco, Michoacán, Morelos, Nuevo León, Sonora, Tlaxcala, Tabasco, Tamaulipas, Distrito Federal. Medium Level: Guanajuato, Durango, Querétaro, Chihuahua. Low Level: Guerrero, Veracruz, Baja California Sur, Campeche. No Points: Oaxaca, Sinaloa, Nayarit, San Luis Potosí, Zacatecas, Quintana Roo.
  2006 — High Level: Morelos, Sonora, Sinaloa, Nuevo León, Michoacán, Chiapas, San Luis Potosí, Hidalgo, Estado de México, Yucatán, Jalisco, Oaxaca, Baja California Norte, Veracruz, Distrito Federal, Querétaro, Campeche, Tabasco. Medium Level: Colima, Zacatecas, Aguascalientes, Puebla, Baja California Sur. Low Level: Chihuahua. No Points: Coahuila, Durango, Guanajuato, Guerrero, Nayarit, Quintana Roo, Tamaulipas, Tlaxcala.

Interaction
  2005 — High Level: Sonora. Medium Level: Nuevo León, Puebla, Querétaro, Chiapas, Estado de México, Tabasco, Tamaulipas, Yucatán, Colima, Michoacán, Tlaxcala. Low Level: Guanajuato, Distrito Federal, Jalisco, Sinaloa, Baja California Norte, Morelos, San Luis Potosí, Aguascalientes, Chihuahua, Coahuila, Zacatecas, Guerrero, Hidalgo, Quintana Roo, Veracruz, Campeche. No Points: Oaxaca, Nayarit, Baja California Sur, Durango.
  2006 — High Level: Guanajuato. Medium Level: Sonora, Chiapas, Estado de México, Tamaulipas, Campeche, Aguascalientes, Baja California Sur, Colima, Michoacán, Nayarit, Nuevo León, Oaxaca, Puebla, Veracruz, Yucatán, Zacatecas. Low Level: Sinaloa, Morelos, San Luis Potosí, Jalisco, Distrito Federal, Querétaro, Tlaxcala, Chihuahua, Hidalgo, Tabasco, Baja California Norte. No Points: Coahuila, Durango, Guerrero, Quintana Roo.

Transaction
  2005 — High Level: None. Medium Level: Nuevo León, Querétaro, Baja California Norte, Estado de México, Tlaxcala, Sonora, Tabasco, Tamaulipas, Yucatán, Jalisco. Low Level: Sinaloa, Chihuahua, Guanajuato, Coahuila, Durango, Veracruz. No Points: Oaxaca, Chiapas, Distrito Federal, Morelos, Nayarit, Aguascalientes, San Luis Potosí, Zacatecas, Baja California Sur, Colima, Guerrero, Quintana Roo, Campeche, Michoacán, Hidalgo.
  2006 — High Level: None. Medium Level: Nuevo León, Chiapas, Aguascalientes, Guanajuato, Hidalgo, Jalisco, Sonora, Sinaloa, Zacatecas. Low Level: Morelos, Michoacán, San Luis Potosí, Estado de México, Yucatán, Tamaulipas, Oaxaca, Veracruz, Distrito Federal, Tabasco, Campeche, Querétaro, Puebla, Baja California Norte, Chihuahua, Coahuila, Nayarit, Guerrero. No Points: Baja California Sur, Colima, Durango, Quintana Roo, Tlaxcala.

Integration
  2005 — High Level: Estado de México, Chiapas. Medium Level: Durango, Chihuahua, Michoacán, Sonora, Distrito Federal, Jalisco, Sinaloa, Tabasco, Tamaulipas, Morelos, Yucatán, Coahuila, Zacatecas, Quintana Roo, Campeche, Hidalgo, Tlaxcala. Low Level: Nuevo León, Puebla, Querétaro, Oaxaca, Guanajuato, Baja California Norte, Nayarit, Veracruz, Aguascalientes, San Luis Potosí, Baja California Sur, Colima, Guerrero. No Points: None.
  2006 — High Level: Nuevo León. Medium Level: Tabasco, Tamaulipas, Hidalgo, Sonora, Zacatecas, Tlaxcala, Veracruz, Yucatán, Aguascalientes, Jalisco, Sinaloa, Colima, Estado de México, Distrito Federal, Morelos, Oaxaca, Querétaro, Quintana Roo. Low Level: Michoacán, San Luis Potosí, Campeche, Puebla, Baja California Norte, Chihuahua, Coahuila, Guerrero, Chiapas, Guanajuato, Baja California Sur, Durango. No Points: Nayarit.

Political Participation
  2005 — High Level: None. Medium Level: None. Low Level: Chiapas, Estado de México, Querétaro, Sinaloa, Sonora. No Points: Nuevo León, Puebla, Oaxaca, Guanajuato, Distrito Federal, Jalisco, Tabasco, Tamaulipas, Baja California Norte, Morelos, Nayarit, Yucatán, Aguascalientes, Chihuahua, Coahuila, San Luis Potosí, Zacatecas, Baja California Sur, Colima, Durango, Guerrero, Quintana Roo, Veracruz, Campeche, Michoacán, Hidalgo, Tlaxcala.
  2006 — High Level: None. Medium Level: Sonora, Sinaloa, San Luis Potosí, Tamaulipas. Low Level: Morelos, Michoacán, Baja California Norte. No Points: Aguascalientes, Baja California Sur, Campeche, Chiapas, Chihuahua, Coahuila, Colima, Distrito Federal, Durango, Estado de México, Guanajuato, Guerrero, Hidalgo, Jalisco, Nayarit, Nuevo León, Oaxaca, Puebla, Querétaro, Quintana Roo, Tabasco, Tlaxcala, Veracruz, Yucatán, Zacatecas.
Note: High Level = 5-6 points; Medium Level = 3-4 points; Low Level = 1-2 points; No Points = 0 points.
Other reasons may include budget reduction, staff changes, and technological advances.
Future Trends

This study shows that the main trend of Mexican state portals is towards information and interaction. In fact, these were the only two stages or components with a clear increase from 2005 to 2006. The study also shows that there was not much improvement in terms of the number of states at the "high level" and "medium level" in any of the other stages. In fact, from 2005 to 2006, the number of state portals in some stages remained the same, while in others this number was even smaller in 2006. The future trends for the Mexican state portals are not as clear as the evolutionary approach would imply (from information to political participation without going backwards). In addition, the evolutionary approach directs governments towards integration and political participation as the ultimate goals. However, this is not necessarily good or efficient for all governments. The specific characteristics of the government and the needs of the citizens, businesses, and other stakeholders should be taken into consideration. Future evaluation methods should consider these important elements and think carefully about the assumption that a single approach with standardized characteristics is equally useful for any type of e-government initiative in any place in the world.
Conclusion

Overall, this study demonstrates that evolutionary approaches are useful for understanding and evaluating e-government, but have important limitations. The evidence presented shows that (1) e-government stages are not mutually exclusive; (2) e-government stages are not necessarily consecutive; and (3) the linear progression assumed in evolutionary models is problematic when applied to some specific realities. Evaluating e-government initiatives, including Web portals, is made more difficult because they can have characteristics and features
identified with multiple stages. However, these approaches are still useful in providing practical guidance and direction for improvement. One potential alternative, which exploits the strengths and reduces the weaknesses of the evolutionary approach, is to think about information, interaction, transaction, integration, and political participation as components rather than stages. State portals would then be expected to have a heterogeneous combination of these components, all of which can be thought of as important for the development of better e-government initiatives. Several important questions remain unanswered. For instance, how can innovation be measured? How can the perspective of citizens be incorporated? What is the role of government-wide policies? How can costs and benefits be balanced? Is there a "right" combination of components for certain contexts or levels of government? These and other interrelated questions should be the focus of future research on this topic.
Future Research Directions

This study focuses on state portals and shows some of the limitations of evolutionary approaches for understanding electronic government. Future research should perform similar analyses with federal agency portals and local government portals. Such studies would uncover differences and similarities among the three levels of government and help determine whether the limitations presented in this chapter are unique to state portals. Similarly, the present study analyzes state portals in Mexico. Being a Latin American country, Mexico has important differences from developed countries and some similarities to other Latin American and developing countries. Future research should explore the applicability and usefulness of evolutionary approaches for understanding e-government in other national and cultural contexts. Studies developing these evolutionary approaches have been conducted in the United States, but research assessing the limitations and potential of these approaches is scarce in the literature. Two additional research opportunities are the assessment of software packages that can automatically evaluate some
technical aspects of government portals, and studies focusing on how users interact with the portals and how this interaction influences the usefulness and value that these portals generate.
Acknowledgment

This work was partially supported by the National Science Foundation under Grant No. 0131923. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

References

Bourquard, J. A. (2003). What's up with e-government? Retrieved April 6, 2005, from http://www.ncsl.org/programs/pubs/slmag/2003/303egov.htm

Dawes, S. S., Pardo, T., & DiCaterino, A. (1999). Crossing the threshold: Practical foundations for government services on the World Wide Web. Journal of the American Society for Information Science, 50(4), 346-353.

Garson, G. D. (2004). The promise of digital government. In A. Pavlichev & G. D. Garson (Eds.), Digital government: Principles and best practices (pp. 2-15). Hershey, PA: Idea Group Publishing.

Gartner. (2000). Gartner says U.S. e-government spending to surpass $6.2 billion by 2005. Retrieved April 6, 2005, from http://www.gartner.com/5_about/press_room/pr20000411c.html

Gil-García, J. R., & Luna-Reyes, L. F. (2006). Integrating conceptual approaches to e-government. In M. Khosrow-Pour (Ed.), Encyclopedia of e-commerce, e-government and mobile commerce. Hershey, PA: Idea Group Inc.

Gil-García, J. R., & Martinez-Moyano, I. (2005). Exploring e-government evolution: The influence of systems of rules on organizational action (NCDG Working Paper No. 05-001). Cambridge, MA: National Center for Digital Government, John F. Kennedy School of Government, Harvard University.

Grönlund, Å. (Ed.). (2001). Electronic government: Design, applications, and management. Hershey, PA: Idea Group Publishing.

Hiller, J. S., & Bélanger, F. (2001). Privacy strategies for electronic government. In M. A. Abramson & G. E. Means (Eds.), E-government 2001 (pp. 162-198). Lanham, MD: Rowman & Littlefield Publishers.

Holden, S. H., Norris, D. F., & Fletcher, P. D. (2003). Electronic government at the local level: Progress to date and future issues. Public Performance and Management Review, 26(4), 325-344.

Layne, K., & Lee, J. (2001). Developing fully functional e-government: A four stage model. Government Information Quarterly, 18(2), 122-136.

Moon, M. J. (2002). The evolution of e-government among municipalities: Rhetoric or reality? Public Administration Review, 62(4), 424-433.

Prins, C. (2001). Electronic government: Variations on a concept. In J. E. J. Prins (Ed.), Designing e-government: On the crossroads of technological innovation and institutional change (pp. 1-5). The Hague, Netherlands: Kluwer Law International.

Sandoval-Almazán, R., & Gil-García, J. R. (2006). E-government portals in Mexico. In M. Khosrow-Pour (Ed.), Encyclopedia of e-commerce, e-government and mobile commerce. Hershey, PA: Idea Group Inc.

Schelin, S. H. (2003). E-government: An overview. In G. D. Garson (Ed.), Public information technology: Policy and management issues (pp. 120-137). Hershey, PA: Idea Group Publishing.

UN & ASPA. (2002). Benchmarking e-government: A global perspective. New York: United Nations Division of Public Economics and Public Administration and the American Society for Public Administration.

West, D. M. (2005). Digital government: Technology and public sector performance. Princeton, NJ: Princeton University Press.
Further Reading

Chengalur-Smith, I., & Duchessi, P. (1999). The initiation and adoption of client-server technology in organizations. Information & Management, 35, 77-88.

Cresswell, A. M. (2004). Return on investment in information technology: A guide for managers. Albany, NY: Center for Technology in Government, University at Albany, SUNY.

Cresswell, A. M., & Pardo, T. A. (2001). Implications of legal and organizational issues for urban digital government development. Government Information Quarterly, 18, 269-278.

Cresswell, A. M., Pardo, T. A., Canestraro, D. S., Dawes, S. S., & Juraga, D. (2005). Sharing justice information: A capability assessment toolkit. Albany, NY: Center for Technology in Government, University at Albany, SUNY.

Cushing, J., & Pardo, T. A. (2005). Research in the digital government realm. IEEE Computer, 38(12), 26-32.

Dawes, S. S., Gregg, V., & Agouris, P. (2004). Digital government research: Investigations at the crossroads of social and information science. Social Science Computer Review, 22(1), 5-10.

Dawes, S. S., Pardo, T., & DiCaterino, A. (1999). Crossing the threshold: Practical foundations for government services on the World Wide Web. Journal of the American Society for Information Science, 50(4), 346-353.

Dawes, S. S., Pardo, T. A., Simon, S., Cresswell, A. M., LaVigne, M., Andersen, D., et al. (2004). Making smart IT choices: Understanding value and risk in government IT investments. Albany, NY: Center for Technology in Government.

Dimitrova, D. V., & Chen, Y.-C. (2006). Profiling the adopters of e-government information and services: The influence of psychological characteristics, civic mindedness, and information channels. Social Science Computer Review, 24(2), 172-188.

Fountain, J. (2001). Building the virtual state: Information technology and institutional change. Washington, DC: Brookings Institution Press.

Gil-Garcia, J. R., & Helbig, N. (2006). Exploring e-government benefits and success factors. In A.-V. Anttiroiko & M. Malkia (Eds.), Encyclopedia of digital government. Hershey, PA: Idea Group Inc.

Ho, A. T.-K. (2002). Reinventing local governments and the e-government initiative. Public Administration Review, 62(4), 434-444.

Landsbergen, D., Jr., & Wolken, G., Jr. (2001). Realizing the promise: Government information systems and the fourth generation of information technology. Public Administration Review, 61(2), 206-220.

Luna-Reyes, L. F., Zhang, J., Gil-Garcia, J. R., & Cresswell, A. M. (2005). Information systems development as emergent socio-technical change: A practice approach. European Journal of Information Systems, 14(1), 93-105.

Pardo, T. A., Cresswell, A. M., Thompson, F., & Zhang, J. (2006). Knowledge sharing in cross-boundary information system development in the public sector. Information Technology and Management, 7(4), 293-313.

Reid, V., & Bardski, B. (2004). Communication and culture: Designing a knowledge-enabled environment to effect local government reform. Electronic Journal of e-Government, 2(3), 197-206.

Richter, P., Cornford, J., et al. (2004). The e-citizen as talk, as text and as technology: CRM and e-government. Electronic Journal of e-Government, 2(3), 207-218. Retrieved October 10, 2007, from http://www.ejeg.com/volume-2/volume2-issue3/v2-i3-art7.htm

Rocheleau, B. (2000). Prescriptions for public-sector information management: A review, analysis, and critique. American Review of Public Administration, 30(4), 414-435.

Snellen, I. (2000). ICTs and the future of democracy. International Journal of Communications Law and Policy, 5.

Taylor Nelson Sofres. (2001). Government online: An international perspective (Benchmarking research study). Retrieved November 5, 2007, from http://unpan1.un.org/intradoc/groups/public/documents/APCITY/UNPAN007044.pdf

West, D. M. (2003). Urban e-government: An assessment of city government websites. Providence, RI: Brown University. Retrieved July 3, 2007, from http://www.insidepolitics.org/egovt03city.html

West, D. M. (2004). State and federal e-government in the United States (pp. 1-22). Retrieved July 3, 2007, from http://www.insidepolitics.org/

Zhang, J., Cresswell, A. M., & Thompson, F. (2002). Participants' expectations and the success of knowledge networking in the public sector. Paper presented at the AMCIS Conference, Texas.
Terms and Definitions

Electronic Government: The use of information and communication technologies (ICTs) to improve and facilitate citizens' relationships with government through democratic procedures, cost-effective transactions, and efficient regulations.

Evolutionary Approach to E-Government: A way to study e-government that identifies different stages as the right path for e-government evolution.

Integration Stage: The stage of the evolutionary approach in which government organizations (vertically or horizontally) can define their internal organization to create a portal.

Openness: A measure of the freedom to search, collect, and exchange governmental information related to public services or to the bureaucratic structure.

Political Participation: The stage of the evolutionary approach related to citizens' relationship with the government through online voting, discussions, and accountability mechanisms using information technologies.

State Portal: A vertical Web site whose content refers mostly to a national, state, or local government entity.

Transaction Stage: The stage of the evolutionary approach in which government services are completely online for citizens.
Chapter IV
Issues and Trends in Internet-Based Citizen Participation

Stephen K. Aikins
University of Nebraska at Omaha, USA
introduction
background
The use of Internet technology to further citizen participation is believed to hold great promise to enhance democratic governance by allowing citizens to access public information and interact with government officials, promoting better accountability of public officials to citizens through efficient and convenient delivery of services, and producing fertile ground for reinvigorated civil society (Barber, 1984; La Port et al., 2000; Scavo & Yuhang Shi, 1999). Empirical evidence suggests that some of the promises of bridging the gap among governments and citizens through enhanced interaction between citizens and government, and between citizens themselves are yet to be fulfilled (Chadwick & May, 2001; the Global e-Policy and e-Government Institute and Rutgers University e-Governance Institute, 2003; Hale, Musso & Weare, 1999; Wales, Kerns, Bend & Stern,2002; West, 2001). This chapter reviews the opportunities and challenges of Internet-based citizen participation, the trend noted in the findings of some of the empirical studies and attempts to explain the reason the Internet has failed in its putative potential to bring citizens closer to their governments.
As the Internet and the World Wide Web (WWW) have come to dominate discussions of organizational change, technology has come to be seen as the solution to a variety of governmental and administrative problems. The Internet has revolutionized the way government operates and delivers public services in recent years. Internet technology is increasingly being applied in government organizations to improve e-services, make them less expensive to deliver, and make organizations more responsive to citizens (Negroponte, 1995; Rheingold, 1993; West, 2001, 2004). By utilizing the Internet, government agencies can seek citizen opinion on particular issues to guide policy making, and can potentially help resolve the endemic problems of the state-citizen relationship by providing unmediated access to government officials and minimizing agenda management. However, the issues noted in empirical studies indicate that the opportunities and challenges of Internet-based citizen participation may influence the extent to which this potential is realized.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
ISSUES IN INTERNET-BASED CITIZEN PARTICIPATION

Internet-based citizen participation provides some opportunities to minimize or avoid the decision-making delays, special interest influence, and spatial-temporal barriers that plague traditional citizen participation. On the other hand, current trends in Web-enabled governance suggest a number of challenges, including some unresolved policy and administrative issues of the Internet, as well as its potential to erode the roles of mediating institutions and to alienate citizens.
OPPORTUNITIES

Minimize Decision-Making Delays and Special Interest Influence

Some scholars argue that while traditional citizen participation may appear in the abstract to be an unadulterated good, in reality it presents a number of threats and challenges. For example, unorganized, direct participation by citizens can result in the stalling of economic development or innovations of any kind, because participation, like all open procedures, can halt progress toward substantial changes in policies or programs (Cleveland, 1975; Sabatier, 1986). Others argue that because higher income people have more time and more skill to act in their interest through channels of public participation, lower income people may be better served by strong representation than by widened avenues for participation (Verba & Nie, 1987). Citizen participation can, in some cases, undermine the reliance on science and scientific thinking in policy development, resulting in a threat to public health; public health is imperiled, for instance, when a delay in a decision caused by expanding public participation makes it more difficult to manage an epidemic. The Internet has the potential to address these problems in diverse ways. As Internet technology becomes widespread with universal access, the cost of citizen participation could be substantially reduced. Internet technology should
make it possible to speed up the process of citizen participation. This speed-up effect could improve e-management by addressing the problem of delays in worthwhile projects, such as the implementation and application of scientific discoveries. In addition, there is some potential for technologies such as the Internet to help organize and channel citizen participation in effective ways that lead to "system integration" rather than conflict (Gutterbock, 1980).
Avoid Spatial and Temporal Barriers, and Reduce Costs to Citizens

Klein (1999) argues that three structural characteristics of the Internet most greatly differentiate it from other technologies for creating participation forums: the Internet is free from constraints of space, is free from constraints of time, and lowers the costs of citizen participation. These characteristics allow an online forum to avoid the temporal barriers of a meeting-hall forum, lower the expense of traveling to the city hall or the meeting hall, and bring greater continuity to the citizen-government relationship (Klein, 1999). A meeting hall suffers from severe spatial and temporal barriers in the sense that it can impose considerable hardship on those who reside far away and also requires synchronization of schedules. From a strictly economic perspective, a rational individual might determine that the cost of involvement exceeds the benefits, and so might refrain from participation (Olsen, 1968). Internet users who cannot attend meeting-hall forums can now use the Internet to take part in discussions. Although the Internet has great potential for e-democracy, empirical studies suggest the trends in Web-enabled governance at all levels of government have yet to serve as catalysts for the Internet to bring citizens closer to their governments.
TRENDS IN WEB-ENABLED GOVERNANCE

In recent years, scholars in public administration and allied disciplines have focused on the impact of Internet technology in public organizations and the implications for governance and for public
administration (Chadwick & May, 2001; Kearns, Bend, & Stern, 2002; Moon, 2002; Hale, Musso, & Weare, 1999; Scavo & Shi, 1999; Stanley & Weare, 2004; Warren & Weschler, 1999). Some of these studies have evaluated the openness of the Websites of public organizations in several countries from the standpoint of transparency and interactivity (La Porte et al., 2000); others have investigated whether public agencies are taking advantage of the interactive features of the WWW to improve service delivery, democratic responsiveness, and public outreach (West, 2001, 2004); and still others have measured citizen participation via the Websites of various cities in different parts of the world (Global e-Policy and e-Governance Institute & Rutgers e-Governance Institute, 2003, 2005). The findings of many of these studies show that although Web-enabled governance has produced some notable results in posting information and providing services, the Internet has fallen short of its putative potential to enhance Internet-based citizen participation by bringing citizens closer to their governments, and few public agency Websites emphasize democratic participation. West (2001) argues that although e-government proponents tout its potential to bring citizens closer to their governments, many government Websites have not taken full advantage of the benefits of the technology to facilitate this connection. In an analysis of 290 California municipal Websites, Hale et al. (1999) also concluded that, in general, information provision is patchy with low interactivity, and current city use of Web technology does little to foster political community through deliberative and value-infused communication.
In a study of e-democracy practices in local authorities in England and Wales, Kearns, Bend, and Stern (2002) argue that despite extensive policy and research attention on local e-government, there has been far too little focus on the equally if not more important issue of using the Internet to support citizen-centered democratic processes that allow citizen participation. This omission, in the authors' view, effectively turns the e-government debate into one focused on little more than the application of e-business processes to the
public sector. It ignores the issue of how digital technologies can change the democratic aspects of the state-citizen relationship and, indeed, the ways in which electronic democracy may be able to help in meeting wider policy goals. In addition, it further calls into question the real extent of political commitment to respond, and to respond comprehensively, to the onset of the networked world. In a study that examines how information and telecommunication technologies capable of reshaping structures of governance have been integrated into policy pronouncements regarding electronic government and the 'renewal of democracy' by the American and British governments, Chadwick and May (2001) argue that discussion about electronic government is framed by a 'managerial model' of interaction that aims to make state services more 'efficient' (efficiency here being defined as increased speed of delivery combined with a reduction in costs). As a result, one of the key promises of electronic government, democratization through enhanced interaction between citizens and government and between citizens themselves, is yet to be fulfilled. In a study that analyzes the most relevant issues driving e-government in 100 cities in the world, the Global e-Policy and e-Governance Institute and Rutgers e-Governance Institute (2003) found that 56.3 percent of the city Websites did not allow users to provide comments or feedback, either to individual government departments/agencies through online forms or to elected officials. In addition, 73.8 percent of the city Websites did not have online bulletin board or chat capabilities for gathering citizen input on public issues, and 78.8 percent did not have discussion forums on policy issues.
In its most recent study, which replicated the 2003 study, the Global e-Policy and e-Governance Institute and Rutgers e-Governance Institute (2005) found that only 31 percent of municipal Websites worldwide provide online forms for feedback to government departments or agencies. In addition, 68 percent did not have online bulletin board or chat capabilities for gathering citizen input, and 75 percent did not have online discussion forums on policy issues. These findings, which do not suggest significant improvement over
the 2003 findings, are not surprising considering the emphasis placed by governments at all levels on service delivery and efficiency in their e-government practices. More recent studies (Thomas & Streib, 2003, 2005) also conclude that many citizens visit government Web sites for e-commerce and e-research rather than to communicate opinions or complaints to government. This could well be explained by the fact that, on the government side, although many researchers emphasize the potential role of Web-enabled governance in citizen participation, Web-enabled governance in practice is equated with online service delivery for the purposes of economy, efficiency, and responsiveness. In addition, the very nature of the Internet poses some challenges for Internet-based citizen participation.
THE CHALLENGES

Unresolved Policy and Administrative Issues

The Internet raises key policy and administrative issues related to the use of the technology for citizen participation and access. These issues include budgetary constraints, questions regarding the cost of online citizen participation to the government and how it is financed, the cost-benefits of the capabilities and data access, and the technological potential and its implications for administration and public policy. In a recent study, Aikins (2005) found that 61 percent of local governments surveyed spent less than 0.25 percent of their annual information technology budget to support Internet-based citizen participation. In addition, only 38 percent of the local governments surveyed provided funds for Website features that facilitated Internet-based citizen participation. This may be due to the surveyed local government officials' less enthusiastic beliefs about Internet-based citizen participation: in the same study, while 75 percent of respondents expressed very strong belief in taking citizen opinion into account in making decisions, only 37 percent expressed belief in using the city Website to solicit citizen opinion.

As Arterton (1987) argued, improvements in citizen access to decision making and broadened participation often came at substantial costs, and those who were bearing the costs tended to want a substantial say in setting the agenda. These barriers, which exist today, make it likely that, at least in the near future, the Internet's democratic potential will be exploited by relatively elite groups of citizens with the money, access to technology, skills, and general education. This implies that in the short term, local governments need to formulate and implement appropriate policies and deploy adequate resources to support Internet-based citizen participation in order to ensure a level playing field.

Potential Erosion of the Role of Mediating Institutions and Alienation of Citizens

One other constraint on the promotion of Internet-based citizen participation by local government officials could be that the Internet, as currently configured, could further erode the role of important mediating institutions. As more citizens become used to contacting their governments directly, they may be less likely to work through existing civic organizations and interest groups. Hence, how governments structure, design, and integrate various capabilities and advertise their own Websites may substantially affect citizens' involvement in e-democracy and Internet-based citizen participation. Klein (1999) argues this suggests a program of action for those who would promote greater citizen participation in public affairs; examples would be greater government investment in Internet technology, broad-based implementation in government, and the education of citizens to enhance the skills needed for effective online collaboration (James & Rykert, 1997). Some observers argue that information and telecommunication technologies such as the Internet have increased the divide between rich and poor, with correspondingly unequal effects on civic engagement and democracy. The Internet, by introducing digital communications to civic life, may have layered a digital divide over the inequalities
that plague most of the world's political systems. Perhaps this is the reason many local government officials do not see the Internet as the primary medium of communication with citizens. In a survey of local government chief administrative officers in five Midwestern states, 83 percent of the respondents indicated they prefer to communicate with citizens outside the medium of the Internet. Indeed, 66 percent indicated they prefer communicating with citizens via regular post office mail, newspapers, and face-to-face communication due to the universal access and personal nature of these media (Aikins, 2005). The argument here is that less educated and low income citizens who do not have access to the Internet, or who have little or no literacy skills, generally lack the ability to navigate the World Wide Web and to express themselves effectively in writing. Consequently, such citizens may become more alienated from the political process and be marginalized from civic life as a result of the Internet.
FUTURE TRENDS

The relentless focus of governments at all levels on using the Internet primarily for service delivery and efficiency jeopardizes the realization of the ideals of electronic democracy. While the issues of service delivery, security, and interoperability crowd public servants' e-government agendas, there is little incentive to invest money and effort in electronic democracy projects such as Internet-based citizen participation. The empirical evidence appears to suggest that until there is a fundamental shift in focus regarding the public sector strategic goals to be accomplished through the Internet, this trend is likely to continue for the foreseeable future. Arguably, there are many considerations and potential implications in designing and implementing e-government, including impacts on economic, social, cultural, educational, and political factors and disturbances to the status quo in these areas. This implies that future adoption of e-government should take into account social
concerns such as the digital divide, or the effect of inaccessibility of e-government and other digital resources on the structure of society, and take the necessary steps to remedy the adverse effects of any disturbances to the status quo. One area of e-democracy that has the potential for increased government investment and citizen participation in local and state government elections is e-voting. As democracy's main means of expression, voting exists in all spheres of society. Although electronic voting has existed for many years in the form of voting machines, the development of the Internet has opened up a new field of expression by enabling distance voting. Internet voting in the Arizona Democratic primary, for example, is viewed by some as a new paradigm of democratic expression. In addition to the anticipated management advantages linked to electronic voting, such as instant analysis, other advantages such as greater simplicity could result in a renewed sense of enthusiasm among citizens and help counteract falling citizen participation in local and state government elections. However, electronic voting raises many questions regarding polling security and confidentiality. As government investment in Internet e-voting grows, substantial resources are likely to be deployed to strengthen cyber security and, hopefully, enhance citizen participation in the political process.
CONCLUSION

The empirical evidence suggests an e-government trend of greater interest in developing government Websites that embody a market-based model of Web-enabled governance, a vehicle for government to "service" its "customers" through e-commerce. Perhaps this is because various governments think the opportunities of Internet-based citizen participation do not outweigh the challenges. Consequently, this market-based approach to e-government overshadows concerns regarding democratic governance, with implications for citizen participation, deliberation, public accountability, and citizenship. Potentially, one way of addressing these concerns is to widen the
avenue for Internet-based citizen participation, including e-voting. As Stanley and Weare (2004) point out, Web-based participation channels can open up issue networks to new voices and interests and contribute to better governance. If local governments are to realize the benefits of the Internet across the spectrum of its utilization, Web-enabled governance should be broadened to enhance interactivity, voting, and citizen online discussion and feedback to their governments.
FUTURE RESEARCH DIRECTIONS

There is a need for an empirical study to determine how Internet-based citizen participation has changed the democratic aspects of state-citizen relationships and helped meet wider public policy goals. Although the literature on e-government discusses how the Internet can help improve democratic governance and Internet-based citizen participation, few empirical studies have been done to demonstrate such benefits and their implications for public policy. This situation has hindered the ability to promote, on a wider scale, the need for public investments in e-democracy and Internet-based citizen participation to public administration practitioners. The results of such an empirical study could help elevate the e-government debate from its current focus on little more than the application of e-business processes in the public sector into one that includes democratic governance and Internet-based citizen participation policy. An empirical investigation is also needed to determine the effects of Internet-based citizen participation and e-democracy on the roles of intermediary bodies in public affairs. Some have argued that the technically democratic nature of the Internet indicates the medium embodies unprecedented potential to render entities like political parties and large-scale media eventually obsolete, in an age when information can be exchanged without obstacle and active citizens can be involved directly in decision-making processes. Political parties have been essential in national democratic systems for aggregating interest groups and collective opinion. Established media networks, despite their known contentions, play a significant public role by channeling and contextualizing information streams, providing a common frame of reference and analysis of prevailing issues, and relieving the public of filtering significant amounts of information simultaneously. Therefore, the capacity of the Internet as an effective medium for citizens to deliberate, find consensus, and help reach political and policy decisions without intermediary bodies should be empirically explored. Further research is also needed to provide empirical evidence on the impact of Internet-based citizen participation on the overall decision-making processes of e-democracy. Some argue that online citizen participation reduces democracy purely to its deliberative aspects and disregards decision-making processes. Since online discussions are viewed by some as non-generative, often suffering from the dominance of a few and drawing only a select audience, the democratic nature of Internet-based citizen participation is worthy of rigorous scholarly investigation.
REFERENCES

Aikins, S. (2005). Web-enabled governance: The challenge of pursuing Internet-based citizen participation. Doctoral dissertation.

Arterton, C. (1987). Teledemocracy: Can technology protect democracy? Newbury Park, CA: Sage Publications.

Barber, B. (1984). Strong democracy: Participatory politics for a new age. Berkeley: University of California Press.

Chadwick, A., & May, C. (2001). Interaction between states and citizens in the age of the Internet: "E-government" in the United States, Britain and the European Union. Paper presented at the American Political Science Association Annual Meeting.
Cleveland, H. (1975). How do you get everybody in on the act and still get some action? Public Management, 57, 3-6.
Sabatier, P. (1986). Top-down and bottom-up approaches to implementation research: A critical analysis and suggested synthesis. Journal of Public Policy, 6, 21-28.
Hale, M., Musso, J., & Weare, C. (1999). Developing digital democracy: Evidence from California municipal web pages. In B. N. Hague & B. D. Loader (Eds.), Digital democracy: Discourse and decision making in the information age.
Scavo, C., & Shi, Y. (1999). World Wide Web site design and use in public management. In G. D. Garson (Ed.), Information technology and computer applications in public administration: Issues and trends. Hershey, PA: Idea Group.
The Global e-Policy and e-Governance Institute & Rutgers University e-Governance Institute. (2003). Assessing websites and measuring e-government index among 100 world cities. Study sponsored by the Division of Public Administration and Development, Department of Economic and Social Affairs, United Nations.
Stanley, J. W., & Weare, C. (2004). The effects of Internet use on political participation: Evidence from an agency online discussion forum. Administration and Society, 36(5), 503-527.
Gutterbock, T. M. (1980). Machine politics in transition: Party and community in Chicago. Chicago: University of Chicago Press.

James, M., & Rykert, L. (1997). Working together online. Toronto, Ontario: Web Networks. Retrieved August 16, 2001, from http://community.web.net/wto

Kearns, I., Bend, J., & Stern, B. (2002). E-participation in local government. Retrieved July 12, 2002, from www.ippr.org

Klein, H. K. (1999). Tocqueville in cyberspace: Using the Internet for citizen associations. The Information Society, 15, 213-220.

La Porte, T., Demchak, C., de Jong, M., & Friis, C. (2000). Democracy and bureaucracy in the age of the web: Empirical findings and theoretical speculations. Cyberspace Policy Research Group (CyPRG).

Moon, M. J. (2002). The evolution of e-government among municipalities: Rhetoric or reality? Public Administration Review, 62(4), 424-433.

Negroponte, N. (1995). Being digital. New York: Alfred A. Knopf.
Thomas, J. C., & Streib, G. (2003). The new face of government: Citizen-initiated contacts in the era of e-government. Journal of Public Administration Research and Theory, 13(1), 83-102.

Thomas, J. C., & Streib, G. (2005). E-democracy, e-commerce, and e-research: Examining the electronic ties between citizens and governments. Administration and Society, 37(3), 259-280.

Verba, S., & Nie, N. H. (1987). Participation in America: Political democracy and social equality. Chicago: University of Chicago Press.

Warren, M. A., & Weschler, L. F. (1999). Electronic governance on the Internet. In G. D. Garson (Ed.), Information technology and computer applications in public administration: Issues and trends. Hershey, PA: Idea Group.

West, D. M. (2001, August 30-September 2). E-government and the transformation of public sector service delivery. Paper presented at the Annual Meeting of the American Political Science Association, San Francisco.

West, D. M. (2004). E-government and the transformation of service delivery and citizen attitudes. Public Administration Review, 64(1), 15-27.
Olsen, M. (1968). The logic of collective action. Cambridge, MA: Harvard University Press.

Rheingold, H. (1993). The virtual community: Homesteading on the electronic frontier. Reading, MA: Addison-Wesley.

FURTHER READING
Anderson, D. M. M., Cornfield, M., & Arterton, C. F. (Eds.). (2002). Civic web: Online politics and democratic values. NY: Rowman & Littlefield.
Barabas, J. (2002, September 20-22). Virtual deliberation: Knowledge from online interaction versus ordinary discussion. Paper presented at the Prospects for Electronic Democracy Conference, Carnegie Mellon University, Pittsburgh, PA.

Beierle, T. C. (2002, September 20-22). Engaging the public through online policy dialogues. Paper presented at the Prospects for Electronic Democracy Conference, Carnegie Mellon University, Pittsburgh, PA.

Browning, G. (2002). Electronic democracy: Using the Internet to transform American politics. Medford: CyberAge Books.

Cavanaugh, J. W. (2000). E-democracy: Thinking about the impact of technology on civic life. National Civic Review, 89(3), 229-234.

Chadwick, A. (2003). Bringing e-democracy back in: Why it matters for future research on e-governance. Social Science Computer Review, 21(4), 443-455.

Danziger, J. N., & Andersen, K. (2002). Impacts of information technology on public administration: An analysis of empirical research from the 'golden age' of transformation. International Journal of Public Administration, 25(5), 591-627.

Elberse, A., Hale, M. L., & Dutton, W. H. (2000). Guiding voters through the net: The Democracy Network in a California primary election. In K. L. Hacker & J. van Dijk (Eds.), Digital democracy: Issues of theory and practice (pp. 130-148). Thousand Oaks, CA: SAGE Publications.

Frissen, P. H. A. (1999). Politics, governance, and technology: A postmodern narrative on the virtual state. Cheltenham, UK: Edward Elgar.

Gibson, R. (2001). Elections online: Assessing Internet voting in light of the Arizona Democratic primary. Political Science Quarterly, 116(4), 561-583.

Hacker, K. L., & Van Dijk, J. (2000). What is digital democracy? In K. L. Hacker & J. van Dijk (Eds.), Digital democracy: Issues of theory and practice (pp. 1-9). Thousand Oaks, CA: SAGE Publications.

Hoff, J., Horrocks, I., & Tops, P. (Eds.). (2000). Democratic governance and new technology: Technologically mediated innovations in political
practice in Western Europe. London: Routledge.

Holzer, M., & Kim, S.-T. (2004). Digital governance in municipalities worldwide: An assessment of municipal web sites throughout the world. National Center for Public Productivity.

Jankowski, N. W., & Van Selm, M. (2000). The promise and practice of public debate in cyberspace. In K. L. Hacker & J. van Dijk (Eds.), Digital democracy: Issues of theory and practice (pp. 149-165). Thousand Oaks, CA: SAGE Publications.

Jankowski, N. W., & Van Os, R. (2002, September 20-22). Internet-based political discourse: A case study of electronic democracy in the city of Hoogeveen. Paper presented at the Prospects for Electronic Democracy Conference, Carnegie Mellon University, Pittsburgh, PA.

Kippen, G., & Jenkins, G. (2002, September 20-22). The challenge of e-democracy for political parties. Paper presented at the Prospects for Electronic Democracy Conference, Carnegie Mellon University, Pittsburgh, PA.

Klein, H. K. (1999). Tocqueville in cyberspace: Using the Internet for citizen associations. The Information Society, 15(4), 213-220.

Korac-Kakabadse, A., & Korac-Kakabadse, N. (1999). Information technology's impact on the quality of democracy. In R. Heeks (Ed.), Reinventing government in the information age: International practice in IT-enabled public sector reform (pp. 211-228). London: Routledge.

La Porte, T., Demchak, C., de Jong, M., & Friis, C. (2000, August). Democracy and bureaucracy in the age of the web: Empirical findings and theoretical speculations. Paper presented at the International Political Science Association.

Milward, H. B., & Snyder, L. O. (1996). Electronic government: Linking citizens to public organizations through technology. Journal of Public Administration Research and Theory, 6(2), 261-275.

Moon, M. J. (2003, January 6-9). Can IT help government to restore public trust? Declining public trust and potential prospects of IT in the public sector. Paper presented at the 36th Annual Hawaii International Conference on System Sciences, Big Island, HI.
Norris, P. (Ed.). (1999). Critical citizens: Global support for democratic government. Oxford: Oxford University Press.

O'Looney, J. (2003). Using technology to increase citizen participation in government: The use of models and simulation. IBM Endowment for the Business of Government.

Ranerup, A. (1999). Internet-enabled applications for local government democratization: Contradictions of the Swedish experience. In R. Heeks (Ed.), Reinventing government in the information age: International practice in IT-enabled public sector reform (pp. 177-193). London: Routledge.

Rosen, T. (2003). E-democracy in practice: Swedish experiences of a new political tool. Retrieved April 6, 2003, from http://www.svekom.se/skvad/E-democracy-en.pdf

Samuel, A. (2002, September 20-22). From digital divide to digital democracy: Strategies from the community networking movement and beyond. Paper presented at the Prospects for Electronic Democracy Conference, Carnegie Mellon University, Pittsburgh, PA.

Solop, F. I. (2001). Digital democracy comes of age: Internet voting and the 2000 Arizona Democratic primary election. PS: Political Science and Politics, 34(2), 289-293.

Stanley, J. W., Weare, C., & Musso, J. (2002, September 20-22). Participation, deliberative democracy, and the Internet: Lessons from a national forum on commercial vehicle safety. Paper presented at the Prospects for Electronic Democracy Conference, Carnegie Mellon University, Pittsburgh, PA.

Thomas, J. C. (2004). Public involvement in public administration in the information age: Speculations on the effects of technology. In M. Mälkiä, A.-V. Anttiroiko, & R. Savolainen (Eds.), eTransformation in governance: New directions in government and politics (pp. 67-84). Hershey: Idea Group Publishing.
Weber, L. M. (2002, September 20-22). A survey of the literature on the Internet and democracy. Paper presented at the Prospects for Electronic Democracy Conference, Carnegie Mellon University, Pittsburgh, PA.

Witschge, T. (2002, September 20-22). Online deliberation: Possibilities of the Internet for deliberation. Paper presented at the Prospects for Electronic Democracy Conference, Carnegie Mellon University, Pittsburgh, PA.
TERMS AND DEFINITIONS

E-Commerce: The exchange of money for goods and services over the Internet. Examples include citizens paying taxes and government buying office supplies via the Internet.

E-Government: Government's use of information and communication technology (ICT) to exchange information and services with citizens, businesses, and other arms of government. E-government may be applied by the legislature, judiciary, or administration in order to improve internal efficiency, the delivery of public services, or processes of democratic governance. Its components are e-services, e-management, e-democracy, and e-commerce.

E-Democracy: The use of electronic communications technologies, such as the Internet, to enhance democratic processes within a democratic republic or representative democracy. Typically, the kinds of enhancements sought by e-democracy proponents are framed in terms of making processes more accessible; making citizen participation in public policy decision making more expansive and direct, so as to enable broader influence on policy outcomes; increasing transparency and accountability; and keeping government closer to the consent of the governed.

E-Management: The use of information technology to improve the management of government by streamlining government business processes and improving the flow of information within government.
Issues and Trends in Internet-Based Citizen Participation
E-Services: The electronic delivery of government information, programs, and services. This often, but not always, takes place over the Internet.

Internet-Based Citizen Participation: The use of the Internet to support active citizen involvement in policy deliberation and decision making. This includes using government Web sites to solicit citizens' opinions on policies and administrative services, to allow citizens to provide online feedback to administrative agencies and the legislature, and to stimulate online public discussion of policy and the political process.
Internet Deliberative Features: Attributes that serve as democratic outreach by facilitating communication, interaction, and discussion between citizens and government. These include online discussion forums and feedback forms.

Web-Enabled Governance: The use of Internet technology and the World Wide Web (WWW) to open channels of citizen participation, deliberation, and integrated action among government and citizens.
Chapter V
Public Sector Participation in Open Communities Andrea B. Baker University at Albany, SUNY, USA J. Ramon Gil-Garcia Centro de Investigación y Docencia Económicas, Mexico Donna Canestraro University at Albany, SUNY, USA Jim Costello University at Albany, SUNY, USA Derek Werthmuller University at Albany, SUNY, USA
INTRODUCTION

As new Internet-based products, services, and resources are developed, private companies and government agencies are exploring the use of open standards and open source software in their daily operations. One of the main advantages of the open paradigm is the interoperability and reusability of code. Another significant advantage is data longevity: the data created with these products are not constrained by future technology or vendor changes and will remain usable with new document formats, applications, or specific pieces of software. However, there are also challenges associated with open standards
and open source software, particularly for public sector organizations. Issues such as technical training and support services can be major concerns for government agencies. Another issue that must be explored is participation in online development communities and how it is constrained by current legal frameworks and personnel practices. One of the primary ways that ideas about open standards and open source products are exchanged is by sharing advice and code through online repositories created by open communities. These communities, such as SourceForge.net and Core.gov, offer development advice, repositories of code, and other useful resources for adopting open standards
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
and open source software. This chapter focuses on the challenges that public sector agencies can potentially face when participating in an online community built around a repository called the XML tool kit. The XML tool kit resulted from a project conducted by the Center for Technology in Government (CTG), a research institution affiliated with the University at Albany, SUNY, which involved state agencies interested in using XML, one of the most well-known open standards, for Web site management. As a result of establishing CTG's XML library and reviewing the literature on open standards and open source software, questions arose about the issues involved in participating in an online development community, particularly for government employees. We suggest that future research explore some of these issues, specifically licensing options and the governance structure of online repositories. The chapter is organized in six sections, including the foregoing introduction. Section 2 is an overview of open standards and open source software, together with a discussion of their benefits and challenges. Section 3, the main focus of the chapter, explores in depth one of those main challenges: the creation and sustainability of online development communities and repositories. It highlights a detailed description of CTG's case study, in which several New York State agencies partnered to create a shared online repository of XML code for use in Web site management. Section 4 examines future trends in licensing options and the governance structure of online repositories, considering the particularities of the public sector. Finally, section 5 provides some concluding remarks and section 6 suggests future research directions.
Background: Benefits and Challenges of Open Standards and Open Source Software To begin a discussion about open standards and open source, it is necessary to provide some basic
definitions and discuss the benefits and challenges associated with them.
Benefits of Open Standards and Open Source Software

Open standards are simple language data descriptions that are uniform within a discipline so that other programmers and machines can understand their logic. Dalziel (2003) describes open standards as "transparent descriptions of data and behavior that form the basis of interoperability." Interoperability has become an increasingly important goal. Tim Berners-Lee, the creator of HTML, is working with the World Wide Web Consortium to develop data standards that allow for the seamless flow of information. His vision of a semantic Web is of an environment where all data are accessible regardless of application, browser, or programming language. One of the primary languages underlying this concept is the Extensible Markup Language (XML). Another benefit of open standards is data longevity, or the ability to access data at any point in time without concern about software incompatibility. One of the benefits of open standards such as XML is that "no matter which software was used to create the objects or medium to store the objects, future software that understands XML can extract the relevant information" (Baru, 2006). Open source software is based on concepts similar to open standards, but the two are not the same. Open source software allows anyone to access its source code, which explains how the software works. One of the underlying principles of open source software is that by opening access to code, others can improve it or customize it to fit organizational or personal needs (Stallman, 1999). One of the most powerful benefits of open source software is its flexibility. For example, each Web site is different in more than one way. A single piece of software, following a predefined set of rules, has little chance of working on many sites without forcing each site to conform to its way of working. Open source software allows developers to modify the software to work the way the site needs, and not vice versa.
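The data-longevity claim can be made concrete with a short sketch. The record below is hypothetical, and the example assumes only Python's standard library; because the markup describes the data itself, any XML-aware tool can recover the content regardless of which program produced the file.

```python
# A self-describing XML record (hypothetical content). Any XML-aware
# tool can parse it and extract the data, independent of the software
# or vendor that originally created the file: the data-longevity
# benefit of an open standard.
import xml.etree.ElementTree as ET

record = """<agency>
  <name>Department of Motor Vehicles</name>
  <service fee="10.00">License renewal</service>
</agency>"""

root = ET.fromstring(record)
print(root.findtext("name"))            # Department of Motor Vehicles
print(root.find("service").get("fee"))  # 10.00
```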
Another major benefit of open source software, especially to public sector agencies, is the ability to access source code without cost. This is important because, until open source arrived, vendors controlled the software market and could charge not only for products but also for maintenance.
Challenges of Open Standards and Open Source Software

With all of the benefits of open standards and open source, there are some challenges. Technical support for open source software remains a significant concern, although several third-party companies are beginning to fill the need (Lai, 2006b). Red Hat, a publicly traded open source company, began offering technical support to companies using Linux in 1999 (Rivlin, 2005). Further, in a move that shows support for the open source movement, IBM adopted an approach that provides outside developers access to modify the core part of its Jazz platform; IBM calls the approach "open commercial development" (Taft, 2006). IBM, Dell, and HP also offer systems with open source software preinstalled. One of the main challenges of open source for developers is locating code and software and understanding how it works in their organization's environment. Several communities of developers have organized repositories to share information about open standards and open source software. SourceForge.net, an open source development Web site, hosts over 100,000 open source projects developed by many collaborators. According to the Web site, a central community "promotes a higher standard of quality, and helps to ensure the long-term viability of both data and applications… [as well as] adherence to open standards" (Open Source Technology Group, 2006). SourceForge is a clearinghouse for most open source projects and provides development tools and other benefits; previously, individuals involved in open source development had to host and set up their own tools. Massachusetts had plans to create a software repository for mapping topography and geography as well as network management software (Jewell, 2005). Rhode Island also had plans to contribute software to register lobbyists and provide open-meeting notices (Weisman, 2005). These repositories were part of the Government Open Code Collaborative, in which government agencies willing to share for free would contribute their software (Weisman, 2005). Core.gov is a development community designed for public sector agencies to access code and technical "components." Based on the open source model of development, Core.gov allows for the exchange of technical information and code in order to support collaboration between agencies and the reuse of code (Adelstein, 2004). The Center for Technology in Government, at the University at Albany, SUNY, has adopted a similar model to explore the use of a repository of open source code to support the adoption of XML in the public sector. Through CTG's study and a review of current literature, we have identified some challenges associated with public sector agencies participating in online development communities. The next two sections provide details of CTG's project and analyze two challenges: licensing and governance structure.
PUBLIC SECTOR PARTICIPATION IN AN ONLINE DEVELOPMENT COMMUNITY: THE XML TOOL KIT

Since government agencies first created Web sites, those sites have grown in size and complexity. According to Gil-Garcia et al. (2005), because Web pages are primarily written in HTML, the majority of Webmasters' time is spent on maintenance, which includes updating documents and creating new formats. This is costly and prevents Webmasters from engaging in other projects, such as developing new applications for their organizations. XML can be a viable solution to these Web site management problems (Gil-Garcia et al., 2005). Because XML describes Web content in a meaningful way, it offers a more effective way to manage complex Web sites (Costello et al.,
2004; Rockley Group, 2005). XML separates content from style and can transform content into multiple formats while maintaining a single source document. Thus, instead of requiring individual Web pages to be written in HTML, a single source document can produce HTML, PDF, or other formats. This creates a faster and better method of version control and ultimately saves agencies money (Costello et al., 2004; Gil-Garcia et al., 2005). To exploit these advantages of XML, government agencies need technical training in XML as well as an understanding of the resulting organizational impacts in order to make a solid business case justifying the change (Gil-Garcia et al., 2005). In addition, agencies need a firm understanding of where to begin, which includes the availability of code for specific techniques and environments (Gil-Garcia et al., 2005). To assist state agencies seeking to use XML for Web site management, the Center for Technology in Government (CTG) held a six-month test bed in which agencies were required to create a business case and a prototype and to participate in a shared library that would provide the XML code to other agencies and the public. Five New York State agencies participated in the research project. The XML library is one of the products of the test bed and provides resources to assist in managing a Web site using XML. It includes simple approaches to creating Web pages using XML and getting XML to work in a variety of environments, including ASP.NET and other common Web development platforms. The site also provides a variety of code samples, such as different XML and XSL files for structuring content and transforming it to different output formats such as XHTML and PDF. In order to thrive, the library needs an active development community to contribute code and share experiences. According to the site (www.thexmltoolkit.org), the library is "intended to grow over time and benefit from the contributions of its visitors and users." To date, agencies have only contributed code directly associated with the prototypes. To explore the feasibility of this community managing and sustaining the XML library in the long run, several issues must be addressed. After a review of the literature on existing open source communities, two issues, licensing and governance, seem to be of high importance and should be areas for future research.
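The single-source publishing idea can be sketched in a few lines. The sketch below is not the tool kit's actual XSLT code; it is a hypothetical, stdlib-only Python illustration in which one XML source document is rendered into two output formats, so that a content change never touches the presentation logic.

```python
# Single-source publishing sketch: one XML content document, two
# independent renderers. In practice an XSL stylesheet would perform
# the transformation; plain Python is used here only to illustrate
# the separation of content from presentation.
import xml.etree.ElementTree as ET

SOURCE = """<page>
  <title>Agency News</title>
  <para>Office hours are extended in July.</para>
</page>"""

def to_html(xml_text: str) -> str:
    """Render the XML source as a minimal HTML page."""
    root = ET.fromstring(xml_text)
    body = "".join(f"<p>{p.text}</p>" for p in root.findall("para"))
    return f"<html><h1>{root.findtext('title')}</h1>{body}</html>"

def to_text(xml_text: str) -> str:
    """Render the same XML source as plain text."""
    root = ET.fromstring(xml_text)
    lines = [root.findtext("title").upper()]
    lines.extend(p.text for p in root.findall("para"))
    return "\n".join(lines)

print(to_html(SOURCE))
print(to_text(SOURCE))
```

Because both renderers read the same source, adding a third output format (PDF, say) would mean writing one more renderer, not re-authoring the content.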
Future Trends

Both the licensing options and the governance structures of open source projects can be viewed as points on continua derived from organizational behavior models such as those discussed by Powell (1990). Governance structures, for example, can range from a decentralized network of individuals collaborating across agencies, to a classic hierarchy with a formal management structure (Goldsmith & Eggers, 2004), to markets dependent on laws and contracts (Powell, 1990). There are benefits and challenges associated with each licensing and governance alternative. We believe that meeting these challenges is key to public agencies' ability to participate in the online communities and repositories created to support open source initiatives. The next two sections elaborate on these future trends and call for additional research.
Licensing

Because government agencies author several proprietary programs and work with private contractors, it is important that they choose suitable licensing options. Weisman (2005) explains that one issue that might concern private contractors working with government agencies using open source is that some open source licenses require that modified or combined code be open to the public. When deciding to use open source, and especially when participating in an online community, licensing is an important challenge for government agencies, but one with several options. Licenses can be viewed as located on a continuum from permissive to restrictive. The GNU General Public License (GNU GPL) offers the most protection to users. The GPL allows
users to modify the code and combine it with other new programs, and insists that all code be made available to the general public (Stallman, 1999). The GPL differs from other licenses in that it seeks to ensure that the code of modified copies remains accessible to users (gnu.org, 1991). While this is great for users, agencies or companies that wish to keep some of the source code proprietary cannot do so under the GPL (Hall, 2005). Offering less protection to users is the Lesser GPL, under which proprietary programs may use a covered library (gnu.org, 1999). This offers less protection for users, but more for authors, because users may be prohibited from accessing some of the code. At the other end of the continuum are the BSD and MIT licenses, which allow others to modify the code and use it for any purpose, but do not require that the modified source code be open to the public unless the author so chooses (Hall, 2005). Several other licenses are available; the Open Source Initiative (www.opensource.org) lists many of them. O'Mahony (2003) suggests several additional mechanisms that open source communities can employ to ensure that their intellectual property is protected, including licensing, the creation of foundations to hold copyrights, the use of logos or trademarks, and the creation and protection of a brand (p. 1185).
Governance Structure

When communities include a variety of members contributing to a product or service, governance structures must also be addressed. Determining who is responsible for maintaining code repositories is an important issue for any community, but especially for public sector agencies, because it can be a matter of time and money for the contributors. Governance in these communities can be viewed on a similar continuum, from decentralized to hierarchical. A completely decentralized community would allow developers to modify the source code without any formal vetting process. In such communities, members judge the quality of each addition. If something is incorrect,
any community member can call attention to the issue and correct it. We think of true open source communities as decentralized. Powell (1990) labels these forms of organization networks. Networks consist of "individuals engaged in reciprocal, preferential, mutually supportive actions" (p. 303), and in them "sanctions are typically normative rather than legal" (p. 301). Goldsmith and Eggers (2004) suggest several advantages of applying network structures in the public sector, including the ability of government agencies to collaborate and share resources, which can lead to improved services at decreased cost and faster delivery. However, the authors caution that networked models also face challenges, especially when "complexity is high and responsibility unclear" (p. 45). Those challenges include issues with oversight, task management, and competing interests (Goldsmith & Eggers, 2004). The most restrictive organizations are those in which definitive relationships and roles exist. Goldsmith and Eggers (2004) call these organizational structures hierarchies, with "rigid bureaucratic systems that operate with command and control procedures" (p. 7). Powell (1990) has a similar definition of hierarchies and suggests that they are generally defined by employment contracts (p. 302). Identifying features of these communities are "clear departmental boundaries, clear line of authority, detailed reporting mechanisms, and formal decision making procedures" (Powell, 1990, p. 303). Both sets of authors suggest that these organizational structures are more restrictive than decentralized networks. Goldsmith and Eggers (2004) indicate that hierarchical structures present many challenges in today's world of rapid technological change, including long service-delivery times and the burden on citizens of dealing with several different agencies.
Powell (1990) identifies a still more restrictive environment, markets, which are dictated by the economy and business models. We believe a hybrid model could also exist, one that combines the best of both forms and promotes collaboration while
providing some management and oversight. A development community that allows modification but puts in place some formal structure to review changes could be considered a hybrid (similar to a core-periphery structure). In an open source community, a hybrid would consist of management permissions or management by individuals designated to evaluate contributions. Several open source communities operate in this fashion, and this may be an option for the XML library and similar repositories in the long run. While licensing and governance are at the center of these challenges, the issues must be further deconstructed to address questions such as ownership of code developed during work hours: if an individual contributes code to an open source community, who owns it, the public or the state? Other issues include security. For instance, if public agencies contribute to or use open source products without the appropriate license protection, will this compromise sensitive information? Empirical studies are needed to determine the success of open source projects in government and to evaluate the benefits and challenges that have been identified.
Conclusion

Open standards and open source products offer several benefits, including reduced costs and consistent access to data over time without concern for software or system incompatibility. In addition, open standards and open source promote interoperability and information sharing. Because of these benefits, open standards and open source software can at least partially address federal enterprise architecture goals, which require that agencies look at the long-term costs associated with system and application maintenance, data security, privacy, and vendor neutrality (Adelstein, 2004). To adopt open standards and open source, the associated challenges must be examined. One of the main challenges is directly related to one of the main methods programmers use to access open source products and advice: online development communities and repositories. If public sector agencies are to participate, they need to explore licensing. While several licenses are available, the decision can be complex and must be thoroughly understood. The implications of license choice can greatly affect the success of open standards and open source adoption in the public sector. If vendors were to take code snippets or software programs from online development communities and repositories and turn around and market the products to the public sector, many of the original benefits associated with open source might not be realized. Governance structure is also an important issue that must be addressed if public sector agencies are to use open source software. Public agencies working collaboratively will need to determine whether they can accept the decentralized philosophy of market and network models or must maintain the traditional hierarchical structure of government agencies. A hybrid governance structure could also be considered, in which a small group of developers maintains authority over the quality of submissions to the library and provides a significant amount of its code, while allowing many government agencies to contribute.

Future Research Directions

As mentioned before, licensing and governance structure issues related to public sector participation in open standards and open source projects will most likely be a topic of future research. If public agencies are to adopt open source, it is important that there be clear definitions of ownership and use of code. Security issues associated with sensitive programs and data may also need to be explored. Researchers can either set up test beds similar to CTG's model or look to real-world examples from companies currently participating in open platforms. Certainly, if public sector agencies are to create thriving communities, roles must be explicitly defined and procedures adopted for modifying code. Therefore, research on the impact of other organizational issues, such as business rules and workflow, is needed. Agencies could benefit
from having a prior awareness of the potential challenges, as well as of the steps needed to create successful communities. Lastly, researchers should explore the potential incentives for public sector agencies to participate in open standards and open source projects. Because the open climate is new, much of the current research predicts benefits and challenges; with time, scholars can develop measures of success to increase our understanding of the impact of the open standards and open source environment on organizations.
References

Adelstein, T. (2004). Linux in government: CORE.GOV. Linux Journal.

Baru, C. A recipe for MIXing and searching online information. Retrieved September 25, 2006, from http://www.npaci.edu/enVision/v15.3/mix.html

Center for Technology in Government. Retrieved from http://www.thexmltoolkit.org/

Costello, J., Adhya, S., Gil-García, J. R., Pardo, T. A., & Werthmuller, D. (2004, August 6-11). Beyond data exchange: XML as a Web site workflow and content management technology. Paper presented at the 2004 Annual Meeting of the Academy of Management: Creating Actionable Knowledge, New Orleans, LA, USA.

Dalziel, J. (2003). Open standards versus open source in e-learning. Retrieved October 10, 2006, from http://www.melcoe.mq.edu.au/documents/OpenStandardspaper.doc

Gil-Garcia, J. R., Canestraro, D., Costello, J., Baker, A. B., & Werthmuller, D. (2005, April). Fostering innovation in electronic government: Benefits and challenges of XML for Web site management. Paper presented at the American Society for Public Administration 67th annual conference, Denver, CO, USA.

gnu.org. (1991). GNU General Public License. Retrieved May 31, 2006.

gnu.org. (1999). GNU Lesser General Public License. Retrieved May 31, 2006.

Goldsmith, S., & Eggers, W. (2004). Governing by network: The new shape of the public sector. Washington, DC: Brookings Institution Press.

Hall, M. (2005, December 5). Law and order on the open source range. Computerworld, 39(49), 29-30.

Jewell, M. (2005, November 5). Massachusetts battles Microsoft over document formats. The Associated Press State & Local Wire.

Lai, E. (2006a). IBM exec sees open-source boom in 2006. Computerworld, 45(5), 24.

Lai, E. (2006b). Lack of support slowing spread of open-source applications. Computerworld, 40, 16.

O'Mahony, S. (2003). Guarding the commons: How community managed software projects protect their work. Research Policy, 32, 1179-1198.

Open Source Initiative. License index. Retrieved June 8, 2006, from http://www.opensource.org/licenses/

Open Source Technology Group. (2006). Document A01, SourceForge.net: What is SourceForge.net? Retrieved June 8, 2006, from http://sourceforge.net/docs/about

Powell, W. (1990). Neither market nor hierarchy: Network forms of organization. In B. Staw & L. L. Cummings (Eds.), Research in organizational behavior: An annual series of analytical essays and critical reviews (Vol. 12, pp. 295-336). Greenwich, CT: JAI Press.

Rivlin, G. (2005, April 28). Open source is back in fashion; After high-profile failures, venture funds return to the sector. The New York Times, p. 13.

Rockley Group. (2005). The role of content standards and content management. 18.

Stallman, R. (1999). The GNU operating system and the free software movement. In C. DiBona, S. Ockman, & M. Stone (Eds.), Open sources: Voices from the open source revolution. Sebastopol, CA: O'Reilly & Associates.
Taft, D. (2006). IBM jazzes collaboration. eWeek.

Weisman, R. (2005, February 14). Government agencies adopt open source. The Boston Globe, p. C1.

FURTHER READING

Babcock, C. (2007). What will drive open source? InformationWeek, (1130), 36-44.

Cresswell, A. M., & Pardo, T. A. (2001). Implications of legal and organizational issues for urban digital government development. Government Information Quarterly, 18, 269-278.

Cushing, J., & Pardo, T. A. (2005). Research in the digital government realm. IEEE Computer, 38(12), 26-32.

Dailey Paulson, L. (2007). Sun makes Java open source. Computer, 40(1), 24.

Dawes, S. S., Gregg, V., & Agouris, P. (2004). Digital government research: Investigations at the crossroads of social and information science. Social Science Computer Review, 22(1), 5-10.

Dawes, S. S., Pardo, T., & DiCaterino, A. (1999). Crossing the threshold: Practical foundations for government services on the World Wide Web. Journal of the American Society for Information Science, 50(4), 346-353.

Foley, M. J. (2006). Open things up, Microsoft. eWeek, 23(35).

Gil-Garcia, J. R., Costello, J., Pardo, T. A., & Werthmuller, D. (forthcoming). Invigorating Web site management through XML: An e-government case from New York State. International Journal of Electronic Governance.

Gil-Garcia, J. R., & Helbig, N. (2006). Exploring e-government benefits and success factors. In A.-V. Anttiroiko & M. Malkia (Eds.), Encyclopedia of digital government. Hershey, PA: Idea Group Inc.

Gil-Garcia, J. R., & Luna-Reyes, L. F. (2006). Integrating conceptual approaches to e-government. In M. Khosrow-Pour (Ed.), Encyclopedia of e-commerce, e-government and mobile commerce. Hershey, PA: Idea Group Inc.

Gomulkiewicz, R. (2005). General public license 3.0: Hacking the free software movement's constitution. Houston Law Review, 42.

Hall, R. H. (2002). Organizations: Structures, processes, and outcomes. Upper Saddle River, NJ: Prentice Hall.

Jones, R., & Andrew, T. (2005). Open access, open source and e-theses: The development of the Edinburgh Research Archive. Program: Electronic Library and Information Systems, 39(3), 198-212.

Long, S. A. (2006). Exploring the wiki world: The new face of collaboration. New Library World, 107(3).

Luna-Reyes, L. F., Mojtahedzadeh, M., Andersen, D. F., Richardson, G. P., Pardo, T. A., Burke, B., et al. (2004). Scripts for interrupted group model building: Lessons from modeling the emergence of governance structures for information integration across governmental agencies. In CD-ROM proceedings of the 22nd International System Dynamics Conference. Albany, NY: System Dynamics Society.

Luna-Reyes, L. F., Zhang, J., Gil-Garcia, J. R., & Cresswell, A. M. (2005). Information systems development as emergent socio-technical change: A practice approach. European Journal of Information Systems, 14(1), 93-105.

Pardo, T. A., Cresswell, A. M., Thompson, F., & Zhang, J. (2006). Knowledge sharing in cross-boundary information system development in the public sector. Information Technology and Management, 7(4), 293-313.

Rocheleau, B. (2000). Prescriptions for public-sector information management: A review, analysis, and critique. American Review of Public Administration, 30(4), 414-435.

Salz, P. A. (2006). Collaboration rules. EContent, 29(9).

Schroer, J., & Hertel, G. (2007). Voluntary engagement in an open Web-based encyclopedia: Wikipedians, and why they do it. Retrieved March 19, 2007, from http://www.abo.psychologie.uniwuerzburg.de/virtualcollaboration/ and http://opensource.mit.edu/

Srinivas, K. R. (2006). Intellectual property rights and bio commons: Open source and beyond. International Social Science Journal, 58(188), 319.

Stewart, K., Ammeter, A., & Maruping, L. (2006). Impacts of license choice and organizational sponsorship on user interest and development activity in open source software projects. Information Systems Research, 17(2).

Zhang, J., Cresswell, A. M., & Thompson, F. (2002). Participants' expectations and the success of knowledge networking in the public sector. Paper presented at the AMCIS Conference, Texas.
Terms and Definitions

BSD or MIT License: A permissive license that allows code to be modified and used for any purpose and does not require that modified source code be made openly accessible.

Data Longevity: The property that data remain accessible over time, independent of particular applications or formats.

GNU General Public License: The open source license offering the most protection to users; it requires that all code, including modified code, remain openly accessible.

Governance Structure: The organizational structure that defines the relationships and roles of individuals working toward a collective goal.

Hierarchy: A governance structure in which rules and roles are clearly defined.

Hybrid: A governance structure that requires that roles and responsibilities be clearly understood, but provides the flexibility of a network structure.

Interoperability: The ability of machines to exchange data without the intervention of human agents.
Lesser GPL: A license that offers less protection for users but more for authors, because users may be prohibited from accessing some of the code.

Network: A decentralized governance structure consisting of a community of individuals who contribute to the organization without rigidly defined roles and responsibilities.

Online Development Community: A community of individuals, generally programmers, who contribute code and information relating to applications and software. These communities are commonly created to support open source software.

Open Source License: A license applied to open source software. More restrictive licenses require that the source code and all modifications of it be open to all users; more permissive licenses require only that the original source code be accessible.

Open Source Library/Repository: A library or repository that contains portions of software or entire software applications. The applications are generally offered to the public to use and modify, but may carry various restrictions, depending on the license.

Open Source Software: Software whose source code, and possibly modifications to it, are made available. Its use may be subject to various restrictions, depending on the license. The software can be developed by an individual author or a collaborative community.

Open Standards: Simple language data descriptions that are uniform within a discipline so that other programmers and machines can understand their logic.

XML: Extensible Markup Language. XML consists of text and tags that allow content to be kept separate from style; the tags provide rules that structure the document.

XML for Web Site Management: The use of XML to manage Web site content, accomplished by separating content from style, which allows publication of content in various formats.
Chapter VI
Community Informatics

Larry Stillman, Monash University, Australia
Randy Stoecker, University of Wisconsin, USA
Introduction

Researchers and practitioners use a wide range of terms when they discuss community involvement with information and communications technologies (ICTs). Common (English-language) terms include 'community networks,' 'community computing,' 'community information networks,' 'civic networking,' 'community technology,' 'community computer networks,' 'online neighborhood networks,' 'virtual community,' 'online community,' 'community e-business,' and most recently, 'community informatics.' Since the late 1990s, the term 'community informatics' has come into use amongst many academic researchers as an overarching label for the academic study of projects and initiatives which deliberately engage community groups and organizations with ICTs. Evidence of the term's acceptance in academic and research circles is found in at least one academic journal devoted to it (the Journal of Community Informatics), as well as in community informatics conferences and workshops held in a number of countries, university research centres, moves towards an ethics statement, and an entry in Wikipedia developed collaboratively by researchers and practitioners in the field. While many still use the term 'community technology' or its variants when referring to practice activity, community informatics has become firmly embedded as an academic reference point. However, community informatics has not yet achieved a stable set of findings or an agreed core of research questions. Some practitioners even consider it a form of social movement. Others see it as little more than a convenient label for pragmatic funding and policy purposes (Graham, 2005). Another sympathetic critic regards it as a 'woefully underdeveloped' field 'driven more by anecdotal reports and storytelling' than by effective theory (Stoecker, 2005a).
Background

The community informatics 'movement' can be traced to the United States and Europe in the 1970s and 1980s, when local communities began establishing tele-centers and local dial-up bulletin board networks. The scene exploded in the 1990s with the development of the World Wide Web (Milio, 1996; Morino, 1994) and the emergence of virtual community networks, particularly in the United States, that no longer had a local geographic base. And as if by osmosis, in countries like Australia, Italy, or New Zealand, enthusiastic individuals and people engaged in public information services copied these models, leading to the establishment of public Internet service providers as well as public online services with community content. There is no authoritative history of how the international 'movement' arose, but David Wilcox's documentation of the linkages and tensions between technically-focused academics and community-oriented practitioners in the United Kingdom and North America in the late 1990s gives some idea of the mix of social visionaries, academics and others who serendipitously met face-to-face and online and formed something of a shared early vision of what might be (Wilcox, 2001). In the decade from the mid-1990s, governments in countries such as Australia, Canada, the United States and the United Kingdom, as well as in the European Community and Latin America, began experimenting with new ICT opportunities as a way of advancing ideas about 'e-society,' 'e-government,' or 'e-democracy.' Interest in ICTs for development is also emerging in many developing countries. At the highest policy level, the UN's World Summit on the Information Society (WSIS) (www.itu.int/wsis) reflects many governments' attempts to develop visions for particular uses of ICTs for economic and social development. However, long-term sustainability of, and investment in, projects in many countries continues to be a problem.

Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Issues in Community Informatics

Defining Community Informatics

Academic information systems and management professionals have popularized the term community informatics, seeing it as akin to other forms of informatics such as health informatics, and thus as potentially providing an overarching conceptual and theoretical base for social and community interventions with technology. The use of such a term has also enabled them to carve out a particular niche in academia. Thus:
Community informatics pays attention to physical communities and the design and implementation of technologies and applications, which enhance and promote their objectives. CI begins with ICT, as providing resources and tools that communities and their members can use for local economic, cultural and civic development, and community health and environmental initiatives among others. (Gurstein, 2000, p. 2)1

Much of the writing in the field reports on social interventions that begin with technology, rather than offering more reflective or critical abstraction and research about the relationship between communities and technology, or about the social and economic structures that underpin such relationships. The former kind of research reflects the location of many researchers in the information sciences, rather than in the social or community services and development disciplines, which have a more robust theoretical base from which to consider issues such as human agency, its relationship to technology, the very nature of community practice, and the nature of social change. Thus, disciplinary differences over how such key concepts as community, human agency, or technology itself are understood can only be resolved, or at least explored, through much more interdisciplinary dialogue (Pigg, 2005).
The Digital Divide

The notion of a digital divide, between individuals and communities that have access to skills, knowledge and technological infrastructure and those that do not, was a prominent policy concern in many countries in the 1990s. The divide was seen as an impediment to democratic participation and to social and economic development. Funds were poured into a variety of policy initiatives in many Western countries, including Australia, Canada, the United Kingdom and the United States, though substantial public funding for such programs has by and large ended. At the highest international level, the World Summit on the Information Society reflects the United Nations' attempt to develop an international dialogue about connectivity for citizens in all countries.
The general reasoning for such a policy and funding change is that the digital divide no longer exists, given the apparent widespread uptake of the Internet, and that the dot-com crash has tempered government enthusiasm for technology speculation. The counter-argument, however, is that pockets of ICT disadvantage continue to exist (for example, the factors of age and ethnicity have been identified as significant in Sweden (Ferlander & Timms, 2006)), and that such socially-based problems cannot be solved by market or technical solutions. Salvador and Sherry, writing from the perspective of Intel researchers, point to the lack of an inclusive 'corporate intuition,' which inhibits the incorporation of a complex experience of community needs into the design of technical systems (Salvador & Sherry, 2004, p. 83). Furthermore, from a public benefit perspective, many would argue that policy has been driven by short-term expectations and has lacked the sophistication to recognize that community technological development depends on long-term investment in social and community infrastructure, rather than in technical infrastructure alone. Such policy initiatives, however, are not considered appropriate by neo-liberal governments, which are loath to invest in social infrastructure or to recognise embedded disadvantage. Indeed, the preferred term in policy use is now 'digital inclusion,' as expressed in the UK's Digital Inclusion strategy, in which social and voluntary sector organizations are seen to act in partnership with government and business to include all in society, though direct resource allocations do not match the policy rhetoric (Cabinet Office (UK), 2006). Among Western countries, only New Zealand continues to openly recognize that a digital divide still exists and that access to technology is an important aspect of social inclusion for its communities (Government of New Zealand, 2005; Williamson, 2005).
And in India, the Mission 2007 program is another example of the challenge being taken up in the developing world, while in many developing countries governments and academics are increasingly interested in what is called development informatics.
Another recent interpretation of the digital divide concerns people with disabilities, who are excluded from effective participation in the benefits of ICTs for their well-being. There are several identifiable reasons for this, including the cost of assistive technology (for example, text-to-speech readers), non-compliant design of websites (particularly in e-government), and technologies that do not meet recognised standards such as those of the World Wide Web Accessibility Initiative. Alongside such technical issues, low incomes, low educational levels, and social isolation amongst many people with disabilities are major inhibitors of effective use (Goggin & Newell, 2006).
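Some of the accessibility failures described above are mechanically detectable. As a hypothetical illustration (this code is not from the chapter, and it covers only one of the many Web Accessibility Initiative requirements), a minimal scan for images lacking text alternatives might look like:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags that lack a non-empty alt attribute --
    one small, mechanically checkable accessibility requirement."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs.
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "<no src>"))

# Invented sample page: one compliant image, one non-compliant.
page = """
<html><body>
  <img src="logo.png" alt="Community center logo">
  <img src="map.png">
</body></html>
"""

checker = AltTextChecker()
checker.feed(page)
print(checker.missing_alt)  # → ['map.png']
```

A check like this catches only the most superficial failures; genuine accessibility, as the authors note, also depends on cost, income, education, and social isolation, none of which a parser can measure.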
Community as a Structure

Community informatics primarily works to improve real and, occasionally, virtual communities. The meaning of 'community' is one of the continuing and irresolvable debates in sociology (Strath, 2001), but for the purposes of this entry, we assume that 'community' is the focus of some form of social practice or intervention among groups and individuals in a local area, and particularly among 'the people with the problem': frequently people in poverty, ethnic minorities, or people at a disadvantage because of other forms of social or geographic isolation (Stoecker, 2005b, p. 45). In such a context, community-based organizations, outside service organizations, local governments, and multi-organizational coalitions have been involved in establishing local computing centers, providing computer and Internet training, building community websites and bulletin boards, and even creating local wireless clusters. These efforts vary significantly in the extent to which they involve actual community residents in the implementation process. The hope is that the tremendous and varied opportunities offered by ICT communication will permit people to develop in independent, new and unexpected ways. However, it is erroneous to think that ICTs can entirely substitute for the face-to-face local relationships that are the foundation for solving local problems. It is important to understand
the interaction of various ICT interventions with local economic factors, patterns of employment, demographic patterns, gender relationships, or patterns of family life.
Ethics Issues

Recently, there have been moves towards the development of an ethics statement2 for working with individuals and communities, in line with practice statements that exist in other fields, such as program evaluation, in many countries. An additional dimension of this is work with indigenous or tribal communities. Indigenous peoples such as the Maori in New Zealand, Australian Aborigines, or tribal groups in South America have a history of having their cultural and physical capital exploited by outsiders, and inappropriate interventions with ICTs are no exception. Ethics statements should therefore include a specific reference to the need to develop collaborative processes that ensure cultural safety and cultural competence in working with diverse communities (Stillman & Craig, 2006).
The Concept of Technology

As noted, until very recently professionals have seen the practice of community informatics essentially as a box-and-wire application to be installed and magically accepted by communities. This viewpoint reflects traditional professional cultural practices in which designers work at a remove from the 'users' and from other important social factors and influences that affect human capacity and behavior. The insights of researchers concerned with better understanding technology as a complex 'basket' of socially-constructed and emergent human-machine relationships are now familiar in the study of ICTs in the business world (Orlikowski, 1992, 2000) and can be fruitfully applied to the study of communities' relationships with, and constructions of, technology.
Social Capital

A recent and widely read review of community informatics evaluations in a number of countries identified five key dimensions, which also bear some relationship to intensive work on the meaning of community-building and capacity in the health field, including health informatics. These dimensions are: (1) enhancing strong democracy; (2) increasing social capital; (3) empowering individuals; (4) revitalizing a sense of community; and (5) providing economic development opportunities (O'Neil, 2002; Parker, Eng, Schulz, & Israel, 1999). Many community informatics projects likewise work to improve any or all of these factors through the use of ICTs. Practitioners and researchers frequently attach the term 'social capital' to such factors, mainly referring to social networks (Coleman, 1988; Putnam, 1995). Many community informaticians view ICTs as positively impacting such networks. There are critics, however, who charge that the focus of social capital in community informatics is on building social networks for economic development. This can undermine the social networks that provide community bonds, as it may further divide a community into business owners and wage workers, or into community computing center controllers and end users.
Effective Use

Community informaticians see intangibles such as improvements in personal relationships, expansion of lifestyle options, or the development of personal goodwill (the stuff of social capital) as results of the effective use of ICTs in communities. What is meant by 'effective use' is subject to debate. Usually it means the effective installation and management of technical infrastructure, or technocratic and top-down 'social facilitation' that can be evaluated through more traditional quantitative or managerial evaluation approaches (Gurstein, 2003, p. 7). In contrast, socially effective use is a more complex matter that begins with the community itself, which, in conjunction with the researcher, explores and develops technological opportunities while at the same time developing its own methods for evaluating the social effectiveness of the technology as well as its local economic or other impacts. The latter approach means that researchers and practitioners coming
from a more traditional management or information technology background need to learn new skills in community-based research (Stoecker & Stillman, 2006).
Community-Based Research and Community Technology

One of the more interesting aspects of community informatics is its attempt to integrate participatory forms of research and design into community technology. Prominent in these attempts have been the Computer Professionals for Social Responsibility "Participatory Design" conferences and the increasingly widely used practice of community-based research. Community-based research (CBR), also called community-based participatory research, action research, participatory action research, participatory research, and a variety of other things, engages community members or community organizations in the research process itself (Strand, Marullo, Cutforth, Stoecker, & Donohue, 2003). Ideally, community members come up with the questions, employ indigenous research methods where possible, and connect the research to social action. In community informatics, the process begins with a research project exploring a community's information or technology issues. That research helps to better understand a problem and its possible interventions. Such research can also be used to evaluate community informatics projects and, if done right, to improve a project rather than just grade it at the end. Because CBR and CI share an emphasis on the community, and because both have, at least in part, an information focus, their combination seems natural to many community informaticians, though, as already observed, skills in community-based research cannot be assumed but need to be consciously acquired. It is as difficult to implement true community control over the research as it is to assure community control over the technology process, however, and both CBR and CI have a long way to go in that regard.
Sustainability

Sustainability has been a bugbear of community informatics activity (Day, 2005), particularly when linked to funding arrangements. Governments are consistently accused of taking a short-term view of ICT investment in communities, with the expectation that long-term financial sustainability will magically appear, particularly on the back of social capital or community volunteerism. Of course, the problem is more complex than that. While financial resources are important, sustainability should also be understood as related to the sustainability of the structures found in community organizations, large and small, and to how the relationships and the human or physical resources and skills in such organizations are brought to bear to develop ongoing socio-technical networks. The availability of strategic advice and technical support is a critical success factor in community-based initiatives, as is the integration of an electronic community network structure into its local community. When coupled with interest in triple-bottom-line sustainability and accountability (economic, social, and environmental performance), the picture is even more challenging. Its resolution will necessitate new approaches from policy makers in government, for example, who are reluctant to engage in long-term rather than start-up funding, based on flawed assumptions about communities' capacities to develop financially independent and sustainable networks.
Future Trends

It is likely that academic research and activity will continue to be closely linked to funding opportunities from either academic or government resource bases, and that the capacity for independent initiatives will be somewhat hampered by overriding academic or policy agendas, which tend to take a top-down rather than a bottom-up or collaborative approach. Projects funded by government will continue to be dependent upon policy whims
and preferences, including interest in what are seen as cutting-edge technological innovations driven by commercial imperatives. Notwithstanding the push to focus on what is new, community informatics is likely to continue to work for the greater public and community interest in promoting and supporting the widespread use of all forms of ICT by all communities, particularly communities on the other side of the digital divide. The increasing spread of ICTs through all levels of society, and the increased commercialization and privatization of public space (including electronic space), may mean that there is less interest in research about community impacts as ICTs are normalized into the private home and business. Independent community networks will need to find sources of sustainability in their local communities, and this too may mean that opportunities for experimentation and risk will be hampered.
Conclusion

As a new label for research and practice at the intersection of community development and information technology, community informatics faces both a challenge and an opportunity. The challenge involves balancing the theoretical questions developing among community informatics researchers with the practical efforts of community informatics practitioners. Indeed, that challenge is already being addressed in the Journal of Community Informatics, as the editorial group works to provide space for writing about both research and practice. The opportunity centers on bringing together the highly technical field of information technology, a field that is mathematical in its precision, highly specialized in its terminology, and somewhat at a remove from consideration of the day-to-day human dimension of technology, with the various theories and practices of community work, a field comfortable with the unpredictability of human relationships and steadfast in its commitment to working with people's own language and experience.
So far community informatics seems comfortable with the challenge and welcoming of the opportunity. As the field matures, and if it is to have any lasting influence, we are likely to see new forms of community that more consciously integrate technology into their everyday practice, and new forms of technology more consciously designed to sustain communities in all their forms.
Future Research Directions

What is the future of community informatics research? There are two possible futures, propelled by the dialectical tension between the community focus and the informatics focus. From the perspective of the informatics focus, the emphasis will be on researching new technologies. Particularly now that the problem of access is being portrayed as solved by conservative governments and their corporate allies, the push will be to concentrate on the more technical questions that are of use to such institutions, rather than on processes and technologies that could counter their power. Given the current push toward unified mobile technologies, we are likely to see more research on social networking supported by handheld technologies, mostly to inform marketers. A particularly troubling future direction, one consciously disavowed by most community informaticians but still making use of their theories, will be research on tracking technology to customize advertising. Such research could perfect the means to know where any consumer is, and even what they are doing, at any time by tracking the uses of handheld devices that can make phone calls, buy goods and services, and send and receive locational data. From the perspective of the community focus, however, the inequalities that persist both within and between countries will continue to motivate community-based initiatives to redress them. Those initiatives will keep the issue of technology access on the front burner and will drive new community-based technology innovations. The pressure from this perspective will be to continue basic research on technology access, which has moved from a focus on any
Internet access to a focus on broadband access. Future work is likely to consider both short- and long-distance wireless access. Given the privacy concerns created by the informatics trends described above, we may see an increased emphasis on how to protect people from corporate manipulation on the one hand and outright government control on the other. As the open source software movement slowly gains momentum globally, the possibilities for further open source software development informed by a community-driven process should also slowly increase. An increased attempt to consider the concepts, techniques, and research experience of community development and community organization would greatly benefit the conceptualization and implementation of community informatics projects for those coming from an information science and management perspective. This could be linked to a related familiarity with participative qualitative research methodologies and with the field of program evaluation. Research and practice in these areas could take the field away from somewhat simplistic and technocratic assumptions about the uptake and use of technologies in different environments. Furthermore, as community informatics develops as a field, its theoretical acuity should increase. This will allow researchers to better recognize issues of power, gender, ethnicity and class and how these affect decision-making and action, and will provide a more realistic picture of how and why projects succeed or fail, in both developed and developing countries. For those coming from the community development or community organization perspective, greater familiarity with the language, concepts, and processes through which technologies are designed and implemented will help practitioners build better relationships with the developers of products intended for community uptake.
References

Cabinet Office (UK). (2006). Enabling a digitally United Kingdom: A framework for action.
Day, P. (2005). Sustainable community technology: The symbiosis between community technology and community research. Journal of Community Informatics, 1(2), 4-13.

Ferlander, S., & Timms, D. (2006). Bridging the dual digital divide: A Local Net and an IT-Café in Sweden. Information, Communication & Society, 9(2), 137-159.

Goggin, G., & Newell, C. (2006). Editorial comment: Disability, identity, and interdependence: ICTs and new social forms. Information, Communication & Society, 9(3), 309-311.

Government of New Zealand. (2005). The digital strategy: Creating our digital future. Wellington, NZ: Ministries of Economic Development, Health, Research Science and Technology, and Education.

Graham, G. (2005). Community networking as radical practice. Journal of Community Informatics, 1(3).

Gurstein, M. (2000). Community informatics: Enabling communities with information and communications technologies (pp. 1-30). Hershey, PA: Idea Group Inc.

Gurstein, M. (2003). Effective use: A community informatics strategy beyond the digital divide. First Monday, 8(12).

Kling, R. (2000). Learning about information technologies and social change: The contribution of social informatics. The Information Society, 16(3), 217-232.

Milio, N. (1996). Engines of empowerment: Using information technology to create healthy communities and challenge public policy. Chicago, IL: Health Administration Press.

Morino, M. (1994). Assessment and evolution of community networking. Paper presented at the 1994 "Ties That Bind" conference on building community networks. Retrieved July 1, 2005, from http://www.morino.org/under_sp_asse.asp
O'Neil, D. (2002). Assessing community informatics: A review of methodological approaches for evaluating community networks and community technology centers. Internet Research, 12(1), 76-103.

Orlikowski, W. J. (1992). The duality of technology: Rethinking the concept of technology in organizations. Organization Science, 3(3), 398-427.

Orlikowski, W. J. (2000). Using technology and constituting structures: A practice lens for studying technology in organizations. Organization Science, 11(4), 404-428.

Parker, E. A., Eng, E., Schulz, A. J., & Israel, B. A. (1999). Evaluating community-based health programs that seek to increase community capacity. New Directions for Evaluation, 83(Fall), 37-54.

Pigg, K. (2005). Introduction: Community informatics and community development. Community Development: Journal of the Community Development Society, 36(1), 1-7.

Putnam, R. (1995). Bowling alone: America's declining social capital. Journal of Democracy, 6(1), 65-78.

Salvador, T., & Sherry, J. (2004). Local learnings: An essay on designing to facilitate effective use of ICTs. Journal of Community Informatics, 1(1), 76-83.

Stillman, L., & Craig, B. (2006). Incorporating indigenous world views in community informatics. In OTM Workshops 2006 (LNCS 4277, pp. 237-246). Montpellier, France. Berlin: Springer.

Stoecker, R. (2005a). Is community informatics good for communities? Questions confronting an emerging field. The Journal of Community Informatics, 1(3), 13-26.

Stoecker, R. (2005b). Research methods for community change: A project-based approach. Thousand Oaks, CA: Sage Publications.

Stoecker, R., & Stillman, L. (2006). Who leads, who remembers, who speaks? Constructing and sharing memory: Community informatics, identity and empowerment. Prato, Italy: Centre for Community Networking Research.
Strand, K., Marullo, S., Cutforth, N., Stoecker, R., & Donohue, P. (2003). Community-based research and higher education: Principles and practices. San Francisco, CA: Jossey-Bass.

Strath, B. (2001). Community/society: History of the concept. In N. J. Smelser & P. B. Baltes (Eds.), International encyclopedia of the social & behavioral sciences (pp. 2378-2383). Oxford: Elsevier Science Ltd.

Wilcox, D. (2001). The story of UK community networking. Retrieved September 13, 2005, from http://www.partnerships.org.uk/stories/intro.htm

Williamson, A. (2005). A review of New Zealand's digital strategy. The Journal of Community Informatics, 2(1), 71-75.
Further Reading

Bijker, W. E. (2001). Technology, social construction of. In N. J. Smelser & P. B. Baltes (Eds.), International encyclopedia of the social & behavioral sciences (pp. 15522-15527). Oxford: Elsevier Science Ltd.

CRACIN (Canadian Research Alliance for Community Innovation and Networking). Working papers, works in progress. Retrieved March 1, 2007, from http://www3.fis.utoronto.ca/iprp/cracin/publications/

CRACIN (Canadian Research Alliance for Community Innovation and Networking). Case study series. Retrieved March 1, 2007, from http://www3.fis.utoronto.ca/research/iprp/cracin/research/case.htm

Charmaz, K. (2001). Grounded theory: Methodology and theory construction. In N. J. Smelser & P. B. Baltes (Eds.), International encyclopedia of the social & behavioral sciences (pp. 6396-6399). Oxford: Elsevier Science Ltd.

Day, P. (2001). The network community: Policies for a participative information society. Unpublished PhD thesis, University of Brighton.
Day, R. E. (2000). The conduit metaphor and the nature and politics of information studies. Journal of the American Society for Information Science and Technology, 51(9), 805-811.

Day, R. E. (2001). The modern invention of information. Carbondale & Edwardsville: Southern Illinois University Press.

Day, R. E., & Pyati, A. K. (2005). We must now all be information professionals: An interview with Ron Day. InterActions: UCLA Journal of Education and Information Studies, 1(2).

Department of Communications, Information Technology and the Arts, Australia. (2005). Information and communications technology transforming the non-profit sector. Canberra: Department of Communications, Information Technology and the Arts.

Department of Communications, Information Technology and the Arts, Australia. (2005). The role of ICT in building communities and social capital: A discussion paper. Retrieved June 1, 2005, from http://www.dcita.gov.au/ie/community_connectivity/the_role_of_ict_in_building_communities_and_social_capital_a_discussion_paper

Department of Communications, Information Technology and the Arts, Australia. (2005, May 6). Community ICT transformation: Case studies. Retrieved November 1, 2005, from http://www.dcita.gov.au/ie/community_connectivity/community_ict_transformation_case_studies

DiMaggio, P., Hargittai, E., Neuman, W. R., & Robinson, J. P. (2001). Social implications of the Internet. Annual Review of Sociology, 27, 307-336.

Fainstein, S. S. (2001). Community power structure. In N. J. Smelser & P. B. Baltes (Eds.), International encyclopedia of the social & behavioral sciences (pp. 2371-2374). Oxford: Elsevier Science Ltd.

Fernback, J. (2005). Information technology, networks and community voices. Information, Community & Society, 8(4), 482-502.

Gaved, M., & Anderson, B. (2005). The impact of local ICT initiatives on social capital and quality of life. Chimera Working Paper. Colchester: University of Essex.

Gaved, M., & Foth, M. (2006). More than wires, pipes and ducts: Some lessons from grassroots networked communities and master-planned neighbourhoods. In OTM Workshops 2006 (LNCS 4277, pp. 181-188). Montpellier, France. Berlin: Springer.

Giddens, A. (1990). The consequences of modernity. Stanford, CA: Stanford University Press.

Giddens, A. (2000). Runaway world: How globalization is reshaping our lives. New York: Routledge.

Gregory, D. (1986). Time-geography. In R. J. Johnston, D. Gregory, & D. M. Smith (Eds.), The dictionary of human geography (pp. 485-487). Oxford: Blackwell Reference.

Gregory, D. (1986). Time-space distanciation. In R. J. Johnston, D. Gregory, & D. M. Smith (Eds.), The dictionary of human geography (pp. 487-492). Oxford: Blackwell Reference.

Habermas, J. (1974). The public sphere: An encyclopedia article (1964). New German Critique, 3, 49-55.

Harlow, E., & Webb, S. A. (2003). Information and communication technologies in the welfare services. London: Jessica Kingsley Publishers.

Harvey, D. (1989). The condition of postmodernity. Oxford: Blackwell.

Huws, U. (2003). The making of a cybertariat: Virtual work in a real world. New York: Monthly Review Press.

Industry Canada. (2004). Evaluation study of the Community Access Program (CAP): Final report, January 16, 2004. Ekos Research Associates and Industry Canada, Audit and Evaluation Branch.

Jackman, R. W. (2001). Social capital. In N. J. Smelser & P. B. Baltes (Eds.), International encyclopedia of the social & behavioral sciences (pp. 14216-14219). Oxford: Elsevier Science Ltd.
Community Informatics
Jacobs, B. D. (2001). Community sociology. In N. J. Smelser & P. B. Baltes (Eds.), International Encyclopedia of the Social & Behavioral Sciences (pp. 2383-2387). Oxford: Elsevier Science Ltd.
Marx, G. T. (2001). Technology and social control. In N. J. Smelser & P. B. Baltes (Eds.), International Encyclopedia of the Social & Behavioral Sciences (pp. 15506-15512). Oxford: Elsevier Science Ltd.
McIver, W., Jr. (2006). Community informatics and human development. In OTM Workshops 2006 (LNCS 4277, pp. 149-159). Montpellier, France. Berlin: Springer.
Patton, M. Q. (1990). Qualitative evaluation and research methods. Newbury Park, CA: Sage.
Patton, M. Q. (1997). Utilization-focused evaluation. Thousand Oaks, CA: Sage.
Pigg, K. (2001). Applications of community informatics for building community and enhancing civic society. Information, Communication and Society, 4(4), 505-527.
Ray, L. (2001). Critical theory: Contemporary. In N. J. Smelser & P. B. Baltes (Eds.), International Encyclopedia of the Social & Behavioral Sciences (pp. 2984-2986). Oxford: Elsevier Science Ltd.
Relph, E. (2001). Place in geography. In N. J. Smelser & P. B. Baltes (Eds.), International Encyclopedia of the Social & Behavioral Sciences (pp. 11448-11451). Oxford: Elsevier Science Ltd.
Rose, N. S. (1999). Powers of freedom: Reframing political thought. Cambridge, UK; New York: Cambridge University Press.
Scanlon, C. (2004). What's wrong with social capital? The Australian Fabian Society Pamphlet and Arena Publications Blue Book Series (Arena Journal 69), Blue Book 8, Australian Fabian Society Pamphlet 63 (pp. 1-12). Melbourne.
Stillman, L. (2006). Understandings of technology in community-based organisations: A structurational analysis. Unpublished doctoral dissertation, Monash University.
Stokman, F. N. (2001). Networks: Social. In N. J. Smelser & P. B. Baltes (Eds.), International Encyclopedia of the Social & Behavioral Sciences (pp. 10509-10514). Oxford: Elsevier Science Ltd.
Wajcman, J. (2001). Gender and technology. In N. J. Smelser & P. B. Baltes (Eds.), International Encyclopedia of the Social & Behavioral Sciences (pp. 5976-5979). Oxford: Elsevier Science Ltd.
Warf, B. (2001). Space and social theory in geography. In N. J. Smelser & P. B. Baltes (Eds.), International Encyclopedia of the Social & Behavioral Sciences (pp. 14743-14749). Oxford: Elsevier Science Ltd.
Wellman, B. (2001). Computer networks as social networks. Science. Retrieved September 14, 2001, from http://www.chass.utoronto.ca/~wellman/publications/index.html
Terms and Definitions

Community-Based Research: Community-based research (CBR), also called community-based participatory research, action research, participatory action research, participatory research, and a variety of other names, engages community members or community organizations in developing research questions that address community issues, designing research methods, carrying out the research, and using its results.

Community Development/Community Organization: Community development (also called community organization) is a range of practices that aim to work with local communities to improve quality of life across many areas, including housing, employment, health, and social connection. Self-help and empowerment are often associated with community development, which often works through neighborhood, block, village, or other formal and informal structures.

Community Informatics: As a practice field, aims, through the use of ICTs in conjunction with community development practices, to improve the life of local communities, though it can also work
to create virtual communities as an adjunct to local connections and networks. As an emerging academic field, community informatics studies and theorizes the role and influence of technology in community settings.

Digital Divide: The gap between ICT haves and have-nots, whether through lack of direct access to infrastructure such as computers or adequate connections, or lack of sufficient skills and training to take advantage of ICTs. The cost of connectivity (computers, software, broadband, and support) also contributes to the divide, as can disability and cultural and linguistic factors, such as the lack of support or content in minority or national languages.

Effective Use: The use of ICTs in conjunction with social and community development techniques.

Social Capital: Robert Putnam, a key writer on social capital, defines social capital as the 'connections among individuals—social networks and the norms of reciprocity and trustworthiness that arise from them' (Putnam, 1995), a resource that can be drawn upon in the rebuilding or strengthening of communities. From a community informatics perspective, social capital should also be defined as a community resource that is developed through partnering, rather than exploitative, relationships with communities.

Sustainability: Sustainability is the extent to which an intervention lasts over time, and
particularly after the main change agents who implemented the intervention are no longer present. The sustainability of community informatics projects depends on several factors, including external and internal funding support for the ongoing cost of hardware, software, and staff; the ICT skill base (paid and volunteer); the management of ICT-people interactions; and the degree of support for ICT learning and innovation. More broadly, ICT sustainability in communities can be linked to concerns about environmental and social responsibility.
Endnotes

1. This definition is reminiscent of the broader field of social informatics, a term particularly associated with the work of Rob Kling, though community informatics ideally attempts to move the perspective from technology-to-people to one which looks at people first, then technology: 'Social informatics ... is the new working name for the interdisciplinary study of the design, uses, and consequences of information technologies that takes into account their interaction with institutional and cultural contexts' (Kling, 2000, p. 218).

2. See (Draft) Code of Ethics for Community Informatics Researchers, http://vancouvercommunity.net/lists/arc/ciresearchers/200607/msg00024.html, 27 July 2006.
Chapter VII
Public Wireless Internet Dale Nesbary Adrian College, USA
Introduction

There exists a growing controversy over whether the government should be in the business of providing wireless broadband Internet. Public sector entities, particularly counties and cities, are developing the physical and intellectual infrastructure designed to provide wireless broadband Internet to their residents. Opponents of government entry into the wireless broadband market argue that existing private broadband vendors are fully capable of providing wireless Internet in an efficient manner. Supporters argue that government is uniquely capable of building and supporting, at least initially, wireless broadband at a lower cost and in a more pluralistic and efficient manner than private vendors have done thus far. Moreover, government wireless broadband provision stands to change the landscape of broadband Internet generally. If publicly provided wireless does attain the objective of providing service in a more pluralistic and efficient manner, many more residents and businesses will benefit. From our perspective, this is an issue worth exploring.
Providing new venues for economic growth is critical for the future of state economies and that of the United States in general. The decline of American manufacturing has caused economic disruption throughout large portions of the United States, particularly in the Midwest, a region seeking to ensure that it remains competitive for years to come. One such solution is the public provision of wireless Internet. Oakland County, Michigan, recently revealed a plan to offer free and low-cost wireless Internet access to all of its residents by 2007. Many communities in Michigan and around the country are taking similar action. This chapter will examine several dimensions of the public provision of wireless Internet, including:

• Issues driving the development of wireless technologies
• Public wireless provision in the United States
• A case study: Michigan
• Implementation issues and future directions
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Background

Wireless network access technologies have existed for nearly 100 years, although wireless Internet technologies have been in practical use for about 15 years. Since Guglielmo Marconi patented the first wireless communications device over 100 years ago, transmission of voice and data has undergone tremendous changes (Leeper, 2002). The technology has moved from the basic telegraph to the vast array of technologies that we know today as the Internet. Wireless networking has been defined as technology that allows two or more computers to communicate using standard protocols, but without the use of network cabling (Keynetwork, 2006). Any Internet application, including the Web, FTP, e-mail, messaging, and chat, may be delivered as part of a wireless network. The Internet is a gateway into vast amounts of knowledge and relationships. It provides users with the ability to conduct almost every aspect of their lives through its use, and has helped engender economic growth (Horn, 2005). These obvious advantages explain why the Internet has grown so quickly. The primary technology behind wireless connections for computers has been in use in cell phones for quite some time, but only recently have we begun to see the development of long-range wireless access for computer use (Harrington, 2000). Wireless Internet access has made the home computer useful for more than stationary work at a single location. Everyday objects, including phones, kitchen appliances, and motor vehicles, are becoming equipped with wireless capacity, including Bluetooth and IEEE 802.11 connections (Anderberg, 2002). Autos equipped with Bluetooth technology can use navigation screens to display Internet access for receiving electronic mail and other messages.
Constructing a wireless wide area network is not a simple task because of the nature of wireless signals. These signals must travel around and through trees, homes, and other natural and
man-made barriers. Water acts as the key interrupter of wireless frequencies, and all plant life contains water within its structure (Barthold, 2002). Designing wireless access points that can reach across hundreds of square miles requires many signal distribution stations, placed within certain boundaries, to provide adequate signal strength for consumers. Wireless Internet service is a relatively new technology, having been used commercially only during the past decade. The first instance in which wireless Internet was provided by a governmental entity was in Zamora, Spain, in 2003 (Intel, 2006), while Grand Haven, Michigan, was the first governmental entity in the United States to provide such services, in 2004 (Azulstar, 2006). The two primary implementations of wireless Internet are WiFi, the existing standard, and WiMax, an emerging standard (Thomas, 2004). WiFi is a short-range system, reaching up to 300 feet indoors and one quarter mile outdoors. WiFi typically requires a series of wireless routers throughout a building, or a single wireless router in a small retail establishment or residence, and operates at speeds from 1 Mbps to 54 Mbps. WiFi quality of service (QoS) has been questioned; however, it has proven to be reliable over time (Sapronov & Kumar, 2005). WiMax comes in various flavors; however, its primary advantages are speeds of up to 75 Mbps and a range of up to 30 miles from a central tower (Thomas, 2004). The desire to provide public wireless Internet is in part a function of the desire to bridge the social, economic, educational, and political barriers limiting access to digital technology. Reports regarding this "digital divide" show that both income and ethnicity play a major role in determining differences in access among groups (Guillen & Sanchez, 2005). Higher-income families access the Internet more frequently than those lacking resources.
Students in schools in which minority students comprise 50 percent or more of the population are much less likely to have Internet access than students in predominately white districts (Dávila, 2004). Creating free wireless Internet access on a national scale may help eliminate these problems. Increasing Internet access would allow all schools to have
adequate Internet access for students, faculty, and community users of school services.
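The WiFi and WiMax range figures cited above imply very different infrastructure footprints. The following sketch, which is not from the chapter, makes the arithmetic concrete under simplifying assumptions: idealized circular coverage cells, no overlap between cells, and no terrain, interference, or capacity effects. The 900-square-mile figure for a large county such as Oakland County is likewise only an approximation.

```python
import math

def access_points_needed(service_area_sq_mi: float, range_mi: float) -> int:
    """Minimum number of idealized circular cells needed to cover the area."""
    cell_area = math.pi * range_mi ** 2  # area of one coverage circle
    return math.ceil(service_area_sq_mi / cell_area)

# Approximate service area for a large Michigan county (assumption)
area = 900.0

wifi_outdoor_range = 0.25   # miles, per the WiFi figure cited above
wimax_range = 30.0          # miles, per the WiMax figure cited above

print(access_points_needed(area, wifi_outdoor_range))  # 4584 WiFi access points
print(access_points_needed(area, wimax_range))         # 1 WiMax tower
```

Even under these generous assumptions, a county-wide WiFi build requires thousands of access points, while a single WiMax tower can in principle cover the whole area. This is the tradeoff Muskegon County weighed in choosing the newer WiMax technology, as discussed later in the chapter.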
E-Government and Wireless

Advancements in technology have led to changes in the way government functions. These changes create new ways for government to interact with its citizens, with businesses, and with itself. Governments have used the Internet and networking technologies to deal with issues faster, reduce governmental costs, and gain easier access to information (Perlman, 2003). Examples of these e-government (electronic government) processes are online license registration, marriage licenses, fee, fine, and tax payments, passport ordering, campsite booking, downloading public forms, and communicating with government officials (Wang, 2003). Government officials are becoming easily accessible through e-mail, chats, and blogs. Government also seeks efficiencies by altering methods of transferring data internally and externally; wireless Internet access is one such method. When implemented, these services will allow all constituents with a personal computer to benefit from these advances in e-government.

The idea that the provision of wireless Internet services should be a public utility is partly a function of the lack of access to the technology by some segments of society. Initial provision of wireless Internet came from the private sector, primarily via broadband cable and so-called broadband Internet hot spots (Boyle, 2003). By many accounts, these services have proven relatively expensive for consumers and vendors, fairly reliable, and potentially very profitable (Dornan, 2004). However, service has been described as anything but comprehensive. As a result, governments are stepping in to fill the void for those unable to afford service, such as those living in remote areas otherwise unreachable by broadband Internet (Government Finance Review, 2005). A 2005 study (Communications News) indicated that more than 200 U.S. communities are
either already providing Wi-Fi access to their citizens, currently installing such systems, or in the assessment and planning stages. The reasons communities develop wireless Internet networks vary, from wanting to reach remote areas (Hecker, 2006) to serving as a mechanism for "kick starting" New Orleans' devastated economy (Martinez, 2006).

Government entry into the wireless Internet domain has drawn a reaction from private wireless broadband providers. Many private providers argue that government should not be in the business of providing Internet access in any form to residents (Anderberg, 2005). Private Internet service providers argue that public provision is:

• An unwise use of tax dollars
• An unwise allocation of public resources
• Unfair competition with business
• Fundamentally not a public good
Public providers of high-speed Internet argue that their spending is appropriate for many reasons:

• Government and government contractors (DOD, NASA, RAND) funded the creation of the Internet
• Government and education funded the vast majority of Internet infrastructure and software development
• Private ISPs have been apparently unable or unwilling to offer services to all, or even a bare majority, of residents
• The level of service provided to low-income or low-status individuals is low
• The cost of access as provided by private vendors is relatively high

Michigan is one of a number of states that have revised their telecommunications laws to address public sector provision of wireless Internet. Moreover, fourteen states, including Michigan, have banned or limited local governments from providing Internet services (Scott, 2006). Competing pieces of legislation have been introduced in Congress, some of which allow communities
to provide wireless Internet services while others severely limit this opportunity.
Michigan Counties and Wireless Internet

In addition to an overview of public sector provision of wireless Internet services, a more detailed examination of particular communities can broaden understanding of the issue. This chapter examines the implementation of wireless Internet services in Michigan counties. Michigan is interesting in that within its borders are (1) Grand Haven, the first American city to implement a city-wide WiFi network (Azulstar, 2006), (2) Oakland County, which portends to be one of the largest geographic areas served by a public sector WiFi network, and (3) Muskegon County, the first fully implemented large-scale WiMax installation in the United States.

Multiple Implementations

A solution being adopted by many communities is a joint operating agreement between governmental units and private vendors. Oakland and Muskegon Counties in Michigan are developing this kind of working relationship. Oakland County quickly determined that developing a WiFi system on its own was technically difficult and problematic from a legal perspective (Bertolini, 2006). Oakland County therefore chose to partner with a vendor, selecting MichTel after a competitive bid process. MichTel will control build-out, operation, and maintenance. The county will have a commitment from MichTel to provide basic (ISDN-level) high-speed Internet access to residents at no cost, with faster access available at graduated rates.

Muskegon County chose a WiMax implementation, believing that the risks of the newer technology will be more than offset by the much broader coverage available with WiMax. Muskegon established the Muskegon Digital Divide Investment Program, funded via a federal Department of Housing and Urban Development grant as well as loans from the Michigan Broadband Development Authority. This program will be available to all Muskegon County residents and is expected to be in place mid-year 2006.

Survey Results

Public wireless Internet access is a new concept for Michigan, having been implemented only since 2003. To gauge the extent to which public wireless is provided in the state, all 83 Michigan counties were surveyed, first in October 2005 and a second time in June 2006. The surveys, conducted via telephone, were designed to determine (1) whether county government currently provided wireless Internet services to its residents, and (2) if not, whether it planned to do so in the future. The Chief Information Officer of each county, or an equivalent official, was interviewed.

We found that several Michigan counties provide, or have begun planning, their own projects. Figure 1 indicates that 14 counties were either planning or had implemented wireless Internet for their residents in 2005, while 21 counties had done so by 2006. Branch and Otsego Counties provided a small amount of free wireless access, primarily in county buildings and downtown areas. Oakland County was in the final stages of setting a timeline for providing free public wireless Internet; its proposal received an enormous amount of media coverage, which sparked the interest of many other counties throughout Michigan.

The second survey was conducted in June 2006, eight months after the initial survey. Fourteen of 83 counties were in the planning process and seven counties offered some level of wireless Internet access. The growing number of counties offering wireless Internet demonstrates interest, but many lack the resources to cover the entire county. Oakland, Genesee, and Washtenaw Counties all plan to provide wireless Internet countywide and are serving as role models for others. Genesee hopes to use this public service to provide the potential for more jobs and a better standard of living for residents throughout the county.
Figure 1. Michigan counties with wireless Internet

Wireless Internet Services    2005    2006
In Planning Stages              12      14
Implemented                      2       7

Figure 2. Michigan counties with wireless Internet access

In planning stages, 2005: Antrim, Genesee, Jackson, Lake, Macomb, Manistee, Mason, Muskegon, Oakland, Ontonagon, Osceola, Washtenaw
Implemented, 2005: Branch, Otsego
In planning stages, 2006: Allegan, Charlevoix, Crawford, Ingham, Kalkaska, Lapeer, Leelanau, Livingston, Macomb, Manistee, Mason, Schoolcraft, St. Clair, Washtenaw
Implemented, 2006: Branch, Genesee, Mackinac, Muskegon, Oakland, Oceana, Otsego

Implementation Challenges

Communities face many challenges in providing wireless Internet access to the public. These challenges include infrastructure costs, physical barriers, a lack of trained staff, and inconsistent state and federal laws. To illustrate these challenges, we will focus on Oakland County, with information provided by the Oakland County Wireless Oakland Initiative (Bertolini, 2006).

MichTel Communications, the vendor selected to provide WiFi in Oakland County, was willing to take the plunge in large part because of the potential to serve a wide array of home, business, government, and nonprofit clients. Moreover, Oakland County hosts the majority of corporate headquarters locations in all of Michigan, which helps draw in employees and visitors alike.

Building a WiFi-based system may require an extensive number of wireless signal transmitters. Oakland County plans to use large wireless antennas, along with smaller-range access points, to provide the best available signal within its borders. The access points will be scattered throughout Oakland County, installed on top of buildings, telephone poles, and public utility towers. Because of the large geographic area to be served, maintenance and service are issues worth noting: antennas will need repair, and public kiosks will be established to broaden access. Governments considering implementing a wireless Internet service should keep these issues in mind.

Geography and topography must be considered as well. A heavily forested environment may cause problems with wireless signals because of the water content within leaves and branches. Water acts as one of the primary deflectors of electronic signal, while wireless signals have almost no problem passing through the concrete and cement of buildings. The abundance of forestry
and mountainous terrain in many parts of the country makes these areas extremely challenging for wireless Internet implementation.

A lack of training may also impede governments from reaching their goal of free and low-cost public wireless. Public administrators need well-informed technical personnel who are willing to implement these strategies for the future. As described earlier, the emergence of public wireless Internet will likely draw telecommunication companies and governments into protracted battles, in legislative bodies as well as in the courts, over the legality of public wireless. Other issues likely to need legal resolution include (1) the extent to which public wireless may lead to additional Internet-based crime, and (2) the possibility that government may collect data on users in ways that do not comport with the law. These battles will most likely delay the provision of public wireless Internet to constituents. As an example, allowing access to all citizens, even with distinct digital signatures, may increase computer crime involving copyrighted material, computer viruses, worms, hacking, and social engineering. Increasing cyber crime and fraud may eventually affect government-provided services, and governments may need detailed documentation of user online activity. This documentation process may lead to disputes over whether government or ISPs should be in the business of collecting data on users. The Electronic Frontier Foundation and the American Civil Liberties Union recently expressed these concerns to the City and County of San Francisco (Ozer & Opshal, 2005). On a broader scope, clients of Verizon, AT&T, and other major ISPs have expressed concern that by "knowingly divulg[ing] a record or other information pertaining to a subscriber or customer ... to any government entity," telecom companies may be violating federal law (Savage, 2006).
Future Trends

The provision of wireless Internet by the public sector is rich with future research opportunities. Communities are only recently exploring the possibility of providing Internet access to their residents. As cities, counties, and states wrestle with whether they should or can provide these services, this chapter finds that major telecom companies and ISPs are lobbying the states and Congress to limit the extent to which government has a role. Similarly, governments are enacting statutes and lobbying Congress so that more public Internet may be provided. It is very likely that many more public sector organizations will provide wireless Internet, and it is just as likely that many private ISPs will see this as an inappropriate intrusion by government into what may be defined by Congress as a purely private domain. As these issues sort themselves out, researchers will invariably have the opportunity to examine changes in the law, the number and extent of public Internet access points, and the impact of public Internet access on the social, economic, and political profile of residents.
Conclusion

Should government provide wireless Internet, or should the private sector provide this vital service? It appears that the best model for providing Internet publicly will be a joint business venture. Government may best develop a framework in which a contractor builds the infrastructure and network framework; the public sector then provides an efficient regulatory path and protects the interests of those unable to afford or access wireless Internet. The Wireless Oakland model may be seen as a prime example of creating free wireless Internet access, while the Muskegon Digital Divide Investment Program is an excellent example of public Internet provided at a reasonable cost.

Providing free public Internet is revolutionary in that it can change how Internet service is provided and lead private companies to revamp the way they offer their services. Moreover, counties providing public wireless Internet may give those on the wrong side of the digital divide tools to help them participate more fully in modern society. This new environment could reform telecommunications and allow for
a better-informed public, but there is much work to be completed before the final results are in. Governments must find ways to design wireless infrastructure in remote locations blocked by physical barriers. They will also have to design methods of attracting private businesses to offer wireless bandwidth in areas of limited population and economic capacity. Providing a social benefit can be expensive, but the benefits of providing wireless Internet to the public are likely to outweigh the costs.
Future Research Directions

This chapter focuses primarily on the provision of wireless Internet in the United States. Much of the current research is descriptive in nature and focuses on municipalities in the United States and, to a lesser extent, Canada and Europe. As technologies mature and more public sector organizations implement wireless Internet services, many more opportunities for research will develop. Moreover, there are multiple other opportunities for research in this and related areas.

State and municipal issues. Many states are moving quickly to regulate the provision of Internet services by municipalities (Scott, 2006). These statutes and regulations affect all flavors of substate units of government, including cities, counties, towns, townships, and special districts. There is a clear opportunity for researchers to examine how regulation will affect the ability of government to provide Internet services. Moreover, state fiscal policy may affect the extent to which funding is available for municipalities to support the major infrastructure investments often needed to provide wireless Internet services to residents. Researchers will be interested in evaluating the impact of tax policy and/or direct expenditures designed to spur the development and implementation of public wireless Internet.

U.S. national level issues. The U.S. Congress is considering multiple methods of regulating Internet services, including public provision and tax policy. Moreover, Gubbins (2006) argues that "the net neutrality debate—and its attendant fears about
censorship, prices and consumer choice—could fuel interest in municipally owned broadband networks as an alternative to privately owned pipes." The extent to which public wireless Internet services expand as a result of federal policy changes is a potential source of research.

International issues. This chapter briefly discusses how public provision of wireless Internet services is a worldwide phenomenon. Sawada and others (2006) and Strover (2006) discuss the rapid pace of wireless Internet implementation outside the U.S., as well as lessons that may be learned by all levels of government with plans to provide Internet services to residents. The extent to which the digital divide is greater in less-developed and non-Western countries suggests the possibility of technology expansion in countries addressing the divide. This is an area ripe for research as well.

Technology advances. While this chapter addresses state-of-the-art technology, wireless broadband is in a constant state of change. On the horizon is wireless mesh technology, already being implemented in the city of Cambridge, Massachusetts (Carnevale, 2006), and at Adrian College in Adrian, Michigan. Such technology comprises a mesh covering a specific area, using a few antennas hard-wired to the Internet and many smaller antennas functioning as relays. Wireless mesh technology promises to be much less expensive than traditional WiFi technology. As these technologies mature, researchers will have opportunities to assist governments and colleagues with respect to preferred methods of implementing and evaluating public wireless Internet.
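The cost advantage claimed for mesh networking can be sketched numerically. In a conventional build, every access point needs its own wired backhaul connection; in a mesh, only a fraction of nodes are hard-wired gateways and the rest relay traffic wirelessly. All figures below are hypothetical illustrations chosen for the sketch, not data from the chapter:

```python
def build_cost(nodes: int, wired_fraction: float,
               node_cost: float, wired_backhaul_cost: float) -> float:
    """Total cost: radio hardware for every node, backhaul only for wired nodes."""
    wired_nodes = round(nodes * wired_fraction)
    return nodes * node_cost + wired_nodes * wired_backhaul_cost

# Assumed figures for illustration only:
NODES = 200                # access points needed to cover a small city
NODE_COST = 500.0          # dollars of radio hardware per node
BACKHAUL_COST = 2000.0     # dollars to trench or lease a wired connection

conventional = build_cost(NODES, 1.0, NODE_COST, BACKHAUL_COST)  # every node wired
mesh = build_cost(NODES, 0.1, NODE_COST, BACKHAUL_COST)          # 1 in 10 wired

print(f"Conventional WiFi: ${conventional:,.0f}")   # $500,000
print(f"Wireless mesh:     ${mesh:,.0f}")           # $140,000
```

Under these assumptions the backhaul, not the radios, dominates the budget, which is why wiring only a tenth of the nodes cuts the total so sharply; real deployments would also need to account for the reduced throughput of multi-hop relaying.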
References

Anderberg, K. (2005). Wi-fi for all? Communications News, 42(7), 4.

Azulstar.com. Who we are. Retrieved June 20, 2006, from http://www.azulstar.com/about-us/index.html

Barthold, J. (2002). Fixed wireless enmeshed in technology debate. Telephony, 242(8), 28.
Public Wireless Internet
Bertolini, P. (2006, February 15). Class presentation, Public Administration 621, Government Information Systems, Oakland University.

Boyle, M. (2003). The really, really messy wi-fi revolution. Fortune, 14(9), 86-88, 90, 92.

Dávila, J.J. (2004). Digital divide. In H. Bidgoli (Ed.), The Internet encyclopedia (1st ed., Vol. 1). Hoboken, NJ: Wiley.

Dornan, A. (2004). The long arm of wi-fi. Network Magazine, 19(6), 48-52.

Ferrera, G.R. (2004). Copyright law. In H. Bidgoli (Ed.), The Internet encyclopedia (1st ed., Vol. 1). Hoboken, NJ: Wiley.

Government Finance Officers Association. (2005). Philadelphia going wireless. Government Finance Review, 21(3), 7.

Guillen, M.E., & Sanchez, S.L. (2005). Explaining the global digital divide: Economic, political and sociological drivers of cross-national internet use. Social Forces, 84(2), 681-708.

Harrington, L.H. (2000). Exceeding the limits. Transportation & Distribution, 41(11), 95-98.

Hecker, J.Z. (2006). Broadband deployment is extensive throughout the United States, but it is difficult to assess the extent of deployment gaps in rural areas. Government Accountability Office, GAO Report GAO-06-426.

Horn, P.M. (2005). The changing nature of innovation research. Technology Management, 48(6), 28-31, 33.

Keynetwork. Definition of wireless Internet. Retrieved June 28, 2006, from http://www.keyitsolutions.com/wireless_Internet_network.htm

Leeper, D.G. (2002). Wireless data blaster. Scientific American, 286(5), 64-69.

Macdonald, D. (2003). Zamora, Spain scores first with public wireless Internet. Retrieved October 22, 2007, from http://www.mobileimperative.com/documents.asp?d_ID=1788

Martinez, I. (2006, May 3). EarthLink to unwire New Orleans, too. Communications Daily; Today's News.

Ozer, N.A., & Opsahl, K. (2005). Joint letter on San Francisco wireless Internet. Letter to Dept. of Telecommunications and Information Services, City and County of San Francisco.

Perlman, E. (2003). The anti-silo solution. Governing, 16(4), 40-41.

Piper, P.S. Research on the Internet. In H. Bidgoli (Ed.), The Internet encyclopedia (1st ed., Vol. 3). Hoboken, NJ: Wiley.

Savage, D.G. (2006). Phone firms questioned. Los Angeles Times, Part A, 9.

Scott, J. (2006). San Mateo County: Companies push ban on free wireless. Retrieved June 30, 2006, from http://www.govtech.net/digitalcommunities/story.php?id=98484

Swope, C. (2006). Wi-fi free-for-all. Governing, 19(8), 54-56, 58.

Thomas, N.L. (2004). Ready to rumble. America's Network, 108(18), 18-20, 22, 24.

Wang, Y. (2003). The adoption of electronic tax filing systems: An empirical study. Government Information Quarterly, 20(4), 333-352.
Further Reading

Bidgoli, H. (Ed.). (2005). The Internet encyclopedia. Hoboken, NJ: Wiley.

Carnevale, D. (2006, February 17). MIT to help Cambridge build wireless network. The Chronicle of Higher Education, 52(24), A39.

Garson, G.D. (Ed.). (2005). The handbook of public information systems (2nd ed.). Boca Raton, FL: Taylor & Francis.

Garson, G.D. (Ed.). (2006). Public information technology and e-governance: Managing the virtual state. London: Jones and Bartlett.

Gubbins, E. (2006). Neutrality and municipalities. Telephony, 247(3), 24-25.

McChesney, R.W., & Podesta, J. (2006). Let there be wi-fi. The Washington Monthly, 38(1/2), 14-17.

Powell, A., & Shade, L.R. (2006). Going wi-fi in Canada: Municipal and community initiatives. Government Information Quarterly, 23(3/4), 381-403.

Sawada, M., Cossette, D., Wellar, B., & Tolga, K. (2006). Analysis of the urban/rural broadband divide in Canada: Using GIS in planning terrestrial wireless deployment. Government Information Quarterly, 23(3/4), 454-479.

Sirbu, M., Lehr, W., & Gillett, S. (2006). Evolving wireless access technologies for municipal broadband. Government Information Quarterly, 23(3/4), 480-502.

Strover, S. (2006). Wireless broadband, communities, and the shape of things to come. Government Information Quarterly, 23(3/4), 348-358.

White, O. (2006). Urban notebook. Governing, 19(4), 15.
Terms and Definitions

Bluetooth: A telecommunications specification describing methods of interconnecting wireless digital devices, such as computers, personal digital assistants, cell phones, and digital appliances.

E-Government: Government services provided via electronic means, most prominently via the Internet.

IEEE: The IEEE (Institute of Electrical and Electronics Engineers) is a self-described technical professional society whose mission is to set engineering standards in a number of disciplines, including computer technology.

Internet: A global array of computers connected by a network, utilizing technologies (primarily Transmission Control Protocol/Internet Protocol) that facilitate the transmission of data supporting traditional Internet services (hypertext,
file transfer protocol, telnet, simple mail transport protocol, and others).

Public Wireless Internet: Wireless Internet access provided by a public sector entity, such as a city or county. The predominant model in the United States comprises:

1. a private sector firm selected by competitive bid to provide Internet service;
2. the firm agrees to provide a wide array of services, including multiple access speeds and bandwidth offerings; and
3. the firm agrees to provide a wide array of price points, including a low- or no-cost service to the general public.
UMTS: UMTS, or Universal Mobile Telecommunications System, is a wireless mobile phone standard considered both a supplement to and a competitor of WiMax. It is implemented in Europe. Data rates are slower than WiFi or WiMax, at less than 2 Mbps, but its range is comparable to WiMax.

WiFi: WiFi, or Wireless Fidelity, is a set of compatibility standards for wireless networks certified by the WiFi Alliance. WiFi standards include 802.11a, 802.11b, and 802.11g. Data transmission rates range from 1 Mbps to 66 Mbps. The effective range is a quarter of a mile outdoors and 300 feet indoors. WiFi is designed primarily for data transmission.

WiMax: WiMax, or Worldwide Interoperability for Microwave Access, is a set of compatibility standards for wireless networks supported by the WiMAX Forum. The basic WiMax standard is 802.16. It is faster than WiFi (75 Mbps maximum) and has superior range, of up to 31 miles. WiMax is designed primarily for data transmission.

World Wide Web: The portion of the Internet structured for use with a Web browser. The Web is a subset of the Internet.

Wireless Internet: Internet access provided through a wireless device, such as a wireless laptop, personal digital assistant, or cell phone.
Chapter VIII
The Current State and Future of E-Participation Research Chee Wei Phang National University of Singapore, Singapore Atreyi Kankanhalli National University of Singapore, Singapore
Introduction

The past decade has witnessed an increasing trend of information and communication technologies (ICT) exploitation by governments around the world to enhance citizen participation. This is reflected in the emergence of a plethora of terms associated with the phenomenon, such as e-consultation or online consultation (Whyte & Macintosh, 2002), online rule-making (Charlitz & Gunn, 2002), online deliberation (Price & Cappella, 2006), online public engagement (Coleman & Gøtze, 2001), and e-participation (Macintosh, 2004). In this chapter, we will use the term "e-participation" to refer to governments' use of ICT to engage citizens in democratic processes. The term is chosen because it is sufficiently general to encompass all such efforts by governments. Instances of e-participation initiatives can be found globally, such as Denmark's Nordpol.dk (http://www.nordpol.dk), the U.S.'s Regulations.gov (http://www.regulations.gov), and Singapore's REACH portal (http://www.reach.gov.sg). Table 1 presents a list of e-participation initiatives sampled from around the globe.
The emergence of e-participation initiatives can be attributed to governments' growing awareness of the need to attain more democratic governance (Coleman & Gøtze, 2001), coupled with widespread public interest in the potential of ICT to empower citizens (Hart-Teeter, 2003). It is argued that enhanced citizen participation can lead to policies that are more realistically grounded in citizens' needs, and improved public support for those policies (Irvin & Stansbury, 2004). Institutionalization of citizen participation programs can be traced back as far as the 1950s (Day, 1997), with the potential of ICT to enhance participation being recognized about two decades later. Etzioni, Laudon, and Lipson (1975) wrote: "The technological means exist through which millions of people can enter into dialogue with one another and with their representatives, and can form the authentic consensus essential for democracy" (p. 64). Today, e-participation initiatives are exploiting the Internet's capabilities of 24/7 accessibility as well as the mass transmission and reception of information to facilitate citizen participation. The growing interest of governments in e-participation initiatives has been echoed in academic research communities.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Background

Current research on e-participation encompasses a diverse array of issues and spans various disciplines, such as political science, public administration, planning, and information systems. The number of studies related to e-participation has been increasing rapidly. Notably, at the 2006 Annual International Conference on Digital Government Research, over 20 percent of accepted papers were related to e-participation, a figure double that of the previous year. With the burgeoning of e-participation research, it appears worthwhile to review the relevant literature to assess the current state of the field. Such an effort may serve to provide directions for future research and uncover insights for practitioners engaged in e-participation efforts.
Based on a review of existing literature on e-participation, the foci of research in this area can be broadly classified into five principal topics: systems and tools for e-participation, factors influencing individual’s e-participation, effects of ICT features on individual’s e-participation, implications of e-participation initiatives, and best practices of e-participation initiatives. Research under the systems and tools for e-participation topic describes the nature and capabilities of a range of ICT currently employed to enhance participation, as well as the emerging systems and tools for this purpose. The use of ICT, however, does not by itself guarantee participation. Therefore, another stream of research investigates the factors that influence individual’s e-participation. These studies take into account a range of motivational, resource, and efficacy factors that
Table 1. Examples of e-participation initiatives

Asia: Israel. SHIL (Hebrew acronym for "Citizen Advisory Bureau") (http://shil.shil.info/). Established by the Welfare Ministry of Israel and the Information Society Research Center at the University of Haifa to provide citizens a new channel to communicate with decision makers in government agencies, and to improve information flow between the public and authorities.

Asia: Singapore. REACH portal (http://www.reach.gov.sg). A national-level consultation portal that aims to encourage input from citizens with regard to policy making.

Australia. Community Builder (http://www.communitybuilders.nsw.gov.au/). An interactive electronic clearinghouse for community-level social, economic, and environmental renewal. It is used to enable online community consultation and e-petitions, and to broadcast parliamentary activities.

Europe: Denmark. Nordpol.dk (http://www.nordpol.dk). Aims to enhance citizens' interest in and knowledge of politics and to strengthen the dialogue between citizens and politicians.

Europe: Estonia. TOM (Tana Otsustan Mina, "Today I Decide") (http://tom.riik.ee/). Aims to enhance citizens' participation in policy making by allowing citizens to comment on draft laws and submit their own ideas for new legislation.

Europe: Italy. Iperbole (http://www.iperbole.bologna.it/). An online civic network in Bologna set up to widen the use of ICT, supply information and interactive services to the citizens of Bologna, and create a dialogue between citizens and government.

Europe: Sweden. Kalix Annual Consultation (http://www.kalix.se). Engages citizens in a series of efforts to renew town politics, including the remodeling of the city center of Kalix and tax issues.

Europe: UK. UK Government Consultations (http://www.cabinetoffice.gov.uk/regulation.aspx). Designed to facilitate public feedback on policy issues and to offer useful political and civic information to citizens.

North America: U.S. Regulations.gov (http://www.regulations.gov). A cross-agency e-government effort to transform the U.S. federal rulemaking process by enhancing the public's ability to participate in the government's regulatory decision making.
are salient to individual's participation. A related stream of research examines how features of ICT may promote citizen participation. In spite of the general optimism about the ability of ICT to enhance democracy, there is also skepticism and concern about its potentially negative impact. Articles categorized under the fourth stream discuss and debate the positive and negative implications of e-participation initiatives. Studies of this nature often describe the pitfalls of e-participation, and may provide suggestions on how such pitfalls can be avoided while maximizing the benefits of e-participation. Lastly, best practices of e-participation initiatives have been described and documented in a number of studies. Research categorized under this topic may offer insights to practitioners who are embarking on e-participation efforts. In the following sections, we will first examine the state of research on each of the topics based on a selection of salient studies related to the topic. We will then discuss possible future trends of research in these areas.
State of E-Participation Research

Systems and Tools for E-Participation

Previous literature (Coleman & Gøtze, 2001) has discussed the nature, capabilities, and limitations of a comprehensive list of ICT tools for e-participation. These tools include e-mail, instant messaging, newsgroups, Web forms, chat rooms, and bulletin boards. Among them, Kumar and Vragov (2005) call for more active use of bulletin boards to support e-participation, arguing that bulletin boards can foster deliberation and be used as repositories of civic knowledge. Apart from these commonly employed ICT tools, there are efforts to investigate emerging technologies for e-participation. For instance, Nyerges, Brooks, Jankowski et al. (2006) report a participatory geographic information system that was developed to support public participation in
decision making concerning transportation improvement. Kavanaugh, Zin, and Carroll (2006) assess the use of Web-logs to support e-participation, and conclude that such technology provides enhanced opportunities for social interaction and informal discussion among citizens. However, the opinion leaders who initiate the Web-logs are likely to heavily influence such citizen interaction and discussion. To address the difficulty government agencies face in processing and incorporating inputs when a large volume of participation inputs is received, active research is currently underway to develop automated techniques that can facilitate the mass organization and analysis of the inputs (e.g., Kwon, Shulman & Hovy, 2006). In general, studies categorized under this topic may serve to provide updates on the technologies employed for e-participation initiatives.
Factors Influencing Individual E-Participation

The mere employment of ICT cannot guarantee the participation of citizens. It is therefore important to investigate the factors that can motivate or hinder citizens' e-participation. However, research conducted in this area has thus far been limited. Garramone, Harris, and Pizante's (1986) work represented an initial effort in this direction. The study focused on the motivations behind individual's e-participation (the term "computer-mediated political communication systems"1 was used), and found that such motivations include individuals' desire to express their views, personal needs related to local politics, and anticipated satisfaction from e-participation. Interestingly, socio-economic characteristics of individuals (i.e., age, income, education level) that have long been considered important predictors of citizen participation (Verba & Nie, 1972) were not found to be significant. Further, individuals' age was found to negatively influence their e-participation, which was contrary to expectation. Thrane, Shelley, Shulman et al.'s (2004) findings agreed with Garramone et al.'s (1986) study. The authors analyzed data from a computer-assisted telephone
interview and found that the younger generation is more likely to embrace e-participation than the elderly. They attribute this to the older generation's apathy towards new technologies and their relatively lower use of ICT in daily life. Such contradictory findings underscore the need for additional research on the factors that influence individual's e-participation. Another gap in this research stream is the lack of studies that systematically build on previous literature and theories. In particular, well-established theories explaining citizen participation from disciplines such as political science2 can be employed in pursuit of research in this area. Phang and Kankanhalli (2006) synthesized two comprehensive participation theories, the civic voluntarism model (Verba, Schlozman & Brady, 1995) and the general incentives model (Seyd & Whiteley, 2002), to explain individual's e-participation. The two theories, which are complementary in nature, were employed to construct a research model of individual's e-participation. The study provides evidence of the applicability of existing participation theories to the electronic context. Specifically, the study highlights a number of factors that contribute to individual's e-participation, including different types of participation incentives, civic skills, and individual's political efficacy. Research classified under this category may serve to sensitize practitioners to the motivators and de-motivators of e-participation. This may in turn provide them with guidance on the development of appropriate interventions to promote citizen's e-participation.
Effects of ICT Features on Individual's E-Participation

There is currently a lack of research investigating the effects of ICT features on individual's e-participation. This is surprising, because the deployment of ICT capabilities to enhance participation is what distinguishes e-participation initiatives from their offline counterparts.
Early studies in this area tended to focus on the anonymity and reduced social cues in online environments and their impacts on individual's political behavior (e.g., Hill & Hughes, 1997). It was argued that anonymity and the lack of social cues in online environments might encourage flaming and uncivil behaviors that are unfavorable for political discourse. However, the net impact of anonymity on participation remains ambiguous. Anonymity may instead encourage participation by reducing the fear of negative assessment and retaliation that tends to inhibit participation in general (Shepherd, Briggs, Reinig et al., 1995). Researchers have also refuted the simplistic treatment of the lack of social cues in technology-mediated communications as invariant, arguing that individuals may perceive richer cues with increased experience of using a technology (Carlson & Zmud, 1994). More recent studies in this area attempt to empirically test the effects of specific ICT features on individual's e-participation. Using an experimental design, Ng and Detenber (2005) examined the effects of the synchronicity of ICT on individual's intention to participate in online political discussions. Their results, however, do not show a significant direct effect of ICT synchronicity on individual's e-participation. In the context of an online policy discussion forum, Phang and Kankanhalli (2006) surveyed a sample of 121 youths to explore the effects of the connectivity and communality features of ICT on their e-participation intention. Their results indicate that the positive effect of connectivity on individuals' e-participation is mediated by their enjoyment of interacting with others during the participation process. Interestingly, they find a negative impact of the communality of ICT on individual's e-participation. An implication of this finding is that ICT features can have both positive and negative impacts on citizen participation.
Such findings underscore the need for practitioners to exercise caution while deploying ICT for citizen participation and not to assume that e-participation is a magic bullet that enhances participation. The potential contributions of research in this area are promising. It can help provide practitioners
with prescriptions for ICT design and deployment that maximize the utility of ICT as well as minimize its potentially negative effects. Theoretically, studies of this nature may provide a better picture of why e-participation initiatives are sometimes beneficial and at other times detrimental to the attainment of democratic outcomes.
Implications of E-Participation Initiatives

While e-participation initiatives have spurred hopes for the revival of democracy, they have also raised skepticism and caution concerning their potentially negative impact. A fundamental question asked in this line of research is "Do e-participation initiatives truly lead to better democratic outcomes?" Advocates of e-participation hold that ICT can reduce political ignorance by improving the ease of information dissemination, and can also enable large-scale citizen input with enhanced accessibility and connectivity (Schwartz, 1996). On the other hand, doubts have been raised about the capability of e-participation initiatives to reinforce representative democracy (Coleman & Gøtze, 2001), since ICT tools can be manipulated by powerful political actors to achieve their own interests (Jensen, 2003). Additionally, there are concerns about whether ICT, particularly the Internet, will result in further fragmentation and individualization that are detrimental to democracy (Sunstein, 2004). Such concerns are founded on the observation that individuals tend to form echo chambers of like-minded members when participating in online discussion with others. Individuals' ability to use the Internet as a filtering device may worsen the situation, as they now have the choice to filter out unwanted "noise" and expose themselves only to topics and opinions they find interesting (Sunstein, 2004). The issue of the digital divide also plagues the realization of the promised benefits of e-participation. While ICT facilitates the participation of those who are resource-rich, it prevents individuals with no access to ICT from participating. Even if ICT access is not an issue, political scientists argue that
the public may suffer from "stealth democracy" beliefs, that is, perceptions that political matters should be left to experts and that citizens need not participate (Hibbing & Theiss-Morse, 2002). Debates and discussions on these issues remain active to date. However, some interesting findings have emerged recently. For instance, a recent study of online political discussion networks by Kelly, Fisher, and Smith (2006) indicates that participants tend to engage in crosscutting debates that comprise opposing viewpoints, contrary to the expectation that they will fragment into ideological echo chambers. This implies that findings from general online social participation (e.g., in virtual communities) may not apply to the online political participation context. Further, studies have found that individuals' e-participation increases their policy knowledge and openness to different policy viewpoints (Price & Cappella, 2006), as well as ameliorates their stealth democracy beliefs (Muhlberger, 2006). In general, the debates on the implications of e-participation serve as a reality check and prompt researchers and practitioners alike to be more rational in evaluating the potential of ICT to enhance participation. While some of the more recent studies discussed may provide partial answers to the concerns about e-participation, they are far from conclusive. In particular, individuals' use of the Internet to filter out information (Sunstein, 2004) and its implications remain to be investigated. Practitioners should pay particular attention to the potential negative implications of ICT while implementing e-participation initiatives.
Best Practices of E-Participation Initiatives

A great deal of current research on e-participation falls under this category. Studies under this topic aim to describe or document best practices of e-participation initiatives. Examples of these studies include Charlitz and Gunn (2002); Coleman and Gøtze (2001); Gronlund (2002); Jensen (2003); and Whyte and Macintosh (2002). Among these, Whyte and Macintosh (2002) construct a framework to analyze and evaluate e-participation initiatives that takes
into consideration political, social, and technical dimensions. The framework offers practitioners a set of items to pay attention to so that the chances of attaining effective e-participation initiatives may be improved. Gronlund (2002) provides a comparison of the different approaches employed for e-participation initiatives based on cases from the Swedish context. The study suggests that different approaches to implementing e-participation initiatives may lead to the inscription of different democratic models. Overall, research on this topic may be particularly useful in informing an emerging field such as e-participation.
Future Research Trends

Future research on e-participation may continue to explore new systems and tools for e-participation, explicate a more comprehensive list of factors to explain individual's e-participation, uncover the participation effects of different ICT features, further deliberate on the implications of e-participation initiatives, and build cumulative knowledge and experience from the best practices of these initiatives. With the increasing range of systems and tools to choose from, efforts are needed to systematically categorize the different technologies and evaluate the conditions under which their deployment is most appropriate. Such efforts may provide practitioners with guidelines on how to employ ICT tools in a more systematic manner so as to improve e-participation outcomes. For research on the factors contributing to, and the effects of ICT features on, individual's e-participation, future research should strive to build on previous literature and employ sound theoretical bases to advance knowledge in these areas. Relevant work from political science and public administration can be integrated to obtain a better understanding of individual's e-participation. Further, researchers could attempt to investigate, in a holistic manner, how pertinent features of ICT may interact with the motivational and resource factors of participation in influencing individual's e-participation. Information systems
researchers may play an active role in this pursuit. Existing areas of information systems research, such as group support systems and computer-mediated communications, may be drawn on to identify ICT features pertinent to individual's e-participation. Future research on the implications of e-participation initiatives should continue to deliberate on the issue in a systematic manner, building on previous work, and offer practical suggestions on how the positive implications of e-participation initiatives can be capitalized on while the negative implications are minimized. More avenues need to be created to encourage discourse among researchers and practitioners on this topic. Likewise, for research on the best practices of e-participation initiatives, researchers should strive to employ more comprehensive and rigorous methodologies (e.g., case studies and action research) with sound theoretical foundations to uncover insights from cases of e-participation initiatives for both researchers and practitioners.
Conclusion

The review of existing literature on e-participation unveils a dominance of conceptual articles (e.g., Charlitz & Gunn, 2002; Whyte & Macintosh, 2002) and case descriptions (e.g., Coleman & Gøtze, 2001; Nyerges et al., 2006), or empirical work without explicit theoretical bases and implications (e.g., Jensen, 2003; Thrane et al., 2004). This highlights the need for more systematic, theory-driven empirical research on e-participation. There is also a particular lack of attention to the effects of ICT on individual's e-participation. To address these gaps, researchers should draw on relevant literature from political science, public administration, communications, and information systems. A multidisciplinary research perspective that includes both the social and technical dimensions may serve to better inform both the theory and practice of e-participation. As a whole, the advancement of e-participation research in the different areas may provide practitioners with guidance on what ICT tools to
employ for a particular e-participation initiative, how ICT can be more effectively deployed in conjunction with the interventions developed to promote e-participation initiatives, the pitfalls of e-participation to avoid, and the knowledge and experience that can be learnt from previous e-participation efforts. Collectively, these insights may facilitate the success of e-participation initiatives in attaining desired democratic outcomes.
Future Research Directions

In pursuit of recommending guidance on how the multitude of ICT tools can be better deployed, researchers can begin by delving into the different objectives served by participation initiatives. For instance, Glass (1979) derived a set of objectives of citizen participation initiatives, such as the mutual exchange of information between policy makers and citizens, and the building of support among citizens for upcoming policies. Based on the requirements of the different participation objectives, researchers may then identify the appropriate ICT tools to employ based on the capabilities that can fulfill those requirements. One of the emerging efforts to engage citizens in civic and political matters electronically is through community electronic networks. These initiatives provide citizens of a community (e.g., a town or region) access to technologies (Internet, electronic mail, etc.), government information, community services, and community-oriented discussion (Sullivan, Borgida, Jackson et al., 2002). Electronic networks of this nature may help in the development of social capital in a community (Blanchard & Horan, 1998), which may in turn promote citizen participation (Putnam, 1993). A well-known example of a community electronic network is Santa Monica's Public Electronic Network (PEN). Instances of these specific e-participation initiatives should be carefully documented to unveil their implications for citizen participation, and whether there are mutual influences between the ensuing online community and the existing offline community (e.g., synergy between the two in engaging citizens).
Future research to develop models explaining individuals' e-participation may examine how citizens' interactions in community electronic networks influence their participation in both civic and political matters. A particular theory that can be drawn on for this pursuit is the social capital theory of participation (Putnam, 1993). The theory posits that a community rich in social capital is conducive to citizen participation. Three social capital features deemed relevant to participation are highlighted: networks of civic engagement, norms of reciprocity, and trust. According to Putnam (1993), such features of social capital may "reduce incentives to defect, reduce uncertainty, and provide models for future co-operation" (p. 177), and may thus serve to promote citizen participation in a community. Future research may investigate how these social capital factors relate to other participation factors (e.g., participation incentives) in influencing individuals' e-participation. The effects of the relevant ICT features on the formation of social capital in community electronic networks may also be investigated.
References

Blanchard, A., & Horan, T. (1998). Social capital and virtual communities. Social Science Computer Review, 16(3), 293-307.

Carlitz, R.D., & Gunn, R.W. (2002). Online rulemaking: A step toward e-governance. Government Information Quarterly, 19(4), 389-405.

Carlson, J.R., & Zmud, R.W. (1994). Channel expansion theory: Dynamic view of media and information richness perceptions. In D. P. Moore (Ed.), Academy of Management (pp. 280-284). Madison, WI: Omnipress.

Coleman, S., & Gøtze, J. (2001). Bowling together: Online public engagement in policy deliberation. London, UK: Hansard Society.

Day, D. (1997). Citizen participation in the planning process: An essentially contested concept? Journal of Planning Literature, 11(3), 421-434.
The Current State and Future of E-Participation Research
Etzioni, A., Laudon, K., & Lipson, S. (1975). Participating technology: The Minerva communications tree. Journal of Communication, 25(Spring), 64-74.

Garramone, G.M., Harris, A.C., & Pizante, G. (1986). Predictors of motivation to use computer-mediated communication systems. Journal of Broadcasting & Electronic Media, 30(4), 445-457.

Glass, J.J. (1979). Citizen participation in planning: The relationship between objectives and techniques. American Planning Association Journal, 45(2), 180-189.

Gronlund, A. (2002). Emerging infrastructures for e-democracy: In search of strong inscriptions. E-Service Journal, 2(1), 62-89.

Hart-Teeter. (2003). The new e-government equation: Ease, engagement, privacy and protection. Retrieved May 23, 2006, from http://www.gcn.com/wp/research/17-1.phtml?cat=e_government

Hibbing, J.R., & Theiss-Morse, E. (2002). Stealth democracy. New York: Cambridge University Press.

Hill, K.A., & Hughes, J.E. (1997). Computer-mediated political communication: The USENET and political communities. Political Communication, 14(1), 3-27.

Irvin, R.A., & Stansbury, J. (2004). Citizen participation in decision making: Is it worth the effort? Public Administration Review, 64(1), 55-65.

Jensen, J.L. (2003). Virtual democratic dialogue? Bringing together citizens and politicians. Information Polity, 8(1/2), 29-47.

Kavanaugh, A., Zin, T.T., & Carroll, J.M. (2006). When opinion leaders blog: New forms of citizen interaction. In J.A.B. Fortes and A. Macintosh (Eds.), Proceedings of the 7th International Conference on Digital Government Research (pp. 79-88). University of Southern California Information Sciences Institute and Columbia University: Digital Government Research Center.

Kelly, J.W., Fisher, D., & Smith, A. (2006). Friends, foes, and fringe: Norms and structure in political discussion networks. In J.A.B. Fortes and A. Macintosh (Eds.), Proceedings of the 7th International Conference on Digital Government Research (pp. 412-417). University of Southern California Information Sciences Institute and Columbia University: Digital Government Research Center.

Kumar, N., & Vragov, R. (2005). The citizen participation continuum: Where does the US stand? In N. C. Romano (Ed.), Proceedings of the 11th Americas Conference on Information Systems (pp. 1984-1990). Association for Information Systems.

Kwon, N., Shulman, S.W., & Hovy, E. (2006). Multidimensional text analysis for eRulemaking. In J.A.B. Fortes and A. Macintosh (Eds.), Proceedings of the 7th International Conference on Digital Government Research (pp. 157-166). University of Southern California Information Sciences Institute and Columbia University: Digital Government Research Center.

Macintosh, A. (2004). Characterizing e-participation in policy-making. In Proceedings of the 37th Hawaii International Conference on System Sciences (HICSS). IEEE. Retrieved May 23, 2006, from http://csdl.computer.org/comp/proceedings/hicss/2004/2056/05/205650117a.pdf

Muhlberger, P. (2006). Should e-government design for citizen participation? Stealth democracy and online deliberation. In J.A.B. Fortes and A. Macintosh (Eds.), Proceedings of the 7th International Conference on Digital Government Research (pp. 53-61). University of Southern California Information Sciences Institute and Columbia University: Digital Government Research Center.

Ng, E.W.J., & Detenber, B.H. (2005). The impact of synchronicity and civility in online political discussions on perceptions and intentions to participate. Journal of Computer-Mediated Communication, 10(3). Retrieved February 8, 2006, from http://jcmc.indiana.edu/vol10/issue3/ng.html

Nyerges, T., Brooks, T., Jankowski, P., Rutherford, G.S., & Young, R. (2006). Web portal implementation to support public participation in transportation decision making. In J.A.B. Fortes and A. Macintosh (Eds.), Proceedings of the 7th International Conference on Digital Government Research (pp. 67-68). University of Southern California Information Sciences Institute and Columbia University: Digital Government Research Center.

Phang, C.W., & Kankanhalli, A. (2005). A research framework for citizen participation via e-consultation. In N. C. Romano (Ed.), Proceedings of the 11th Americas Conference on Information Systems (pp. 2003-2010). Association for Information Systems.

Phang, C.W., & Kankanhalli, A. (2006). Engaging youths via e-participation initiatives: An investigation in the context of an online policy discussion forum. In E.M. Trauth, D. Howcroft, T. Butler, B. Fitzgerald, and J.I. DeGross (Eds.), Proceedings of the IFIP 8.2 Working Conference on Social Inclusion (pp. 105-121). Boston: Springer.

Price, V., & Cappella, J.N. (2006). Bringing an informed public into policy debates through online deliberation: The healthcare dialogue project. In J.A.B. Fortes and A. Macintosh (Eds.), Proceedings of the 7th International Conference on Digital Government Research (pp. 89-90). University of Southern California Information Sciences Institute and Columbia University: Digital Government Research Center.

Putnam, R.D. (1993). Making democracy work. Princeton, NJ: Princeton University Press.

Schwartz, E. (1996). Netactivism: How citizens use the Internet. Sebastopol, CA: Songline Studios.

Seyd, P., & Whiteley, P. (2002). New Labour's grass roots: The transformation of the Labour Party membership. London: Palgrave Macmillan.

Shepherd, M.M., Briggs, R. O., Reinig, B. A., Yen, J., & Nunamaker, J. F. (1995). Invoking social comparison to improve electronic brainstorming: Beyond anonymity. Journal of Management Information Systems, 12(3), 155-170.

Sullivan, J.L., Borgida, E., Jackson, M.S., Riedel, E., Oxendine, A., & Gangl, A. (2002). Social capital and electronic networks: For profits vs. for community approaches. American Behavioral Scientist, 45(5), 868-886.
Sunstein, C.R. (2004). Democracy and filtering. Communications of the ACM, 47(12), 57-59.

Thrane, L.E., Shelley II, M.C., Shulman, S.W., Beisser, S.R., & Larson, T.B. (2004). E-political involvement: Age effects or attitudinal barriers? Journal of E-Government, 1(4), 21-37.

Verba, S., & Nie, N. (1972). Participation in America: Political democracy and social equality. New York: Harper and Row.

Verba, S., Schlozman, K., & Brady, H. (1995). Voice and equality: Civic voluntarism in American politics. Cambridge: Harvard University Press.

Whyte, A., & Macintosh, A. (2002). Analysis and evaluation of e-consultations. E-Service Journal, 2(1), 9-34.
Further Reading

Andersen, K.V., & Henriksen, H.Z. (2005). The first leg of e-government research: Domains and application areas 1998-2003. International Journal of Electronic Government Research, 1(4), 26-44.

Barber, B. (1984). Strong democracy: Participatory politics for a new age. Berkeley, CA: University of California Press.

Barber, B. (1998). A place for us: How to make society civil and democracy strong. New York: Hill and Wang.

Bongers, F.J., Geurts, J.L.A., & Smits, R.E.H.M. (2000). Technology and society: GSS-supported participatory policy analysis. Journal of Technology Management, 19(3/4/5), 269-287.

Brady, H.E., Verba, S., & Schlozman, K.L. (1995). Beyond SES: A resource model of political participation. American Political Science Review, 89(2), 271-294.

Elgarah, W., & Courtney, J.F. (2002). Enhancing the G2C relationship through new channels of communication: Web-based citizen input. In N. C. Romano (Ed.), Proceedings of the 8th Americas Conference on Information Systems (pp. 564-568). Association for Information Systems.

Finkel, S.E., & Opp, K.-D. (1991). Party identification and participation in collective political action. Journal of Politics, 53(2), 339-371.

Finney, C. (1999). Extending public consultation via the Internet: The experience of the UK Advisory Committee on Genetic Testing electronic consultation. Science and Public Policy, 26(5), 361-373.

Green, D., & Shapiro, I. (1994). Pathologies of rational choice theory. New Haven: Yale University Press.

Grossman, L.K. (1995). The electronic republic: Reshaping democracy in the information age. New York: Viking.

Habermas, J. (1989). The structural transformation of the public sphere. Cambridge, MA: MIT Press.

Krueger, B. (2002). Assessing the potential of Internet political participation in the United States: A resource approach. American Politics Research, 30(5), 476-498.

Leighley, J.E. (1995). Attitudes, opportunities and incentives: A field essay on political participation. Political Research Quarterly, 48(1), 181-209.

Lubell, M., & Scholz, J.T. (2001). Cooperation, reciprocity, and the collective-action heuristic. American Journal of Political Science, 45(1), 160-178.

Macintosh, A. (2003). E-forum e-democracy work group 4 report: Initial report. Retrieved April 4, 2005, from http://itc.napier.ac.uk/ITC/Documents/WG4e-democracy-results-v4.pdf

Ogden, M.R. (1994). Politics in parallel universe: Is there a future for cyber democracy? Futures, 26(7), 713-729.

Olson, M. (1965). The logic of collective action: Public goods and the theory of groups. Cambridge, MA: Harvard University Press.

Opp, K.D. (1986). Soft incentives and collective action: Participation in the anti-nuclear movement. British Journal of Political Science, 16(1), 87-113.

Parry, G., Moyser, G., & Day, N. (1992). Political participation and democracy in Britain. New York: Cambridge University Press.

Paxton, P. (2002). Social capital and democracy: An interdependent relationship. American Sociological Review, 67(2), 254-277.

Price, V., & Cappella, J.N. (2002). Online deliberation and its influence: The electronic dialogue project in campaign 2000. IT and Society, 1(1), 303-328.

Putnam, R.D. (2000). Bowling alone: The collapse and revival of American community. New York: Simon and Schuster.

Schlozman, K., Verba, S., & Brady, H. (1995). Participation's not a paradox: The view from American activists. British Journal of Political Science, 25(1), 1-36.

Shulman, S.W. (2005). E-rulemaking: Issues in current research and practice. International Journal of Public Administration, 28(7/8), 621-641.

Treeratpituk, P., & Callan, J. (2006). Automatically labeling hierarchical clusters. In J.A.B. Fortes and A. Macintosh (Eds.), Proceedings of the 7th International Conference on Digital Government Research (pp. 167-176). University of Southern California Information Sciences Institute and Columbia University: Digital Government Research Center.

Wilhelm, A. (1999). Virtual sounding boards: How deliberative is online political discussion? In B. Hague and B. Loader (Eds.), Digital democracy: Discourses and decision making in the information age (pp. 154-177). London: Routledge.

Yang, H., Callan, J., & Shulman, S. (2006). Next steps in near-duplicate detection for eRulemaking. In J.A.B. Fortes and A. Macintosh (Eds.), Proceedings of the 7th International Conference on Digital Government Research (pp. 239-248). University of Southern California Information Sciences Institute and Columbia University: Digital Government Research Center.
Terms and Definitions

Civic Skills: Individuals' organizational and communications abilities that can facilitate their political participation.

Communality: The availability of a commonly accessible pool of information enabled by ICT tools such as databases or online forums.

Connectivity: Capability of ICT that enables individuals who share common goals and interests to easily communicate with each other.

Deliberation: Consideration of all sides of an issue and careful weighing of the consequences of various options for action.

Digital Divide: The gap between those with easy and effective access to ICT resources and those without.

E-Participation Initiatives: Governments' efforts to employ ICT to engage citizens in democratic processes.

Political Efficacy: An individual's perception that political change is possible, and that the individual citizen can play a part in bringing about such change.

Social Capital: Relational resources having to do with connections among individuals that form networks of civic engagement, and the resulting norms of reciprocity and trust arising from those networks.
Endnotes

1. An example given is videotext, which uses a telephone line to connect home television sets to a central computer to allow for communication between individuals through a hand-held console.
2. Readers can refer to Phang and Kankanhalli (2005) for a review of existing theories of individual participation.
Chapter IX
Blogging

David C. Wyld
Southeastern Louisiana University, USA
Introduction

What is a blog? According to a recent report from The Pew Internet & American Life Project, well over half of the American adult population does not know what a blog is (Rainie, 2005). A blog can be simply defined in the following manner: "A blog is an easy-to-use content management tool. When you 'blog,' you are instantly adding new content to your site via a Web interface. No technical or programming skills are necessary" (Weil, 2004, n.p.). In a nutshell, a blog is a "do-it-yourself" Website. Gone are the days (of, say, 2003) when one would have to be knowledgeable in HTML or XML or make use of complex, and often expensive, Web creation software to create or update a Website. With a blog, your Website can be constantly added to and updated, without having to do anything more than typing (or cutting and pasting) into a text box. Through posting links, you can link your blog to any other site on the Web. You can even add audio/visual material to your blog site by uploading it, much as you would add an attachment to an e-mail. Others who find your site
of interest can use RSS (really simple syndication) or sign up for e-mail alerts to be notified when you post or add material to your blog. Blogging—the act of creating and maintaining a blog—has been characterized in nothing less than laudatory terms, hailed as:

• "The 'next big thing' on the Internet" (Gallo, 2004)
• "The next killer app" (Weil, 2003)
• "The Web's coup de grace, the heart of a personal publishing revolution to rival desktop publishing" (Johnson, 2005)
• "The most profound revolution in publishing since the printing press" (Sullivan, 2005)
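The RSS notification mechanism mentioned above can be sketched in a few lines: the blog host publishes an XML feed listing recent posts, and a subscriber periodically fetches it and compares item identifiers against those already seen. The feed content and the helper function below are hypothetical, for illustration only; they are not taken from any particular blog service.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document of the kind a blog host regenerates on each update
# (hypothetical feed content, for illustration only).
RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Mayor's Blog</title>
    <item><guid>post-2</guid><title>Budget town hall recap</title></item>
    <item><guid>post-1</guid><title>Welcome to my blog</title></item>
  </channel>
</rss>"""

def new_posts(rss_text, seen_guids):
    """Return (guid, title) pairs for feed items the subscriber has not yet seen."""
    channel = ET.fromstring(rss_text).find("channel")
    fresh = []
    for item in channel.findall("item"):
        guid = item.findtext("guid")
        if guid not in seen_guids:
            fresh.append((guid, item.findtext("title")))
    return fresh

# A reader who has already seen post-1 is alerted only to the newer entry.
print(new_posts(RSS, {"post-1"}))  # -> [('post-2', 'Budget town hall recap')]
```

In practice a feed reader would fetch the XML over HTTP on a schedule and persist the set of seen identifiers between runs; the comparison logic is the same.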
On the other hand, many people regard blogs as a phenomenon of teenagers and college students. When they do think about them, they think of either the folks who blog about their cats, dogs, or hamsters (Butler, 2006) or the "bad" news stories about blogs, such as when a blogger—the person creating and maintaining the blog—named his murderer in his last, dying entry (Wikipedia, 2006). Chris Anderson is the author
of The Long Tail, which speaks to the fragmentation of the marketplace and of mass audiences with the advent of the Internet. He recently commented that blogs are an extension of this overall trend, as they are an excellent way of communicating with microaudiences, with blogs like his own being "exactly what four people want" (quoted in Schechner, 2006, p. W2). Blogging may thus become one of the megatrends of the next decade. Writing in the prestigious journal Foreign Policy, Drezner and Farrell (2004) commented: "Although the blogosphere remains cluttered with the teenage angst of high school students, blogs increasingly serve as a conduit through which ordinary and not-so-ordinary citizens express their views on international relations and influence a policymaker's decision making."
Background

The roots of the phenomenon now referred to as "blogging" can be traced back to Tim Berners-Lee (the originator of the World Wide Web), who created a "What's New" page in 1992 (Dvorak, 2005). In 1994, online personal diaries began to emerge on topics spanning areas such as diets, movies, politics, and sex (Sullivan, 2005). Jorn Barger, editor of one of the original sites, coined the term "Weblog" in 1997, defining it as "a Webpage where a Weblogger 'logs' all the other Webpages she finds interesting." The shorter version, "blog," was coined by Peter Merholz in 1999, when he broke the word "Weblog" into the phrase "we blog." "Blog" then grew in acceptance as a shorter form of the noun (Weblog) and also, for the first time, as a verb, with to blog meaning "to edit one's Weblog or a post to one's Weblog" (Blood, 2004). The key to the rapid rise of blogging is the ability of users to easily create content and instantly update their online Websites. Probably the seminal event in the growth of blogging was the innovation made by Evan Williams. In 1999, Williams, then living in San Francisco, was seeking a way to more easily update his own
Website. He created a simple software solution that eliminated the need to know HTML, allowing Websites to be updated by simply typing text into a text box. This software became the foundation for blogger.com, one of the first blog creation and hosting sites (Ramos, 2004). In fact, today the vast majority of blogs are created and maintained by individuals making use of a variety of free or low-cost software and/or hosting services, such as those listed in Table 1.

Table 1. Major blog software/hosting providers

Service Provider | URL
Blogger | www.blogger.com
LiveJournal | www.livejournal.com
Movable Type | www.sixapart.com/movabletype/
MSN Spaces | www.spaces.msn.com
Radio UserLand | www.radio.userland.com
TypePad | www.typepad.com
WordPress | www.wordpress.org
Xanga | www.xanga.com

Blogging reached its "tipping point" in 2002, when the blogosphere grew from a self-contained community into a wider, global marvel (Manjoo, 2002a). Today, as Drezner and Farrell (2004) observed, the blogosphere has become "a new medium" with "an elaborate network with agenda-setting power." Undoubtedly, however, blogging is fast changing the way many of us interact with the Internet. Mortensen (2004) chronicled that blogging is now following the same development pattern as the Internet itself. Whereas in the early days of the Internet access was difficult and limited to academicians, researchers, government officials, and other elites, the rise of the World Wide Web and the development of browser technologies enabled the Internet to widen its audience and reach, while greatly changing—and perhaps decreasing—the quality of the content and interactions online. With the wide availability of blog creation software tools and blog hosting services, one no longer needs specialized computer knowledge and resources to create content online. Indeed, blogs have been categorized as the rise of easily self-created Web
content. As a report from the Pew Internet & American Life Project observed: "One of the earliest observations about the Internet turns out to be true: anyone can be a publisher on the Web. The online commons is full of virtual chatter and teeming with self-made content. It ranges from the simplest vanities like pictures of 'me and my puppy' to the most profound kinds of political argument—and everything in between" (Lenhart, Fallows & Horrigan, 2004, n.p.). According to Dan Hunter of the University of Pennsylvania, blogging "is not a fad…It's the rise of amateur content, which is replacing the centralized, controlled content done by professionals" (quoted in Knowledge@Wharton, 2005, n.p.). No less an authority than Bill Gates, Microsoft's chairman, has categorized the rise of user-generated content on the Web as nothing less than a "fantastic thing" (Swisher & Mossberg, 2006, p. R6). The statistics on blogging are indeed mind-boggling. According to the blog analyst firm Technorati, a new blog is created every second of every day. Every hour, 50,000 posts are made to blogs, meaning that there are 1.2 million new blog posts each day (see Figure 1) (Sifry, 2006a). All told, the blogosphere—the totality of all blogs—continues to double in size every six months. With approximately 30 million blogs in existence today, the blogosphere is an astonishing 60 times larger than it was a mere three years ago (Sifry, 2006b) (see Figure 2). Approximately half of all blogs are "active," in that they have been updated in the last 90 days, with approximately 13 percent having been updated on a weekly basis (Perrone, 2005). More and more top executives, both in the United States and around the world, are launching their own blogs (a compilation of top blogging U.S. executives is provided in Table 2).
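The quoted growth figures are internally consistent, which a few lines of plain arithmetic can confirm (input values are the ones cited from Technorati; nothing here is additional data):

```python
# Posts: 50,000 per hour scales to the cited daily total.
posts_per_hour = 50_000
posts_per_day = posts_per_hour * 24
print(posts_per_day)  # 1,200,000, matching the "1.2 million" cited

# Size: doubling every six months over three years means six doublings.
doublings = (3 * 12) // 6
growth_factor = 2 ** doublings
print(growth_factor)  # 64, consistent with "approximately 60 times larger"
```

The slight gap between 64 and "60 times" simply reflects that the doubling period is itself an approximation.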
For companies and other large organizations, including government agencies and non-profit groups, blogging promotes a new sense of openness with an organization’s stakeholders—including employees, customers, the public, and the media. Such an environment of openness is especially valuable in an era of intense scrutiny and an age of mistrust of large institutions.
Figure 1. Number of blogs created monthly, January 2004-January 2006 (Source: Blogs Continue to Multiply, eMarketer, 2006)
According to a November 2005 report from eMarketer (2005a), blogging executives are still relatively rare. In a survey of 131 prominent CEOs, researchers found that only 7 percent of them currently had an executive blog and only 8 percent of their firms had a blog at all. This was despite the fact that approximately two-thirds of the surveyed CEOs reported that they were familiar with blogs. As can be seen in Table 3, top executives recognize the power of blogs for their organizations. Yet the most important part of blogging may not be obvious to the blogger himself, as the very exercise of writing the blog raises one's self-awareness. According to a recent survey of bloggers, approximately half of them view their blogging activity as a form of therapy (eMarketer, 2005b). Indeed, writing has been shown to be an extremely powerful activity, and the more one writes, the better one thinks (Manjoo, 2002b). Thus, as an executive uses the blog as a means of self-analysis, the organization's stakeholders can at the same time gain a better awareness of the individual in the office. In the view of Dave Sifry, CEO of Technorati, a blog can thus be looked upon as "the record of the exhaust of a person's attention stream over time. You actually feel like you know the person. You see their style, the words they use, their kids, whatever there is" (quoted in Penenberg, 2005).
Figure 2. Growth of the blogosphere (Source: Blogs Continue to Multiply, eMarketer, 2006)
Table 2. Top corporate executives with blogs

Company | Executive
Advanced Human Technologies | Ross Dawson, CEO
Berkshire Publishing Group | Karen Christensen, CEO
BetterPPC | Joe Agliozzo, CEO
Bluebill Advisors, Inc. | Frank Gilbane, President and CEO
Boeing Commercial Airplanes | Randy Baseler, VP of Marketing
Cheskin | Darrel Rhea, CEO
Dallas Mavericks | Mark Cuban, Owner
EVDB, Inc. | Brian Dear, CEO
Exodus Capital Advisors | Tom O'Neill, CEO
General Motors | Bob Lutz, Vice Chairman
Godaddy.com | Bob Parsons, President
Macmillan Publishers Ltd. | Richard Charkin, CEO
The Norwich Group | Anne Stanton, President and CEO
Pheedo | Bill Flitter, CEO
The Staubach Company | Roger T. Staubach, Chairman of the Board and CEO
Sun Microsystems | Jonathan Schwartz, CEO
Technorati | David Sifry, CEO
UserLand Software | Scott Young, President and CEO
WhatCounts | David Geller, CEO
Whole Foods Market | John Mackey, CEO

Table 3. CEO views on the benefits of blogging (Source: Bloggers in the corner office, eMarketer, 2005a)

Benefit | Percentage (%)
Enables quick communication of new ideas and recent news | 40.5
Provides a more informal venue to communicate | 39.7
Enables immediate feedback from own company | 35.9
Promotes regular readership/traffic to company Web site | 29.8
Provides a forum for innovation and thought leadership | 29.0
Promotes a culture of transparency | 28.2
Provides material to encourage links from other bloggers | 18.3
There are no benefits | 16.0
Other | 3.1
Blogging in the Public Sector

Just as in the private sector, public officials are finding blogging to be an excellent way to communicate both within their organizations and with their wider constituencies. This is exemplified by the rapid growth of blogs created and maintained by public officials in the United States and abroad. As of July 2006, the following blogs had been identified at all levels of government:

• Table 4: Members of Congress
• Table 5: Governors/Lieutenant Governors
• Table 6: Mayors
• Table 7: City Managers
• Table 8: Police and Fire Chiefs
There were also more than 100 blogs written by local representatives, either by city/county council members or state legislators/senators. If you are interested in obtaining a current listing of these, please contact the author directly. Officials are communicating with their constituencies in a variety of ways on their blogs. They are reporting on their activities, expressing
their views on issues, chronicling their contacts and travels, and giving glimpses of their personal lives and interests. There are even limited instances where political officials are posting podcasts and other media for their constituents to listen to and view on their blogs. The U.S. military is also seeing blogs as a way of speeding communication and changing its top-down command-and-control information

Table 4. Active blogs of members of Congress

Congressperson | Date Started
Rep. John Boozman (R-Arkansas) | July 2005
Rep. Mike Conaway (R-Texas) | November 2005
Rep. Katherine Harris (R-Florida) | January 2005
Rep. Dennis Hastert (R-Illinois) | October 2005
Rep. Jack Kingston (R-Georgia) | October 2005
Rep. Mark Kirk (R-Illinois) | January 2005
Rep. John Linder (R-Georgia) | November 2005
Rep. Edward J. Markey (D-Massachusetts) | December 2005
Sen. Barack Obama (D-Illinois) | March 2005
Rep. Frank Pallone, Jr. (D-New Jersey) | January 2005
Rep. Mike Pence (R-Indiana) | March 2004
Rep. George Radanovich (R-California) | February 2006
Rep. Tom Tancredo (R-Colorado) | February 2005

Table 6. Active blogs of mayors

City | Mayor | Date Started
Albuquerque, New Mexico | Marty Chavez | March 2006
Elburn, Illinois | Jim Willey | November 2004
Franklin Township, New Jersey | Brian D. Levine | May 2006
Oakland, California | Jerry Brown | April 2005
Parker, Colorado | David Casiano | April 2006
Portland, Oregon | Tom Potter | April 2006
Reading, Pennsylvania | Tom McMahon | March 2005
Round Lake, Illinois | Bill Gentes | July 2005
Royal Borough of Kingston upon Thames (UK) | Mary Reid | January 2005
St. Louis, Missouri | Francis G. Slay | April 2005
Stockton on Tees (UK) | Suzanne Fletcher | April 2006
Valley Center, Kansas | Mike McNown | March 2006
Washington, DC | Anthony Williams | August 2005
Wisconsin Rapids, Wisconsin | Mary Jo Carson | May 2006
Table 5. Active blogs of governors/lieutenant governors

State | Official | Date Started
Connecticut | Lt. Governor Kevin B. Sullivan (D) | April 2006
Delaware | Governor Ruth Ann Minner (D) | May 2006
Iowa | Governor Tom Vilsack (D) and Lt. Governor Sally Pederson (D) | April 2006
Tennessee | Governor Phil Bredesen (D) | May 2005
Wisconsin | Governor Jim Doyle (D) | January 2005

Table 7. Active blogs of city managers

City | City Manager | Date Started
Chelsea, Michigan | Mike Steklac | May 2005
Davison, Michigan | Peter Auger | June 2005
Eden Prairie, Minnesota | Scott Neal | March 2003
Kent, Ohio | Dave Ruller | April 2006
Leesburg, Florida | Ron Stock | May 2005
Portland, Oregon | Sam Adams | June 2005
Prior Lake, Minnesota | Frank Boyles | February 2006
Santa Paula, California | Wally Bobkiewicz | November 2004
West Des Moines, Iowa | Jeff Pomeranz | February 2006
Table 8. Active blogs of police/fire departments

Location | Agency | Date Started
Boston, Mass. | Boston Police Department | November 2005
Eden Prairie, Minnesota | Eden Prairie, Minnesota Fire Department (Chief George Esbensen) | January 2005
Eden Prairie, Minnesota | Eden Prairie, Minnesota Police Department (Chief Dan Carlson) | June 2004
Los Angeles, California | Los Angeles Fire Department | December 2004
Los Angeles, California | Los Angeles Police Department | May 2006
Mangalore, India | Dakshina Kannada Police Department | November 2005
Miami-Dade County, Florida | Miami-Dade Fire Rescue Department | March 2004
Northfield, Minnesota | Northfield, Minnesota Police Department (Chief Gary G. Smith) | July 2004
Tulsa, Oklahoma | Tulsa, Oklahoma Police Department | May 2006
structure. Military leaders recognize that in today's environment, the ability to share information across locations, commands, and ranks will be key to fighting the War on Terror (Rogin, 2006). From the perspective of General James E. Cartwright, Commander of U.S. Strategic Command, it is critical that such information move quickly, and blogs are an excellent forum for facilitating such wide-ranging and open communications: "The metric is what the person has to contribute, not the person's rank, age, or level of experience. If they have the answer, I want the answer. When I post a question on my blog, I expect the person with the answer to post back. I do not expect the person with the answer to run it through you, your OIC, the branch chief, the exec, the Division Chief and then get the garbled answer back before he or she posts it for me. The Napoleonic Code and Netcentric Collaboration cannot exist in the same space and time. It's YOUR job to make sure I get my answers and then if they get it wrong or they could have got it righter, then you guide them toward a better way... but do not get in their way" (cited in Defense Industry Daily, 2005).
Future Trends

According to Mort Zuckerman (2005), Editor-in-Chief of U.S. News & World Report, "Blogs are transforming the way Americans get information and think about important issues. It's a revolutionary change—and there's no turning back" (n.p.). The trend is clear that the blogosphere will continue to grow, and with that growth, it will become more and more common for highly placed corporate executives and public officials to become bloggers themselves. In fact, over the next few years, public officials who do not blog may face questions about why they do not use this new medium to connect both with their internal organizations and with their wider constituencies. As the blogging trend develops, there will be ample opportunities for research into how this phenomenon affects a wide variety of communications and practices. From an organizational communications and management perspective, there will be opportunities for research into how blogging affects the effectiveness of both private and public sector executives in areas such as communications and knowledge management. There will be specific opportunities for communications researchers to examine the use of blogging both versus and in tandem with other means of communication, with audiences both internal and external to the organization. In the public sector realm, there will be opportunities to examine specifically how public executives use blogs in ways both similar to and in contrast with leaders in for-profit and nonprofit organizations. Further, in the specific case of elected officials, there will be opportunities to study officeholders' use of blogging as a means of staying in touch with their constituencies, looking at both the overall trends of the practice and, through case study approaches, at best practice leaders and innovators.
Blogging

With elected officeholders, there will be opportunities to examine the subjects about which they blog and the frequency of their blogging activities, as well as the interest and feedback generated by such activity. There will also be opportunities to examine elected officials’ use of blogs for campaigns versus official office blogs, which are, of necessity, distinct and different. There will also be opportunities to measure the outcomes of blogging for public officials in terms of measures such as their popularity polling, effectiveness ratings, and, ultimately, their ability to be reelected. Also, in both public and private enterprises, there will be opportunities to map and measure the viral nature of blogging, in order to understand how leaders can influence others in their organizations not only to engage in blogging, but also to make use of tools such as RSS, podcasting, and so forth. Finally, it has been predicted that best practices for corporate, executive, and public official blogs will evolve over time (Payne, 2003). This will be an area of intense interest, as there will be a ready audience for practical answers to the questions of highly placed officials who will want to know how and why they should engage in this new medium. In short, as this is a communications practice and technology in its infancy, there will be vast opportunities for important and interesting research to be carried out over the next decade.
CONCLUSION

As we have seen, the blogosphere is growing at an incredible rate, and blogging is an activity that is increasingly moving from the fringes to the mainstream, with intense interest in both corporate America and public offices as to how to join the conversation. Blogs may well become, as AOL Vice President Bill Schreiner described them, an “oral history” for our times (eMarketer, 2005c). Thus, it will be incumbent upon public officials to stay abreast of the development of the blogging phenomenon and to seek out the ways they can best use blogging as a new communications technology to simply and immediately stay in touch with their various constituencies.
FUTURE RESEARCH DIRECTIONS

The evolution of blogging will be of intense research interest as it takes shape over the next decade, especially in terms of determining what the return on investment (ROI) of blogging is, if there is one at all. Holloway (2007) observed that from a corporate perspective, blogging’s ROI is “less straightforward” than that of traditional marketing techniques, which he added is itself quite “un-straightforward.” However, it has been demonstrated that a well-formatted, frequently updated, and informative blog will:

• Generate buzz and interest
• Encourage repeat visits to the blog and associated Websites
• Increase page rank with the major search engines
With blogging for public sector organizations and leading officials, the ROI calculation can be even more indirect and difficult to quantify. With a for-profit company, blogging can be seen as producing both direct, tangible results (i.e., increased traffic to the corporate Website, RSS and other subscriptions to updates of the site and the associated executive blog[s]) and indirect improvements in corporate image and/or personal reputations, company/brand awareness, and even product sales/service utilization levels. Even with a nonprofit organization, many of the same visibility and awareness measures could be applied, with contributions/fund-raising serving as the proxy for sales results. In the public sector, a bright line can be drawn between blogging metrics for officials’ campaign blogs and the blogs they use as they administer their area/agency. Lenhart and Fox (2006) suggest using what they term “on-blog” and “off-blog” metrics to assess the amount of attention being garnered by a blog. The former category includes the number of comments made on a specific blog post and the postings made on the blog’s tagboard (a general space available for viewers to comment on the entire blog or Website). Off-blog metrics may include all mentions of the blog outside of the blog itself, including:

• News articles/stories regarding the blog in any media (or another blog or Website);
• E-mails about the blog and/or forwarding of blog posts; and
• Conversations about the blog.
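As a concrete illustration of this two-category framework, the sketch below tallies attention counts into on-blog and off-blog totals. All metric names and figures here are invented for illustration; Lenhart and Fox propose the categories, not this code.

```python
# Hypothetical counts for one official's blog over a month (fabricated).
observations = {
    "post_comments": 42,       # on-blog: comments left on posts
    "tagboard_posts": 11,      # on-blog: messages on the blog's tagboard
    "news_mentions": 5,        # off-blog: news stories about the blog
    "email_forwards": 17,      # off-blog: e-mails forwarding blog posts
    "conversations": 8,        # off-blog: reported conversations about the blog
}

ON_BLOG = {"post_comments", "tagboard_posts"}

def summarize_attention(obs):
    """Split raw counts into on-blog and off-blog category totals."""
    on = sum(v for k, v in obs.items() if k in ON_BLOG)
    off = sum(v for k, v in obs.items() if k not in ON_BLOG)
    return {"on_blog": on, "off_blog": off, "total": on + off}

print(summarize_attention(observations))
# → {'on_blog': 53, 'off_blog': 30, 'total': 83}
```

In practice, the hard part is collecting the off-blog counts (news mentions, forwards, conversations), which live outside the blog's own logs; the tally itself is trivial.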
While campaign blogs have an ultimate metric for success (i.e., election), blogs used in public administration have less defined ROI metrics. Certainly, discussions about ROI must always include “soft” aspects such as:

• Did blogging bring personal satisfaction to the official?
• Did blogging enable the official to get insightful comments and timely feedback from constituents?
• Did blogging contribute to the official’s decision to remain in (or retire from) office?
Insights on these soft metrics can really only be garnered through intensive surveys and/or interviews with blogging public officials, with comparisons drawn to their non-blogging brethren in similar positions. Through such research, one could delineate the specific factors that may motivate an official to begin a blog and, conversely, the factors that may lead one to blog less frequently or to discontinue a blog entirely. Finally, we have seen the case of the British town of Ampthill, where the current mayor, Penny Foster, is continuing to blog, as did her direct predecessor, Mark Smith; as such instances of “successor blogging” develop, it will be interesting to investigate both the motivations behind the official continuing the practice and the citizen/public worker expectations, reactions, and so forth. It will also be very interesting going forward to see applied, analytical research conducted on the blogging behaviors of public officials. Such longitudinal research could examine blogging behaviors in quantifiable terms, such as:

• The frequency of their posting activities
• Average number of days between posts
• Topics covered in posts (personal, travel, news and issues, etc.)
• Comments allowed (yes/no) and number of comments
• Number of links to the official’s blog posts
• Amount (if any) of video/audio content posted
• Official versus personal blogging activities
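Several of these measures are simple computations over a blog's posting history. As a sketch, the following computes one of them, the average number of days between posts, from a list of post dates (the dates are invented for illustration):

```python
from datetime import date

# Invented posting history for a hypothetical official's blog.
post_dates = [date(2007, 1, 1), date(2007, 1, 4), date(2007, 1, 10),
              date(2007, 1, 11), date(2007, 1, 31)]

def avg_days_between_posts(dates):
    """Average gap, in days, between consecutive posts."""
    dates = sorted(dates)
    gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
    return sum(gaps) / len(gaps)

print(avg_days_between_posts(post_dates))  # → 7.5
```

The same post list, enriched with topic tags and comment counts, would support the other quantifiable measures listed above.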
It would then be interesting to compare the blogging behaviors of public officials at similar levels of government (i.e., Congresspersons, state legislators, mayors, etc.) and between the different ranks of public officials. One could develop many varied hypotheses that would seek to relate the blogging behaviors of public officials to characteristics such as:

• The rank of the official,
• The size of the constituency population,
• The demographic characteristics of the population, and
• The Internet usage/blogging behaviors of the population.
Finally, over time, it will be interesting to compare both the penetration of blogging in general and the blogging behaviors/perceptions of public officials to those of comparable groups. As this is a global phenomenon, one can envision research projects comparing blogging between different countries (e.g., Members of Parliament in the United Kingdom to Members of Congress in the U.S., or mayors of U.S. cities of a given size to mayors of comparably sized cities in other countries). One can also see projects that would compare blogging penetration and blogging activities between top officials in the public and private sectors. For instance, between Members of Congress and Fortune 500 CEOs, which group has the higher percentage of bloggers, who blogs more often, who uses trackbacks and pings, who responds to readers’ comments, and so forth? As can be seen, many interesting studies on not just blogging, but the use of other Web 2.0 forums and tools, can be conducted in the coming years. What will emerge from the work of university researchers, consulting firms, and independent researchers, such as the Pew Internet & American Life Project, will be snapshots that evolve into a mosaic of how these new tools are being used to foster better communication and new methods of online engagement
between public officials and the governed, who are themselves increasingly living online lives. Such research will provide critical feedback for those not only making personal decisions on whether, and then how, to engage in blogging, but also setting blogging strategies for both their organizations and themselves.
REFERENCES

Blood, R. (2000). Weblogs: A history and perspective. Rebecca’s Pocket, September 7, 2000. Retrieved February 17, 2005, from http://www.rebeccablood.net/essays/Weblog_history.html

Butler, C.K. (2006). Blogging their way through academe. U.S. News & World Report, 140(13), 48-51.

Defense Industry Daily (2005). Four-star blogging at STRATCOM. DefenseIndustryDaily.com, March 28, 2005. Retrieved August 22, 2006, from http://www.defenseindustrydaily.com/2005/03/fourstar-blogging-at-stratcom/index.php

Drezner, D.W., & Farrell, H. (2004). Web of influence. Foreign Policy, November/December 2004. Retrieved July 20, 2005, from http://www.foreignpolicy.com/story/cms.php?story_id=2707

Dvorak, J.C. (2005). Understanding and reading a blog (for newcomers). Dvorak.org. Retrieved November 2, 2005, from http://www.dvorak.org/blog/primer/blogprimer1.htm

eMarketer (2005a). Bloggers in the corner office. eMarketer Report, November 11, 2005. Retrieved November 20, 2005, from http://www.emarketer.com/Article.aspx?1003678

eMarketer (2005b). Blogging for one. eMarketer Report, September 21, 2005. Retrieved September 29, 2005, from http://www.emarketer.com/Article.aspx?1003595

eMarketer (2005c). The business of blogging. eMarketer Report, June 10, 2005. Retrieved September 2, 2005, from http://www.emarketer.com/Report.aspx?blogs_jun05

eMarketer (2006). Blogs continue to multiply. eMarketer Report, April 20, 2006. Retrieved April 24, 2006, from http://www.emarketer.com/Articles/Print.aspx?1003930

Gallo, J. (2004). Weblog journalism: Between infiltration and integration. In L.J. Gurak, S. Antonijevic, L. Johnson, C. Ratliff, & J. Reyman (Eds.), Into the blogosphere: Rhetoric, community, and culture of Weblogs. Retrieved July 30, 2005, from http://blog.lib.umn.edu/blogosphere/Weblog_journalism.html

Holloway, A. (2007). To blog or not to blog? Canadian Business, January 14, 2007. Retrieved January 24, 2007, from http://www.canadianbusiness.com/columnists/andy_holloway/article.jsp?content=20061225_84386_84386

Johnson, B. (2005). Posting for profit: As Weblogs soar in number and influence, their business potential lands many in the money. The Guardian, February 24, 2005. Retrieved September 8, 2005, from http://www.guardian.co.uk/online/Weblogs/story/0,14024,1423493,00.html

Knowledge@Wharton (2005). The future of blogging. CNET News.com, April 5, 2005. Retrieved October 30, 2005, from http://news.com.com/The%20future%20of%20blogging/2030-1069_3-5654288.html

Lenhart, A., & Fox, S. (2006). Bloggers: A portrait of the Internet’s new storytellers. A report from the Pew Internet & American Life Project, July 19, 2006. Retrieved August 10, 2006, from http://www.pewinternet.org/pdfs/PIP%20Bloggers%20Report%20July%2019%202006.pdf

Lenhart, A., Fallows, D., & Horrigan, J. (2004). Content creation online: 44% of U.S. Internet users have contributed their thoughts and their files to the online world. The Pew Internet & American Life Project, February 29, 2004. Retrieved July 30, 2005, from http://www.pewinternet.org/pdfs/PIP_Content_Creation_Report.pdf

Manjoo, F. (2002a). Blogging goes corporate. Wired, May 9, 2002. Retrieved October 3, 2005, from http://www.wired.com/news/culture/0,1284,52380,00.html

Manjoo, F. (2002b). Blah, blah, blah and blog. Wired, February 18, 2002. Retrieved October 3, 2005, from http://www.wired.com/news/print/0,1294,50443,00.html

Mortensen, T.E. (2004). Personal publication and public attention. In L.J. Gurak, S. Antonijevic, L. Johnson, C. Ratliff, & J. Reyman (Eds.), Into the blogosphere: Rhetoric, community, and culture of Weblogs. Retrieved July 30, 2005, from http://blog.lib.umn.edu/blogosphere/personal_publication.html

Payne, B. (2003). Blog for business: Is it right for your company? Marketingprofs.com, October 14, 2003. Retrieved August 30, 2005, from http://www.marketingprofs.com/3/payne2.asp

Penenberg, A. (2005). Technorati: A new public utility. Wired, July 14, 2005. Retrieved October 1, 2005, from http://www.wired.com/news/culture/0,1284,68204,00.html

Perrone, J. (2005). What is a Weblog? The Guardian, May 20, 2005. Retrieved September 3, 2005, from http://www.guardian.co.uk/print/0,3858,4087590-111748,00.html

Rainie, L. (2005). New data on blogs and blogging: About 6% of U.S. adults have created blogs and 16% of them read blogs. The Pew Internet & American Life Project, May 2, 2005. Retrieved July 30, 2005, from http://www.pewinternet.org/PPF/r/104/press_release.asp

Ramos, A. (2004). FAQ: What is a blog? Andreas.com, August 2004. Retrieved September 19, 2005, from http://www.andreas.com/faq-blog.html

Rogin, J. (2006). Stratcom leads DOD cyberdefense efforts. Federal Computer Week, June 19, 2006. Retrieved August 23, 2006, from http://www.fcw.com/article94954-06-19-06-Web

Schechner, S. (2006). Weekend adviser: Books. The Wall Street Journal, July 7, 2006, W2.

Sifry, D. (2006a). State of the blogosphere, April 2006, Part 1: On blogosphere growth. Technorati, April 17, 2006. Retrieved April 20, 2006, from http://www.sifry.com/alerts/archives/000432.html

Sifry, D. (2006b). State of the blogosphere, February 2006, Part 1: On blogosphere growth. Technorati, February 6, 2006. Retrieved February 15, 2006, from http://www.sifry.com/alerts/archives/000419.html

Sullivan, A. (2005). The blogging revolution: Weblogs are to words what Napster was to music. Wired, October 2005. Retrieved October 3, 2005, from http://www.wired.com/wired/archive/10.05/mustread.html?pg=2

Swisher, K., & Mossberg, W. (2006). All things digital: The Wall Street Journal executive conference – Bill Gates on the competition. The Wall Street Journal, June 19, 2006, R4, R6.

Weil, D. (2003). Top 20 definitions of blogging. Marketingprofs.com, July 8, 2003. Retrieved December 9, 2005, from http://www.marketingprofs.com/3/weil9.asp

Weil, D. (2004). Three reasons to publish an e-newsletter AND a blog. Marketingprofs.com, April 13, 2004. Retrieved August 29, 2005, from http://www.marketingprofs.com/4/weil11.asp

Wikipedia (2006). Weblog. Retrieved May 15, 2006, from http://en.wikipedia.org/wiki/Blog

Zuckerman, M.B. (2005). The wild, wild Web. U.S. News & World Report, December 5, 2005. Retrieved December 15, 2005, from http://www.usnews.com/usnews/opinion/articles/051205/5edit.htm
FURTHER READING

Anonymous. (2003). The Blogger Manifesto (or, do Weblogs make the Internet better or worse?). PeriodicDiversions.com, September 16, 2003. Retrieved October 1, 2005, from http://www.periodicdiversions.com/archives/2003/09/16/the_blogger_manifesto_or_do_Weblogs_make_the_internet_better_or_worse.html

Anonymous. (2007). Today’s leaders juggle emails, blogs and integrity. CNN.com, January 8, 2007. Retrieved January 20, 2007, from http://edition.cnn.com/2007/US/01/07/pysk.overview/
Branscombe, M. (2005). The business of blogs: Company blogs have mushroomed, so how do you find out who’s saying what about you? The Guardian, August 25, 2005. Retrieved September 5, 2005, from http://www.guardian.co.uk/online/story/0,3605,1555358,00.html

Budd, A. (2005). Blogging in government. Andybudd.com, September 7, 2005. Retrieved October 1, 2005, from http://www.andybudd.com/archives/2005/09/blogging_in_government/index.php

Caslon Analytics (2006). Caslon analytics profile: Web logs and blogging. Retrieved February 25, 2006, from http://www.caslon.com.au/Weblogprofile1.htm

Chapman-Norton, M. (2005). Why Congress doesn’t blog...and a few members who do. Personal Democracy Forum, March 2, 2005. Retrieved September 20, 2005, from http://www.personaldemocracy.com/node/403/#norton

Clift, S. (2006). Citizens 2.0 (keynote speech delivered to the annual conference of the International Association for Public Participation, Montreal, Canada, November 13, 2006). Retrieved November 16, 2006, from http://www.dowire.org/notes/?p=309

Congressional Management Foundation (2005). How Congress uses blogs. Congress Online Newsletter, Issue 41. Retrieved January 27, 2006, from http://www.cmfWeb.org/CongressOnline070105.asp

Fitzgerald, M. (2006). Welcome to my blog: Blogging isn’t the same as writing a memo or a message in the corporate newsletter. And while it may not be as revolutionary as some make it out to be, there’s still value there. Here’s how to get started and how to do it right. CIO Magazine, June 1, 2006. Retrieved July 4, 2006, from http://www.cio.com/archive/060106/blogging.html?action=print

Garrett, D. (2006). Tech trends: Gartner predicts the blogosphere’s future. Top Tech News, December 15, 2006. Retrieved December 17, 2006, from http://www.toptechnews.com/news/Gartner-Predicts-Blogosphere-s-Future/story.xhtml?story_id=033001289TX9

Glover, K.D. (2006). Beltway blogroll: The rise of blogs. National Journal, January 21, 2006. Retrieved July 14, 2006, from http://beltwayblogroll.nationaljournal.com/archives/2006/01/the_rise_of_blo.php

Godwin, B. (2006). Blogs in government: DRAFT issues paper—A work in progress (last revised July 27, 2006). Retrieved December 1, 2006, from http://www.firstgov.gov/Webcontent/documents/Blogs_in_Government_June_2006.pdf

Graves, C. (2006). The executive blogger’s guide to building a nest of blogs, wikis & RSS. Retrieved May 3, 2006, from http://www.ogilvypr.com/pdf/bloggers-guide.pdf

Grossman, L. (2006). Person of the year: You. Yes, you. You control the information age. Welcome to your world. Time, December 13, 2006. Retrieved December 17, 2006, from http://www.time.com/time/magazine/printout/0,8816,1569514,00.html

Gunther, M. (2006). Corporate blogging: Wal-Mart’s fumbles—Big companies are blogging, for better (Sun CEO’s geeky but candid blog) or worse (Wal-Marting across America), Fortune’s Marc Gunther reports. CNNMoney.com, October 18, 2006. Retrieved November 1, 2006, from http://money.cnn.com/2006/10/17/technology/pluggedin_gunther_blog.fortune/index.htm?postversion=2006101809

Kelly, C.J. (2006). Stratcom (DOD) wants to blog. Computerworld, June 20, 2006. Retrieved August 22, 2006, from http://www.computerworld.com/blogs/node/2805

Madden, M., & Fox, S. (2006). Riding the waves of “Web 2.0”: A report from the Pew Internet & American Life Project. Retrieved October 5, 2006, from http://www.pewinternet.org/pdfs/PIP_Web_2.0.pdf
Maney, K. (2006). Mass collaboration could change way companies operate. USA Today, December 27, 2006. Retrieved January 10, 2007, from http://www.usatoday.com/tech/columnist/kevinmaney/2006-12-26-wikinomics_x.htm

Meattle, J. (2007). Top-20 Websites: Where DO we spend our time online? Compete.com, January 25, 2007. Retrieved February 2, 2007, from http://blog.compete.com/2007/01/25/top-20-Websites-ranked-by-time-spent/

Nail, J. (2006). Influence 2.0: An eBook on the implications of Web 2.0 for business professionals in market-facing functions. Retrieved December 6, 2006, from http://www.cymfony.com/influence2.pdf

National Conference of State Legislatures. (2006). Links to state legislators’ blogs and legislatures with podcasts and RSS feeds. Retrieved October 30, 2006, from http://www.ncsl.org/programs/lis/NALIT/blogs.htm

Panepento, P. (2006). Blogs on the rise: Online forums about charity offer advice and discuss controversies. The Chronicle of Philanthropy, December 7, 2006. Retrieved December 10, 2006, from http://philanthropy.com/free/articles/v19/i05/05003501.htm

Parry, T. (2006). Defending yourself against the blogs. Multichannel Merchant, May 1, 2006. Retrieved May 23, 2006, from http://multichannelmerchant.com/Webchannel/defending_yourself_blogs_05012006/

Reece, B. (2006). E-government literature review. Journal of E-Government, 3(1), 69-110.

Rogin, J. (2006). Cartwright: Warfighters need to share information regardless of rank. Federal Computer Week, August 22, 2006. Retrieved November 1, 2006, from http://www.fcw.com/article95747-08-22-06-Web&printLayout

Rogin, J. (2006). House makes blogging easy. Federal Computer Week, December 15, 2006. Retrieved January 1, 2007, from http://www.fcw.com/article97131-12-15-06-Web&printLayout

Shafer, J. (2006). Who are all these bloggers? And what do they want? Slate, July 19, 2006. Retrieved July 31, 2006, from http://www.slate.com/id/2145896/?nav=ais

Shinder, D. (2006). 10 ways to become a better blogger. TechRepublic, September 29, 2006. Retrieved October 1, 2006, from http://articles.techrepublic.com.com/5100-10881_11-6120257.html?tag=nl.e138

Sifry, D. (2006). State of the blogosphere, October 2006. Technorati, November 6, 2006. Retrieved November 10, 2006, from http://www.sifry.com/alerts/

Solheim, S. (2005). Industry giants press blogs into service. eWeek, October 24, 2005. Retrieved November 11, 2005, from http://www.eweek.com/article2/0,1759,1876269,00.asp

Steele, J. (2006). Sudan expels UN official for blog revealing Darfur military defeats: Report details loss of hundreds of soldiers’ lives, move likely to sour relations further. The Guardian, October 23, 2006. Retrieved October 30, 2006, from http://www.guardian.co.uk/sudan/story/0,,1929019,00.html

Sternstein, A. (2006). House offers standard blog software to members. National Journal, December 15, 2006. Retrieved December 17, 2006, from http://beltwayblogroll.nationaljournal.com/archives/2006/12/house_of_blogs.php

Terdiman, D. (2006). Congress catching on to the value of blogs. News.com, January 26, 2006. Retrieved May 2, 2006, from http://news.com.com/Congress+catching+on+to+the+value+of+blogs/2100-1028_3-6031314.html

Vaas, L. (2005). What blogs, podcasts, feeds mean to bottom line. eWeek, August 25, 2005. Retrieved September 21, 2005, from http://www.eweek.com/article2/0,1895,1852423,00.asp
Vigas, F. (2004). Blog survey: Expectations of privacy and accountability. MIT Media Lab, May 2004. Retrieved July 24, 2005, from http://Web.media.mit.edu/~fviegas/survey/blog/results.htm

Whatis.com (2007). Blog terms: Glossary. Retrieved January 14, 2007, from http://whatis.techtarget.com/definition/0,289893,sid9_gci1186975,00.html

Wyld, D.C. (2006). Presiding in the carnival of ideas: Are innovative college and university presidents following the lead of corporate executives into the blogosphere? Journal of Academic Administration in Higher Education, 2(1 & 2), 29-36.
TERMS AND DEFINITIONS

Blog: An easy-to-use content management tool that enables a person to instantly add content to a Website, via a Web interface, without the need for any special technical or programming skills.

Blogger: A person who originates, maintains, and updates his or her blog.

Blogging: The act of writing on a blog.

Blogosphere: The totality of all blogs; also refers to the blogging community.

Commentariat: The community of people who leave comments on a blog.

Commenter: A person who leaves remarks in the ‘comments’ section of a blog.
Flame: The act of making a hostile post or comment on a blog, usually personal in nature.

Permalink: A link to a specific article in the archives of a blog, which remains valid after the article is no longer listed on the blog’s front page (i.e., after it has been archived).

Podcasting: The practice of recording audio files and making them available online so that users can download them for listening, either immediately or at a later time. Although a podcast can be listened to on any suitable hardware (i.e., a computer or MP3 player), the term ‘podcast’ derives its name from the iPod, the very popular portable MP3 player made by Apple.

RSS (Really Simple Syndication): A form of programming code that allows Website or blog readers to subscribe to sites of interest, in order to automatically have updates fed to them in a newsreader. The content can be anything from small articles to whole blogs or press releases.

Sidebar: A column that runs along one or both sides of the main page of a blog. Sidebars typically contain links, blog archives, contact information, etc.

Thread: A series of remarks posted by people in a public comment section of a blog that follows a conversational and topic-related sequence.

Trackback: A piece of programming that shows a blogger who is linking to his or her blog and delivers snippets of what they said.

Wiki: A type of collaborative online software that allows a site’s content to be updated and edited by its readers. Wikipedia is an example of such a site.
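To make the RSS definition above concrete, the following sketch parses a minimal, made-up RSS 2.0 feed with Python’s standard library and extracts each item’s title and permalink. The feed content and URLs are invented; real newsreaders fetch feeds over HTTP and handle many more elements (dates, descriptions, enclosures for podcasts, and so forth).

```python
import xml.etree.ElementTree as ET

# A minimal, invented RSS 2.0 feed for a hypothetical mayor's office blog.
RSS_SAMPLE = """<rss version="2.0">
  <channel>
    <title>Mayor's Office Blog</title>
    <item>
      <title>Budget town hall this Thursday</title>
      <link>http://example.gov/blog/budget-town-hall</link>
    </item>
    <item>
      <title>New park opening</title>
      <link>http://example.gov/blog/new-park</link>
    </item>
  </channel>
</rss>"""

def latest_headlines(rss_text):
    """Return (title, permalink) pairs for each item in the feed."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in latest_headlines(RSS_SAMPLE):
    print(title, "->", link)
```

A newsreader does essentially this on a schedule, comparing the item list against what the subscriber has already seen.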
Chapter X
E-Government and SMEs Ron Craig Wilfrid Laurier University, Canada
INTRODUCTION

This chapter looks at a particular focus of e-government: support for business in general and SMEs (small and medium-sized enterprises) in particular. While this is only one segment of PIT (public information technology), it is an important one. Figure 1 illustrates this, showing the overlap between the three areas of government, citizens, and businesses (note the diagram is not to scale). The chapter starts with an overview of the importance of SMEs to regional and national economies, showing why governments encourage their start-up, survival, and growth. Following this, an overview of e-G2B (e-government to business) initiatives around the world is provided, with particular attention directed to the SME perspective. The chapter closes with consideration of future trends, conclusions, references, and keyword definitions.

[Figure 1. E-government areas of focus: a diagram showing the overlap among Government, Citizens, and Businesses & Other Organizations, with SMEs as a subset of businesses]
BACKGROUND

Small and medium-sized enterprises (SMEs) are important to national economies, and hence to the world economy. SMEs provide employment, create new jobs, and contribute to a country’s GDP (gross domestic product). Some (a very small but important proportion) develop into the large businesses of the future (Microsoft and Apple are technology examples), while other successful ones are purchased outright by larger firms. The size definition of what constitutes a micro, small, or medium-sized business varies from country to country, and even between government departments and programs within a particular country. Countries with larger economies have higher revenue and employee thresholds for segmenting large firms from small and medium-sized ones, while countries with smaller economies have lower thresholds. A common segmentation approach uses number of employees: micro (or very small) businesses having fewer than five employees, small businesses
having 100 or fewer, and medium-sized firms having 101-500. A variation on this (used by the European Union) sets the upper employee limit at 250. Another segmentation method uses sales, and is based on the type of firm (manufacturing, wholesale, retail, service, and so forth). Often a combination of employee numbers, revenue size, and independent ownership is used in defining an SME. It is important to note that different countries use different definitions, and these definitions can vary significantly (e.g., in many developing countries a firm with 250 employees is a ‘large firm’). In Canada, small firms (those with fewer than 100 employees) make up 97 percent of goods-producing employer businesses and 98 percent of all service-producing employer businesses[1]. For the United States, small firms represent 99.7 percent of all employer firms, employ half of all private sector employees, pay 45 percent of total U.S. private payroll, have generated 60 to 80 percent of net new jobs annually over the last decade, and create more than 50 percent of non-farm private GDP[2]. In Australia, some 95 percent of businesses are SMEs[3]. Within the UK, there are more than four million small and medium-sized businesses (defined as fewer than 50 and 250 employees, respectively). SMEs there employ some 58.5 percent of the private sector workforce (more than 12 million people) and contribute more than 51 percent of the national GDP[4]. Within the entire 25-member European Union (EU) there are some 23 million SMEs, representing 99 percent of all EU companies and employing around 75 million people[5]. The new EU SME definition considers staff, revenue, and balance sheet size; micro enterprises have fewer than 10 employees, small enterprises fewer than 50, and medium-sized fewer than 250[6]. Typical advantages attributed to SMEs include being able to serve small markets, having a quick reaction time to changes in market conditions, innovativeness, and closeness to their customers.
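The EU employee-count thresholds just described can be expressed as a simple classification rule. The sketch below encodes only the headcount criterion; the full EU definition also weighs revenue and balance sheet size, which are omitted here, and other jurisdictions use different cutoffs entirely.

```python
# EU headcount thresholds, per the definition above:
# micro < 10, small < 50, medium-sized < 250 employees.
def classify_by_headcount(employees):
    """Classify a firm by employee count using the EU SME thresholds."""
    if employees < 10:
        return "micro"
    if employees < 50:
        return "small"
    if employees < 250:
        return "medium"
    return "large"

print(classify_by_headcount(7))    # → micro
print(classify_by_headcount(120))  # → medium
```

Note how boundary-sensitive such definitions are: a 250-employee firm is ‘large’ under this rule but, as the text observes, would be classified differently under the U.S. or Canadian segmentations.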
On the negative side, SMEs usually are ‘resource poor’ (in terms of finances, time, and expertise), and generally lag in integrating into the new e-economy. Of course, there is tremendous diversity among SMEs. They cover all industry segments, from manufacturing to service to trade, and from traditional-style firms to modern knowledge-based ones. Profitability varies significantly between types of SMEs and among businesses within the same or different industry segments. In particular, a small business is not simply a scaled-down version of a large business, and the owner/operator typically has much more at personal risk than managers in larger firms. The literature addresses the reluctance of many SMEs to integrate into the new e-economy (Al-Qirim & Corbitt, 2004; Canadian e-Business Initiative, 2004; Fisher & Craig, 2005). Moreau, Raymond, and Vermot-Desroches (2005) point out that with the advent of global competition and new organizational forms based on networks of cooperating firms, the successful assimilation of e-business is bound to take on added importance for many SMEs in terms of survival, growth, and competitiveness. Levenburg (2005) is one of the few empirical researchers to consider size (micro, small, and medium) within the SME segment and its impact on IT adoption. She found increasing e-business technology use as firm size increased. So, just as SMEs tend to lag larger firms in e-business uptake, micro and small firms tend to lag their medium-sized counterparts. Governments are naturally concerned that these SME ‘economic engines’ continue to function well. Despite reticence on the part of some SMEs, the Internet has proven to be a helpful tool for many, and governments have done much to encourage SMEs in this area. Various ICT (information and communications technology) and e-commerce initiatives have been undertaken at national and regional levels in many countries (see Corbitt & Al-Qirim, 2004 for examples).
E-GOVERNMENT AND SMEs

There are many definitions of e-government and its scope (Asia Oceania Electronic Marketplace Association, 2006). The World Bank states that “e-government refers to the use by government agencies of information technologies (such as wide area
networks, the Internet, and mobile computing) that have the ability to transform relations with citizens, businesses, and other arms of government.” In many ways the move to e-government mirrors the earlier move by firms to e-commerce; in other ways it differs (for example, profit maximization or cost minimization is often subservient to enhanced service levels and/or meeting the needs of stakeholders). E-government includes a wide range of services, including dissemination of information, commerce with the private sector, services to individual citizens and businesses, and participatory democracy (Jones et al., 2006). The emphasis on these services varies from country to country, with the scope broadening over time. There have been many types of initiatives directed towards business in general and SMEs in particular. Goals include efficiency, effectiveness, and competitive advantage for firms (leading to job creation, economic stimulus, and a higher standard of living). For example, Canada Business states that its mission is to: (1) improve the start-up, survival, and growth rates of small and medium-sized enterprises; (2) reduce the complexity of dealing with various levels of government; (3) enable clients to make well-informed business decisions in a global economy; and (4) encourage business success through sound business planning, market research, and the use of strategic business information (Government of Canada, 2005). This illustrates the broader context of e-government: the need for strategic direction, policy making, planning, and managing implementation; in other words, the need for e-governance (see Sheridan & Riley, 2006 for a comparison of e-government and e-governance). Because of the cost and complexity of e-government, resources can be expended in a strategic or non-strategic manner. Multi-year planning, with five- and 10-year horizons, is common (e.g., in Canada and Singapore).
Issues of ICT infrastructure availability and laws governing privacy, security, and the tax treatment of Internet commerce all have an impact on the ability and desirability of citizens and businesses to engage in electronic transactions. Al-Qirim and Corbitt (2004) illustrate the significant role a government can play, showing how the government of New Zealand supports SMEs by providing leadership, building the country’s ICT and e-commerce capability, and providing an enabling regulatory environment. Considering SMEs in particular, the OECD (2004) identified several policy directions for governments (see Table 1).

Table 1. Policy directions for national governments
• Business environment
• Network infrastructure
• Trust infrastructure
• Digital products and information services
• Skill upgrading
• Intangible investments and assets
• Information
• Government on-line
• Competition
• Intellectual property

During the past decade many governments focused on developing SME awareness of and capability with e-commerce. With the maturation of the Web and e-business, this laid the foundation for streamlining many business-government interactions. Yet the reluctance of some SMEs to embrace e-commerce (and, by inference, e-government) remains a concern. An example of a successful initiative aimed at SMEs is the Greek Go-Online Web portal (Manouselis, Sampson, & Charchalos, 2004). Part of the European ‘GoDigital’ initiative, it targets more than 50,000 Greek SMEs. While the overall initiative involves more than just the portal, the portal is a cost-effective means of providing information and training to a diverse group of SMEs, and particularly to very small SMEs (vSMEs) or micro businesses. With the growth of e-government, national comparisons are now possible. Various evaluations have been completed by Accenture, the Conference Board of Canada, the United Nations Department of Economic and Social Affairs, the IDC Information Society Index, Brown University, the United Kingdom’s Office of the e-Envoy, and Taylor Nelson Sofres plc (Government of Canada, 2006). Accenture has completed an annual e-government survey for several years, with Canada
and the USA currently leading the 22 countries surveyed. The latest results (Cole & Jupp, 2005) show the reviewed countries to be well advanced, with e-government an integral part of service delivery. Consequently, the pace of advancement is diminishing. Accenture calls for future leadership to be defined in terms of strength in all areas of customer service. CapGemini recently completed a report for the European Commission (CapGemini, 2006) that studied public services for both citizens and businesses (Austria led the 28 countries studied). To date, these comparisons have been quite broad and have not selectively focused on services to SMEs. To the extent that e-government initiatives support business in general, SMEs will also usually benefit. Yet, as stated earlier in this chapter, SMEs have many needs that differ from those of large firms. Countries also look to each other for ‘best practices,’ and joint initiatives are sometimes undertaken. One example is between Canada and the United Kingdom (Industry Canada, 2005). Both countries share a common interest in creating a positive environment for the growth of e-commerce and e-government, based on: (1) building trust for users and consumers, (2) establishing transparent, objective ground rules for the digital marketplace, (3) enhancing the information infrastructure, (4) maximizing the social and economic benefits, and (5) promoting global participation. While the results will benefit both citizens and businesses, SMEs are typically included in both groups.
Portals and Other Initiatives for Business

As discussed earlier, governments are strong supporters of SMEs. They have directly supported SMEs through two major types of portals. First, national and regional governments usually maintain information portals through which SMEs can access information on government departments, programs, legal requirements, and so forth. Increasingly, transactional functionality is available through government Web sites. Secondly, as part of e-business initiatives, governments have sponsored the development of community or regional portals, many of which support local SMEs. Some of these have succeeded while others have not (lessons learnt will be noted later in this chapter). Table 2 provides URLs for a selected sample of general business and SME portals and Web sites. SMEs can use these portals and Web sites to quickly find out about government departments and programs, learn more about common e-commerce initiatives (often supported by case examples), or even avail themselves of online training (as with the Greek Go-Online initiative). The trend is for regional and local governments to have their own Web sites, although these are usually not as well developed as national ones and focus on local matters. This national/local dichotomy provides a useful means of separating the vast amount of information available from various levels of government. In the case of lesser developed countries, the URL is usually for a Web site with basic information for SMEs, as contrasted with a full portal. There are also multi-country government initiatives—some by governments (such as the UK and Canada), and others by government-sponsored organizations (such as the OECD and the World Bank). Table 3 lists selected URLs for such initiatives. The SME Toolkit portal provides country- and language-specific advice for many countries and regions, including those where the emphasis is on micro and small businesses.

FUTURE TRENDS
Progress in e-government has more or less followed the three-stage virtual value chain development process (Rayport & Sviokla, 1995): (1) visibility: improving the ability to track operations more effectively; (2) mirroring: substituting virtual activities for physical ones; and (3) creating new customer relationships: using information to deliver value in new ways. Progressive firms went through this transformation during the 1990s, and both governments and many SMEs have since followed. While more developed countries are working at stages 2 and 3, lesser developed ones are still at the beginning of stage 2. This will change over time.
Table 2. Selected government Web sites

Country           | Level           | SME Focus                                 | General Business
Australia         | National        | www.australia.gov.au/212                  | www.business.gov.au; www.ezgov.com.au
Australia         | State (WA)      | www.sbdc.com.au                           |
Brunei Darussalam | National        | http://www.bsmenet.org.bn/site/index.asp  |
Canada            | Federal         | www.strategis.ic.gc.ca                    | canadabusiness.gc.ca
Canada            | Provincial (BC) | www.smallbusinessbc.ca                    |
France            | National        | http://www2.evariste.org/introeng.html    |
Greece            | National        | www.go-online.gr                          |
Hong Kong         | National        | http://www.success.tid.gov.hk/            |
Korea             | National        | http://www.smba.go.kr/main/index.jsp      |
Singapore         | National        |                                           | www.igov.gov.sg
South Africa      | National        | http://www.dti.gov.za/thedti/seda.htm     | http://www.info.gov.za/faq/business.htm
Switzerland       | National        | http://www.intercoop.ch/sed/main/         |
UK                | National        | www.sbs.gov.uk                            | www.direct.gov.uk
USA               | National        | www.sba.gov                               | www.business.gov
USA               | State (NY)      | http://www.nylovessmallbiz.com/home.asp   |
Table 3. Other selected Web sites

Agency/Group                                                 | SME Focus                                                                           | General Business
European Union—Business Startup Agency                       | http://www.apce.com/index.php?type_page=IL&pays=1&rubrique_id=300000111             |
European Union—European Commission                           | http://ec.europa.eu/youreurope/nav/en/business/shortcuts/information-sme/index.html | http://ec.europa.eu/youreurope/nav/en/business/index.html
Global Information Network for SMEs                          | http://www.gin.sme.ne.jp/                                                           |
OECD—Centre for Entrepreneurship, SMEs and Local Development | http://www.oecd.org/department/0,2688,en_2649_33956792_1_1_1_1_1,00.html            |
World Bank Group—SME Toolkit                                 | http://www.smetoolkit.org/                                                          |
Researchers have proposed stages-of-growth models for e-government. Budge (2003) identifies five stages that governments need to work through to develop comprehensive e-government capability: (1) emerging presence (basic Web site), (2) enhanced presence (emerging portal), (3) interactive, (4) transactional, and (5) seamless (fully networked government with all agencies linked).
Janssen and Van Veenstra (2005) provide a similar five-stage model for local government. While Budge’s work focuses on developing nations, it is based on the experience of developed nations (for example, see Becker (2006) for a brief history of the U.S. government development of an online marketplace). Different countries are currently at different stages of the growth process, and will continue to progress forward.
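As a toy illustration only, such a staged model can be treated as an ordered scale. The Python sketch below is hypothetical: the capability flags (`has_portal`, `has_transactions`, and so on) are indicators invented here for illustration and are not part of Budge's framework.

```python
from enum import IntEnum

class EGovStage(IntEnum):
    """Budge's five e-government stages as an ordered scale (illustrative)."""
    EMERGING = 1       # basic Web site
    ENHANCED = 2       # emerging portal
    INTERACTIVE = 3    # two-way interaction with users
    TRANSACTIONAL = 4  # complete transactions online
    SEAMLESS = 5       # fully networked government, all agencies linked

def classify(has_website, has_portal, is_interactive,
             has_transactions, agencies_linked):
    """Return the highest stage whose (hypothetical) indicator is present,
    or None if no e-government presence exists at all."""
    if agencies_linked:
        return EGovStage.SEAMLESS
    if has_transactions:
        return EGovStage.TRANSACTIONAL
    if is_interactive:
        return EGovStage.INTERACTIVE
    if has_portal:
        return EGovStage.ENHANCED
    if has_website:
        return EGovStage.EMERGING
    return None
```

Because the stages are ordered (an `IntEnum`), comparisons such as `classify(...) >= EGovStage.TRANSACTIONAL` can express claims like "this country has reached at least the transactional stage."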
Those countries with less developed SME-directed e-government initiatives have the opportunity to learn from the experiences of others. This includes both what to copy (best practices, as long as they suit local needs) and what to avoid (pitfalls). One can also expect increased breadth and depth in e-government initiatives, both nationally and worldwide. Nationally, more information and capabilities will be placed on government Web sites. Local and regional Web sites will evolve in their capabilities for supporting local SMEs, while linking to national Web sites for more general matters. Breadth will also come from increasing multi-country cooperation (as seen with the European Union and other initiatives).
CONCLUSION

Most of the lessons learnt so far from studies of e-government and businesses apply to general business initiatives rather than ones restricted to SMEs. However, since SMEs are businesses, successful initiatives that help business in general also help SMEs. Table 4 lists lessons from recent studies in three separate countries (Australia, the UK, and the USA). One study focused on SMEs while the other two were broader. Many governments are now relatively advanced with e-government (as evidenced by the latest Accenture and CapGemini studies). While the trend to continued improvement and enhanced functionality will continue, the pace will slow and projects with significant impact will often carry more risk. Some researchers (Evangelidis, 2004; Margetts, 2005) have drawn attention to the risk management needs of e-government. For governments that are not so advanced, established best practices provide a proven path. Opportunities also exist for partnering with others (as Canada and the UK are doing). With the ongoing pace of technological development and rising expectations from both business and the public (the latest Accenture study shows both citizens and businesses want more), there will be continuing opportunities for enhanced e-government and a need for improved e-governance.

Conclusions can also be drawn concerning SMEs. Just as e-government is maturing, so too is e-commerce. Since SMEs are so numerous, there are firms at all stages of the adoption curve (from innovators and early adopters to late adopters and laggards). SMEs must take responsibility for their decisions (or lack of them) concerning use of e-commerce and e-government. From an e-government perspective, SMEs should continue to expect government initiatives that encourage SME success and growth. And from a research perspective, there are opportunities for researchers to study the impact of e-government on SMEs and their particular needs. While larger and smaller firms have many similarities, their differences are significant.

FUTURE RESEARCH DIRECTIONS
Research into public information technology and e-government is advancing on several fronts, including:

• Evaluation of leading-edge PIT and e-government initiatives (from the provider’s or the recipient’s perspective, or both; best practices, identification of leadership and trends);
• Response to PIT and e-government from government, private organizations (for-profit, not-for-profit), and individuals;
• Potential and/or actual impact of emerging information and communication technologies on PIT and e-government;
• Role of SMEs and larger firms with respect to PIT and e-government;
• International comparisons, showing leaders, laggards, best practices, etc.;
• Evaluation/testing of existing or new research frameworks;
• Case studies of successful and/or unsuccessful PIT and e-government initiatives, deriving lessons learnt and important (or even critical) factors;
• Research on PIT and e-government research (literature surveys, state-of-the-art summaries, identification of research gaps, etc.).
Table 4. Some e-government lessons

Lessons from Australian Regional Economic Marketplaces (Gengatharen & Standing, 2005); factors affecting success:
• SME owner innovativeness;
• REM ownership structure and governance that engender trust and build critical mass by including SMEs in development and management;
• Matching REM focus and structure with regional profile;
• Adopting a staged approach to REM development;
• Ensuring REM benefits are understood by SMEs.

Lessons from UK e-government initiatives (Jones et al., 2006):
• Senior executives must engage with e-government investment decision processes;
• Organizations should consider the appropriateness and validity of evaluation techniques;
• Organizations should consider relating notions of success other than costs (such as user satisfaction);
• Organizations should adequately resource e-government evaluation;
• Organizations should identify and articulate who is responsible for e-government evaluation;
• A senior executive should sponsor e-government evaluation.

Lessons from US state Web sites (Gil-Garcia, 2006); factors affecting state Web site functionality and e-government success: size of IT organization, budget structure, IT training, in-house development, outsourcing, and marketing strategy.
There are opportunities for both normative and descriptive studies. Empirical research based on survey data, with a hypothesized model of anticipated interactions and impacts evaluated by statistical techniques such as structural equation modeling, can confirm or refute such hypotheses and will likely raise additional research questions. The geographical scope of these studies could be regional, national, or international. The continuing pace of technological improvement provides new opportunities to enhance existing PIT applications or even launch new ones. Researchers have the opportunity to separate the reality from the marketing hype, as well as to identify the technologies with the greatest potential. Stakeholder analysis in general, with SMEs as particular stakeholders, can provide various perspectives and identify win-win scenarios. While the SME portion of PIT and e-government research will continue to be relatively small, there are opportunities for significant findings (since, as shown earlier in this chapter, SMEs are an important economic engine in modern economies). What emerging information and communication technologies support e-government initiatives to bolster the SME sector? What can be learned from the latest successful government initiatives, or the recent failures? And are local, regional and national governments effectively learning from each other?
From the SME perspective, researchers can learn from those firms making better/best use of PIT and identify best practices and approaches. Likewise, lessons learnt from ongoing PIT-SME failures can prove valuable in minimizing or preventing future mistakes.
REFERENCES

Al-Qirim, N.A., & Corbitt, B.J. (2004). The government and e-governance: A policy perspective on small businesses in New Zealand. In B.J. Corbitt & N.A. Al-Qirim (Eds.), E-business, e-government & small and medium-size enterprises: Opportunities and challenges. Hershey, PA: Idea Group Publishing.

Asia Oceania Electronic Marketplace Association. (2006). E-government: Definitions and objectives. Retrieved July 12, 2006, from http://www.aoema.org/E-Government/Definitions_and_Objectives.htm

Becker, S.A. (2006). Dot-gov success in the online marketplace. Journal of Electronic Commerce in Organizations, 4(1), 1-2.

Budge, E.C. (2003). Foundations of e-government. In Digital opportunities for development: A sourcebook for access and applications (pp. 331-368). Retrieved July 13, 2006, from http://learnlink.aed.org/Publications/Sourcebook/home.htm

Canadian e-Business Initiative. (2004). Net impact study Canada: Strategies for increasing SME engagement in the e-economy: Final report, September 2004. Retrieved July 12, 2006, from http://www.cebi.ca/Public/Team1/Docs/net_impact_english.pdf

CapGemini. (2006). 2006 online availability of public services: How is Europe progressing? Retrieved July 14, 2006, from http://www.capgemini.com/resources/thought_leadership/2006_online_availability_of_public_services

Cole, M., & Jupp, V. (2005). Leadership in customer service: New expectations, new experiences. The Government Executive Series, Accenture. Retrieved July 10, 2006, from http://www.accenture.com/NR/rdonlyres/F45CE4C8-9330-4450-BB4A-AF4E265C88D4/0/leadership_cust.pdf

Corbitt, B.J., & Al-Qirim, N.A. (Eds.). (2004). E-business, e-government & small and medium-size enterprises: Opportunities and challenges. Hershey, PA: Idea Group Publishing.

Evangelidis, A. (2004). FRAMES—A risk assessment framework for e-services. Electronic Journal of e-Government, 2(1), 21-30. Retrieved July 7, 2006, from http://www.ejeg.com/volume2/volume2-issue-1/v2-i1-art3-evangelidis.pdf

Fisher, J., & Craig, A. (2005). Developing business community portals for SMEs—Issues of design, development and sustainability. Electronic Markets, 15(2), 136-145.

Gengatharen, D.E., & Standing, C. (2005). A framework to assess the factors affecting success or failure of the implementation of government-supported regional e-marketplaces for SMEs. European Journal of Information Systems, 14(4), 417-433.

Gil-Garcia, J.R. (2006). Enacting state web sites: A mixed method study exploring e-government success in multi-organizational settings. In Proceedings of the 39th Hawaii International Conference on System Sciences. Retrieved July 10, 2006, from http://www.ctg.albany.edu/publications/journals/hicss_2006_enacting/hicss_2006_enacting.pdf

Government of Canada. (2005). About us. Retrieved July 10, 2006, from http://www.cbsc.org/servlet/ContentServer?cid=1063391060815&pagename=CBSC_FE/CBSC_WebPage/CBSC_WebPage_Temp&lang=eng&c=CBSC_WebPage

Government of Canada. (2006). Government On-Line. Retrieved July 10, 2006, from http://www.gol-ged.gc.ca/rpt2006/rpt/rpt00_e.asp

Industry Canada. (2005). Canada—United Kingdom joint statement on global electronic commerce and e-government. Retrieved July 11, 2006, from http://strategis.ic.gc.ca/epic/internet/inecic-ceac.nsf/en/gv00387e.html

Janssen, M., & Van Veenstra, A.F. (2005). Stages of growth in e-government: An architectural approach. Electronic Journal of e-Government, 3(4), 193-200.

Jones, S., Irani, Z., Sharif, A., & Themistocleous, M. (2006). E-government evaluation: Reflections on two organizational studies. In Proceedings of the 39th Hawaii International Conference on System Sciences. Retrieved July 11, 2006, from http://csdl2.computer.org/comp/proceedings/hicss/2006/2507/04/250740076a.pdf

Levenburg, N.M. (2005). Does size matter? Small firms’ use of e-business tools in the supply chain. Electronic Markets, 15(2), 94-105.

Manouselis, N., Sampson, D., & Charchalos, M. (2004). Evaluation of the Greek Go-Online web portal for e-business awareness and training of vSMEs: Log files analysis and user satisfaction measurement. In Proceedings of the 9th International Telework Workshop. Retrieved July 12, 2006, from http://www.ted.unipi.gr/Uploads/Files/Publications/En_Pubs/1090483589.pdf

Margetts, H. (2005). Smartening up to risk in electronic government. Information Polity, 10(2), 81-94.

Moreau, E.M., Raymond, L., & Vermot-Desroches, B. (2005). E-business and the development of SMEs: Promising initiatives in the context of local and regional development. In Proceedings of the 50th World Conference of the International Council for Small Business.

Organization for Economic Co-operation and Development (OECD). (2004). Promoting entrepreneurship and innovative SMEs in a global economy: Towards a more responsible and inclusive globalisation. Retrieved July 12, 2006, from https://www.oecd.org/dataoecd/5/24/31919590.pdf

Rayport, J.F., & Sviokla, J.J. (1995). Exploiting the virtual value chain. Harvard Business Review, 73(6), 75-85.

Sheridan, W., & Riley, T.B. (2006). Comparing e-government vs e-governance. Research paper, Commonwealth Centre for e-Governance. Retrieved September 25, 2006, from http://www.electronicgov.net/pubs/research_papers/SheridanRileyComparEgov.shtml
FURTHER READING

Abouzeedan, A. (2006). Information technology (IT) and small and medium-sized enterprises (SMEs) management. Global Business Review, 7(2), 243-257.

Achanga, P., Shehab, E., Roy, R., & Nelder, G. (2006). Critical success factors for lean implementation within SMEs. Journal of Manufacturing Technology Management, 17(4), 460-471.

Affisco, J.F., & Soliman, K.S. (2006). E-government: A strategic operations management framework for service delivery. Business Process Management Journal, 12(1), 13-21.

Andersen, K.V. (2006). E-government: Five key challenges for management. Electronic Journal of e-Government, 4(1), 1-8.

Brewer, G.A., Neubauer, B.J., & Geiselhart, K. (2006). Designing and implementing e-government systems: Critical implications for public administration and democracy. Administration & Society, 38(4), 472-499.

Chen, Y., & Dimitrova, D.V. (2006). Electronic government and online engagement: Citizen interaction with government via web portals. International Journal of Electronic Government Research, 2(1), 54-76.

Chou, T., Hsu, L., Yeh, Y., & Ho, C. (2005). Towards a framework of the performance evaluation of SMEs’ industry portals. Industrial Management & Data Systems, 105(4), 527-544.

Davidrajuh, R. (2004). Planning e-government start-up: A case study on e-Sri Lanka. Electronic Government, an International Journal, 1(1), 92-106.

Davis, C.H., & Sun, E. (2006). Business development capabilities in information technology SMEs in a regional economy: An exploratory study. The Journal of Technology Transfer, 31(1), 145-161.

Del Aquila-Obra, A.R., & Padilla-Melendez, A. (2006). Organizational factors affecting Internet technology adoption. Internet Research, 16(1), 94-110.

Ferneley, E., & Bell, F. (2005). Tinker, tailor: Information systems and strategic development in knowledge-based SMEs. In D. Bartmann, F. Rajola, J. Kallinikos, D. Avison, R. Winter, P. Ein-Dor, et al. (Eds.), Proceedings of the 13th European Conference on Information Systems, Regensburg, Germany.

Fisher, J., Bentley, J., Turner, R., & Craig, A. (2005). SME myths: If we put up a web site customers will come to us—why usability is important. In D.R. Vogel, P. Walden, J. Gricar, & G. Lenart (Eds.), Proceedings of the 18th Bled eConference (pp. 1-12). Bled, Slovenia.

Floyd, D., & McManus, J. (2005). The role of SMEs in improving the competitive position of the European Union. European Business Review, 17(2), 144-150.

Garson, G.D. (2006). Public information technology and e-governance: Managing the virtual state. Boston, MA: Jones & Bartlett Publishers.

Garson, G.D. (2004). The promise of digital government. In A. Pavlichev & G.D. Garson (Eds.), Digital government principles and best practices (pp. 2-15). Hershey, PA: Idea Group Publishing.

Gengatharen, D.E., & Standing, C. (2004). Evaluating the benefits of regional electronic marketplaces: Assessing the quality of the REM success model. Electronic Journal of Information System Evaluation, 7(1), 11-20.

Gronlund, A. (2005). State of the art in e-gov research: Surveying conference publications. International Journal of Electronic Government Research, 1(4), 1-25.

Holden, S.H. (2003). The evolution of information technology management at the federal level: Implications for public administration. In D.G. Garson (Ed.), Public information technology: Policy and management issues (pp. 53-73). Hershey, PA: Idea Group Publishing.

Hong, W., & Zhu, K. (2006). Migrating to internet-based e-commerce: Factors affecting e-commerce adoption and migration at the firm level. Information and Management, 43(2), 204-221.

Jeon, B.N., Han, K.S., & Lee, M.J. (2006). Determining factors for the adoption of e-business: The case of SMEs in Korea. Applied Economics, 38(16), 1905-1916.

Levy, M., & Powell, P. (2003). Exploring SME internet adoption: Towards a contingent model. Electronic Markets, 13(2), 173-181.

Locke, S. (2004). ICT adoption and SME growth in New Zealand. The Journal of American Academy of Business, 4(1/2), 93-102.

Lockett, N., & Brown, D.H. (2006). Aggregation and the role of trusted third parties in SME e-business engagement. International Small Business Journal, 24(4), 379-404.

Molla, A., & Licker, P.S. (2005). eCommerce adoption in developing countries: A model and instrument. Information and Management, 42(6), 877-899.

Norris, D.F. (2006). Electronic government at the American grassroots: The state of the practice. In A.V. Anttiroiko & M. Malkia (Eds.), Encyclopedia of digital government. Hershey, PA: Idea Group Reference.

Norris, D.F., & Moon, M.J. (2005). Advancing e-government at the grassroots: Tortoise or hare? Public Administration Review, 65(1), 64-75.

Parker, D. (2003). SME clusters within community-portal constellations. Management Services, 47(2), 14-18.

Shulman, S.W. (2005). E-rulemaking: Issues in current research and practice. International Journal of Public Administration, 28(7-8), 621-641.

Tatnall, A., Burgess, S., & Singh, M. (2004). Community and regional portals in Australia: A role to play for small businesses? In N. Al-Qirim (Ed.), Electronic commerce in small to medium-sized enterprises: Frameworks, issues and implications (pp. 304-320). Hershey, PA: Idea Group.

Titah, R., & Barki, H. (2006). E-government adoption and acceptance: A literature review. International Journal of Electronic Government Research, 2(3), 23-57.

Tolbert, C.J., & Mossberger, K. (2006). The effects of e-government on trust and confidence in government. Public Administration Review, 66(3), 354-369.
TERMS AND DEFINITIONS

Community Portal: A Web site aimed at a geographical community or special interest group; its goal is to provide a virtual community for users.

E-Governance: Governance (the exercise of political authority and the use of institutional resources to manage society’s problems and affairs) of information and communication technologies and their use.

E-Government: The use of ICTs to improve the efficiency and effectiveness of government operations.

ICTs: Information and communication technologies.
Micro Business: A firm with fewer than five or 10 employees (the size definition depends upon the country); sometimes called a vSME.

Regional Economic Marketplace: A Web site supporting economic activity within a particular region.

SME: Small and medium-sized enterprise; generally considered as fewer than 500 employees (less in smaller countries), and independently owned/operated.

vSME: Very small/medium-sized enterprise; fewer than five employees (sometimes called a micro business).
ENDNOTES

1. Source: Industry Canada
2. Source: Small Business Administration (USA)
3. Source: Australian Government Information Management Office
4. Source: Small Business Services (UK)
5. Source: European Commission / Eurostat
6. Source: European Commission
Chapter XI
EU E-Business and Innovation Policies for SMEs

Anne Wiggins
London School of Economics and Political Science, UK
INTRODUCTION

This chapter provides an overview of the current UK situation of e-business¹ adoption and implementation, and outlines the primary UK government policies and initiatives that have been introduced since 2000 in order to stimulate e-business adoption and implementation by SMEs. Companies of all sizes that have adopted e-business believe that it contributes to improved performance in four main ways:

• The development of new products and services;
• The generation of new customers and business channels;
• A reduction in costs;
• Improved productivity (HM Treasury, 2001c).
Timmers (2000) and Rayport and Jaworski (2001) have analyzed how the Internet has enabled business models that were not possible previously. E-business adoption has been advocated as a way of reducing transaction costs, gaining market share, streamlining business processes, achieving competitive advantage and improving relationships
with business partners (Porter, 2001). E-business can improve the ability of SMEs to compete with larger organizations (Watson, Akelsen, & Leylan, 1998), and enable them to operate on an international scale (OECD, 1998b). Adopting e-business is a cost-effective way for small organizations to market their business, launch new products, improve communications, gather information, and identify potential business partners (Basu, 2001), with few barriers to entry (Chaston & Mangles, 2002). E-business therefore challenges traditional strategic management thinking. However, the extent to which ICT and Internet usage by SMEs features in the literature is still relatively undeveloped (Dixon, Thompson, & McAllister, 2002; JCESB, 1999). The smaller the enterprise, the less statistically likely it is to use technology, let alone operate as an e-business. Cost remains the biggest restraint on new technology uptake (Dixon & Marston, 2002; FSB, 2002a, 2002b). SMEs still tend to use the Internet only to send e-mails, to transfer files or documents, and/or to gather information. Despite the widely touted benefits of broadband, the UK SME sector lags leaders such as Sweden and Germany in terms of connectivity (BAH, 2002). In a 2001 survey, 25 percent of SMEs did not believe that the Internet
Figure 1. Use of IT by UK SMEs (HM Treasury, 2001c, p. 146)
was relevant to their business, and 11 percent felt they lacked the skills or knowledge to go online (SBS, 2002c). Figure 1 illustrates IT usage by UK SMEs. E-business adoption by SMEs seems to be strongly influenced by the innovativeness of their customers, suppliers and competitors. However, while close contact with key customers, suppliers and competitors is an advantage, SMEs often lack an understanding of how to assess and control the risks associated with managing them (Berthon et al., 1999), and harnessing e-business requires a thorough understanding of technology and its capabilities. Unlike previous technological initiatives, e-business adoption is a “disruptive” innovation (Evans & Wurster, 1999). Whereas previous technological innovations sought to minimize dependency on other organizations, enabling businesses to dictate matters such as production and marketing, e-business requires organizations to reassess their boundaries and to focus attention inter-organizationally rather than intra-organizationally (Kalakota & Robinson, 1999). In the past, adaptation to technology tended to be predictable, sequential and measurable, but adaptation to e-business is often unpredictable, non-sequential and immeasurable. In the pre-Internet environment, strategies surrounding the adoption and implementation of ICTs could be planned and controlled, and therefore the (potential) benefits derived from adopting a
given technology could be ascertained in light of a cost-benefit equation resulting in direct expected outcomes. The implementation of e-business, however, requires new knowledge and skills. Mathiyalakan (2003) found that e-business skills and expertise affect e-business implementation, but not Internet adoption, as technical and managerial skills are necessary in order to conduct e-business (Grover et al., 1998). And although it is possible to sub-contract or outsource Web site development and maintenance to a third party (for example), in-house e-business knowledge (if not expertise) is necessary for an organization to achieve full implementation (Bode & Burn, 2001). None of these inconsistencies appear to be reconcilable on the basis of the business “size” factor alone. E-business is rapidly transforming not only traditional business processes but also the very nature of competition, enabling market fragmentation, the ability to treat mass clients as individuals, convergence between products and services, global production networks, and simultaneous co-operation and competition between organizations. As e-business facilitates this radical transformation of both technical and business operations, it is truly innovative. Innovation is an important engine of long-term competitiveness, growth and employment (Tushman & O’Reilly, 2002). The OECD estimates not only that between 1970 and 1995 more than half of the total growth in output of the developed world resulted from innovation, but also that this proportion is increasing as economies become more knowledge-intensive (Irwin, 2000). The cross-functional nature of innovation management requires strong leadership in managing through turbulence (Tushman, 2002). Collective knowledge and activities become embedded within organizations (Hanseth & Braa, 2000), which tend, as a result, to develop stable routines and cultures. Change does not therefore usually take place without the motivation to do so.
Such motivation is usually provided when “assumptions, attitudes, or behavioral routines” no longer work or are out of date (Schein, 1999, pp. 104-105). This “disconfirmation” is usually brought about by a champion in the organization who is
EU E-Business and Innovation Policies for SMEs
spearheading the change, who is responsible for the performance of the organization, and who is seen as a credible information source (Martin & Matlay, 2001). Along with achieving growth and maintaining performance, SME owner/managers are responsible for detecting new possibilities and ideas, for combining them with other resources and ideas, and for giving them appropriate organizational form. Many SMEs cannot afford the financial risk needed to innovate and develop new products, processes and systems in spite of latent capabilities, as they often do not have resources in reserve after meeting day-to-day requirements. Customers, organizational structures, and prejudices dispose them to stick with the familiar. Innovation adoption rates tend to be higher when the immediate expected profitability is high. Otherwise, decisions are constrained by a tendency to continue down a known path, guided by “routine and past practice” (Nelson & Winter, 1977, 1982).
Background

UK SME e-business adoption rates have been considerably lower than governments had hoped for (Dixon et al., 2002; SBS, 2003d, 2004a, 2004b). Although the UK has seen significant growth in business Internet usage (DTI, 2002a, 2002b, 2002c; Quayle, 2002a; Sharma & Wickramasinghe, 2004), this is mainly due to adoption by larger businesses. Uptake of e-business in smaller businesses is relatively low. In 2002, although 77 percent of UK SMEs had an Internet connection, only 10 percent of these (540,000) traded online, and only 3 percent regarded themselves as an e-business (Dixon et al., 2002; European Commission, 2002b). Despite the widely perceived advantages of adopting and implementing e-business, SMEs tend to use ICTs more as tools to support specific organizational tasks such as administration and accounting, relying primarily on standard off-the-shelf solutions, rather than strategically incorporating e-business into their businesses (Southern & Tilley, 2000).
UK E-Business and Information Policies

Relevant technological innovation policy promoting innovation/e-business adoption and implementation depends on an understanding of what “really” drives adoption and implementation, of the external barriers that prevent or delay it, and of how it impacts on competitiveness and employment. Incentive schemes and policies intended to benefit the SME sector need, therefore, to take into account the culture, performance and abilities of SMEs. While an evolution towards more interactive support is visible, there is a high degree of heterogeneity in policy instruments aiming to foster innovation in SMEs (HM Treasury, 2001a). Current UK government-funded projects designed to assist SMEs to adopt e-business include the promotion of online trading and the creation of virtual business networks to promote technology diffusion (Jeffcoate, Chappell & Feindt, 2002; Smith et al., 2002). The UK government advocates the development of a “legal, regulatory and fiscal environment” to facilitate its stated goal of the UK becoming “the best place in the world” to conduct e-business. Believing online business success to be critical to the future competitiveness of UK businesses, the government established its agenda and laid down targets that set out the roles of government and business in improving the UK’s competitiveness. Indeed, the UK is well placed to become the “best place in the world” to conduct e-business, as it has a world-class IT, communications and digital services infrastructure, including a highly sophisticated wireless device market, relatively low telecommunications costs, the highest DTV penetration in the world, and a regulatory structure that facilitates one of the world’s most important and competitive financial marketplaces.
The UK also has the world’s second largest private equity and venture capital sector (behind the US), and a strong educational infrastructure that provides a high number of graduates in disciplines such as math and computer science (BAH, 2002). UK business attitudes on the cost and security of transacting online, however, are comparatively conservative (BAH, 2002).
A considerable number of policies have been formulated and introduced to facilitate the creation of a business environment in which SMEs can innovate and flourish. The UK government has obtained the support of senior and committed political leadership on e-economy topics to supplement a regulatory regime that promotes e-business (BAH, 2002). To this end, the Office of the e-Envoy (OofE) was created. Sitting inside the Cabinet Office, with advisory input into the Treasury’s financing decisions, the OofE sets policy, ensures the co-ordination of e-economy issues across government, and manages selected projects that are of cross-departmental benefit. Another key mechanism for e-business penetration is the Information Age Partnership (IAP), a forum for dialogue between the public and private sectors involving the UK’s IT, communications, electronics and creative content industries. Chaired by the Secretary of State for Trade and Industry, the IAP brings together government ministers, senior officials and the heads of UK companies from across the information, technology, electronics and communications supply chains to advance the UK’s e-business capabilities. The Cabinet Office’s Performance and Innovation Unit report
e-commerce@its.best.uk (Cabinet Office, 2000) prompted the development of a number of subsequent UK government reports, such as UK Online for Business, Technology Means Business, and Supply Chain. The DTI’s UK Online for Business is effectively the re-branded Information Society Initiative, which is in turn part of UK Online, a nationwide e-business initiative that builds on existing support services. The activities of UK Online for Business focus on raising business awareness and understanding and on incorporating e-business into the businesses of SMEs. UK Online for Business is a private-public sector initiative in which public, private and non-profit organizations promote e-business to SMEs. UK Online for Business has an annual budget of £67m and a network of 400 advisers in over 100 contact centers (DTI, 2002). The SME advisers are based in Business Links in England and their equivalents in Scotland, Wales and Northern Ireland: their remit is to raise understanding of e-business opportunities (Business Link, 2002).
Having examined UK e-business and innovation policies, the next section of this chapter concentrates on SME-specific UK policies.
SME-Specific UK Policy Initiatives

The estimated total cost of UK government services to SMEs is approximately £8bn a year, most of which is in lower rate tax relief and CAP Pillar 1 payments (HM Treasury, 2001c). About £2.5bn of this is spent on DTI and Small Business Service (SBS) expenditure (the main core of the UK’s SME-specific policy), providing business support schemes that reach between 8 and 12 percent of SMEs (DTI, 2002b). The total capital budget of the SBS for 2004-05 is £355.7 million (DTI, 2004; SBS, 2004a). Over 30 percent of total UK public sector spending on services for SMEs is spent at the local and regional levels. Indeed, SMEs’ experience of government services is primarily at the local level (HM Treasury, 2001c). Figure 2 illustrates the types of government support offered to SMEs in the UK, while Figure 3 illustrates the UK government’s expenditure on services to SMEs. The range of programmes introduced has included providing funds, training, resources, and targeted advice (BAH, 2002; HM Treasury, 2001c). Such programmes can have a real, if limited, impact on SMEs (O’Neill, 2000). To this end, a number of UK government policies and initiatives have been developed to specifically support and stimulate the SME sector. The Opportunity for All in a World of Change White Paper (DTI, 2000b) announced a £30 million initiative over a three-year period to assist businesses of all sizes to move beyond having a Web site or trading online to transforming themselves through the effective use of e-business (Dixon et al., 2002). The SBS was set up as an executive agency of the DTI in April 2000, reporting to the Secretary of State for Trade and Industry, who sets its overall strategy and direction. The SBS works closely with the Cabinet Office’s Regulatory Impact Unit, the Better Regulation Task Force, and with smallbusiness|Europe, an independent
Figure 2. Type of support: UK expenditure on services to SMEs (£m) (HM Treasury, 2001c, p. 36)
Figure 3. Expenditure on services to SMEs by UK government bodies 2001-2002 (HM Treasury, 2001c, p. 34)
Figure 4. The “virtuous circle” (SBS, 2004b, p. 7)
organization it established in 2001 to ensure that SME interests are taken into account by EU decision-makers (HM Treasury, 2001c). The SBS promotes the principles of Think Small First in the development and implementation of new regulations. The DTI aims to put the SBS at the centre of the “virtuous circle” (illustrated in Figure 4), a process in which all government decisions that affect SMEs are taken after reviewing the evidence on what is needed and what works, and in which lessons learned further refine and develop subsequent services. Established to enable SMEs to have a voice in government, the SBS has developed The Government Action Plan for Small Business (SBS, 2004b), which is jointly endorsed by the Prime Minister, the Chancellor of the Exchequer, and the Secretary of State for Trade and Industry. It has also developed the UK Business Advisor Barometer to map and analyse how government expenditure on services for SMEs translates into customer experiences (SBS, 2004b, 2004c). The SBS also provides:

• A network of Business Links
• Access to finance
• Access to focused advice and support
• Incubators
• Managed workspace

The Social Enterprise Unit (SEnU) joined the SBS in 2004; their shared agenda is to deliver DTI objectives. The SBS also works with the Small Business Council (SBC), which was established in 2000 as an advisory Non-Departmental Public Body providing independent advice to the Chief Executive of the SBS on the needs of SMEs. The SBC reports to the Secretary of State for Trade and Industry on the effects of government policy on small businesses, and produces recommendations in an annual report. Unfortunately, despite a common goal and complementary interests, it seems that the SBS and SBC operate “quite separately” (HM Treasury, 2001c, p. 147). Services which impose duties and obligations have the most directly perceived impact on SMEs. The local authorities (LAs) and Inland Revenue (IR) therefore have the greatest reach, as both impose taxes on all SMEs that require at least one payment a year. They are followed by Jobcentre Plus, the Environment Agency, DEFRA and the Insolvency Agency. SMEs’ experience of government services is primarily at the local level, with
regional development agencies (RDAs), local strategic partnerships (LSPs), local authorities and other agencies sharing key roles (DEFRA, 2000; HM Treasury, 2001c). RDAs are responsible for regional economic strategies. Most RDAs have set up sub-regional partnerships, which include all public sector providers of business support services, including LAs, LSPs and Business Link operators (BLOs); private and voluntary sector bodies, including banks, accountants and enterprise agencies; and public providers of business services.
Future Trends

The key to delivering improved government services lies not only in ascertaining what SMEs want, and designing services in light of this, but also in finding better ways of advertising these services so that they are utilized (HM Treasury, 2001c). Even relatively small improvements to universal services could have a cumulatively large impact (HM Treasury, 2001c). Aggregating the information distributed at regional and local levels could therefore greatly improve the ability of governments to design national policy. Despite the range of services provided by government, it would seem that many SMEs remain unaware of them (HM Treasury, 2001c).
Conclusion

Ways need to be found to begin a meaningful dialogue between SMEs and policy makers if governments, the public sector and policy makers are to meet the needs of SMEs. Future policy makers might then become better informed about what SMEs feel they “need” and about what sort of initiatives might work. The introduction of more appropriate government policies could assist more SMEs to understand the relevance of e-business to their operations, and could also assist them to employ strategies that would enable them to harness the opportunities that e-business offers.
Future Research Directions

More information is needed on what the broad spectrum of SMEs think and feel will affect them. This, the researcher posits, can be achieved through government departments holding a workshop or series of workshops attended by SME owner/managers and policy makers alike. One way to enable such a debate between the relevant actors to take place could be to divert funds from part of an existing initiative. Such a workshop or series of workshops would place SME owner/managers in an arena where they would likely be able to discuss the issues that concern them in a much more detailed and participative manner than has previously been possible. As a result of the information that comes to light from these discussions, policy makers would likely be better informed, and therefore better equipped, to meet the needs of SMEs than they currently are. Such collaboration could provide an unparalleled opportunity to “tease out” and make explicit the various attitudes held by people in the different agencies concerned. Such attitudes, though strongly held, would likely be verbalized, discussed and questioned for the first time in such an arena (Galliers et al., 1981). At the very least, the dialogue between SMEs and policy makers is likely to be more substantial as a result of the workshop(s) taking place. One option could be for the organizing government department and/or agency to invite (say) 1 in 100 SME owner/managers to contribute to a relevant discussion, and to incentivise them to attend by offering not only a payment for the day but also the prospect of winning a prize (of, say, £50,000). Incentivising SME owner/managers to attend such workshops would likely ensure a higher attendance rate than the response rates typically achieved by questionnaires. It would also likely provide an arena for a more in-depth and meaningful dialogue than the two sets of actors currently seem able to engage in.
Policy makers would be able to build on the findings of the workshop(s) and, as a consequence, more successfully generate and promote policies for SMEs.
A series of workshops of this kind could be set up relatively quickly and could proceed in tandem with the current policies and initiatives in place. However, the orchestration of these workshops would entail a considerable amount of subsequent work for the agencies operating the workshops, and also for those responsible for compiling and analyzing the details. Potential outcomes that may arise from these workshops could include:
• A dialogue begun with a view to improving co-ordination, co-operation and communication between SMEs and policy makers.
• A clarification of what policy makers can actually offer, on a practical, tangible and measurable level, to SMEs throughout all tiers of government, thereby enabling resources to be allocated to the areas where they would have maximum impact.
• The generation of co-ordinated policies regarding the adoption and implementation of e-business by SMEs.
• The development of more cohesive, accessible and relevant policies for SMEs (Galliers et al., 1981).

References

Basu, S. (2001). Electronic management of relationship: The trial and tribulation of CRM. In J.R. Brooks, Jr. (Ed.), Marketing advances in pedagogy, process, and philosophy (pp. 293-296). Houston, TX: Society for Marketing Advances.

Berthon, P., Hulbert, J.M., & Pitt, L.F. (1999). To serve or create? Strategic orientations toward customers and innovation. California Management Review.

Bode, S., & Burn, J. (2001). Consultancy engagement and e-business development: A case analysis of Australian SMEs. In S. Smithson, J. Gricar, M. Podlogar, & S. Avgerinou (Eds.), Proceedings of the 9th European Conference on Information Systems (pp. 568-578). Bled, Slovenia.

Booz|Allen|Hamilton. (2002). International e-economy benchmarking: The world’s most effective policies for the e-economy. London: IAP.

Business Link (2002). London e-business survey. London: Business Link.

Chaston, I., & Mangles, T. (2002). E-commerce in small UK manufacturing firms: A pilot study on competencies. Journal of Marketing Management, 9(3-4), 341-360.

Dixon, T., & Marston, A. (2002). London’s information economy: The impact of e-business on the city office market. College of Estate Management. Retrieved August 27, 2003, from www.cem.ac.uk

Dixon, T., Thompson, B., & McAllister, P. (2002). The value of ICT for SMEs in the UK: A critical literature review. A report for the Small Business Service research programme. Small Business Service, College of Estate Management. Retrieved March 26, 2003, from http://www.sbs.gov.uk/content/research/

DTI—Department of Trade and Industry. (2004). A government action plan for small business. Making the UK the best place in the world to start and grow a business: The evidence base. Retrieved March 12, 2003, from www.sbs.gov.uk/content/7-strategies/sbs_evidence.pdf

DTI. (2002a). UK competitiveness indicators. London: DTI.

DTI. (2002b). Cross cutting review of government services. Norwich: The Stationery Office.

DTI. (2002c). Business in the information age: Business benchmarking study 2002. London: Department of Trade and Industry.

European Commission. (2002b). European conference on benchmarking national and regional policies in support of e-business for SMEs. Retrieved December 3, 2003, from http://europa.eu.int/comm/enterprise/ict/policy/benchmarking.htm
Evans, P.B., & Wurster, T.S. (2000). Talking strategically about e-business. Perspectives. Retrieved August 12, 2002, from http://www.bcg.com
Kalakota, D.R., & Robinson, M. (1999). E-business: Roadmap for success. Reading, MA: Addison-Wesley.
Federation of Small Businesses. (2002a). Small firms reluctant to make full use of the Web. Retrieved October 12, 2002, from http://www.fsb.org.uk/news.asp?REC=P/2002/18
Martin, L.M., & Matlay, H. (2001). ‘Blanket’ approaches to promoting ICT in small firms: Some lessons from the DTI ladder adoption model in the UK. Internet Research: Electronic Networking Applications and Policy, 11(5), 399-410.
Federation of Small Businesses (2002b). Lifting the barriers to growth in UK small businesses. FSB. Retrieved August 27, 2003, from http://www.fsb.org.uk/policy/lbg2002/default.asp

Galliers, R.D., Whittaker, B.D., Clegg, J.D., & Mouthon, M. (1981). Improving employment prospects for mentally handicapped people in Camden: A systems study. Journal of Applied Systems Analysis, 8, 101-112.

HM Treasury, Inland Revenue. (2001a). Designs for innovation: A consultative note. Retrieved June 1, 2002, from http://www.hm-treasury.gov.uk/mediastore/otherfiles/ACF525.pdf

HM Treasury, Inland Revenue. (2001b). Increasing innovation. Retrieved June 1, 2002, from http://www.hm-treasury.gov.uk

HM Treasury (2001c). The cross cutting review of government services for small business. Retrieved October 7, 2004, from http://www.hm-treasury.gov.uk/Spending_Review/spend_ccr/spend_ccr_business.cfm

Irwin, D. (2000). Enhancing the competitiveness of small businesses in the global economy: Enhancing the competitiveness of small businesses through innovation. OECD. Retrieved August 3, 2002, from http://www.sbs.gov.uk/content/pdf/oecd.pdf

Jeffcoate, J., Chappell, C., & Feindt, S. (2002). Best practice in SME adoption of e-commerce. Benchmarking: An International Journal, 9(2), 122-132.

Joint Committee on Enterprise and Small Business (JCESB) (1999). The impact of e-commerce on small and medium sized enterprises. Dublin, Ireland: Stationery Office.
Mathiyalakan, S. (2003). A longitudinal examination of Web technology adoption and implementation in small and micro sized businesses. Journal of E-Business, 3(2), 1-17.

Nelson, R.R., & Winter, S.G. (1982). An evolutionary theory of economic change. Cambridge, MA: Belknap Press.

OECD. (1998, October 7-9). SMEs and electronic commerce. Ministerial Conference on Electronic Commerce, Ottawa, Canada.

Porter, M. (2001). Strategy and the Internet. Harvard Business Review, 79(3), 62-79.

Rayport, J.F., & Jaworski, B.J. (2001). Introduction to e-commerce. Boston, MA: McGraw-Hill/Irwin.

Schein, E.H. (1999). The role of the CEO in the management of change: The case of information technology. In R.D. Galliers, D.E. Leidner, & B.S.H. Baker (Eds.), Strategic information management: Challenges and strategies in managing information systems (2nd ed., pp. 102-122). Oxford: Butterworth-Heinemann.

Sharma, S.K., & Wickramasinghe, N. (2004). Obstacles to SMEs for e-adoption in the Asia Pacific region. In B.J. Corbitt & N.A.Y. Al-Qirim (Eds.), E-business, e-government & small and medium-sized enterprises: Opportunities and challenges (pp. 112-133). Hershey, PA: Idea Group Publishing.

Small Business Service (2004a). A government action plan for small business. Retrieved October 4, 2004, from www.sbs.gov.uk/action
Small Business Service (2004b). A government action plan for small business: Making the UK the best place in the world to start and grow a business. The evidence base. Retrieved October 4, 2004, from www.sbs.gov.uk

Small Business Service (2002c). Omnibus survey of small business opinions 2002. Sheffield: Small Business Service.

Smith, A.J., Boocock, G., Loan-Clarke, J., & Whittaker, J. (2002). IIP and SMEs: Awareness, benefits and barriers. Personnel Review, 31(1), 62-85.

Southern, A., & Tilley, F. (2000). Small firms and ICTs: Toward a typology of ICTs usage. New Technology, Work and Employment, 15(2), 138-154.

Timmers, P. (2000). Global and local in electronic commerce. In Proceedings of the First International Conference on Electronic Commerce and Web Technologies (pp. 191-205).

Tushman, M.L. (2002). Managing strategic innovation and change. Oxford: Oxford University Press.

Tushman, M.L., & O’Reilly, C.A. (2002). Winning through innovation: A practical guide to leading organizational change and renewal. Cambridge, MA: Harvard Business School Press.

Watson, R.T., Akselsen, S., & Pitt, L.F. (1998). Attractors: Building mountains in the flat landscape of the World Wide Web. California Management Review, 40(2), 37-56.
Further Reading

Alonso Mendo, F., & Fitzgerald, G. (2004). An analysis of stages of growth models in SMEs’ e-business progression. In Proceedings of the 1st European and Mediterranean Conference on Information Systems, Tunis, Tunisia.

Azzone, G., Bianchi, R., & Noci, G. (2001). Corporate Web sites: The drivers of their different configurations. Electronic Markets, 11(2), 126-139.
Barry, H., & Milner, B. (2002). SMEs and electronic commerce: A departure from the traditional prioritisation of training. Journal of European Industrial Training, 25(7), 316-326.

Brown, D.H., & Lockett, N. (2004). Potential of critical e-applications for engaging SMEs in e-business: A provider perspective. European Journal of Information Systems, 13(1), 21-34.

Chan, C., & Swatman, P.M.C. (2004). B2B e-commerce stages of growth: The strategic imperatives. In Proceedings of the 37th Hawaii International Conference on System Sciences.

Chapman, P., James-Moore, M., Szczygiel, M., & Thompson, D. (2000). Building internet capabilities in SMEs. Logistics Information Management, 13(6), 353-360.

Chaston, I., Badger, B., Mangles, T., & Sadler-Smith, E. (2001). The internet and e-commerce: An opportunity to examine organisational learning in progress in small manufacturing firms. International Small Business Journal, 19(2), 13-30.

Currie, W. (2000). The global information society. Chichester: J. Wiley and Sons.

Damsgaard, J., & Scheepers, R. (2000). Managing the crises in intranet implementation: A stage model. Information Systems Journal, 10(2), 131-150.

Daniel, E., Wilson, H., & Myers, A. (2002). Adoption of e-commerce by SMEs in the UK: Towards a stage model. International Small Business Journal, 20(3), 253-270.

Dixon, T., Thompson, B., & McAllister, P. (2002). The value of ICT for SMEs in the UK: A critical literature review. Report for Small Business Service Research Programme.

Drew, S. (2003). Strategic uses of e-commerce by SMEs in the East of England. European Management Journal, 21(1), 79-88.

European Commission. (2007). Green paper. The European Research Area: New perspectives. http://ec.europa.eu/research/era/index_en.html
DTI. (2006). Business in the information age: International benchmarking study 2006. London: DTI.

European Commission. (2002). eCATT and Empirica: Benchmarking progress on new ways of working and new forms of business across Europe – ECaTT final report 2000. http://www.ecatt.com

European Commission. (2002). eEurope Go Digital: Benchmarking national and regional e-business policies for SMEs – Final report of the E-business Policy Group.

European Commission. (2003). Adapting e-business policies in a changing environment: The lessons of the Go Digital initiative and the challenges ahead.

European Commission. (2005). eGovernment beyond 2005: CoBrA recommendations. www.epma.cz/Docs/eGovernment_beyond_2005.pdf

European Commission. (2005). Working paper. eGovernment beyond 2005: An overview of policy issues. Retrieved from ec.europa.eu/information_society/activities/egovernment_research/doc/working_paper_beyond_2005.pdf

Fallon, M., & Moran, P. (2000, April 10-11). Information communication technology (ICT) and manufacturing SMEs. The 2000 Small Business and Enterprise Development Conference, Manchester University, Manchester.

Kai-Uwe Brock, J. (2000). Information and technology in the small firm. In S. Carter & D. Jones-Evans (Eds.), Enterprise and the small business (pp. 384-408). Financial Times/Prentice Hall.

Keindl, B. (2000). Competitive dynamics and new business models for SMEs in the virtual marketplace. Journal of Developmental Entrepreneurship, 5(1), 73-85.

Kowtha, N., & Choon, T. (2001). Determinants of Web site development: A study of electronic commerce. Information and Management, 39, 227-242.

Layne, K., & Lee, J. (2001). Developing fully functional e-government: A four stage model. Government Information Quarterly, 18, 122-136.

Levenburg, N.M., Schwarz, T.V., & Dandridge, T.C. (2002). Understanding adoption of internet technologies. The United States Association for Small Business and Entrepreneurship (USASBE) National Conference, Reno, Nevada, USA.

Levy, M., & Powell, P. (2003). Exploring SME internet adoption: Towards a contingent model. Electronic Markets, 13(2), 173-181.

Levy, M., & Powell, P. (2002, June 17-19). SME internet adoption: Towards a transporter model. In eReality: Constructing the eEconomy. Fifteenth Bled Electronic Commerce Conference, Bled, Slovenia.

Levy, M., Powell, P., & Yetton, P. (2002). Critical issues for growing IS capabilities in SMEs. Small Business Economics, 19(4), 341-354.

Levy, M., Powell, P., & Yetton, P. (2001). SMEs: Aligning IS and the strategic context. Journal of Information Technology, 16, 133-144.

Martin, L.M., & Matlay, H. (2001). ‘Blanket’ approaches to promoting ICT in small firms: Some lessons from the DTI ladder adoption model in the UK. Internet Research: Electronic Networking Applications and Policy, 11(5), 399-410.

Mehrtens, J., Cragg, P.B., & Mills, A.M. (2001). A model of internet adoption by SMEs. Information and Management, 39, 165-176.

Miller, N.L., & Besser, T.L. (2000). The importance of community values in small business strategy formation: Evidence from rural Iowa. Journal of Small Business Management, 38(1), 68-85.

Poon, S. (2000). Business environment and internet commerce benefits: A small business perspective. European Journal of Information Systems, 9, 72-81.
Poon, S., & Swatman, P. (1999). An exploratory study of SME Internet commerce issues. Information & Management, 35, 9-18.

Quayle, M. (2004). E-business in a turbulent world: Usage in European small and medium size enterprises. International Journal of Electronic Business, 2(4), 41-52.

Quix, C., Schoop, M., & Jeusfeld, M. (2002). Business data management for business-to-business electronic commerce. SIGMOD Record, 31(1), 49-54.

Quix, C., & Schoop, M. (2000, September 4-6). Towards effective negotiation support in electronic marketplaces. In Electronic Commerce and Web Technologies, First International Conference, EC-Web 2000 (pp. 442-451). London, UK.

Rao, S.S., Metts, G., & Mora Monge, C.A. (2003). Electronic commerce development in small and medium sized enterprises: A stage model and its implications. Business Process Management Journal, 9(1), 11-32.

Raymond, L. (2001). Determinants of Web site implementation in small business. Internet Research: Electronic Networking Applications and Policy, 11(5), 411-422.

Rusten, G., & Cornford, A. (2003). Web site strategies and performance in SMEs: Performance indicators and regional challenges. The Regional Studies Association International Conference 2003, Pisa, Italy.

Schoop, M., & Quix, C. (2001). DOC.COM: A framework for effective negotiation support in electronic marketplaces. Computer Networks, 37(2), 153-170.

Sillence, J., MacDonald, S., Lefang, B., & Frost, B. (1998). Email adoption, diffusion, use and impact within small firms. International Journal of Information Management, 18(4), 231-242.

Southern, A., & Tilley, F. (2000). Small firms and information and communication technologies (ICTs): Toward a typology of ICTs usage. New Technology, Work, and Employment, 15(2), 138-154.

Tagliavini, M., Ravarini, A., & Antonelli, A. (2001). An evaluation model for electronic commerce activities within SMEs. Information Technology and Management, 2(2), 211-230.

Taylor, M., & Murphy, A. (2004). SMEs and e-business. Journal of Small Business and Enterprise Development, 11(3), 280-289.

Teo, T.S.H., & Pian, Y.A. (2003). A contingency perspective on Internet adoption and competitive advantage. European Journal of Information Systems, 12(2), 78-92.

Tetteh, E., & Burn, J.J. (2001). Global strategies for SME-business: Applying the SMALL framework. Logistics Information Management, 14(1), 171-180.

Wade, M., Johnston, D., & McClean, R. (2003). Adoption of new economy practices by SMEs in Eastern Europe. European Management Journal, 21(2), 133-145.
Terms and Definitions

Business-To-Business (B2B): Commercial activity between two or more organizations, or the use of ICTs to facilitate payment management, inventory management and distribution management. The B2B model has become the most significant of the e-business models in terms of growth, volume, and financial impact (Addo et al., 2003). The B2B sector provides the most opportunity for exploitation, especially in the supply chain, where B2B activity is concentrated.

Business-To-Consumer (B2C): Commercial activity between businesses and consumers, or the use of e-business to enable customer information interaction, personal finance management, the purchasing of products, and the dissemination of after-sales information (Rayport & Jaworski, 2001).

Consumer-To-Business (C2B): The banding together of consumers who present themselves as a buyer group to businesses (Rayport & Jaworski, 2001).
EU E-Business and Innovation Policies for SMEs
Consumer-To-Consumer (C2C): Electronic exchanges between and among consumers (e.g., auctions mediated by third parties such as eBay.com).

E-Business: Those business activities related to the business operations of an organization over the Internet. E-business encompasses all commercial activities. Under the definition of e-business agreed by the Organization for Economic Cooperation and Development (OECD) and the EU, it is the method by which the order is placed that determines whether a transaction is e-business, not the payment or delivery channels. Shifting business activities from paper-based, local, face-to-face, and manual processes to electronic, dispersed, mediated, and automatic processes is the essence of e-business, whether in dealing with customers or suppliers (Wilkins et al., 1999). E-business activities include, but are not limited to:

• Web site
• E-mail order confirmation
• Intranet
• E-mail
• E-procurement
• Web catalogues
• Staff remote online ERP via web
• Trading exchanges
• Internet auctions
• B2C
• Online ordering on ERP
• Viewing orders on ERP online
E-Commerce: Those business activities related to the actual buying and selling of goods and services among organizations over the Internet. E-commerce has been defined as "the buying and selling of information, products and services via computer networks" (Kalakota & Whinston, 1996, p. 1), or as the application of ICTs with the aim of increasing the effectiveness of the business relationships between trading partners (Kalakota & Whinston, 1997). E-commerce systems can be divided according to whether they are internally or externally integrated (Kalakota & Whinston, 1996). Internal integration refers to processes and
systems within an organization, while external integration refers to the integration of processes and systems with other organizational partners. Types of external integration have previously included EDI systems, but the advent of the Internet has created new systems for B2B and B2C e-business. E-Distribution: A critical component of the supply chain for most organizations. Electronic Data Interchange (EDI): The computer-to-computer exchange of data (e.g., invoices) in a structured format that enables the automatic processing of data without manual intervention. EDI is based on a trust relationship between an organization and its partners or between an organization and a value-added network (VAN) provider that handles the access, security, and other issues related to EDI transmissions. Certain inherent characteristics of EDI discourage unauthorized access into the network by non-EDI partners, including the stringent data formatting requirements for specific industries and the secure, dedicated nature of EDI transmissions between specific trusted partners. E-Government: The use by government agencies of ITs that have the ability to transform relations with citizens, businesses and other arms of government. These technologies can serve a variety of different ends: better delivery of government services to citizens, improved interactions with business and industry, citizen empowerment through access to information, or more efficient government management. The resulting benefits can be less corruption, increased transparency, greater convenience, revenue growth, and/or cost reductions (World Bank, 2002). E-Marketplace: The arena within which businesses conduct Internet-mediated B2B trade. Within this arena, suppliers can advertise and market their products and services. Purchasing in the E-Marketplace (E-Procurement): The e-procurement process involves the following steps:
• Search: such as the ability to search online for appropriate suppliers, contacts, brochures, etc.
• Qualify: such as online research of company background, credit history, comparisons with competitors, etc.
• Negotiate: such as negotiating the price, credit terms, quality, timing, etc.
• Purchase: such as ordering the product, initiating the purchase order, and entering the information into the system.
• Invoicing: such as receiving the invoice, matching it against the purchase order, and entering the information into the financial and production systems.
• Shipping: such as the shipping and delivery of goods, and entering the information into the shipper's tracking system.
• Remittance Payment: such as receiving goods, verifying and correcting invoices, sending payment(s), entering the record into the system, and providing online after-sales support (Laudon & Traver, 2001).
Information and Communication Technologies (ICTs)

Intra-Business: The use of ICTs to share information internally within an organization.

Information Systems (IS): An information system has been described as "a system to collect, process, store, transmit, and display information" (Avison & Wood-Harper, 1990, p. 3).

Small- and Medium-Sized Enterprises (SMEs): In February 1996, the EU adopted a single definition of SMEs to be applied across all EU programs and proposals dating from December 31, 1997. The Communication recommended that member states, the European Investment Bank, and the European Investment Fund adopt the definitions. However, the Communication permits the use of lower threshold figures, if desired.

The EU recommended definition for a "micro" business is that it must have a maximum of nine employees. A "small" business must satisfy the following criteria:

• A maximum number of 49 employees;
• A maximum annual turnover of seven million euros;
• A maximum annual balance sheet total of five million euros; and
• A maximum of 25 percent owned by one, or jointly by several, enterprise(s) not satisfying the same criteria.

The EU recommendation states that a "medium-sized" business must satisfy the following criteria:

• A maximum number of 249 employees;
• A maximum annual turnover of 40 million euros;
• A maximum annual balance sheet total of 27 million euros; and
• A maximum of 25 percent owned by one, or jointly by several, enterprise(s) not satisfying the same criteria.
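Taken literally, the thresholds above yield a simple classification routine. The function below is a hedged sketch: it applies the quoted 1996 figures as conjunctive limits and deliberately omits the 25-percent independence criterion, which requires shareholding data not modelled here:

```python
def classify_enterprise(employees: int,
                        turnover_m_eur: float,
                        balance_sheet_m_eur: float) -> str:
    """Classify a firm under the 1996 EU SME thresholds quoted above.

    Turnover and balance-sheet figures are in millions of euros.
    The 25% independence criterion is not checked (it needs
    ownership data), so this is an illustrative approximation.
    """
    if employees <= 9:
        return "micro"
    if employees <= 49 and turnover_m_eur <= 7 and balance_sheet_m_eur <= 5:
        return "small"
    if employees <= 249 and turnover_m_eur <= 40 and balance_sheet_m_eur <= 27:
        return "medium"
    return "large"
```

For example, a firm with 30 employees, 5 million euros turnover, and a 4-million-euro balance sheet falls within every "small" ceiling and is classified accordingly.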
ENDNOTE

1. E-business is defined for the purposes of this chapter as those activities related to the business operations of an organization conducted over networks or the Internet, and thereby encompassing all e-commerce activities (those business activities related to the trading of goods and services over networks or the Internet).
Chapter XII
Exploitation of Public Sector Information in Europe Ioannis P. Chochliouros Hellenic Telecommunications Organization S.A. (OTE), Greece Anastasia S. Spiliopoulou Hellenic Telecommunications Organization S.A. (OTE), Greece Stergios P. Chochliouros Independent Consultant, Greece
INTRODUCTION

The gradual "penetration" of an innovative, digitally oriented information society, in the scope of the actual convergence among telecommunications, broadcasting, and information technology, creates significant opportunities for the access and exploitation of public sector information (PSI), in the context of a fully competitive and liberalized European electronic communications market. There are now significant opportunities for improving mutual communication between the public sector and private companies, and thus for exploiting new possibilities to the benefit of the broader European market(s). However, the absence of an appropriate legal framework governing the conditions and terms for the commercial use of PSI constitutes a severe drawback for any serious attempt at evolution, and for the effective development of a European e-communications market.
Recent European regulatory-oriented policies have established and supported suitable measures and provisions to ensure access to PSI for all interested parties and its "free" circulation among member states. In this contribution, we examine current European harmonization measures aimed at creating transparency and legal security for all market players involved in the wider content market, thus contributing to growth.
BACKGROUND: PERSPECTIVES AND POTENTIAL FROM DISPOSAL OF PSI

The evolution towards an information society and a digital knowledge-based economy can influence, very drastically, the life of every citizen by enabling them to gain new ways of accessing
and acquiring knowledge (Commission of the European Communities, 2000). In such a challenging environment, digital content can perform an extremely important and guiding role in all relevant evolutionary processes. In addition, the continuous penetration of Internet-based (broadband) technologies (Chochliouros & Spiliopoulou-Chochliourou, 2005a) provides new opportunities for growth and thus creates innovative perspectives for all actors involved in the related market initiatives, policies, and other measures. Within such a challenging context, the public sector collects, produces, reproduces, and disseminates a wide range of information in many areas of activity, such as social, economic, geographical, weather, tourist, traffic, business, patent, and educational information, at different distinct levels of government (Advisory Panel on Public Sector Information (APPSI), 2004). A better use of public sector information can be seen as an important element in view of the rapid evolution towards a global information society. The importance of this issue has been widely recognized, and a clear consensus exists that action is necessary to take advantage of the new technologies that improve access to and exploitation of PSI (Pira International, 2000). Citizens and businesses can derive great value from the improved use and the proper (and effective) provision of such informative data on the Internet. Their communication with public administrations can be further enhanced, while their involvement in open and democratic processes becomes significantly greater. In fact, PSI is quite significant for democratic and civic life, since civil services operate on a non-profit basis (Poullet, 1995). It is thus necessary for the exercise of fundamental and civil rights, and should be made as widely accessible and disseminated throughout society as possible, mainly through e-government applications (Chochliouros & Spiliopoulou-Chochliourou, 2005b).
In particular, this information should be made easily available for the citizens and actively disseminated whenever possible free of charge. As a result, better quality information will be used by a larger group of citizens and companies and it will allow them to better take
advantage of their rights in the internal liberalized market(s). Simultaneously, PSI is a basic module for vital economic activity, for scientific research, and for the proper functioning of the European market, specifically focused on the creation and establishment of conditions for the growth of community-wide services. Thus, it should be expected that PSI will become an even more important content resource of large economic potential, especially with the development of wireless content (Internet-based) services (UK Department of Trade and Industry (DTI), 2000). In any case, all kinds of relevant data constitute an essential basis for multiple digital information products and can become an important "raw material" for innovative services. Modern information society technologies can lead to unprecedented possibilities to combine data taken from different sources and, consequently, to create added-value products and competitive facilities (especially in the framework of the thriving European content industry). The benefits from these opportunities can be reinvested in new ventures, further increasing their advantage and consequently catalysing developments in the market (European Commission—Directorate General for Information Society and Andersen, 2002). Broad cross-border geographical coverage will also be necessary in this context. Wider possibilities of re-using public sector information should, among other things, allow European companies to exploit its potential and contribute to economic activities and job creation, with limited probability of investment failure. This has a clear effect especially on the underlying European market, since uncertainty about the conditions for using public sector data prevents companies from starting various related forms of business.
THE ACTUAL CHALLENGE FOR A PROPER EUROPEAN REGULATORY FRAMEWORK

Contemporary European policies support the establishment of an internal liberalized market and
of a system ensuring that market competition is not distorted (Chochliouros & Spiliopoulou-Chochliourou, 2003). Harmonization of the rules and practices in the European member states relating to the exploitation of public sector information contributes to the achievement of these fundamental objectives; however, among the traditional, already existing European approaches there are considerable variations, as different concepts have been developed and tested in very disparate ways. Existing differences in national regulations and practices (or even the absence of clarity) create significant barriers that hinder both the smooth functioning of the market and the appropriate development of the broader information society. In practice, several types of barriers result from diversities mainly in (administrative) rules or accepted practices (occasionally found at the regional or local level): these include, among others, differences in response times, refusal to transmit the information in digital format (or in a specific pre-existing digital format), requirements that a company verify it is directly concerned by the information (not justified by data-protection rules), exclusive deals that may already exist between public bodies and private firms, or even the requirement to consult the information on the spot (Commission of the European Communities, 2001). Other potential barriers are mainly due to pricing practices. Most companies do not consider the fact that they have to pay for the information to be a major problem in itself. Nevertheless, there are considerable price differences for the same type of information, indicating that European countries use widely diverging rules and measures when setting and validating the price level for the re-use of information.
Thus, without a “minimum” accepted set of measures or conditions guaranteeing certainty and transparency, legislative activities at national level, which have already been initiated in a number of member states to respond to the technological challenges, might result in even more significant and complex differences. The impact of such legislative differences (and uncertainties) becomes more significant with the further development of digital
facilities, which have already greatly increased the cross-border exploitation of information (Commission of the European Communities, 2001). By contrast, in the United States, access to and re-use of federal government information was enhanced early on by a clear and quite simple legislative framework (Pira International, 2000). Residential and corporate users enjoy a broad right to electronically access this information and have extensive possibilities to re-use it for various commercial purposes. In fact, there is no copyright on PSI and there are no particular restrictions on re-use (U.S. National Commission on Libraries and Information Science, 2001). Furthermore, fees for re-use are limited to, at most, the marginal costs of reproduction and dissemination, thus facilitating any further business effort. These extensive possibilities for re-use have given rise to a related information market that is estimated to be significantly larger (at least five times the size) than the corresponding European one (Commission of the European Communities, 2001). Moreover, as a result of the applied policies, American firms have possessed a strong competitive advantage over their European (or global) counterparts. Consequently, the economic challenge for the EU, in the global market arena, was to create better conditions for the exploitation of all relevant information and to support conditions for further growth. In any case, the availability of reliable information products covering different European member states is essential for all firms operating in the international environment. High-quality information on specific sectors (for example, administrative procedures, company registers, traffic data, investment conditions, scientific and technical conditions, cultural information, the environmental situation, etc.) can make the difference when offering goods or services outside the limited national borders of a country.
Policy-makers in Europe accept relatively easily the concept that more PSI should be given to citizens in the context of participation in a modern democracy. However, the knock-on effect that the Web availability of this information has on private companies already in the market to disseminate it has, until recently, been less widely recognized.
Exploitation of Public Sector Information in Europe
With the active background for the promotion of an effective regulation of electronic communications activities (European Parliament and Council of the European Union, 2002), the appropriate response to the above key issue was the validation and gradual implementation of the latest European Directive 2003/98/EC; this has acted as an "appropriate guideline" to reinforce and complement other parallel measures taken at various levels (national, regional, or local), focused on access to and commercial exploitation of PSI (European Parliament and Council of the European Union, 2003).
THE NEW APPROACH IN REGULATION

Regulating the use of PSI within the European Union was first attempted by the European Commission in the mid-eighties (European Commission, 1998), but only recently resulted in a specific, distinct, and really effective measure. The mere fact that it took about 20 years before a regulatory framework came into being clearly shows both the difficulty and the importance of public sector information. Among previous approaches, various issues arose, including aspects of different categories of government information, price definition, conditions of use, protection of personal data, rights of access to information, fair competition, copyright, the organization of government agencies, etc. Several reports provide a wider overview of the entire European sector (MEPSIR, 2006; Pira International, 2000), with several issues and recent developments from the European market. A general but well-defined framework for the conditions governing the re-use of public sector documents is essential, in order to ensure fair, proportionate, and non-discriminatory conditions for all corresponding activities: economic information (financial information, information concerning undertakings, and economic statistics), environmental information (of a hydrographical nature, information on the use of land, information regarding the quality of the environment,
geographical and meteorological information), agricultural and fishing information (information on harvests, use of resources, and fisheries), social information (demographic information, behavioral information, information concerning health and sickness), legal information (information regarding crimes, but also on laws and jurisprudence), and finally, political information (press releases by governments, proposals, and consultations). In the real world, various public sector bodies (or "bodies governed by public law") use an extensive variety of informative data or "documents" to fulfill their public tasks; therefore, any use of such documents for other purposes (including use by physical persons or legal entities for commercial or non-commercial purposes other than the initial rationale within the public task for which they were produced) is known as "re-use" (Janssen & Dumortier, 2003). In any case, national EU policies can go beyond the minimum standards applied in practice to allow for more extensive opportunities for re-use, thereby responding to the global challenges for more enhanced markets, products, and services/facilities (OECD, 2006). The latest European "PSI Directive" lays down a generic definition of the term "document," in line with actual developments in the global information society. Consequently, it covers any representation of acts, facts, or information—and any compilation of such acts, facts, or information—whatever its medium (written on paper, stored in electronic form, or as a sound, visual, or audiovisual recording), held by various public sector bodies. Any relevant "document" is, in fact, a specific creative object for which the appropriate public sector body has the right to authorize re-use to a third party. The proposed regulatory approach establishes a minimum set of rules governing corresponding options and intended to facilitate further re-use of existing documents or other such information in the EU.
To this aim, the directive is built on and is without prejudice to the existing access regimes and does not change the national rules for access to informative resources; however, it does not apply in certain well-defined cases, such as: cases where citizens or companies have to prove a particular interest under the access regime to obtain access
to the specified "document"; cases of information related to the protection of national security (i.e., state security), defence, or public security; and cases of data related to statistical or commercial confidentiality. Moreover, the existing framework also delimits some other possible exceptions, including: documents the supply of which is an activity falling outside the scope of the public task of the public sector bodies concerned, as defined by law or by other binding rules at the national level; documents for which intellectual property rights are held by third parties; specific information held by educational and research establishments; and data held by cultural establishments (such as museums, libraries, archives, orchestras, operas, ballets, and theatres). The suggested regulatory measures do not impose any form of re-use of public sector or governmental information. It is up to the national governments of the European states or their public sector bodies to decide whether to allow any re-use. However, where such an activity is allowed, governments have to ensure that the corresponding informative resources are re-usable for both commercial and non-commercial purposes (Pas & De Vuyst, 2004). Where possible and appropriate, all kinds of the so-called "documents" shall be made available through various electronic means. Under such conditions, it should be expected that the appropriate public bodies will further promote and encourage re-use activities for various forms of informative assets (also including official texts of a legislative and administrative nature). In fact, this is an essential instrument for extending the right to knowledge, which is a basic principle of democracy. This objective is applicable to institutions at every level, be it local, national, or international.
In fact, the directive becomes applicable to informative resources that are made accessible for re-use when public sector bodies license, sell, disseminate, exchange or give out information. To avoid any effects of cross-subsidies, re-use also includes further usage of information within the relevant organization itself for activities falling outside the scope of its public tasks.
The existing legal European Community instruments (competition rules, non-discrimination rules, rules on the free movement of services) may be used in some very specific cases where the re-use of public sector information by the private sector is at stake. Furthermore, the new suggested regulatory approach leaves intact and in no way affects the level of protection of individuals with regard to the processing of personal data under the provisions of existing common European and/or national law (European Parliament and Council of the European Union, 1995).
APPLICABLE REQUIREMENTS FOR ACCESS AND RE-USE

The directive supports the option of processing requests for re-use effectively, through appropriate electronic means or any other relevant "digital tools" available to citizens. Documents shall be made available to the applicant or, if a license is needed, a license offer shall be finalized for the applicant, within a "reasonable" time limit, in line with the existing time frames for the processing of access requests defined by the relevant national access regimes. In any case, reasonable time limits throughout the European territory can motivate the creation of new aggregated information products and services at a pan-European level, promoting the applicability of contemporary policies and initiatives. The fundamental aim is to establish a time frame that allows the full economic potential of any PSI to be exploited, for all potential recipients. If there is no specific framework in force (or any other rules regulating the timely provision of documents), all requests must be processed within a time frame of not more than 20 working days (with a possible extension of another 20 working days for more complex requests, and parallel notification (positive or negative) of the applicant). This option is of particular importance for dynamic content (e.g., traffic data), the economic value of which depends
on the immediate availability of the information and of all necessary regular updates. Public sector bodies may allow re-use of their assets without conditions or may impose conditions, where appropriate through a standard license (Uhlir, 2003), dealing with all relevant issues (such as liability, the proper use of documents, guarantees of non-alteration, and acknowledgement of source). These conditions shall be fair and transparent, shall not unnecessarily restrict possibilities for re-use, and shall not be used to restrict competition. Standard licenses that are available online may also play a significant role in this respect. Where a license is required, the timely availability of documents may be part of its specific terms. PSI holdings should be subject to a regime of access principles, comprising the right of anyone to obtain PSI. Exemptions should only be based on considerations of personal privacy, the preservation of significant private commercial interests where explicitly protected by copyright, or legitimate national security concerns. As for quality, PSI should be provided in the same quality in which these assets are held by the public sector.
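The 20-working-day limit mentioned above (extendable by a further 20 days for complex requests) can be computed mechanically. The sketch below counts Monday to Friday as working days and, as a simplifying assumption, ignores public holidays, which vary by member state:

```python
from datetime import date, timedelta

def response_deadline(received: date, working_days: int = 20) -> date:
    """Date by which a re-use request must be answered.

    Counts Monday-Friday only; public holidays are deliberately
    ignored in this simplified sketch.
    """
    current = received
    remaining = working_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # 0 = Monday ... 4 = Friday
            remaining -= 1
    return current
```

For a complex request, calling the same function with working_days=40 yields the extended deadline.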
AVAILABILITY OF FORMATS

The possibilities for re-use can be improved by limiting the need to digitize paper-based documents or to process digital files to make them mutually compatible. For that reason, any "composition" or further "creation" of information is suggested to become available in any pre-existing format or language, through existing electronic facilities and infrastructures. This, of course, does not imply that the appropriate public sector bodies have to create or adapt documents in order to comply with such a request. Public sector bodies will view requests for extracts from existing documents favorably when granting such a request would involve only a simple, easily performed operation. Public sector bodies
should not, however, be obliged to provide an extract from a document where this involves disproportionate effort, implying further use of resources. A practical solution to ease re-use is the option for public sector bodies to make their own assets available in a format that is not dependent on the use of specific software. The applicable choice principle suggests that, where available (or easily transformable), information should be provided in the requested format. The requester may be charged for the transformation costs, provided the administrative cost of recovering these charges does not exceed the charges themselves.
CHARGING ISSUES

The new regulation provides that, where charges are imposed on the applicant(s), the total income from supplying and allowing re-use of documents shall not exceed the cost of collection, creation, production, reproduction, and dissemination, together with a reasonable return on investment, having due regard to the self-financing requirements of the public sector body concerned, where applicable. Thus, any charges will be cost-oriented over the appropriate accounting period and calculated in line with the accounting principles applicable to the public sector bodies involved. In fact, the basic cost principle suggests that any costs chargeable to a requester shall not exceed the marginal costs of distribution (Weiss, 2003). All appropriate recoveries of costs, consistent with applicable accounting principles and the relevant cost-computation method of the public sector body concerned, constitute an upper limit to the charges, as excessive prices have to be precluded. However, European countries may, according to their national policies, apply lower charges or even waive such costs in cases where they decide to further support the reproduction and dissemination of their informative assets.
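The cost ceiling described above can be expressed as a simple check. The cost categories follow the wording quoted above; the 5% figure used for a "reasonable return on investment" is purely an illustrative assumption, since the directive leaves the rate unspecified:

```python
def income_ceiling(collection: float, creation: float, production: float,
                   reproduction: float, dissemination: float,
                   roi_rate: float = 0.05) -> float:
    """Upper bound on total income from supplying documents for re-use.

    Sums the cost categories named in the directive and adds a
    'reasonable return on investment' (the 5% rate is an assumption).
    """
    base = collection + creation + production + reproduction + dissemination
    return base * (1 + roi_rate)

def charge_is_lawful(total_income: float, *costs: float,
                     roi_rate: float = 0.05) -> bool:
    """True if total income stays within the cost-plus-ROI ceiling."""
    return total_income <= sum(costs) * (1 + roi_rate)
```

A member state may, of course, charge less than this ceiling or waive charges entirely, as the text above notes.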
Exploitation of Public Sector Information in Europe
FAIRNESS, TRANSPARENCY AND NOTIFICATION OF INFORMATION

Public sector institutions have to make an inventory of their information holdings, update it regularly, and actively make it generally and easily accessible. Ensuring that the conditions for re-use of PSI are clear and publicly available is a precondition for the development of a European-wide information market. To this aim, all relevant conditions and standard charges will be pre-defined and made known to any potential applicant, to the extent possible. On request, the corresponding public sector body is obliged to provide notification of the calculation basis for any announced charge. Moreover, the relevant body has to indicate which factors are taken into account in the calculation of charges for typical cases or applications. Simultaneously, applicants for the re-use of documents have to be informed of the available means of redress relating to decisions or practices affecting them. A lack of transparency, clarity, and consistency can create difficulties for potential re-users (in particular small and medium-sized enterprises, which do not have the resources required to overcome obstacles and negotiate complex agreements with the public sector) and deter them from developing added-value products and services. In addition, European governments will make certain that practical arrangements are in place to ease the search for documents available for re-use, such as lists of main documents (preferably accessible online) and portal sites linked to decentralized asset lists.
NON-DISCRIMINATION TERMS

The directive requires that any applicable conditions for the re-use of PSI be objective and non-discriminatory for all comparable categories of such activities. If documents are re-used by a public sector body as input for its commercial activities which fall outside the scope of its public tasks, the same charges and other conditions shall apply to the supply of the documents for those activities as apply to other users.
Public sector institutions may extend or improve the quality and format of their information, provided they do so through a transparent procedure and in order to improve the quality or extent of their services. Public bodies should not feel compelled to discontinue a service that is to the public benefit simply because a commercial vendor chooses to duplicate it. The re-use of documents has to be considered "open" to all potential actors in the market, even if one or more market players already exploit added-value products based on these documents. Competition rules must be fully respected, avoiding exclusive agreements as far as possible. Contracts or other arrangements between the public sector bodies holding the documents and third parties (from the private sector) shall avoid the granting of exclusive rights, apart from justified cases, mainly for the provision of a service in the public interest. In such cases, the validity of the reason for granting the exclusive right is subject to regular review (every three years).
INTELLECTUAL PROPERTY RIGHTS (IPR) AND CONTROL OF ORIGIN PRINCIPLES

PSI holdings need to be exempted from IPR regimes, including copyright and database protection regimes (Reichman, 2002). The public sector will be entitled to ensure, through minimal regulation, that responsibilities for any changes to the original information after its transfer are made appropriately transparent. The latest European regulatory provisions do not apply to documents covered by industrial property rights, such as patents, registered designs, and trademarks. Moreover, they do not affect the existence or ownership of intellectual property rights of public sector bodies, nor do they limit the exercise of these rights (European Parliament and Council of the European Union, 2001). Any imposed obligations will apply only insofar as they are compatible with the provisions of international agreements on the protection of intellectual property rights, in particular the Berne Convention for the Protection of Literary and Artistic Works
(the Berne Convention) and the Agreement on Trade-Related Aspects of Intellectual Property Rights (the TRIPS Agreement). Public sector bodies should, however, exercise their copyright in a way that facilitates re-use.
CONCLUSION

One of the ultimate goals of any society is the empowerment of all its citizens through proper access to, and use of, information and knowledge. An essential element of the emerging information society is the vast amount of information already in the public domain, or that can potentially be placed there. Commercial exploitation of PSI is not a new phenomenon in the EU. It is an established practice that the public sector creates, gathers, and distributes information for the public good, and that the private sector finds ways of exploiting those assets and processes for commercial gain by delivering products and services that benefit its customers. Better opportunities and a clear, transparent regulatory framework for exploiting public sector information will help the European information industries to create added-value information products (Lalopoulos, Chochliouros, & Spiliopoulou-Chochliourou, 2005), thus contributing to employment and growth. In addition, these opportunities can considerably improve information flows and, with them, the functioning of the internal market (UNESCO, 2000). Such options, if developed under proper, objective, and transparent regulatory approaches (in parallel with the principles of subsidiarity and proportionality in the practices of the EU member states), can enhance effective cross-border use of public sector documents by private companies for innovative services and can limit distortions of competition in the European market. The main objective of regulating the matter at the European level was to establish a minimum set of principles, rules, and standards across Europe, to achieve harmonization, and to reduce the differences between the Member States in the re-use of government information that had a negative influence on the sound functioning of the internal market.

FUTURE RESEARCH DIRECTIONS

Public sector information is a prime content resource. In particular, it is singled out as one of the core issues to be tackled in the context of the broader "knowledge-based economy," as realized in recent European and international strategic frameworks, all aiming to support novel solutions for the market and for society. It can contribute very significantly to an immense variety of activities aimed at promoting growth, competitiveness, and jobs while, at the same time, improving citizens' quality of life and living standards. Without any doubt, public sector information has considerable economic potential and, for this reason, attracts interest from various sectors: It is an essential basis for many digital added-value information products and can therefore become an important raw material for the development and exploitation of new electronic communications services (such as the wireless Internet) that respond to the challenges of the expanding global digital revolution. Among the most promising areas are geographic, weather, tourist, traffic, navigation, and business-related information, as well as various economic, educational, patent, cultural, and social data, all able to offer opportunities for considerable market growth in the future. The fast-developing effects of convergence not only promote broadband connections but also impose the need for added-value services and products originating from content belonging to public sector authorities. This can support (and further enhance) options for commercial activities and business investments in the context of a fully competitive and liberalized European electronic communications market, offering opportunities for cooperation between the public sector and private companies to the benefit of all actors involved. The
implementation of an appropriate governing legal framework, with the exact definition of strategic priorities in the European market according to the specific nature, by sector, of the information used (and, consequently, of the appropriate technical or informative tools), will constitute a future priority in the Internet world. The market will give priority to certain thematic areas where the re-use of PSI can offer immediate solutions, all supported by the necessary means to guarantee fast, always-on, easily accessible, and secure transmission and reception. This suggests that specific care will be given to a variety of IPR-related issues. Major emphasis is also expected to be placed on privacy requirements and on data protection rules, so as to guarantee security and commercial confidentiality; this implies the need to develop new digital tools (together with the enhancement of existing ones) suitable for the processing and management of the information used in multimedia-based environments, thereby extending the right to knowledge.
REFERENCES

Advisory Panel on Public Sector Information (APPSI). (2004). 1st annual report. Norwich, UK: Her Majesty's Stationery Office (HMSO). Retrieved June 6, 2006, from http://www.appsi.gov.uk/reports/annual-report.htm

Berne Convention for the Protection of Literary and Artistic Works, Paris Act of 24 July 1971, as amended on 28 September 1979.

Chochliouros, I. P., & Spiliopoulou-Chochliourou, A. S. (2003). Perspectives for achieving competition and development in the European information and communications technologies (ICT) market. The Journal of The Communications Network (TCN), 2(3), 42-50.

Chochliouros, I. P., & Spiliopoulou-Chochliourou, A. S. (2005a). Broadband access in the European Union: An enabler for technical progress, business renewal and social development. The International Journal of Infonomics, 1, 5-21.
Chochliouros, I. P., & Spiliopoulou-Chochliourou, A. S. (2005b). Exploiting public sector information through innovative e-government policies. In M. Khosrow-Pour (Ed.), The encyclopedia of e-commerce, e-government and mobile commerce (pp. 509-513). Hershey, PA: IRM Press.

Commission of the European Communities. (2000). Communication on eEurope 2002: An information society for all (COM(2000) 330 final, 24.05.2000). Brussels, Belgium: Commission of the European Communities.

Commission of the European Communities. (2001). Communication on eEurope 2002: Creating a EU framework for the exploitation of public sector information (COM(2001) 607 final, 23.10.2001). Brussels, Belgium: Commission of the European Communities.

European Commission. (1998). Green paper on public sector information in the information society. Public sector information: A key resource for Europe (COM(1998) 585, 20.01.1999). Brussels, Belgium: European Commission.

European Commission—Directorate General for Information Society and Andersen. (2002). Digital content for global mobile services. Retrieved May 3, 2006, from http://cordis.europa.eu/econtent/studies/stud_mobile.htm

European Parliament and Council of the European Union. (1995). Directive 95/46/EC of 24 October 1995, on the protection of individuals with regard to the processing of personal data and on the free movement of such data (Official Journal (OJ) L281, 23.11.1995, pp. 31-50). Brussels, Belgium: European Parliament and Council of the European Union.

European Parliament and Council of the European Union. (2001). Directive 2001/29/EC of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society (Official Journal (OJ) L167, 22.06.2001, pp. 10-19). Brussels, Belgium: European Parliament and Council of the European Union.

European Parliament and Council of the European Union. (2002). Directive 2002/21/EC of 7
Exploitation of Public Sector Information in Europe
March 2002, on a common framework for electronic communications networks and services ("Framework Directive") (Official Journal (OJ) L108, 24.04.2002, pp. 33-50). Brussels, Belgium: European Parliament and Council of the European Union.

European Parliament and Council of the European Union. (2003). Directive 2003/98/EC of 17 November 2003, on the re-use of public sector information (Official Journal (OJ) L345, 31.12.2003, pp. 90-96). Brussels, Belgium: European Parliament and Council of the European Union.

Janssen, K., & Dumortier, J. (2003). Towards a European framework for the re-use of public sector information. International Journal of Law and Information Technology, 11(2), 184-201.

Lalopoulos, G. K., Chochliouros, I. P., & Spiliopoulou-Chochliourou, A. S. (2005). Challenges and perspectives for web-based applications in organizations. In M. Pagani (Ed.), The encyclopedia of multimedia technology and networking (pp. 82-88). Hershey, PA: IRM Press.

MEPSIR (Measuring European Public Sector Information). (2006). Final report of study on exploitation of public sector information—benchmarking of EU framework conditions. Brussels, Belgium: European Commission.

OECD. (2006). Digital broadband content: Public sector information and content (JT03206702). Paris, France: OECD (Organization for Economic Cooperation and Development).

Pas, J., & De Vuyst, B. (2004). Re-establishing the balance between the public and the private sector: Regulating public sector information commercialization in Europe. The Journal of Information, Law and Technology, 2004(2). Retrieved June 20, 2006, from http://www2.warwick.ac.uk/fac/soc/law/elj/jilt/2004_2/pasanddevuyst/

Pira International. (2000). Commercial exploitation of Europe's public sector information (September 2000). Final report for the European Commission, Directorate General for the Information Society. Pira International, Ltd., University of East
Anglia and Knowledge View Ltd. Retrieved June 1, 2006, from http://www.ekt.gr/cordis/news/eu/2001/01-01-19econtent/econtent_study2.pdf

Poullet, Y. (1995). A study of comparative law and European law. In C. De Terwange, H. Burkert, & Y. Poullet (Eds.), Towards a legal framework for the diffusion policy for data held by the public sector. Deventer, The Netherlands: Kluwer.

Reichman, J. H. (2002). Database protection in a global economy. Revue Internationale de Droit Economique, 16(2/3), 455-504.

Uhlir, P. (2003). Draft policy guidelines for the development and promotion of public domain information (CI-2003/WS/2). Paris, France: United Nations Educational, Scientific and Cultural Organization (UNESCO).

UK Department of Trade and Industry (DTI). (2000). Government information and the UK information market (report prepared by Electronic Publishing Services for the UK Department of Trade and Industry, May 2000). Retrieved February 7, 2006, from http://cabinetoffice.gov.uk/e-government/docs/annualreports/2000/AnnualReport2000.doc

UNESCO. (2000). The role of public authorities in access to information: The broader and more efficient provision of public content (Elizabeth Longworth, moderator of Theme A1 discussion, 18 July 2000, New Zealand). Paris, France: United Nations Educational, Scientific and Cultural Organization (UNESCO).

U.S. National Commission on Libraries and Information Science (NCLIS). (2001). A comprehensive assessment of public information dissemination, 26 January 2001. Washington, DC: NCLIS. Retrieved June 6, 2006, from http://www.nclis.gov/govt/assess/assess.html

Weiss, P. (2003). Borders in cyberspace: Conflicting government information policies and their economic impact. In Proceedings of the Symposium on the Role of Scientific and Technical Data and Information in the Public Domain (pp. 15-18). Washington, DC: National Academies Press.
FURTHER READING

Afonso, A., Schuknecht, L., & Tanzi, V. (2003). Public sector efficiency: An international comparison (ECB Working Paper No. 242, July 2003). Retrieved October 6, 2004, from http://www.ecb.int/pub/pdf/scpwps/ecbwp242.pdf

Alabau, A. (2003). Understanding the e-government policy of the European Union: A comparative analysis with the e-government policies of some supra-national organizations (CJM-UPV, Ref. PTSI/24). Valencia, Spain.

American Library Association. (1992). Less access to less information by and about the U.S. government: A 1988-1991 chronology. Washington, DC: American Library Association Washington Office.

Andriot, L. (1999). Internet blue pages: The guide to federal government web sites. Medford, NJ: Information Today, Inc.

Association of Research Libraries. (2000). Government information dissemination programs: Proposals for change and related initiatives. Washington, DC: Association of Research Libraries.

Bass, G. D. (1995). A stealth attack on public protections: The anti-regulatory agenda in the "Contract with America." Government Information Insider, 4, 2-7.

Bayoumi, T., Laxton, D., & Pesenti, P. (2004). Benefits and spillovers of greater competition in Europe: A macroeconomic assessment (Papers No. 803, Board of Governors of the Federal Reserve System, April 2004). Retrieved March 25, 2005, from http://www.newyorkfed.org/research/staff_reports/sr182.pdf

Bellamy, C., & Taylor, J. A. (1998). Governing in the information age. Bristol, PA: Open University Press.

Bertot, J. C., McClure, C. R., & Zweizig, D. L. (1996). The 1996 national survey of public libraries and the internet: Progress and issues. Washington, DC: U.S. Government Printing Office.
Bortnick-Griffith, J., Relyea, H. C., & Buffalo, F. A. (1996). Compilation of statutes authorizing dissemination of government information to the public. Washington, DC: Library of Congress, Congressional Research Service.

Braman, S. (1995). Horizons of the state: Information policy and power. Journal of Communications, 45(4), 4-24.

Branscomb, A. W. (1994). Who owns information? From privacy to public access. New York: Basic Books.

Bureau Fédéral du Plan. (2003). Les charges administratives en Belgique pour l'année 2002 [Administrative burdens in Belgium for the year 2002], December 2003. Brussels, Belgium: Bureau Fédéral du Plan. Retrieved September 14, 2004, from http://www.simplification.fgov.be/downloads/Plan_rapport_final_2002.pdf

CapGemini. (2004). Does e-government pay off? (EUREXEMP final report). Amsterdam, The Netherlands: Dutch Ministry of the Interior and Kingdom Relations.

Cisco Systems. (2004). Net impact 2004: From connectivity to productivity. Cisco Systems, March 2004. Retrieved February 15, 2005, from http://www.netimpactstudy.com/pdf/NetImpact_04b.pdf

Commission of the European Communities. (2000). Communication on strategic objectives 2000-2005: Shaping the new Europe (COM(2000) 154 final, 09.02.2000). Brussels, Belgium: Commission of the European Communities.

Commission of the European Communities. (2001). Communication on a new framework for co-operation on activities concerning the information and communication policy of the European Union (COM(2001) 354 final, 27.06.2001). Brussels, Belgium: Commission of the European Communities.

Commission of the European Communities. (2001). Communication on European governance: A white paper (COM(2001) 428 final, 25.07.2001). Brussels, Belgium: Commission of the European Communities.
Commission of the European Communities. (2002). Communication on productivity: The key to competitiveness of European economies and enterprises (COM(2002) 262 final, 21.05.2002). Brussels, Belgium: Commission of the European Communities.

Commission of the European Communities. (2003). Communication on the role of eGovernment for Europe's future (COM(2003) 567 final, 26.09.2003). Brussels, Belgium: Commission of the European Communities.

Commission of the European Communities. (2003). Communication on some key issues in Europe's competitiveness—towards an integrated approach (COM(2003) 704 final, 21.11.2003). Brussels, Belgium: Commission of the European Communities.

Crawford, J. H., Abdian, G., Fazar, W., Passman, S., Stegmaier, R. B., Jr., & Stern, J. (1962). Scientific and technical communication in the government: Task force report to the President's Special Assistant for Science and Technology (pp. 299-545).

Danish Technological Institute & University of Bremen. (2004). Reorganisation of government back-offices for better electronic public services—European good practices (back-office reorganisation), January 2004. Brussels, Belgium: Commission of the European Communities.

De Terwange, C. (1995). Effect of fair trading laws on the commercialisation of data held by the public sector. In C. De Terwange, H. Burkert, & Y. Poullet (Eds.), Towards a legal framework for the diffusion policy for data held by the public sector. Deventer, The Netherlands: Kluwer.

Deloitte Research. (2003). Citizen advantage: Enhancing economic competitiveness through e-government. Retrieved June 15, 2006, from http://www.deloitte.com/dtt/research/0,1015,sid%253D2230%2526cid%253D26333,00.html

Directorate-General Information Society, eGovernment Unit of the European Commission. (2004).
Top of the web—User satisfaction and usage survey of eGovernment services. Copenhagen, Denmark: Rambøll Management.

Directorate-General Information Society, eGovernment Unit of the European Commission. (2004). Working paper on eGovernment beyond 2005: An overview of policy issues. Brussels, Belgium: Commission of the European Communities.

Diewert, E. W. (2001). Productivity growth and the role of government. University of British Columbia. Retrieved March 22, 2002, from http://www.econ.ubc.ca/discpapers/dp0113.pdf

European Commission. (2004). An analysis of EU and US productivity developments—A total economy and industry perspective. Brussels, Belgium: European Commission. Retrieved January 7, 2005, from http://europa.eu.int/comm/economy_finance/publications/economic_papers/2004/ecp208en.pdf

European Commission. (2005). Communication to the European Parliament, the Council, the Economic and Social Committee and the Committee of the Regions, on i2010—A European information society for growth and employment (COM(2005) 229 final, 01.06.2005). Brussels, Belgium: European Commission.

European Parliament and Council of the European Union. (1999). Decision 1720/1999/EC of the European Parliament and of the Council of 12 July 1999 to adopt a series of actions and measures in order to ensure interoperability of, and access to, trans-European networks for the electronic interchange of data between administrations (IDA). Brussels, Belgium: European Parliament and Council of the European Union.

European Parliament and Council of the European Union. (2002). Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) (Official Journal (OJ) L201, 31.07.2002, pp. 37-47). Brussels, Belgium: European Parliament and Council of the European Union.
European Parliament and Council of the European Union. (2004). Decision 2004/387/EC: Decision of the European Parliament and of the Council on interoperable delivery of pan-European services to public administrations, businesses and citizens (IDABC). Brussels, Belgium: European Parliament and Council of the European Union.

Fountain, J. (2001). Building the virtual state. Brookings Institution Press.

IDABC. (2004). European interoperability framework for pan-European eGovernment services. Luxembourg: Office for Official Publications of the European Communities.

IDABC eGovernment Observatory. (2005). The impact of e-government on competitiveness, growth and jobs (background research paper, February 2005). Brussels, Belgium: IDABC.

Lopez, X. R. (1998). The dissemination of spatial data: A North American-European comparative study on the impact of government information policy. Ablex Publishing Corporation.

Maroto, A., & Rubal-Caba, L. (2004). Structure and size of public sector in an enlarged Europe. Brussels, Belgium: Publin Project (Innovation in the public sector). Retrieved August 10, 2004, from http://www.step.no/publin/reports/D5-final15April2004.pdf

Mock, W. (1999). On the centrality of information law: A rational discussion of information law and transparency. John Marshall Journal of Computer and Information Law, 1999, 1074.

Molholm, K. N. (1991, November 26-27). The issue of access to federal information. In Proceedings of the Federal Pre-White House Conference on Library and Information Services. Washington, DC: Federal Library and Information Center Committee.

National Research Council, Computer Science and Telecommunications Board. (2001). LC21: A digital strategy for the Library of Congress. Washington, DC: National Academy Press.

OECD. (2001). Businesses' views on red tape: Administrative and regulatory burdens on small and medium-sized enterprises (422001101P1). Paris, France: OECD (Organization for Economic Cooperation and Development).

OECD. (2003). Regulation, productivity and growth: OECD evidence (JT00137570). Paris, France: OECD (Organization for Economic Cooperation and Development).

OECD. (2004). The economic impact of ICT: Measurement, evidence and implications (9204051E). Paris, France: OECD (Organization for Economic Cooperation and Development).

O'Mahony, M., & Stevens, P. (2004). Outcome based measures in international comparisons of productivity in public service provision: A review. Retrieved October 28, 2005, from http://www.clrgr.cf.ac.uk/events/OMahony%202004.pdf

Papapavlou, G. (2000). Public sector information initiatives in the European Union. In UNESCO Infoethics 2000. Retrieved May 15, 2003, from http://webworld.unesco.org/infoethics2000/papers.html#papapavlou

Schreyer, P., & Pilat, D. (2001). Measuring productivity (OECD Economic Studies No. 33, 2001/II). Paris, France: OECD. Retrieved April 12, 2002, from http://www.oecd.org/dataoecd/27/42/1959006.pdf

Society of College, National and University Libraries (SCONUL). (2004). News: EU Public Sector Information Directive—Library and information lobby rescues UK academic and cultural future: EU amendment quashed. London, UK: SCONUL. Retrieved June 6, 2004, from http://www.sconul.ac.uk/news/eu_pub_sector_info_directive

The Work Foundation. (2004). Efficiency, efficiency, efficiency—The Gershon review: Public service efficiency and the management of change. Retrieved July 14, 2005, from http://www.theworkfoundation.com/pdf/Gershon_response.pdf
UK Treasury. (2003). Public services: Meeting the productivity challenge. London, UK: UK Treasury. Retrieved May 29, 2005, from http://www.hm-treasury.gov.uk/media/574/F7/adpubserv304kb03.pdf

UK Treasury. (2004). Releasing resources for the front line: Independent review of public sector efficiency. London, UK: UK Treasury. Retrieved May 29, 2005, from http://www.hm-treasury.gov.uk/media/B2C/11/efficiency_review120704.pdf

United Nations Economic and Social Council. (2000). The role of information technologies in the context of a knowledge based global economy. New York: UN. Retrieved March 31, 2002, from http://www.un.org/documents/ecosoc/docs/2000/e2000-52.pdf

U.S. National Commission on Libraries and Information Science. (2000). Preliminary assessment of the proposed closure of the National Technical Information Service (NTIS): A report to the President and the Congress. Washington, DC: U.S. Government Printing Office. Retrieved January 31, 2002, from http://www.nclis.gov/govt/ntis/presiden.pdf

Van Gompel, R., & Steyaert, J. (2004). Going beyond access: Accessibility of government information in the electronic media age. In Proceedings of the 23rd International Communication Conference and General Assembly, IAMCR, July 21-26, 2002, Barcelona. Retrieved April 11, 2005, from http://www.portalcomunicacion.com/bcn2002/n_eng/programme/prog_ind/papers/v/pdf/v003se03_vango.pdf

TERMS AND DEFINITIONS

Body Governed by Public Law: Any body: (a) established for the specific purpose of meeting needs in the general interest, not having an industrial or commercial character; (b) having legal personality; and (c) financed, for the most part, by the state, or regional or local authorities, or other bodies governed by public law; or subject to management supervision by those bodies; or having an administrative, managerial or supervisory board, more than half of whose members are appointed by the state, regional or local authorities or by other bodies governed by public law.

E-Government (Electronic Government): The use of information and communication technologies in public administrations, combined with organizational change and new skills, in order to improve public services and democratic processes and strengthen support to public policies.

Document: (a) Any content, whatever its medium (written on paper or stored in electronic form or as a sound, visual or audiovisual recording); and (b) any part of such content.

Knowledge-Based Economy: A form of modern economy referring to a specific structural transformation, in which the fast creation of new knowledge and the improvement of access to various knowledge bases increasingly constitute the main resource for greater efficiency, novelty and competitiveness.

Public Sector Information (PSI): Any kind of information that is produced and/or collected by a public body as part of the institution's mandated role. It often has the characteristics of being dynamic and continually generated, directly generated by the public sector, associated with the functioning of the public sector (for example, meteorological data, business statistics), and readily usable in commercial applications.

Public Sector Body: The state, regional or local authorities, bodies governed by public law, and associations formed by one or several such authorities or one or several such bodies governed by public law.

Re-Use: The use by persons or legal entities of documents held by public sector bodies, for commercial or non-commercial purposes other than the initial purpose within the public task for which the documents were produced. Exchange of documents between public sector bodies purely in pursuit of their public tasks does not constitute re-use.
Chapter XIII
Information Technology Among U.S. Local Governments Donald F. Norris University of Maryland, Baltimore County, USA
INTRODUCTION
BACKGROUND
The purpose of this chapter is to provide an overview of the adoption, uses, and impacts of information technology (IT), including electronic government, among local governments in the United States.1 In the 1950s, these governments began to adopt IT for a variety of purposes and functions, and they continue to do so today. Since at least the mid-1970s, a small but prolific group of scholars has conducted a large body of research on various aspects of IT and local government.2 This chapter is based on that research and on my own studies of the subject (regarding e-government, see also Norris, 2006). Given the constraint of space, this chapter can only highlight aspects of this important topic. Readers who wish to delve more deeply into the subject of information technology and local government may wish to consult the works in the bibliography, as well as the related literature cited in those works.
In the early days of commercially available electronic computing, essentially the 1950s and 1960s, large, expensive mainframe computers dominated the IT landscape. They were the only game in town. As a result, only large organizations with sizeable budgets could afford computers at all. In the public sector, this meant that only the largest governments (e.g., federal agencies, some state governments and their larger agencies, and large city and county governments) were computerized. This began to change after 1965, when the Digital Equipment Corporation (DEC) developed the minicomputer. Minicomputers differed from mainframes principally in size and cost. Additionally, and unlike mainframes, they did not require large rooms, their own power supplies and air conditioning, and round-the-clock technical supervision and support. Minis were also much easier to operate than mainframes, especially if they included "packaged" municipal software. (Packaged software is a type of generic
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
programming written for a narrow market, such as municipal government, and for targeted functions, like billing and accounting. Organizations that use packaged software do not need staffs of computer programmers.) All of this meant that smaller local governments not only could afford computers (albeit minicomputers), but could also operate them to automate basic governmental functions without the staff required for mainframes.

The next major stage in local government computing began in the early to mid-1980s with the commercial introduction of the microcomputer (aka the personal computer, or PC). PCs function exactly the same as mainframes or minicomputers (input, process, storage, output), but were and are much smaller, more user friendly, and less costly (even the early versions) than their larger cousins. Certainly, early PCs had a number of limitations, but by the late 1990s most of those limitations had been overcome by a combination of increased speed and power and the development and maturation of networking. Today, PC networks can and do perform functions that could only be performed by mainframes 25 years ago.

As anyone who is the least bit familiar with computers well knows, every year computers improve in terms of speed, processing, storage capacity, user friendliness, and more, and do so while costs decrease (e.g., Moore's Law; Wikipedia, 2006). This trend, combined with the availability of minicomputers and then PCs and networks, helped to diffuse computer technology throughout local government in the United States. However, the diffusion began slowly. Data from surveys conducted by the International City/County Management Association (ICMA) show that by 1975 only half of all U.S. municipal governments of 10,000 persons or more had computers. By 1985, this had increased to 97 percent, and then to 99 percent in 1997 (ICMA, 1975, 1985, 1997). PC adoption by local governments began slowly, too, but ramped up quickly.
In 1982, only 13 percent of municipalities had PCs (Norris & Webb, 1984). By 1994, this had increased to 92 percent of all cities (Norris & Kraemer, 1996).
Today, it would be fair to say that only the smallest of local governments (and probably very few of them) do not use computers of any kind. (Later in the chapter I will discuss the diffusion of e-government among U.S. local governments.)
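The cost and capacity trend invoked above (Moore's Law) can be made concrete with a short back-of-the-envelope sketch. This is only an illustration of the commonly cited doubling-every-two-years formulation; the function name, doubling period, and time span are illustrative assumptions, not figures from this chapter.

```python
# Back-of-the-envelope sketch of the trend the text attributes to
# Moore's Law: computing capacity roughly doubles every two years,
# so cost per unit of computing falls correspondingly. The doubling
# period and time span are illustrative assumptions.

def relative_capacity(years: float, doubling_period: float = 2.0) -> float:
    """Relative computing capacity after `years`, starting from 1.0."""
    return 2 ** (years / doubling_period)

if __name__ == "__main__":
    # Over the 25-year span the text mentions (mainframes vs. PC networks):
    print(f"{relative_capacity(25):,.0f}x")  # prints 5,793x
```

Even if the doubling period is off by a year in either direction, the result is still a several-thousand-fold improvement over 25 years, which is why functions once reserved for mainframes migrated to PC networks.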
REASONS FOR ADOPTION

Why did local governments adopt computer technology? There are at least three reasons associated with local government motivations and rationales for adopting: generic, function specific, and "keeping up with the Joneses." In addition, adoption tends to be associated with a number of factors internal to local governments and also with local government demographic variables.

As I discovered when examining local government adoption of leading edge information technologies, these governments are interested in generalized results from information technology in areas like efficiency, economy, effectiveness, accuracy, cost savings, revenue enhancement, time saving, reducing staff, and related areas (Norris, 2003). "Although these reasons are often stated quite generally [by local officials], they nevertheless carry meaning. This is another way of saying that local governments do not adopt leading edge information technologies frivolously" (p. 158).

Of equal importance, local governments adopt information technologies to address specific issues or problems. Examples include using IT to replace personnel because of downsizing; to provide better and faster service for power restoration to utility customers; to increase revenues from a billing function; to improve the cost-effectiveness and safety of the arraignment process (remote video arraignment); and to improve computer training at lower unit cost (Norris, 2003). In other words, local officials examine extant processes or problems and decide to adopt specific information technologies to address (that is, to improve or fix) specific processes or problems.

Sometimes, local governments adopt IT because their neighbors have done so. Local officials, from top officials well down the hierarchy, are aware of what neighboring and comparable governments are doing. They are part of a variety of networks in which they learn about best (and worst) practices. Through these and other mechanisms (e.g., local government associations of various kinds), they learn about what other governments are doing effectively (or ineffectively). And many try to emulate the successes of others and to avoid their failures. It is not unusual to hear from local officials that a mayor, a city manager, or some other official came back from a local government conference or from visiting another government and asked, "Why can't we do what City X is doing? Create a website? Communicate with citizens via the Web? Let people pay parking tickets online?" and so forth.

An especially good example of the "Joneses" effect can be seen in the city of Baltimore, Maryland's CitiStat program. According to the director of the CitiStat program, the number of visits that other governments (worldwide) have made to review this program "exceeds thousands" (Gallagher, 2006; see also Behm, 2006). I do not cite the "Joneses" effect to trivialize the fact that local governments often adopt technologies in order to keep up with what other governments are doing. This is wholly appropriate. The point here is that local governments learn from the successes and failures of their peers and adopt technologies (often modifying them to suit local circumstances) in order to address problems and issues in their own jurisdictions.

Factors internal to local governments also affect their adoption of information technologies. These include such things as support of top officials, end user pressure, a positive organizational climate for IT and for innovation, slack budgetary resources, IT champions, and the existence of a central IT department (e.g., Norris, 2003; Norris & Kraemer, 1996). The existence of one or more of these factors makes the adoption of new technologies somewhat easier.
For example, the support of top officials may signal to others in the government that adopting a particular technology is a favored thing to do. This may also help leverage financial resources for adopting a technology. Slack resources mean that money may be more readily available to adopt a technology. IT champions are persons within an organization who advocate for information technology and push for adoption. If they are in key places (e.g., top administrators or elected officials, the budget director, department heads, etc.), their advocacy can facilitate adoption. A central IT department can be critical to the adoption of new information technology because that department contains the infrastructure to support new IT as well as staff who are the experts about IT and who, in any event, will be needed to support whatever technologies are adopted.

Finally, local government adoption of computer technology is associated with certain local government demographic variables. Typically, these are: size (as measured by local government population); type of government; form of government; metropolitan status; and region of the country (Norris & Campillo, 2003). In the research to date, however, none but size is consistently and strongly associated with adoption. Size is viewed as a surrogate for need and budget. Larger governments are generally thought to have greater needs for automation (think about preparing checks to pay all of New York City's employees and accounting for that money manually) and greater budgets from which to find the money to support needed IT systems. Regardless of the rationale, research has consistently found that larger governments adopt IT earlier, adopt more IT, have a greater penetration of IT, and adopt more innovative IT. They also are more likely to adopt e-government and to adopt it earlier. The research has also found that city governments tend to have greater levels and deeper organizational penetration of IT than county governments. Local governments that are professionally managed (council-manager cities and council-administrator counties) tend to be more advanced in IT use than politically led governments (mayor-council cities and counties led by elected executives, commissioners or supervisors).
Central cities and counties within metropolitan areas tend to be more likely to automate, and to be more fully automated, than suburban governments and, in turn, independent governments (those outside of metro areas).
Local governments in the western United States also tend to be more advanced with IT than those in other regions (possibly because of the greater number of professionally managed cities in the western states). Again, while different studies have produced different findings about many of these variables, size is consistently and strongly associated with adoption in nearly all studies of IT adoption among local governments in the United States.
IT UTILIZATION

In the 1980s and 1990s, surveys about local government information technology conducted by the ICMA routinely asked about IT utilization, that is, which functions the local governments had automated. The last ICMA survey that included utilization questions was conducted in 1993, more than 13 years ago. However, its findings should be relevant today. If anything, the passage of time would have meant that more automation would have occurred among these and other functional areas in the ensuing years. Typically, organizations, including local governments, first automate functions that involve repetitive processing of large amounts of information (e.g., payroll, billing, accounting, budgeting, and record keeping of various kinds). The top functions that were automated by local governments by the mid 1990s were: finance (including budgeting and accounting); utility services (including billing and accounting); personnel; and administrative office support. These were followed by automation in the following departments: law enforcement, public works, fire, and parks and recreation. In terms of general office functions, the top areas that were automated included word processing, spreadsheets, and database management (Norris & Kraemer, 1996). It is safe to say that, today, nearly any function that a local government performs can be and probably is automated. Moreover, the vast majority of local governments employ information technologies in nearly all departments and for nearly all functions. Indeed, it is hard to imagine how
local governments (except for the very smallest among them) could function without information technology. Local governments also adopt innovative or leading edge information technologies, that is, those that go beyond the automation of routine functions. I examined this phenomenon in the mid 1990s and found that city governments were in the process of adopting a range of innovative technologies. These technologies included automated fingerprint identification systems (AFIS), CD-ROM technology, e-mail, fiber optics, geographic information systems (GIS), portable or laptop computers, local area networks (LANs), intranets, personal digital assistants (PDAs), wireless services, video arraignment, and others (Norris, 2003). Today, several of these then leading edge technologies have become mainstream technologies for local governments (e.g., CDs, e-mail, fiber optics, GIS, laptops, LANs, intranets, and PDAs), and new, more innovative, leading edge technologies have displaced them. As one generation of leading edge information technology becomes institutionalized or routine, it is replaced by the next generation of newer and more innovative IT. Throughout this process, local governments are adopting new technologies to address a variety of functions and problems, just as these governments have adopted information technologies in the past. (I will address e-government, and along with it, the functional areas to which e-government has been applied, later in this chapter.)
IT IMPACTS

Evidence about whether and the extent to which local governments adopt and use IT, however, tells us nothing about the impacts of IT on those governments. Fortunately, a good bit of research has been conducted in this area over the past three decades (see, for example: Danziger & Kraemer, 1986; Kling, 1978; Kraemer & Dedrick, 1997; Kraemer, Dutton & Northrop, 1981; Norris, 1989; Norris, 1992; Norris & Kraemer, 1996; Northrop, et al., 1990; Norris & Demeter, 1999). I will address the impacts of e-government separately later in this chapter.

This literature tells us several things about the impacts of IT in governmental organizations. First, IT produces both positive and negative impacts in local governments. The results of computer use are not one-sided. In general (that is to say, there are exceptions), the principal positive impacts are that IT in local governments:

• Enhances and enlarges jobs
• Makes jobs easier and more enjoyable
• Increases efficiency, effectiveness and accuracy
• Improves speed and timeliness
• Provides more (and generally better) information for decision-makers
• Can (but does not always) replace people with technology
• Can (but does not always) produce cost savings or cost avoidance
The adoption of IT by local governments, however, also brings with it a set of problems including, but not limited to:

• Difficulties and higher costs during start-up and implementation
• Inadequacies of personnel training
• Under-utilization of computers
• Staff resistance to change and to computer use
• Poor vendor service
• Poor software reliability
• Poor hardware reliability
• Increased costs (associated with purchase, implementation and on-going operation and maintenance of IT)
Then there is the well-known productivity paradox (Solow, 1987), which holds that despite massive spending on IT, there really has not been much, if any, improvement in productivity (Brynjolfsson, 1993). Brynjolfsson reviewed the then-current literature on this subject and concluded that the productivity paradox could be real or that it may be explained by one of four hypotheses: measurement error (the measurement tools available do not capture the improvements in productivity); time lag (it takes time for productivity pay-offs to occur after the investment in technology); redistribution ("…IT may be beneficial to individual firms, but unproductive from the standpoint of the industry as a whole or the economy as a whole: IT rearranges the shares of the pie without making it any bigger" [p. 75]); and mismanagement (managers have not figured out how to apply the technology properly and, therefore, are missing the potential productivity gains).

Other literature on IT and productivity, using different measures and perspectives, tends to show that computers have indeed led to increased productivity and have also enabled organizations to improve quality, to innovate, and to do things that would not have been possible without computers. This literature is generally much more optimistic about IT-induced productivity improvement (e.g., Attwell, 1996; King, 1996; Triplett, 1999).

Local government officials certainly believe that, for many functions, IT not only improves productivity but is essential. Some types of work could not get done without IT (again, think about payroll processing for New York City's employees). Other types would be much more difficult and costly to perform (e.g., automated fingerprint identification via AFIS versus manual matching of fingerprints). I realize that these and similar observations are based on anecdotes (see, for example, Norris, 2003). But anecdotes such as these are repeated throughout the world of local government. Therefore, they should be given appropriate credence. IT in local government can and often does improve productivity, although certainly not always. IT also has other impacts that may be positive but may not figure into productivity calculations.
Finally, studies have shown that, no matter the rhetoric or wishful thinking around this subject, IT neither leads to nor is associated with governmental reform (Danziger & Andersen, 2002; Kraemer, 1991; Kraemer & King, 2006). As Kraemer (1991) has noted:
…although information technology has long been viewed as capable of bringing about organizational change, it has never been shown to play this role in reality. Rather, information technology has tended to reinforce existing organizational arrangements and power distributions within organizations. Moreover, information technology will have the same effects in the future because of fundamental relationships between the technology’s uses, control of the technology, and interests served by the technology (p. 167). On the whole, while the adoption of IT by local governments provides mostly positive results, it is not revolutionary, does not lead to organizational reform, and is not without costs and problems.
ELECTRONIC GOVERNMENT

In roughly the past 12 years, local governments throughout the United States have adopted electronic or e-government. Indeed, according to the latest data, more than 95 percent of local governments with populations greater than 10,000 have adopted sites on the World Wide Web from which they deliver information and services (Coursey & Norris, 2006). This has been perhaps the most rapid diffusion of any single information technology among U.S. local governments ever, much more rapid than the adoption of any earlier generation of IT.

The exact dimensions of e-government are not yet settled among scholars and advocates. One group of scholars and advocates associates e-government with governmental reform and transformation. That is, and contrary to the findings in the field of IT and government (e.g., Kraemer, 1991; Kraemer & King, 2006), they argue that e-government is part of a governmental reform effort (i.e., to make government more efficient, effective, open, transparent, user-friendly, etc.). My colleague Jeffrey Roy at Dalhousie University, in Halifax, Nova Scotia, represents this point of view and has presented it eloquently in several publications (see, especially, Roy, 2006). Others in this group also link e-government to governmental transformation. Indeed, transformation is a principal component in several of the principal models of e-government (Baum & Di Maio, 2000; Hiller & Belanger, 2001; Layne & Lee, 2001; Ronaghan, 2001; Westcott, 2001). Some e-government scholars also include what I consider to be traditional IT functions in the definition of e-government (e.g., Heeks, 2006). Thus, the application of IT to virtually any aspect of government would be defined as e-government.

Another group of scholars, of which I am a part, defines e-government in an empirical and not a normative sense and restricts e-government to the outward face of government, not to internal processing. Here, we offer the following definition:

E-government is the electronic provision of governmental information and services 24 hours per day, seven days per week (Holden, Norris & Fletcher, 2003).
We include mechanisms of e-participation or e-democracy in this definition so that electronic discussion forums, public meetings and hearings, consultations and electronic voting are also considered e-government. This definition differs from the previous definition of e-government in at least two respects. First, e-government involves governments electronically providing information and services outwardly to citizens (G2C), businesses (G2B) and other governments (G2G). It is not internal government information processing. Second, a government’s motives or purposes beyond providing information and services electronically (e.g., reform or transformation) are not relevant to the definition. Governments may or may not intend for e-government to produce reform or transformation. We believe that this distinction is important because some governments (e.g., the UK central government and the Canadian federal government) do have express reform if not also transformation goals for their e-government initiatives. (Of course, it remains to be seen whether these goals will be achieved.) Other governments, however, do not have reform or transformation as part of the reasons
Table 1. Online services adopted (Source: ICMA 2000, 2002, and 2004 e-government surveys)

                                                   2000          2002          2004
Service                                            No.    %      No.    %      No.    %
Non-financial transactions:
  Request for service                              284   18.1    587   33.3    749   28.6
  Request for local government records             234   14.9    573   32.2    674   25.7
  Interactive maps (a)                             175   11.1     --    --     722   27.5
  Registration for programs (b)                    118    7.5    272   15.7    427   16.3
  Permit application or renewal (b)                 77    4.9    201   11.4    265   10.1
  Business license application or renewal (b)       52    3.3    101    5.8    163    6.2
  Voter registration                                31    2.0     40    2.4     63    2.4
  Property registration                             15    1.0     45    3.3     72    2.7
  Delivery of local government records (c)          --    --     371   21.3    458   17.5
  Download forms for manual completion (c)          --    --    1064   65.8   1519   57.9
  Communication with individual elected and
    appointed officials (c)                         --    --    1271   76.1   1611   61.4
  Employment info/applications                      --    --      --    --    1587   60.5
  Council agendas                                   --    --      --    --    1931   73.6
  E-newsletter                                      --    --      --    --     688   26.2
  Streaming video                                   --    --      --    --     249    9.5
  Codes/ordinances                                  --    --      --    --    1869   71.3
  Other                                             --    --      --    --      65    2.5
Financial transactions:
  Payment of taxes                                  41    2.6    114    6.5    242    9.2
  Payment of utility bills                          35    2.2    105    6.1    257    9.8
  Payment of license fees (d)                       27    1.7     --    --      --    --
  Payment of ticket/fines (d)                       26    1.7     --    --      --    --
  Payment of fines and fees (e)                     --    --      98    5.6    201    7.7

a. This question was not asked in the 2002 survey.
b. There were slight wording differences in these questions between the 2000 and 2002 surveys.
c. These questions were not asked in the 2000 survey.
d. These questions were not asked in the 2002 survey.
e. This question is a combination of the previous two questions for the 2002 survey.
for offering e-government (U.S. federal, state and local governments, for the most part).

A considerable amount of hype surrounds e-government (see Garson, 2004, for a good summary of some of the hype). This hype contends that governments will not only provide basic information and services electronically (principally through official sites on the World Wide Web), but will also increasingly provide interactivity and transactional capability. The hype further says that e-government will make governments more efficient, effective, open, transparent, and
Table 2. Impacts (Source: ICMA 2000, 2002, and 2004 e-government surveys)

                                               2000          2002          2004
Impact                                         No.    %      No.    %      No.    %
Increased demands on staff                     344   21.9    616   28.9    687   26.2
Changed role of staff                          323   20.5    570   26.8    719   27.4
Business processes are being re-engineered     283   18.0    453   21.3    518   19.7
Business processes are more efficient          214   13.6    367   17.2    501   19.1
Reduced time demands on staff                  135    8.6    319   15.0    547   20.9
Reduced administrative costs                    79    5.0    147    6.9    235    9.0
Reduced number of staff                         11    0.7     23    1.1     57    2.2
Increased non-tax-based revenues                10    0.6     16    0.8     27    1.0
Citizen contact with officials                  --    --      --    --     836   31.9
Improved customer service                       --    --      --    --    1182   45.1
Improved communication with public              --    --      --    --    1392   53.1
Other                                           --    --      --    --      79    3.0
user-friendly. E-government will enable governments to re-engineer back office procedures to make them more efficient and effective as well. The hype goes on to say that eventually (no time frame is ever provided) e-government will literally transform the relationship between citizens and governments. E-government will produce higher levels of citizen trust in government, more citizen participation, and even greater levels of political participation, including voting (see, for example, the principal models of e-government).

We are at a point today, after three rounds of e-government surveys by the ICMA (ICMA and PTI, 2000, 2002; ICMA, 2004), as well as other studies, where we can address the reality of e-government versus the hype surrounding it (see also Coursey & Norris, 2006; Norris & Moon, 2005). At least at the local level in the United States, e-government as practiced is quite different from what the hype has suggested it would be. We know, for example, that local governments have adopted a number of online services, involving both non-financial and financial transactions (with the former being in the great majority). With four exceptions (the really low-hanging fruit!), fewer than one third of local governments report having adopted any online functions at all, and many of those that have been adopted are pretty mundane (requests for service, requests for records, program registration, etc.). Additionally, very few local governments (less than 10 percent) have adopted online financial transactions, admittedly the more difficult and costly online applications. Taken together, these data suggest that although local governments are online, they are not doing much in terms either of the number of services provided or of the relative sophistication of their online offerings. For the most part, local government websites remain mainly informational, provide only limited interactivity and transactional capability, and are not very sophisticated, all of which is highly contrary to the hype surrounding e-government.

The hype also suggests that the impacts from e-government will be highly positive (efficiency, economy, effectiveness, user-centricity, etc.). Here again, the reality is different from the rhetoric (Table 2). The data in Table 2 show, first, that few local governments report impacts from e-government at all. With two exceptions, fewer than one third of these governments report any impacts. Second, not all impacts are positive. For example, over a quarter reported that e-government increased demands on staff, while only nine percent said that e-government reduced costs. There was not a question
about increasing costs, but case studies and focus groups that I have conducted have found that local officials believe that e-government is a net add-on and does not replace traditional means of service delivery. Therefore, e-government represents an additional cost. Third, the positive impacts found in the survey (with two notable exceptions) were reported by relatively few governments. Thus, the impacts of e-government are not fully consistent with the hype.

Overall, local e-government has come a very long way in a very short time. However, it is much more prosaic and mundane than its advocates, the early models of e-government, and the considerable hype surrounding this phenomenon would suggest. Moreover, and contrary to the hype, there are no good reasons to believe that e-government, any more than IT applications in government before it, will fundamentally reform or transform governmental organizations or solve problems of citizen trust and participation in government.
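The claims above about how few governments offer online financial transactions can be sanity-checked against the survey figures quoted in Table 1: dividing a reported count by its reported percentage recovers the approximate number of responding governments. A minimal sketch, using the 2004 figures as stated (the row selection is illustrative):

```python
# Rough consistency check on the ICMA 2004 survey figures quoted in
# Table 1: count / (percent / 100) approximates the number of
# responding governments. Counts and percentages are as reported in
# the survey data cited in the text.

rows_2004 = {
    "Request for service": (749, 28.6),
    "Payment of taxes": (242, 9.2),
    "Payment of utility bills": (257, 9.8),
}

for service, (count, pct) in rows_2004.items():
    implied_n = count / (pct / 100)
    print(f"{service}: implied respondents ~ {implied_n:,.0f}")

# Each row implies roughly 2,600 responding governments, consistent
# with fewer than 10 percent offering online financial transactions.
```

The same arithmetic applied to the 2000 and 2002 columns yields roughly 1,570 and 1,760 respondents, respectively, which is one way to confirm that the three survey waves had different response bases.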
FUTURE TRENDS

It is almost always risky to predict future trends, especially where technology is involved. Nevertheless, some future trends involving IT and U.S. local governments are readily apparent. To begin with, almost all local governments of any size have adopted IT in their internal operations and have also adopted e-government (even if primarily informational today). Two trends seem obvious. First, local governments will continue to use IT to automate their internal processes and operations and will expand and enhance their uses of IT incrementally. Second, local governments will also incrementally expand and enhance their e-government offerings, moving slowly toward greater adoption of interactive and transactional capabilities and services. Neither of these trends should surprise anyone. They are fully consistent with the history of the use of IT in local governments.

My final observation about future trends concerns one that is unlikely to happen. As I have already noted, much of the rhetoric around
e-government has to do with its presumed transformational character. The historical evidence (i.e., from the IT and government literature) and the contemporary data (i.e., from e-government research) strongly suggest that e-government will not be transformational (Danziger & Andersen, 2002; Kraemer & King, 2006; Norris & Coursey, 2006). Instead, it is likely to support existing political and administrative mechanisms, structures and processes.
CONCLUSION

Information technology, including e-government, really is found everywhere, or nearly so, among U.S. local governments. Since the mid 1950s, a continuous, albeit incremental, trend of IT adoption by local governments has occurred: first to adopt basic and then more innovative IT and, more recently, e-government. For the most part, however, IT and e-government are fairly mundane and prosaic rather than reformist and transformative. This does not mean that IT and e-government are unimportant. To belabor a point I have already made, try paying New York City's employees and accounting for these funds without IT. This or similar observations could be repeated for hundreds of activities, services and functions in tens of thousands of local governments. IT helps to get the work done in local governments. E-government puts governmental information and services online, accessible to citizens 24/7. IT and, to a lesser extent, e-government are important in some local government arenas, essential in others and, for many functions and activities, irreplaceable.
Future Research Directions

Scholars should continue to study IT and e-government for at least five reasons: the sheer numbers of local governments; their direct impacts on their residents; the amounts of money they spend on IT and e-government; and the impacts (real and potential) that IT and e-government have or can
have on these governments and their citizens. Finally, as new, more powerful, and easier to use IT and e-government systems and applications are adopted by local governments, it is possible that they may have impacts far different from those in the past, impacts that are impossible to predict with today's knowledge and from today's vantage point.

Additionally, there are several areas or concerns of interest for future research, including but not limited to:

• Factors affecting the success or failure of IT implementation in governmental organizations. Here much can be learned from studies in the private sector (e.g., the CHAOS Report).
• Appropriate measures and metrics for determining success and failure of implementation.
• The impacts of IT within governmental organizations (e.g., on people and processes).
• The impacts of IT on governmental outputs and outcomes.
• Appropriate measures and metrics for determining impacts.
• Examining whether the transformation hypothesis (i.e., e-government will transform government as we know it) has proven valid or not.
• Whether e-government has become mainstream within governmental IT applications and, thus, no longer has a unique status for future study.
These and presumably other research areas not identified here ought to be sufficient to keep the present generation of scholars interested in IT and government occupied for some time.
References

Attwell, P. (1996). Information technology and the productivity challenge. In R. Kling (Ed.), Computers and controversy: Value conflicts and social choices (2nd ed.). San Diego: Academic Press.
Baum, C.H., & Di Maio, A. (2000). Gartner's four phases of e-government model. Retrieved October 15, 2003, from www.gartner.com

Behm, R. (2006). The varieties of CitiStat. Public Administration Review, 66(3), 332-340.

Brynjolfsson, E. (1993). The productivity paradox of information technology: Review and assessment. Communications of the ACM, 36(12), 66-77.

Coursey, D., & Norris, D.F. (2006). Models of e-government: Are they correct? An empirical assessment. Under consideration at Public Administration Review.

Danziger, J.N., & Andersen, K.V. (2002). The impacts of information technology on public administration: An analysis of empirical research from the golden age of transformation. International Journal of Public Administration, 25(5), 591-627.

Gallagher, M. (2006, July 15). Personal email communication to the author regarding Baltimore's CitiStat program. Gallagher is the director of that program.

Garson, G.D. (2004). The promise of digital government. In A. Pavlichev & G.D. Garson (Eds.), Digital government: Principles and best practices (pp. 2-15). Hershey, PA: Idea Group Publishing.

Gartner Group. (2000). Gartner says U.S. e-government transformation providing opportunities for new vendors. Press release 2000411d. Retrieved March 26, 2003, from http://www.gartner.com/5_about/press_room/pr2000411d.html

Heeks, R. (2006). Implementing and managing eGovernment: An international text. London: Sage.

Hiller, J.S., & Belanger, F. (2001). Privacy strategies for electronic government. In M.A. Abramson & G.E. Means (Eds.), E-government 2001. Lanham, MD: Rowman and Littlefield.

Holden, S.H., Norris, D.F., & Fletcher, P.D. (2003). Electronic government at the local level: Progress
to date and future issues. Public Productivity and Management Review, 26(3), 1-20.

International City/County Management Association. (1975). Information technology survey. Washington, DC.

International City/County Management Association. (1985). Information technology survey. Washington, DC.

International City/County Management Association. (1997). Information technology survey. Washington, DC.

International City/County Management Association and Public Technology, Inc. (ICMA/PTI). (2000). Digital government survey. Washington, DC.

International City/County Management Association and Public Technology, Inc. (ICMA/PTI). (2002). Digital government survey. Washington, DC.

International City/County Management Association (ICMA). (2004). Digital government survey. Washington, DC.

King, J.L. (1996). Where are the payoffs from computerization: Technology, learning, and organizational change. In R. Kling (Ed.), Computers and controversy: Value conflicts and social choices (2nd ed.). San Diego: Academic Press.

Kraemer, K.L. (1991). Strategic computing and administrative reform. In C. Dunlop & R. Kling (Eds.), Computers and controversy: Value conflicts and social choices (pp. 167-180). Boston: Academic Press.

Kraemer, K.L., & King, J.L. (2006). Information technology and administrative reform: Will e-government be different? International Journal of Electronic Government Research, 2(1), 1-20.

Layne, K., & Lee, J. (2001). Developing fully functional e-government: A four stage model. Government Information Quarterly, 18, 122-136.

Moore's law. (2006). In Wikipedia. Retrieved July 10, 2006, from http://en.wikipedia.org/wiki/Moore's_law
Norris, D.F. (2003). Leading edge information technologies and American local governments. In G.D. Garson (Ed.), Public information technology: Policy and management issues (pp. 139-169). Hershey, PA: Idea Group Publishing. Norris, D.F. (2006). Local e-government in the U.S.: The state of the practice. In A.V. Anttiroiko & M. Malkia (Eds.), Encyclopedia of electronic government. Hershey, PA: Idea Group Publishing. Norris, D.F., & Campillo, D. (2003). Factors affecting the adoption of leading edge information technologies. Working paper. Baltimore: Maryland Institute for Policy Analysis and Research, University of Maryland, Baltimore County. Norris, D.F., & Coursey, D. (2006). Models of e-government: Are they correct? An empirical assessment. Under revise and resubmit at Public Administration Review. Norris, D.F., & Demeter, L.A. (1999). Computing in American city governments. In The 1999 municipal yearbook. Washington, D.C.: International City/County Management Association. Norris, D.F., & Kraemer, K.L. (1996). Mainframe and PC computing in American cities: Myths and realities. Public Administration Review, 56(6), 568-576. Norris, D.F., & Moon, M.J. (2005). Advancing e-government at the grass roots: Tortoise or hare? Public Administration Review, 65(1), 64-75. Norris, D.F., & Webb, V.J. (1984). Microcomputers: A survey of local government use. In The 1984 municipal yearbook. Washington, D.C.: International City Management Association. Ronaghan, S.A. (2001). Benchmarking e-government: A global perspective. New York: United Nations Division for Public Economics and Public Administration and American Society for Public Administration. Retrieved October 1, 2003, from www.unpan.org/e-government/Benchmarking%20E-gov%202001.pdf
Roy, J. (2006). E-government in Canada: Transformation for the digital age. Ottawa: University of Ottawa Press.

Solow, R.M. (1987, July 12). We’d better watch out. New York Times Book Review, 36.

Triplett, J. (1999). The Solow productivity paradox: What do computers do to productivity? Canadian Journal of Economics, 32(2), 309-334.

Wescott, C. (2001). E-government in the Asia-Pacific region. Asian Journal of Political Science, 9(2), 1-24.

Further Reading

Danziger, J. (1979). Technology and productivity: A contingency analysis of computers in local governments. Administration and Society, 11(2), 144-171.

Danziger, J.N., & Kraemer, K.L. (1986). People and computers: The impact of computing on end users in organizations. New York: Columbia University Press.

King, J.L., & George, J. (1991). Examining the computing and centralization debate. Communications of the ACM, 34(7), 64-73.

Kling, R. (1978). The impacts of computing on the work of managers, data analysts, and clerks. Irvine, CA: Public Policy Research Organization, University of California, Irvine.

Kraemer, K.L. (1991). Strategic computing and administrative reform. In C. Dunlop & R. Kling (Eds.), Computerization and controversy: Value conflicts and social choices. Boston: Academic Press.

Kraemer, K.L., & Danziger, J. (1990). The impacts of computer technology on the work life of information workers. Social Science Computer Review, 8(4), 592-613.

Kraemer, K.L., & Dedrick, J. (1994). The payoffs from investment in information technology: Findings from Asia-Pacific countries. World Development, 22(12), 1921-1931.

Kraemer, K.L., & Dedrick, J. (1997). Computing in public organizations. Journal of Public Administration Research and Theory, 7(1), 89-112.

Kraemer, K.L., Dutton, W.H., & Northrop, A. (1981). The management of information systems. New York: Columbia University Press.

Kraemer, K.L., Gurbaxani, V., & King, J.L. (1992). Economic development, government policy, and the diffusion of computing in Asia-Pacific countries. Public Administration Review, 52(2), 146-156.

Kraemer, K.L., & King, J.L. (1986). Computing and public organizations. Public Administration Review, 46, 488-496.

Kraemer, K.L., & Pinsonneault, A. (1993). The impact of information technology on middle managers. MIS Quarterly, 17(3), 271-292.

Norris, D.F., & Kraemer, K.L. (1996). Mainframe and PC computing in American cities: Myths and realities. Public Administration Review, 56(6), 568-576.

Norris, D.F., & Lloyd, B.A. (2006). The scholarly literature on e-government: Characterizing a nascent field. International Journal of Electronic Government Research, 2(4), 40-56.

Norris, P. (2001). Digital divide: Civic engagement, information poverty, and the internet worldwide. Cambridge: Cambridge University Press.

Northrop, A., Kraemer, K.L., Dunkle, D., & King, J.L. (1990). Payoffs from computerization: Lessons over time. Public Administration Review, 50(5), 505-514.
Terms and definitions

Adoption: The act by a local government (or other unit or organization) of acquiring and implementing a new technology, process, or system. In this
chapter, the concern is with adoption of information technology and e-government.

Electronic Government (E-Government): The electronic provision of governmental information and services 24 hours per day, seven days per week (Holden, Norris, & Fletcher, 2003).

Information and Communication Technologies (ICTs) Impacts: The consequences or results produced by the adoption and utilization of IT and e-government.

Information Technology (IT): Computer hardware and associated software, processes, procedures, and so forth.

Mainframes: Large-scale computers characteristic of computing from the 1950s through the 1980s.

Minicomputers: Smaller, less costly, and easier-to-use versions of mainframes, invented by DEC in 1965 and characteristic of computing in smaller organizations from the 1970s into the 1990s.

Moore’s Law: Computers improve in terms of speed, processing, storage capacity, user friendliness, and more, and do so while costs decrease.

Personal Computers (PCs): The smallest, easiest-to-use, and least expensive computers, invented in the late 1970s by companies like Apple Computer, Inc., made commercially available in the early 1980s (initially primarily by IBM), and characteristic of computing in nearly all organizations today.

EndNotes

1. According to the 2002 Census of Governments (U.S. Census Bureau, 2002), the United States had 87,525 local governments. These included 3,034 county governments, 19,429 municipal governments, and 16,504 township and town governments (for a total of 38,967 general-purpose local governments), as well as 13,506 school districts and 35,052 special districts.

2. Certainly the most prolific and consequential single group of scholars studying IT and local government in the United States are those who began their work with the URBIS project at the University of California, Irvine, in the 1970s and have continued that study into these early years of the 21st century. Among others, they include Kenneth L. Kraemer (who led the URBIS project), James Danziger, William Dutton, John L. King, Rob Kling, and Alana Northrop.
Chapter XIV
Public Sector Human Resources Information Systems Christopher G. Reddick The University of Texas at San Antonio, USA
introduction

A human resources information system (HRIS) is any technology used to attract, hire, retain, and maintain talent, support workforce administration, and optimize workforce management (Tannenbaum, 1990). Examples include computers, the Internet (Web and e-mail), and other technological means of acquiring, storing, manipulating, analyzing, retrieving, and distributing pertinent information regarding human resources (HR). This chapter examines the impacts of HRIS on the operations, relationships, and transformation of local government organizations.
background Information technology (IT) investments in HR have traditionally focused solely on their role of reducing costs and automating tasks (Lepak & Snell, 1998). Historically, IT has been adopted in HR as an attempt to substitute capital for labor (Snell, Pedigo, & Krawiec, 1995). There was a tendency for employers to view HRIS as a “quick
fix” rather than a systematic solution (Keebler & Rhodes, 2002). Automating existing processes with IT without a strategic direction has been described as paving “cow paths” (Snell et al., 1995). For example, employers may simply add some Web-based technology to their preexisting processes while leaving everything else the same. They essentially go for the “low hanging fruit” in the implementation of HRIS; as a result, employers see some cost savings, but not the total amount that was envisioned. The survey findings presented in this chapter indicate that stressing the operational benefits of HRIS may be misguided, since the major benefits are found in the relational and transformational aspects of its adoption. HR departments can use the Web as a medium for a self-service HR function. Research shows that Web-based self-service reduces staff, improves timeliness, and improves the accuracy of HR data (Lippert & Swiercz, 2005; Towers Perrin, 2001). There are benefits and costs of Web-based self-service to employees and managers. The benefit of HRIS is that employees using the self-service functionality of the Web for HR information and/or services can easily update and verify information, consult
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
online lists of internal job vacancies, access government employee handbooks, and receive notices about upcoming training sessions. Managers can analyze job candidate profiles online, construct salary models, view benefits programs, monitor employee absentee trends, and retrieve government labor regulations and forms for compliance. However, one of the by-products of this Web-based self-service model is that it gives employees access to their personal information as well as the responsibility of ensuring that data in the HRIS are accurate and complete. The downside of this empowerment is that employee information can be compromised, which reduces individuals’ trust in the technology used to store and handle personal information (Lippert & Swiercz, 2005).
operational, relational, and transformational impacts of hris

Phases of adopting HRIS have been classified into three stages (Kovach & Cathcart, 1999; Shrivastava & Shaw, 2003; Snell, Stueber, & Lepak, 2002). The first phase is the operational impact of IT: automating routine activities, alleviating administrative burdens, reducing costs, and improving productivity internal to the HR function itself. The second phase, which follows the operational impact, is the relational impact: providing managers and employees remote access to HR databases and services, reducing response times, and improving service levels. Finally, the transformational phase of IT is the redefinition of the scope and function of the HR organization to focus more on strategic issues (Snell, Stueber, & Lepak, 2002; Yeung, 1995).
operational impacts of hris

For many organizations, the starting point for IT utilization within HR focuses on improving operational efficiency. Given the heavy administrative burden within HR, efforts to automate record keeping and routine clerical activities such as payroll and benefits administration make sense. By eliminating paperwork, automated systems have the potential to reduce organizational overhead and generate significant cost savings (Snell et al., 2002). IT can help reduce costs and improve productivity by automating routine tasks and practices (Lepak & Snell, 1998). The operational impact of HRIS is often one of the first arguments presented to gain project support and funding.
relational impacts of hris

The operational impact of IT focuses on efficiency and productivity improvements internal to HR. IT also influences HR’s relationships with other parties within the organization. IT allows HR to enhance service by providing managers and employees with remote access to HR databases, supporting their HR-related decisions, and increasing their ability to connect to other parties. By making information accessible online, HR can eliminate waste, improve decision quality, and enhance flexibility and customization. However, some have argued that this disintermediation within HR may simply shift the burden of administration back to line personnel, overloading them (Snell et al., 2002). The relational aspect of HR implies increasing timeliness and service levels with employees and managers, as well as with outside parties. By providing managers and employees remote access to HR databases and information, and increasing their ability to connect with other parts of the organization as well as outside service providers, managers and employees can perform HR activities themselves, thereby reducing response times and improving service levels (Lepak & Snell, 1998).
transformational impacts of hris While IT can improve operational efficiency within HR and enhance relational connections with personnel and individuals outside the organization, the transformational impact of IT involves fundamental changes in the scope and function of the HR department (Gardner, Lepak, & Bartol,
2003; Snell et al., 2002). The transformational impact involves reengineering or aligning employee activities with the needs of customers or clients. In this new environment, jobs are much more flexible and are designed around skills, roles, and projects rather than stable tasks. This reflects the idea of a more flexible workforce in which IT allows information to be shared within and outside of the organization (Hempel, 2004; Snell et al., 1995; Ulrich, 1997). Perhaps the most dramatic impact of IT on structural integration within HR is its transformational role. As IT has enabled people to communicate across geographic boundaries and share information, it has eliminated barriers of time and space (Lepak & Snell, 1998). The following sections describe a survey that was conducted to examine the operational, relational, and transformational impacts of HRIS.
survey data collection methods

The data for this study were collected through a mail survey in the winter of 2006. The mailing list was obtained from the Texas Municipal League. HR directors, or those in an equivalent position classification, were sent a copy of the survey with a cover letter explaining the research project. A reminder letter was sent to HR directors approximately three weeks after the initial mailing. In order to get more candid responses, HR directors were assured that their individual responses would be kept anonymous. This study only surveyed cities that serve populations of 10,000 residents or greater because smaller cities are not as likely to need advanced HRIS; size of city government is a strong determinant of IT adoption (Norris & Moon, 2005). Crosstabs were therefore conducted, with χ2 statistics used to determine the impact that size of the city government has on operational, relational, and transformational HR. Out of 152 cities in Texas that were sent a survey, 88 HR directors responded, a response rate of 58 percent.
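As a rough illustration of the crosstab procedure described above, the sketch below computes a Pearson χ2 statistic for a hypothetical 2×2 crosstab of city size against a survey response. The counts are invented for illustration only; the chapter reports significance levels, not the underlying contingency tables.

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table,
    computed from observed and expected cell counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# HYPOTHETICAL crosstab for 88 responding cities:
# rows = city size (smaller, larger), columns = response (agree, do not agree)
observed = [[18, 26],
            [27, 17]]
stat = chi_square(observed)
# With df = (2-1)*(2-1) = 1, the 0.10 critical value is about 2.706,
# so a statistic above that would be significant at the 0.10 level.
significant_at_10 = stat > 2.706
```

A statistic this size would be flagged with an asterisk in the tables below; a table of two identical rows would yield a statistic of zero (no association).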
operational, relational, and transformational impacts of it on hr

Tables 1 to 3 provide survey data that show how developed IT is in the HR function. They specifically examine the everyday operational impacts, improvements in organizational relationships, and the transformational impacts of IT (Snell et al., 2002). Many of the arguments articulated for the benefits of adopting IT in the HR function can be grouped within these three broad categories. As shown in Table 1, improving HR operational efficiency was an important impact of IT. Automating record keeping and other clerical duties was another important operational benefit, and increasing the productivity of HR employees was a vital operational impact. However, there was disagreement over whether IT would reduce the labor force, and eliminating paperwork and lowering HR operating costs were generally not perceived as operational benefits. Overall, the survey results on the operational aspects of HR show some consensus among HR directors that IT has increased HR operating efficiency, automated routine aspects of the daily HR function, and made HR workers more productive. What HRIS did not accomplish, however, were a reduction in the HR labor force and the elimination of paperwork, outcomes one would imagine should follow from IT adoption. The survey results for the operational impacts of HRIS are thus mixed, but the relational aspects are much more certain. In the relational impact of IT in HR there are some very pronounced findings. As shown in Table 2, there is evidence that HRIS has reduced response times to serve customers and clients. Improving the quality and timeliness of services to employees was also perceived to be a relational benefit of HRIS, and HR staff acceptance was an important relational impact. HRIS has also empowered employees and managers to make more decisions about HR on their own.
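The mixed operational picture described above can be seen by combining the “strongly agree” and “agree” columns reported in Table 1. The short sketch below does this for a few of the operational items; the percentages are those reported in the table.

```python
# (strongly agree, agree) percentages reported in Table 1 for selected items.
table1 = {
    "Improved HR operating efficiency": (31.8, 47.7),
    "Automated record keeping and other clerical duties": (25.0, 52.3),
    "Improved productivity of HR employees": (19.3, 53.4),
    "Eliminated paperwork": (4.5, 30.7),
    "Reduced HR labor force": (1.1, 18.2),
}

# Overall agreement = strongly agree + agree, rounded to one decimal place.
agreement = {item: round(sa + a, 1) for item, (sa, a) in table1.items()}

# Rank items from most to least overall agreement.
ranked = sorted(agreement, key=agreement.get, reverse=True)
```

Agreement runs from roughly 73 to 80 percent for the efficiency, automation, and productivity items, but only 35.2 percent for eliminating paperwork and 19.3 percent for reducing the HR labor force, matching the pattern the survey reports.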
The results in the relational impacts of HRIS showed that it has increased the quality of HR services to employees, made HR more responsive to its customers, and HR staff is
Table 1. Operational impacts of IT and HR

IT in our HR function has… (strongly agree / agree / neutral / disagree / strongly disagree, %)

Automated record keeping and other clerical duties: 25.0 / 52.3 / 12.5 / 9.1 / 1.1
Alleviated administrative burdens: 17.0 / 40.9 / 22.7 / 13.6 / 5.7
Improved HR operating efficiency: 31.8 / 47.7 / 13.6 / 5.7 / 1.1
Increased volume of work: 6.8 / 21.6 / 31.8 / 36.4 / 3.4
Shifted additional administrative burdens to line managers because of automation: 3.4 / 27.3 / 38.6 / 25.0 / 5.7
Reduced HR labor force: 1.1 / 18.2 / 21.6 / 46.6 / 12.5
Lowered HR operating costs: 1.1 / 26.1 / 33.0 / 34.1 / 5.7
Eliminated paperwork: 4.5 / 30.7 / 19.3 / 33.0 / 12.5
Improved productivity of HR employees: 19.3 / 53.4 / 21.6 / 4.5 / 1.1
Table 2. Relational impacts of IT and HR

IT in our HR function has… (strongly agree / agree / neutral / disagree / strongly disagree, %)

Reduced response times to serve our customers or clients: 20.5 / 62.5 / 9.1 / 6.8 / 1.1
Improved working relationships with upper management: 12.5 / 36.4 / 39.8 / 10.2 / 1.1
Enhanced our ability to recruit and retain top talent: 5.7 / 31.8 / 48.9 / 12.5 / 1.1
Received HR staff acceptance: 20.5 / 54.5 / 20.5 / 3.4 / 1.1
Empowered employees and managers to make more decisions on their own about needs: 6.8 / 20.5 / 39.8 / 29.5 / 3.4
Improved employee awareness, appreciation, and use of city government HR programs: 9.1 / 25.0 / 44.3 / 20.5 / 1.1
Improved line managers’ ability to meet HR responsibilities*: 9.1 / 29.5 / 37.5 / 22.7 / 1.1
Improved quality and timeliness of services to employees: 18.2 / 62.5 / 12.5 / 5.7 / 1.1
Improved relationships with citizens and business and HR: 6.8 / 31.8 / 51.1 / 8.0 / 2.3

Notes: * indicates that the χ2 statistic was significant at the 0.10 level for impact of employee size.
generally accepting of IT systems. Compared with the operational impacts of HRIS, there is more consensus that it has affected the relational aspects of the daily HR function. According to the χ2 statistics, employee size had an impact only on improving line managers’ ability to meet HR responsibilities, and this was significant only at the 0.10 level. The transformational impact of HRIS is also examined to see whether IT has fundamentally changed city government HR.
The transformational impacts of IT and HR in Table 3 indicate that IT has improved the overall quality of HR services. Second, HRIS has increased knowledge management, or the creation, capture, transfer, and use of knowledge in the organization. HRIS has also enabled HR directors to become more effective managers. Unlike the operational impacts of HRIS, the transformational and relational impacts are generally more positive, with the exception of one category. This was the perception
Table 3. Transformational impacts of IT and HR

IT in our HR function has… (strongly agree / agree / neutral / disagree / strongly disagree, %)

Redefined the scope of HR to focus more on strategic issues*: 3.4 / 29.5 / 46.6 / 19.3 / 1.1
Reengineered HR: 4.5 / 35.2 / 31.8 / 23.9 / 4.5
Increased the flexibility of HR: 8.0 / 59.1 / 19.3 / 12.5 / 1.1
Improved quality of HR services: 17.0 / 58.0 / 17.0 / 6.8 / 1.1
Enabled HR to focus on its mission: 4.5 / 44.3 / 38.6 / 11.4 / 1.1
Made HR a strategic partner in city government: 8.0 / 37.5 / 35.2 / 15.9 / 3.4
Enabled me to become a more effective manager: 13.6 / 53.4 / 20.5 / 11.4 / 1.1
Increased knowledge management (i.e., creation, capture, transfer, and use of knowledge): 17.0 / 55.7 / 18.2 / 8.0 / 1.1
Reduced the levels of bureaucracy within city government: 3.4 / 15.9 / 51.1 / 22.7 / 6.8

Notes: * indicates that the χ2 statistic was significant at the 0.10 level for impact of employee size.
that HRIS would reduce the levels of bureaucracy within city government, which was not perceived to be occurring. This finding is not surprising, since many HR departments use command-and-control approaches that impede the reduction of bureaucracy (Snell et al., 2002). Overall, the quality of HR services has increased and knowledge management has been enhanced, but HRIS has not reduced bureaucratic red tape. The only impact that city size had on transformational HR was for redefining the scope of HR to focus on strategic issues, although this was significant only at the 0.10 level. Therefore, as the size of the city increases, HR focuses more on strategic issues because of HRIS.
future trends

What do the results of this survey demonstrate for future trends in public sector HRIS? First, they imply that the typical cost-reduction justification for implementing HRIS is not well founded in the survey findings presented in this chapter; improving customer service and providing higher quality services appear to be more important factors. Second, more effort should be devoted to finding ways to increase the use of HRIS to support more advanced strategic decision-making tools within public sector organizations.
This would enhance the transformational impacts of HRIS, probably the greatest long-term benefit for public sector organizations.
conclusion

It should be noted that this study only examines data from cities in a single state; therefore, the findings may not be generalizable to other states. However, in terms of the broad impact of HRIS on the operational, relational, and transformational aspects of the HR function, some interesting results surfaced. One would expect the implementation of HRIS to have operational benefits, because it takes advantage of the “low hanging fruit” principle of cost-savings justification when implementing IT. However, when examining the survey data, there was much less consensus among HR directors that operational goals have in fact been realized. There is more agreement that relational and transformational impacts have been achieved because of HRIS.
future research directions

There are several avenues of future research that could be pursued in public sector HRIS. First, one
possible research area is a comparison of the efficiency and effectiveness of HRIS in the public and private sectors. This could explore a very important theme in public sector IT research: the differences between the public and private sectors in the provision of services. A second research area could focus on employee satisfaction with HRIS in public sector organizations, examining, possibly through surveys or focus groups, employees’ perceptions of and satisfaction with the adoption of HRIS in their organizations. Is the typical employee satisfied with HRIS in his or her work environment? This could also be extended to examine the perception of HRIS by actors external to the organization, such as the business community and potential employees who come into contact with HRIS. A final area of research could examine the efficiency and effectiveness of HRIS in other countries, since most of the research has focused on the United States. It would be interesting to determine whether the use of HRIS follows similar or different patterns elsewhere, and thus the extent of diffusion of this important area of public sector IT.
references

Gardner, S.D., Lepak, D.P., & Bartol, K.M. (2003). Virtual HR: The impact of information technology on the human resource professional. Journal of Vocational Behavior, 63(2), 159-179.

Hempel, P.S. (2004). Preparing the HR profession for technology and information work. Human Resource Management, 43(2&3), 163-177.

Keebler, T.J., & Rhodes, D.W. (2002). E-HR: Becoming the “path of least resistance.” Employment Relations Today, 29(2), 57-66.

Kovach, K.A., & Cathcart, C.E. (1999). Human resource information systems (HRIS): Providing business with rapid data access, information exchange and strategic advantage. Public Personnel Management, 28(2), 275-282.
Lepak, D.P., & Snell, S.A. (1998). Virtual HR: Strategic human resource management in the 21st century. Human Resource Management Review, 8(3), 215-234.

Lippert, S.K., & Swiercz, P.M. (2005). Human resource information systems and technology trust. Journal of Information Science, 31(5), 340-353.

Norris, D.F., & Moon, M.J. (2005). Advancing e-government at the grassroots: Tortoise or hare? Public Administration Review, 65(1), 64-76.

Shrivastava, S., & Shaw, J.B. (2003). Liberating HR through technology. Human Resource Management, 42(3), 201-222.

Snell, S.A., Pedigo, P.R., & Krawiec, G.M. (1995). Managing the impact of information technology on human resource management. In G.R. Ferris, S.D. Rosen, & D.T. Barnum (Eds.), Handbook of human resource management. Cambridge, MA: Blackwell Publishers.

Snell, S.A., Stueber, D., & Lepak, D.P. (2002). Virtual HR departments: Getting out of the middle. In R.L. Heneman & D.B. Greenberger (Eds.), Human resource management in virtual organizations. Greenwich, CT: Information Age Publishing.

Tannenbaum, S.I. (1990). Human resource information systems: User group implications. Journal of Systems Management, 41(1), 26-32.

Towers Perrin. (2001). Web-based self-service: The current state of the art. Retrieved March 15, 2006, from http://www.towersperrin.com

Ulrich, D. (1997). Human resource champions: The next agenda for adding value and delivering results. Boston, MA: Harvard Business School Press.

Yeung, A. (1995). Reengineering HR through information technology. Human Resource Planning, 18(2), 25-37.
further reading

Ashbaugh, S., & Miranda, R. (2002). Technology for human resource management: Seven questions and answers. Public Personnel Management, 31(1), 7-20.

Ball, K.S. (2001). The use of human resource information systems: A survey. Personnel Review, 30(1), 677-693.

Bell, B.S., Lee, S.W., & Yeung, S.K. (2006). The impact of e-HR on professional competence in HRM: Implications for the development of HR professionals. Human Resource Management, 45(3), 295-308.

Coursey, D.H. (2005). Human resource management challenges in government information technology. Review of Public Personnel Administration, 25(3), 203-206.

Coursey, D.H., & McCreary, S.M. (2005). Using technology in the workplace. In S. Condrey (Ed.), Handbook of human resource management in government (2nd ed., pp. 189-214). San Francisco: Jossey-Bass.

Elliott, R.H., & Tevavichulada, S. (1999). Computer literacy and human resource management: A public/private sector comparison. Public Personnel Management, 28(2), 259-274.

Florkowski, G.W., & Olivas-Luján, M.R. (2006). The diffusion of human-resource information-technology innovations in US and non-US firms. Personnel Review, 35(6), 684-710.

Kinnie, N.J., & Arthurs, A.J. (1996). Personnel specialists’ advanced use of information technology. Personnel Review, 25(3), 3-19.

Norris, D.F., & Moon, M.J. (2005). Advancing e-government at the grassroots: Tortoise or hare? Public Administration Review, 65(1), 64-76.

West, J.P., & Berman, E.M. (2001). From traditional to virtual HR: Is the transition occurring in local government? Review of Public Personnel Administration, 21(1), 38-64.

terms and definitions

Bureaucratic Red Tape and IT: The levels of organization that an individual has to navigate through to get information or services. IT may help to reduce the level of bureaucracy within HR.

Human Resources Information Systems (HRIS): Any technology that is used to attract, hire, retain, and maintain talent, support workforce administration, and optimize workforce management. Examples include computers, the Internet (Web and e-mail), or other technological means of acquiring, storing, manipulating, analyzing, retrieving, and distributing pertinent information regarding human resources (HR).

Knowledge Management: HRIS may increase knowledge management, or the creation, capture, transfer, and use of knowledge in the organization.

Operational Impacts of HRIS: The operational impact of IT is automating routine activities, alleviating administrative burdens, reducing costs, and improving productivity internal to the HR function itself.

Relational Impacts of HRIS: The relational impact of IT is providing managers and employees remote access to HR databases and services, reducing response times, and improving service levels.

Transformational Impacts of HRIS: The redefinition of the scope and function of the HR organization to focus more on strategic issues.

Web-Based Self-Service: Managers and employees are able to use the Web, instead of contacting HR in person, over the phone, or via e-mail, to enroll in benefits and get information or services from their HR department.
Chapter XV
Digital Libraries Micah Altman Harvard University, USA
introduction Digital libraries are collections of digital content and services selected by a curator for use by a particular user community. Digital libraries offer direct access to the content of a wide variety of intellectual works, including text, audio, video, and data; and may offer a variety of services supporting search, access, and collaboration. In the last decade digital libraries have rapidly become ubiquitous because they offer convenience, expanded access, and search capabilities not present in traditional libraries. This has greatly altered how library users find and access information, and has put pressure on traditional libraries to take on new roles. However, information professionals have raised compelling concerns regarding the sizeable gaps in the holdings of digital libraries, about the preservation of existing holdings, and about sustainable economic models. This chapter presents an overview of the history, advantages, disadvantages, and design principles relating to digital libraries, and highlights important controversies and trends. For an excellent comprehensive discussion of the use,
cost, and benefits of digital libraries, see Lesk (2005); for further discussion of architectural and design issues, see Arms (2000); and for a detailed example of the mechanics of implementing a digital library, see Witten and Bainbridge (2002).
background

In 1939, before the first digital computer system was designed, Vannevar Bush, a professor of electrical engineering at MIT, proposed a system that in many ways foreshadowed modern digital libraries (Bush, 1939, 1945). (Bush would become head of the Office of Scientific Research and Development during World War II and then one of the chief advocates for the creation of the National Science Foundation.) This system, the “Memex,” was designed to microfilm entire libraries of books and journals, combine these with individuals’ private notes and indexes, and make them available on the desktop. Bush envisioned that the Memex would enable users and information professionals to create new organizations of knowledge through ‘associative trails,’ links among parts of different
documents. Although this system was never built, Bush's ideas inspired generations of future computer scientists, including J.C.R. Licklider, who made fundamental contributions to the development of personal computer interfaces, artificial intelligence, the Internet, and digital libraries. Licklider envisioned much of the design of modern digital libraries, including the integration of indexing, search, retrieval, and storage services (Licklider, 1965). Although they lacked the characteristic search and direct access capabilities of modern digital libraries, social science data archives were, in a sense, the first digital libraries, since they maintained large collections of digital material and provided outside users with access to it. Many of these collections were started in the 1950s, when social scientists realized that their research surveys and similar materials had to be recorded in digital form in order to be preserved for future research (Bisco, 1970). From the 1970s through the late 1980s, digital technology was adopted in most libraries, primarily in the form of OPACs (online public access catalogs), which replaced card catalogs. It was not until the early 1990s that the burgeoning World Wide Web, made dramatically more useful by indexing services such as Lycos (one of the early and dramatic successes of Internet search), greatly accelerated the growth of digital libraries and brought together the combination of access and content that is their modern hallmark. Government funding was crucial to early developments in digital library technology, and it remains important. For example, the Lycos search engine emerged from work done by the Informedia project at Carnegie Mellon, and the immensely popular Google search service emerged from Stanford's digital libraries project. Both of these projects were initially funded under the Digital Library Initiative, a joint program of the NSF, NASA, and DARPA.
The two phases of this initiative sponsored some of the most innovative efforts in digital libraries across a decade (see Griffin, 1998). Other U.S. government programs, such as the National Digital Information Infrastructure and Preservation Program (NDIIPP), funded by the Library of Congress, and the NSF's National Science Digital Library, continue to support innovative research in this area. Other countries have also contributed funding, mostly focused on the digitization of content, although some organizations, such as the U.K.'s JISC (Joint Information Systems Committee), have funded a mix of content and innovative research. Search and information retrieval have long been significant components of digital libraries, and commercial search engines such as Google, Yahoo, and MSN are now extremely popular. Search engines do not, however, constitute digital libraries, which integrate collection management, access, and other services. Some notable examples of modern digital libraries include the arXiv preprint server (McKiernan, 2000) and the many online electronic journal collections made available through the JSTOR project (Guthrie, 2001) and by many of the major commercial and open publishers. (Also see D-Lib Magazine, which routinely highlights notable digital collections.) Moreover, within the last five years, software systems that provide complete digital library services have become available, including Greenstone (Witten & Bainbridge, 2002), VDC (Altman, Andreev, Diggory, et al., 2001), Fedora (see Lagoze, Krafft, Payette, et al., 2005), and DSpace (Tansley, Bass, Stuve, et al., 2003).
Digital Libraries: Advantages and Design

Digital collections have a number of distinct advantages over traditional physical collections. One of the most significant is that they typically allow convenient access, at all hours, to a wide variety of materials from any location that provides Internet access. By offering different forms of access, digital libraries may also expand the potential user community: for example, by offering more convenient access to disabled users, by digitizing fragile materials for popular use, or by offering access to remote populations. As a case in point, by digitizing collections of older journal articles, JSTOR increased the usage of these articles by the same user communities by a factor of 10 (Guthrie, 2001). Offering open access to journals has been shown to further increase usage (see Willinsky, 2005).
Digital Libraries
Table 1. Some advantages of digital libraries

• Convenient access
• New forms of search
• Eases information sharing
• Availability
• Lowered costs
• Ease of replication of content
• Enables new forms of content and collection
Cost is another significant motivation. Recent adoptions of digital libraries show that they can dramatically reduce the costs of some types of access (Montgomery & King, 2002). Although digitizing and distributing collections is not yet uniformly less expensive than maintaining off-site depositories of paper materials, in the foreseeable future it will, for many materials, simply be less expensive than maintaining the corresponding physical space. Digital libraries have a number of other advantages. They can easily be replicated as a safeguard against loss. They can more easily be shared for collaborative work. They enable new forms of search, since both the metadata and the content of digital objects can be indexed. They enable new forms of information organization, since digital materials are not limited to a single location on a single shelf and can be included in many different cross-cutting virtual collections. They facilitate new forms of content, such as newspaper articles with interactive graphs, journal articles with embedded data and simulations, works that include accumulated commentary, and "mashups" intermingling content from multiple works in distinctive (and often dynamic) ways. Clearly, these advantages apply to many aspects of e-government information management and dissemination (for a brief overview, see Marchionini, Samet, & Brandt, 2003). Some particular applications of digital libraries in public information technology include the dissemination of massive amounts of government statistical information (see, for example, Dippo, 2003); providing access to information on natural resource management (Delcambre & Tolle, 2003); and the use of digital libraries as a means of preservation (see Lyons, 2006).
Architecturally, a digital library system typically includes four types of components: repositories, catalogs, identifier systems, and user interfaces. This is sometimes called the Kahn-Wilensky architecture (Kahn & Wilensky, 1995, 2006). Repositories store the raw bits comprising each digital object contained in the library. (Repositories may be little more than file systems, or they may be interfaces to distributed, multi-level storage systems.) Catalogs support search by indexing information in the digital objects and the metadata describing them. Identifier systems provide a framework for identifying and locating objects ("resolving" an identifier). User interfaces bring together the functions of the other components to perform or coordinate services for users, such as searching, browsing, visualization, and delivery. Many other components, services, and agents are also often incorporated into digital library architectures, such as:

• Discovery services, which provide a mechanism (such as a central directory) by which agents or components that interface to the digital library system dynamically discover the services available to the user.
• Security services, which support authentication of users (perhaps by recognizing authentication from another party), provide a framework for making authorization decisions, and secure communication.
• Presentation services, which assist the user in viewing complex digital objects that are not otherwise easily viewable on the user's desktop or in a browser. These may include online systems for data analysis, reformatting services, and streaming media players.
• Software agents, which run independently of the main digital library system and perform additional services with respect to the library's content. Agents may be used to notify the user of new content of interest, to harvest the content of one library into another (see Van de Sompel & Lagoze, 2000), or to aggregate search results from multiple digital libraries for the user.
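The interplay of the four core components can be illustrated with a deliberately minimal sketch. All class and method names here are hypothetical, invented for illustration; they are not drawn from any particular digital library system:

```python
class Repository:
    """Stores the raw bits comprising each digital object."""
    def __init__(self):
        self._objects = {}  # identifier -> bytes

    def store(self, identifier, data):
        self._objects[identifier] = data

    def retrieve(self, identifier):
        return self._objects[identifier]


class Catalog:
    """Supports search by indexing metadata describing the objects."""
    def __init__(self):
        self._index = {}  # term -> set of identifiers

    def index(self, identifier, metadata):
        for term in metadata.lower().split():
            self._index.setdefault(term, set()).add(identifier)

    def search(self, term):
        return self._index.get(term.lower(), set())


class IdentifierSystem:
    """Provides a framework for 'resolving' identifiers to locations."""
    def __init__(self):
        self._locations = {}

    def register(self, identifier, location):
        self._locations[identifier] = location

    def resolve(self, identifier):
        return self._locations[identifier]


class DigitalLibrary:
    """User-interface layer coordinating the other three components."""
    def __init__(self):
        self.repository = Repository()
        self.catalog = Catalog()
        self.identifiers = IdentifierSystem()

    def deposit(self, identifier, metadata, data):
        self.repository.store(identifier, data)
        self.catalog.index(identifier, metadata)
        self.identifiers.register(identifier, "local:" + identifier)

    def find(self, term):
        # Search the catalog, then resolve each hit to its location.
        return [self.identifiers.resolve(i) for i in self.catalog.search(term)]


library = DigitalLibrary()
library.deposit("doc:1", "Memex associative trails", b"...")
print(library.find("memex"))  # ['local:doc:1']
```

The point of the sketch is the separation of concerns: the catalog never touches the raw bits, and the user interface never needs to know where an object physically resides, since the identifier system mediates between them.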
Future Trends

Although digital libraries have yielded significant benefits and gained rapid adoption, a number of risks are foreseeable. These risks also represent disadvantages for adopters, especially if the digital library provides the sole means to access a collection. A digital library is, as Arms (1995) aptly points out, far more than a collection of digital objects, systems, and software. He identifies a number of core digital library principles, paraphrased here:

• Digital library technology exists within a legal and social framework.
• Digital library concepts are obscured by technical jargon.
• Software systems should be separated from digital library content.
• Persistent identifiers are a basic building block of the digital library.
• Different forms of an object may be necessary for storage, delivery, display, and use.
• Repositories should preserve the information in digital objects.
• Users want access to intellectual works, not to digital library technology and objects.
Many of the trends and challenges surrounding digital libraries are corollaries of these basic principles. One of the largest challenges is preservation. Archivists face a new paradigm in the digital age. Previously, much of the archiving of physical objects centered on maintaining them in their original forms. This is neither necessary nor sufficient for works in digital form. Both hardware storage media and software file formats are constantly evolving, and thus digital objects must be periodically migrated from one medium and format to another in order to ensure future usability. At the same time, these reformatted objects may need to be carefully checked to ensure that no intellectually significant information is lost in the reformatting. "Universal numeric fingerprints" offer a possible approach to solving this problem (Altman, Gill, & McDonald, 2003). Complex dynamic objects such as data-driven Web sites and software pose particularly difficult technical challenges. In addition, the responsibilities and rights of institutions with respect to archiving and disseminating digital materials are often hazy. The elimination of the requirement to register copyrights, or even to include notices of copyright, by the Copyright Act of 1976 (moving U.S. law toward the international Berne Convention) implies that copyright applies automatically to almost all digital works. The act's extension of copyright from dual 28-year terms to the life of the author plus 50 years (changed in 1998 to life plus 70 years) implies that newly created digital objects are likely to remain under copyright until long after any systems capable of displaying them have disappeared. (The Digital Millennium Copyright Act's prohibitions against reverse engineering and decryption, and the Supreme Court's extension of patents to software in Diamond v. Diehr (1981), have further complicated digital preservation efforts.) To ensure preservation, multiple approaches may be needed, including independent archival copies, migration, and emulation. Moreover, many libraries may need to take on additional preservation responsibilities in order to ensure that one publisher's demise does not wipe out swathes of important content (Keller, Reich, & Herkovic, 2003). The issue of persistent identifiers is related to preservation. In order to manage digital works over time, to cite those works, and to manage copies of them, those objects need identifiers that are unique and persistent. This is essentially a problem of preserving the linkages among different digital objects and catalogs. Digital object identifiers (DOIs) are a form of identifier (Paskin, 2000) that is increasingly being adopted by the publishing community.
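The core idea behind DOIs and similar persistent-identifier schemes is a level of indirection: the identifier that appears in a citation never changes, while the location it resolves to can be updated as the object moves. A hypothetical sketch (the identifier string and URLs below are invented for illustration):

```python
# A handle-style resolver: citations carry only the persistent
# identifier; the resolver maps it to the object's current location.
resolver = {"10.1000/example.123": "http://publisher-a.example/article"}

def resolve(identifier):
    """Look up the current location registered for an identifier."""
    return resolver[identifier]

# A citation made today records only the identifier...
citation = "10.1000/example.123"
print(resolve(citation))  # http://publisher-a.example/article

# ...and still resolves after the object moves, because only the
# resolver's record is updated, never the published citation.
resolver["10.1000/example.123"] = "http://archive-b.example/article"
print(resolve(citation))  # http://archive-b.example/article
```

This indirection is why the durability of the resolver's institutional host, not just the technology, determines whether identifiers actually persist.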
However, no persistent identifier technology currently has the broad base of institutional and technical support (e.g., support in end-users’ browsers) and offers all of the functionality needed for digital libraries in the future. Another issue that has generated concerns among researchers, educators, activists, and librarians is the wide variation in the quality and
coverage of content in many digital libraries. This variation has a number of sources. Reports in the press have highlighted the willingness of search engines and hosting sites such as Google and MSN to actively self-censor controversial content in response to government pressures (Mills, 2006). Information scientists who study Web search have suggested that a number of more inadvertent biases may also be present in current digital libraries. Wouters, Hellsten, and Leydesdorff (2004) find a bias, in both content selection and search-result weighting, that tilts toward more recent material. This "recency bias" occurs, at least in part, because new material is "born digital" and thus easier to include in digital collections; because current journal articles and similar content command a price premium and are thus more profitable for the publisher; because search engines and digital libraries often do not retain copies of previous versions when material is updated; and because search engines may explicitly weight recently updated material more heavily. In addition, Gerhart's (2004) analysis suggests that search engines have an inadvertent tendency to present the sunny side of controversial topics, because the structure of the Web in general more strongly reflects organizations than ideas, because controversy may be lost in junk links, and because controversial Web sites often lack organizational clout. Such biases may be slow to self-correct because, as Blair and Maron (1985) showed in early work, users are not very good at telling how well an information retrieval system actually works, and they tend to overestimate the completeness of the results that such systems deliver. The increasing "Googlization" of information seeking suggests that users are not becoming more critical of search systems.
Underlining this, in a recent large survey, two-thirds of college students reported that they started their research projects with a Google search and held Google to be as trustworthy as their library. Furthermore, almost one-third of the student respondents reported that they did not validate the results of Web searches against any print sources (De Rosa et al., 2006). Lesk offers an interesting perspective on the issues of usability and accuracy: "Users are neither clear about what they want, able to operate the systems well, nor doing much to get help. However, they're satisfied with the results. To the extent that we can tell, the acceptance of new systems is partly based on the inability of users to tell how badly things are actually working, and partly on the probability that older systems were also not being used very effectively" (Lesk, 2005, p. 217). Despite the complexity of the technical issues, the biggest challenges for future digital libraries are likely to be institutional. Business models for libraries are now a huge unsolved issue. As Lesk aptly summarizes, libraries continue to rely on institutional subvention and have yet to find a way to monetize transactions between users and libraries that is fair, incentive-compatible, and does not involve exorbitant administrative overhead. The economic pressures on libraries are created in large part by the disintermediation of publishers and end users made possible by network technology, and in part by changes in the intellectual property rights regime associated with digital materials. Both driving and driven by the changing intellectual property regime, the increasing use of digital rights management (DRM) software to restrict access to digital materials complicates libraries' dual roles in disseminating information and in preserving it. In the past, libraries typically owned a physical copy of the works that they purchased and could make a straightforward decision as to whether to circulate it, to preserve it, or even to duplicate portions as "fair use." DRM moves libraries from a position of ownership to a more limited and ambiguous position since, fair use notwithstanding, DRM may place limits on how many times an object can be viewed, on the number of pages accessed, on the ability to print the content, or on the ability to export that content to other formats.
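The "universal numeric fingerprint" approach to preservation mentioned earlier can be sketched roughly as follows. This is a simplified illustration of the normalize-then-hash idea, not the actual UNF algorithm: numeric values are rounded to a fixed number of significant digits, serialized in a canonical form, and cryptographically hashed, so that two format conversions of the same data yield the same fingerprint as long as no intellectually significant information was lost.

```python
import hashlib

def fingerprint(values, digits=7):
    """Approximate each value to `digits` significant digits,
    serialize canonically, and hash the normalized form."""
    normalized = ",".join(
        format(float(v), ".%de" % (digits - 1)) for v in values
    )
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same data stored by two different software packages, with
# trailing floating-point noise introduced by format conversion:
original = [1.2345678901, 2.0, 3.5]
converted = [1.23456789012345, 2.0000000000001, 3.5]

assert fingerprint(original) == fingerprint(converted)  # content preserved
assert fingerprint(original) != fingerprint([1.2, 2.0, 3.5])  # content changed
```

Because the fingerprint depends only on the normalized content, it can be published alongside a citation and checked against any future copy of the data, in whatever format that copy happens to be stored.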
The issue of "orphan works" is a telling example of the unintended effects of intellectual property law, and a serious problem for libraries. Orphan works are works for which the current copyright status or copyright holder is unknown. These works constitute a significant portion of works of some types and from some periods, especially photographs that are embedded in other works, and older films,
books, and music that have not yet clearly entered the public domain. Since most created works are not commercially published, and most of those go out of publication quickly, most orphan works are not a potential source of profit to anyone. Prior to 1976, when registration and renewal were required to gain and retain copyright, only 15 percent of registered copyrights were renewed (U.S. Register of Copyrights, 2006). Nevertheless, orphan works cannot legally be distributed (or preserved) by libraries that do not possess a physical copy. Many libraries would readily pay reasonable fees for the restricted redistribution and preservation rights that they have traditionally relied upon for works in physical form. However, the administrative costs and complexities involved in DRM and intellectual property management are often prohibitive. For example, when IBM produced a digital commemoration of Columbus's voyage, it spent a million dollars on copyright clearance, 99 percent of which went to administrative costs rather than to actual copyright fees (Garrett & Waters, 1996, cited in Lesk, 2005). Moreover, publishers often do not devote effort to these issues because they perceive libraries to be a relatively small source of profit. For scholarly libraries, the costs of academic journals have become an increasingly important issue, due to a dramatic increase in prices. Exacerbating this increase, publishers have adopted the practice of "bundling" electronic access to large groups of journals, which makes it more difficult for libraries to be selective in their subscriptions. As a reaction to this, and to take advantage of the opportunities that the Internet offers for widespread distribution, a movement toward "open access" journals has arisen. Open access journals, proposed in 1995 by Stevan Harnad, a cognitive scientist and a specialist in peer review (Okerson & O'Donnell, 1995), offer free public access while maintaining professional peer review and quality standards.
Many new open access journals have been created, and some, such as the Annals of Mathematics, are even constituted as "overlays" that select and organize existing content in preprint servers such as arXiv; the journal itself essentially constitutes a table of contents and a procedure for peer review.
Table 2. A summary of critical issues of digital library technologies

Preservation: preservation of complex digital objects; persistent identifiers; citations
Intellectual property: copyright; DRM; patents; orphan works; open access to journals
Bias: search algorithm bias; censorship; incomplete coverage and recency bias; indiscriminating users
Role of library: collaboration; selection; preservation
In addition, some funding agencies are beginning to mandate open access to publications based on research they sponsor (see Willinsky, 2005, for a detailed discussion).
Conclusion

The technological, legal, and economic trends guiding the use and development of digital libraries underscore fundamental changes in the role of the library, as well as in the form of its contents. Possession of information by a library, once a primary measure of institutional success, is becoming less important. Individuals may soon have the equivalent of large libraries in their pockets. At the same time, the ability to facilitate an individual's search for, access to, and collaboration around information is becoming much more important. Such facilitation is not straightforward, however. As Brown and Duguid (2000) point out, networks of information can have great reach but still fail to promote the types of interactions among users that produce "social knowledge." How to facilitate effective collaboration within a digital library remains an open technical and institutional question. Digital libraries offer expanded access to information at all levels of complexity, from recipes
to research. These libraries have not only made it possible for experts to access information far more rapidly and efficiently, but have also lowered the barriers for non-experts seeking information. Currently, however, such libraries offer only a portion of the collections available at large traditional libraries, and significant institutional and technical challenges have yet to be overcome.
Future Research Directions

The next decade promises to be an exciting one for digital library research. Many of the advances in digital libraries can be expected to come not from pure research, which advances the "state of the art," but from production engineering that develops the "state of the practice." Many open problems beckon in this area:

• Economic issues loom large, and research from the micro level (e.g., how to price various uses of information goods) to the macro level (how various producers, distributors, archives, and consumers of information can form and maintain sustainable relations) is likely to be crucial to the success of digital libraries.
• Digital preservation presents a wide set of open issues. For example: What institutional arrangements can make preservation sustainable? How can the contents of complex, dynamic, multimedia, data-driven, hyperlinked digital works possibly be preserved for future generations? The research questions in preservation span many fields: from identifying and creating the appropriate legal frameworks to enable preservation; to developing economic models of the value of information in the future; to creating methodologies and technologies for emulation, virtualization, and normalization to cope with ever-changing data formats; to establishing networks of distributed storage that can be trusted to preserve content for hundreds of years.
• Currently, most intellectual works enter the collections of digital libraries only after publication of some sort. Integrating digital libraries and information management into the scientific workflow from its inception would revolutionize some areas of research by capturing the products of the research process as it occurs. More generally, individuals are increasingly collecting personal information on a near-continuous (or even continuous) basis. This personal and ambient information, from blogs, "lifelogs," and the like, suggests that the role and functionality of personal digital libraries will expand tremendously. Extending digital libraries to all stages of the information life cycle is an area in which there is much fundamental work to be done.
References

Altman, M., Andreev, L., Diggory, M., Krot, M., King, G., Kiskis, D., et al. (2001). A digital library for the dissemination and replication of social science research. Social Science Computer Review, 19(4), 458-471.

Altman, M., Gill, J., & McDonald, M.P. (2003). Numerical issues in statistical computing for the social scientist. New York: John Wiley and Sons.

Arms, W.Y. (1995, July). Key concepts in the architecture of the digital library. D-Lib Magazine, 1(1). Retrieved November 8, 2007, from http://www.dlib.org/dlib/July95/07arms.html

Arms, W.Y. (2000). Digital libraries. Cambridge, MA: MIT Press.

Bisco, R. (1970). Data bases, computers and the social sciences. New York: Wiley-Interscience.

Blair, D.C., & Maron, M.E. (1985). An evaluation of retrieval effectiveness for a full-text document retrieval system. Communications of the ACM, 28(3), 289-299.

Brown, J.S., & Duguid, P. (2000). The social life of information. Cambridge, MA: Harvard Business School Press.

Bush, V. (1939). Mechanization and the record. (Letter to the editor.) Fortune. (Vannevar Bush Papers, Library of Congress, Box 138, Speech Article Book File.)

Bush, V. (1945). As we may think. The Atlantic Monthly, 176(1), 101-108.

De Rosa, C., Cantrell, J., Hawk, J., & Wilson, A. (2005). Perceptions of libraries and information resources. Dublin, OH: OCLC Online Computer Library Center, Inc.

Delcambre, L., & Tolle, T. (2003). Harvesting information to sustain forests. Communications of the ACM, 46(1), 38-39.

Dippo, C. (2003). FedStats: The gateway to federal statistics. Communications of the ACM, 46(1), 55.

Garrett, J., & Waters, D. (1996). Preserving digital information. Mountain View, CA: Commission on Preservation and Access and Research Libraries Group.

Gerhart, S.L. (2004). Do Web search engines suppress controversy? First Monday, 9(1).

Griffin, S.M. (1998). NSF/DARPA/NASA digital libraries initiative: A program manager's perspective. D-Lib Magazine (July/Aug). Retrieved November 8, 2007, from http://www.dlib.org/dlib/july98/07griffin.html

Guthrie, K.M. (2001). Revitalizing older published literature: Preliminary lessons from the use of JSTOR. In J. MacKie-Mason & W. Lougee (Eds.), Bits and bucks: Economics and usage of digital collections. Cambridge, MA: MIT Press.

IFLA Study Group on the Functional Requirements for Bibliographic Records. (1998). Functional requirements for bibliographic records. International Federation of Library Associations and Institutions. Retrieved from http://www.ifla.org/VII/s13/frbr/frbr.pdf

Kahn, R., & Wilensky, R. (2006). A framework for distributed digital object services. International Journal on Digital Libraries, 6(2), 115-123. Retrieved from http://www.cnri.reston.va.us/kw.html

Keller, M.A., Reich, V.A., & Herkovic, A.C. (2003). What is a library anymore, anyway? First Monday, 8(5). Retrieved from http://firstmonday.org/issues/issue8_5/keller/index.html

Lagoze, C., Krafft, D.B., Payette, S., & Jesuroga, S. (2005). What is a digital library anymore, anyway? D-Lib Magazine, 11(11). Retrieved from http://www.dlib.org/dlib/november05/lagoze/11lagoze.html

Lesk, M. (2005). Understanding digital libraries (2nd ed.). San Francisco: Morgan Kaufmann.

Licklider, J.C.R. (1965). Libraries of the future. Cambridge, MA: MIT Press.

Lyons, S. (2006). Preserving electronic government information. The Reference Librarian, 45(94), 207-223.

Marchionini, G., Samet, H., & Brandt, L. (2003). Digital government. Communications of the ACM, 46(1), 24-27.

McKiernan, G. (2000). arXiv.org: The Los Alamos National Laboratory e-print server. The International Journal on Grey Literature, 1(3), 127-138.

Mills, E. (2006, January 24). Google to censor China Web searches. Retrieved October 24, 2007, from http://www.news.com/Google-to-censor-China-Web-searches/2100-1028_3-6030784.html?tag=item

Montgomery, C.H., & King, D.W. (2002). Comparing library and user-related costs of print and electronic journal collections. D-Lib Magazine, 8(10). Retrieved from http://www.dlib.org/dlib/october02/montgomery/10montgomery.html

Okerson, A., & O'Donnell, J. (Eds.). (1995). Scholarly journals at the crossroads: A subversive proposal for electronic publishing. Washington, D.C.: Association of Research Libraries.

Paskin, N. (2000). E-citations: Actionable identifiers and scholarly referencing. Learned Publishing, 13(3), 159-168.

Tansley, R., Bass, M., Stuve, D., Branschofsky, M., Chudnov, D., McClellan, G., et al. (2003). The DSpace institutional digital repository system: Current functionality. In C. Marshall, G. Henry, & L. Delcambre (Eds.), Proceedings of the 2003 Joint Conference on Digital Libraries (pp. 87-97). New York: ACM Press.

U.S. Register of Copyrights. (2006). Report on orphan works. Washington, D.C.: United States Copyright Office, Library of Congress.

Van de Sompel, H., & Lagoze, C. (2000). The Santa Fe convention of the Open Archives Initiative. D-Lib Magazine, 6(2). Retrieved from http://www.dlib.org/dlib/february00/vandesompel-oai/02vandesompel-oai.html

Willinsky, J. (2005). The access principle. Cambridge, MA: MIT Press.

Witten, I.H., & Bainbridge, D. (2002). How to build a digital library. San Francisco: Morgan Kaufmann.

Wouters, P., Hellsten, I., & Leydesdorff, L. (2004). Internet time and the reliability of search engines. First Monday, 9(10). Retrieved from http://www.firstmonday.org/issues/issue9_10/wouters/index.html
Further Reading

Altman, M., Andreev, L., Diggory, M., Krot, M., King, G., Kiskis, D., et al. (2001). A digital library for the dissemination and replication of quantitative social science research. Social Science Computer Review, 19(4), 458-471.

Altman, M., & King, G. (2007). A proposed standard for the scholarly citation of quantitative data. D-Lib Magazine, 13(1). Retrieved November 8, 2007, from http://www.dlib.org/dlib/march07/altman/03altman.html

Andrews, J., & Law, D. (Eds.). (2004). Digital libraries: Policy, planning and practice. Ashgate.

Arms, W. (2000). Automated digital libraries: How effectively can computers be used for the skilled tasks of professional librarianship? D-Lib Magazine, 6. Retrieved from http://www.dlib.org/dlib/july00/arms/07arms.html

Bishop, A.P., Van House, N.A., & Buttenfield, B.P. (2003). Digital library use. Cambridge, MA: MIT Press.

Borgman, C.L. (2000). From Gutenberg to the global information infrastructure. Cambridge, MA: MIT Press.

Buneman, P., Khanna, S., Tajima, K., & Tan, W. (2004). Archiving scientific data. ACM Transactions on Database Systems, 27(1), 2-42.

Gladney, H.M. (2007). Preserving digital information. Springer Verlag.

Gladney, H.M., & Cantu, A., Jr. (2001). Authorization management for digital libraries. Communications of the ACM, 44(5), 63-65.

Ioannidis, Y., Maier, D., Abiteboul, S., Buneman, P., Davidson, S., Fox, E., et al. (2005). Digital library information-technology infrastructures. International Journal on Digital Libraries, 5(4), 266-274.

Lagoze, C., & Davis, J.R. (1995). Dienst: An architecture for distributed document libraries. Communications of the ACM, 38(4), 47.

Lagoze, C., & Van de Sompel, H. (2003). The making of the Open Archives Initiative protocol for metadata harvesting. Library Hi Tech, 21(2), 118-128.

Lyman, P., & Varian, H.R. (2000). How much information? The Journal of Electronic Publishing, 6(2). Retrieved from http://www.press.umich.edu/jep/06-02/lyman.html

Lynch, C.A. (1999). Canonicalization: A fundamental tool to facilitate preservation and management of digital information. D-Lib Magazine, 5(9). Retrieved November 8, 2007, from http://www.dlib.org/dlib/september99/09lynch.html

Lynch, C. (2001). The battle to define the future of the book in the digital world. First Monday, 6(6). Retrieved from http://firstmonday.org/issues/issue6_6/lynch/index.html

Lynch, C. (2005). Where do we go from here? The next decade for digital libraries. D-Lib Magazine, 11(7). Retrieved November 8, 2007, from http://www.dlib.org/dlib/july05/lynch/07lynch.html

Maniatis, P., Roussopoulos, M., Giuli, T.J., Rosenthal, D.S.H., Baker, M., & Muliadi, M. (2004). LOCKSS: A peer-to-peer digital preservation system. ACM Transactions on Computer Systems, 23(1), 2-50.

McClure, C.R., & Bertot, J.C. (2001). Evaluating networked information services: Techniques, policy, and issues. Medford, NJ: Information Today.

Odlyzko, A. (2001). Content is not king. First Monday, 6(2). Retrieved from http://firstmonday.org/issues/issue6_2/odlyzko/index.html

Paskin, N. (2000). E-citations: Actionable identifiers and scholarly referencing. Learned Publishing, 13(3), 159-168.

Samuelson, P. (2003). Preserving the positive functions of the public domain for science. Data Science Journal, 2(2), 192-197.

Shapiro, C., & Varian, H.R. (1998). Information rules. Cambridge, MA: Harvard Business School Press.

Svenonius, E. (2000). The intellectual foundation of information organization. Cambridge, MA: MIT Press.

Weibel, S.L., & Lagoze, C. (1997). An element set to support resource discovery: The state of the Dublin Core. International Journal on Digital Libraries, 1(1), 176-186.
Terms and Definitions

Born Digital vs. Digitized Objects: Objects that were born digital were created originally in digital form, for example a word-processing file. Digitized objects were created from non-digital forms (e.g., by optically scanning a paper book).

Digital Repository: A managed system for the long-term storage of digital objects. Institutional repositories are a type of digital repository designated by an institution for the preservation of digital objects produced under its aegis.

Metadata: Metadata is often defined as data about data. More specifically, it is information that refers to other digital objects. Metadata often consists of descriptive information (such as a title), administrative information (such as a description of the rights required to view the object), and structural information (such as the organization of page images within a larger book). Metadata can be derived from the object itself in order to be used as a surrogate for searching and other services, but more often it is information that is not contained in the object itself.

Simple/Complex Objects: Simple digital objects consist of a single file that can be fully understood by the user and represented by the user's software. Complex digital objects require multiple separate files, and possibly additional metadata, to be properly understood.

Universal Numeric Fingerprint: A universal numeric fingerprint (UNF) is used to guarantee that two digital objects (or parts thereof) in different formats represent the same intellectual object (or work). UNFs are formed by generating an approximation of the intellectual content of the object, putting this in a normalized form, and applying a cryptographic hash to produce a unique key (Altman et al., 2003).

Union Catalog: An online public access catalog (OPAC) formed by indexing descriptive metadata for each item in the library. In the digital library, union catalogs are increasingly being supplemented or replaced by a combination of distributed search across multiple independent catalogs and direct indexing of digital object content.

Work/Edition/Manifestation/Item Hierarchy: A set of principles for distinguishing between a distinct work of intellectual creation (e.g., Beethoven's 5th Symphony), the edition of that work (e.g., the 1979 performance by the Vienna Symphony Orchestra), the manifestation of that work (e.g., an MP3 file created with particular 192-bit-rate encoding settings), and a particular item (e.g., a copy of that MP3 file that resides in a particular repository).
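The three-step UNF construction described above (approximate, normalize, hash) can be sketched as follows. This is a rough illustration under our own choices of rounding precision and hash function, not the canonical algorithm of Altman et al.:

```python
# Rough UNF-style sketch: approximate the content (round numbers to a
# fixed number of significant digits), normalize it to a canonical text
# form, and hash the result. The 7-digit precision and SHA-256 are
# illustrative assumptions, not the published standard.
import hashlib

def unf_like_fingerprint(values, digits=7):
    # Approximate: keep only `digits` significant digits of each number.
    approx = [format(float(v), f".{digits - 1}e") for v in values]
    # Normalize: a canonical, delimiter-separated UTF-8 byte string.
    normalized = "\n".join(approx).encode("utf-8")
    # Hash: a strong cryptographic hash yields the unique key.
    return hashlib.sha256(normalized).hexdigest()

# Two slightly different representations of the same data agree
# once the approximation step has been applied:
a = unf_like_fingerprint([0.1 + 0.2, 1.0])
b = unf_like_fingerprint([0.3, 1.00000000001])
print(a == b)  # True
```

The point of the normalization step is that any system, given semantically equivalent data in any format, derives the same key.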
Chapter XVI
An Exploratory Study of the E-Government Services in Greece Dimitrios K. Kardaras Athens University of Economics and Business, Greece Eleutherios A. Papathanassiou Athens University of Economics and Business, Greece
Introduction

The impact of "e-business" on the public sector is the main source of government transformation towards "e-government," which refers to the public sector's efforts to use information and communication technologies (ICT) to deliver government services and information to the public. E-government allows citizens to interact more directly with the government, transforming multiple operational and bureaucratic procedures and employing a customer-centric approach to service delivery; it allows intra-governmental communication; and it offers numerous possibilities for using the Internet and other Web-based technologies to extend online government services (Gant, Gant & Johnson, 2002). Governments evaluate the best practices of e-business applications worldwide and establish policies for the development of e-government applications. The aim of this strategy is to develop and provide faster and cheaper public services and to contribute decisively to the new knowledge-based economy.
The visions, goals, and policies that encompass e-government vary considerably among practitioners and users, while comparative indicators may not always be precise (U.N., 2001). As e-government comprises various aspects, perspectives, and objectives, there is no single valid way of assessing its progress. A number of different methodologies for collecting and analyzing data have been applied in different reviews, depending on their evaluation objectives. The primary goal of the present study is to evaluate e-government services in Greece against a set of carefully chosen criteria, in a manner that can also be used for evaluating e-government services worldwide.
Background

It is with the advent of Internet technologies that e-government has truly become a global phenomenon (Jaeger, 2003; Jaeger & Thompson, 2003; Norris, 2003; Muir & Oppenheim, 2002; Wimmer,
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Traunmuller & Lenk, 2001), and the importance of e-government services was highlighted by a survey carried out in 2001 by the United Nations (U.N.) Division for Public Economics and Public Administration (UN, 2001). The survey of the 190 U.N. member states indicates that 169 countries (88.9 percent) had launched Web pages offering e-government services, and that 16.8 percent of these had been created recently and offered only minimal, static informational content. Sixty-five of the participating countries (34.2 percent) had developed applications giving users access to a range of public information and e-government services, while 17 countries (9 percent) had developed fully transactional e-services for their stakeholders. For its part, the European Union (E.U.), investing in the prospects and potential benefits of e-government services, laid out a plan of action called "eEurope 2005-2010" (eEurope 2002, 2001; Wimmer, 2002). In this plan the E.U. sets strategies for developing e-government services and supports its member countries with budgets for that purpose.
Definitions of E-Government

There are differences in the philosophical underpinnings of e-government. For some, e-government is the "application of the tools and techniques of e-commerce to the work of government" (Howard, 2001). This perspective focuses on the practical efficiencies and cost reductions of e-government, such as those that can be provided by online procurement and online tax filing. For others, e-government has the potential to "improve democratic participation" and "overcome political alienation" (Noveck, 2003), which emphasizes e-government as an enabler for realizing an inclusive and productive public sector along with more efficient administration (White Paper on European Governance, 2001). A more technical perspective considers e-government as the use of information technology to provide government services, support internal government functions, and engage citizens (Medjahed et al., 2003).
E-government refers to the delivery of government information and services online through the Internet or other digital means, and may also include opportunities for online political participation (Norris, 2003). Thus it concerns the complete range of automation of government information and services; it encompasses the concepts of e-commerce, e-administration, e-service, and e-information (Warkentin et al., 2002).
Types of E-Government

E-government activities can be examined in terms of the interactions between sectors of government, businesses, and citizens.

•	Government-to-government (G2G): G2G initiatives facilitate increased efficiency and communication between parts of a government.
•	Government-to-business (G2B): G2B initiatives, involving the sale of government services and goods along with procurement facilities, have benefits for both businesses and governments.
•	Government-to-citizen (G2C): G2C initiatives can facilitate involvement and interaction with the government, enhancing the quantity and quality of public participation in government (Kakabadse, Kakabadse & Kouzmin, 2003).
Developmental Stages of E-Government

The four-stage growth model for e-government (Gupta & Debashish, 2003; Layne & Lee, 2001) is described as:

1.	Cataloguing (online presence, catalogue presentation, and downloadable forms): In this stage, governments create a "state Web site." They do not have much Internet expertise and prefer to minimize risk by undertaking a small project. Parts of the government's non-transactional information are put on the site. Usually, at first, the index site is organized on the basis of functions or departments rather than service access points.
2.	Transaction (services and forms online, with a working database supporting online transactions): This stage empowers citizens to deal with their governments online anytime, saving hours of paperwork, the inconvenience of traveling to a government office, and the time spent waiting in long queues. Registering vehicles or filing tax forms online is only the beginning of such transaction-based services.
3.	Vertical integration (local systems linked to higher-level systems, within similar functionality): Information is accessed through the citizen's local portal. The citizen-user should still be able to access the service from the same entry in the local portal, because the local systems are connected to upper-level systems, directly or indirectly.
4.	Horizontal integration (systems integrated across different functions; real one-stop shopping for citizens): The horizontal integration of government services across different functions of government depends significantly on the efficient use of information technology. Such integration will facilitate "one-stop shopping" for the citizen.

Public authorities are providing a number of public services online (Table 1; Institute of Technology Assessment, 1998). It is worth noting that among the basic public services listed in the eEurope action plan (eEurope 2005, 2005), 15 of them (i.e., 75 percent) are transaction services, that is, services that involve the filling-in and submission of electronic forms.
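The four stages above can be read as a cumulative capability ladder, which a simple classifier can make concrete. The boolean feature names below are our illustrative assumptions, not measures defined in the chapter:

```python
# Hypothetical sketch: assigning a government Web site to one of the four
# Layne & Lee (2001) growth stages from observed capabilities. A site is
# placed at the highest stage whose capability it exhibits.

def classify_stage(has_catalogue: bool, has_transactions: bool,
                   vertically_integrated: bool,
                   horizontally_integrated: bool) -> int:
    """Return the growth stage (1-4), or 0 for no Web presence."""
    if horizontally_integrated:
        return 4  # one-stop shopping across government functions
    if vertically_integrated:
        return 3  # local systems linked to higher-level systems
    if has_transactions:
        return 2  # online forms backed by working transactions
    if has_catalogue:
        return 1  # static online presence, downloadable forms
    return 0      # no Web presence at all

print(classify_stage(has_catalogue=True, has_transactions=True,
                     vertically_integrated=False,
                     horizontally_integrated=False))  # 2
```

The ordering of the checks matters: because each stage presupposes the ones below it, testing from stage four downward returns the most advanced stage reached.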
Table 1. Electronic government services typology: overview of possible services and their application areas (Institute of Technology Assessment, 1998)

Everyday life
•	Information services: Information on work, housing, education, health, culture, transport, environment, etc.
•	Communication services: Discussion fora dedicated to questions of everyday life; jobs or housing bulletin boards
•	Transaction services: e.g., ticket reservation, course registration

Tele-administration
•	Information services: Public service directory; guide to administrative procedures; public registers and databases
•	Communication services: E-mail contact with public servants
•	Transaction services: Electronic submission of forms

Political participation
•	Information services: Laws, parliamentary papers, political programs, consultation documents; background information on decision-making processes
•	Communication services: Discussion fora dedicated to political issues; e-mail contact with politicians
•	Transaction services: Referenda, elections, opinion polls, petitions

Success Factors of E-Government Initiatives

Some of the key issues that affect the success and deployment of e-government initiatives are (Ke & Wei, 2004; Steimke & Hagen, 2003):

•	The e-government structure: This involves the policies on the transformation of e-government, the intention to make resources available for e-government initiatives, the choice of projects for the transformation, the organizational support for the new systems, and the continued development of common standards and guidelines.
•	The knowledge developer: The role of the knowledge developer is the accumulation of systems knowledge, organizational knowledge, and knowledge from operating within the given resource constraints, as well as the input from the users on the implementation of the project. The knowledge developer also plays a key role in integrating the agencies through tele-cooperation systems.
•	User participation: This is the input from the users on the usage and acceptance of the system. The key factors in this layer are targeted towards system acceptance and enhanced participation in the overall transformation of the government.
Research Analysis

Research Methodology

A questionnaire that addresses all stages of e-government development was developed and tested for its content validity. For its validation, a set of evaluation factors was first specified after a thorough analysis of the relevant literature, and a draft version was sent to university professors who specialize in information systems (IS) as well as to information technology (IT) professionals who work in public-sector organizations. Their comments and suggestions regarding the rephrasing of questions and the operationalization of factors were taken into consideration and incorporated in the final version of the questionnaire. Web sites were evaluated with respect to forty-four criteria in an attempt to establish a comprehensive evaluation model for e-government. The criteria used in this study fall into the following categories:

•	System design: Speed of first-page loading (skip intro), number of inactive or outdated ("dead") links, consistency of style (layout, text, use of colors), existence of a site map, precautions to ensure correct transactions, support of payment.
•	Usability: Accessibility of the Web site, ease of finding the Web site, ranking placement after searching with search engines, multilingual support, search engine availability, help desk availability, FAQs section availability.
•	User's motivation: Existence of a forum where issues can be raised, promotion activities for users to join the site, users motivated to feel they participate.
•	Communication: Reference to the Web site's administrator; report of postal address and telephone number.
•	Content: Number of links to other government sites, number of links to other non-government sites, advertising on the Web site, company profile, organization chart published on the Web site, information on employment opportunities, legal information.
•	Currency: Whether events were broadcast; how current the information is.
•	Accuracy: Keeping track of and linking to copies of legislation or other documents of interest.
•	Online services: Sophistication stage, number of online services, number of clicks to find the online services.
•	Support: Reply time for e-mail requesting information; whether there was a follow-up service to customers.
•	Security: Passwords, public key infrastructure.
A five-point Likert scale ("Not at all," "Little," "Moderate," "Good," and "Very Good") was used in answering the questionnaire. The operationalization of the evaluation criteria was clarified in order to minimize bias in answering the questions, for example how to differentiate "very good" from "good." The operationalization specifies how to measure each of the evaluation criteria considered in the questionnaire. Examples of the measurement intervals used are shown in Table 2. The study sample consists of 60 governmental Web sites that support public services in Greece. The data were collected by the authors and analyzed with SPSS version 13.0. The sample was randomly drawn from a group of 553 public organizations representing the following categories, as classified by the Greek search engine "in.gr":
Table 2. Examples of evaluation criteria and their measures used in the study

How fast is the first page loading (skip intro)?
0 = Inaccessible; 1 = 16-30 sec; 2 = 9-15 sec; 3 = 5-8 sec; 4 = 1-4 sec

Was the style (layout, text, use of colours) consistent?
Scored on the same 0-4 scale.

Are any inactive or outdated ("dead") links encountered?
0 = More than 10; 1 = 7-10; 2 = 4-6; 3 = 1-3; 4 = None

How was the subject index clustered?
0 = No clear structure; 1 = Theme; 2 = Theme/Sub-service; 3 = Service/Sub-theme; 4 = User type

Was the Web site easy to find?
0 = Not easy to find; 1 = Found by 1 search engine; 2 = Found by 2 search engines; 3 = Found by 3 search engines; 4 = Found by all 4 search engines

What is the ranking after searching with Google.com?
0 = No rank; 1 = 11th-15th place; 2 = 7th-10th place; 3 = 4th-6th place; 4 = 1st-3rd place

What is the ranking after searching with Yahoo.com?
0 = No rank; 1 = 11th-15th place; 2 = 7th-10th place; 3 = 4th-6th place; 4 = 1st-3rd place

What is the ranking after searching with In.gr?
0 = No rank; 1 = 11th-15th place; 2 = 7th-10th place; 3 = 4th-6th place; 4 = 1st-3rd place

How easy was it to find the online services?
0 = More than 10 clicks; 1 = 7-10 clicks; 2 = 5-6 clicks; 3 = 3-4 clicks; 4 = 1-2 clicks
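The measurement intervals in Table 2 amount to simple threshold rules. As an illustrative sketch, two of the rows can be expressed as scoring functions; the function names are our own, and only the interval boundaries come from the table:

```python
# Illustrative sketch: mapping two raw measurements from Table 2
# onto the study's 0-4 scores.
from typing import Optional

def score_load_time(seconds: Optional[float]) -> int:
    """0 = inaccessible; 1 = 16-30 s; 2 = 9-15 s; 3 = 5-8 s; 4 = 1-4 s."""
    if seconds is None:   # page never loaded
        return 0
    if seconds <= 4:
        return 4
    if seconds <= 8:
        return 3
    if seconds <= 15:
        return 2
    return 1              # 16 seconds or slower

def score_dead_links(count: int) -> int:
    """0 = more than 10; 1 = 7-10; 2 = 4-6; 3 = 1-3; 4 = none."""
    if count == 0:
        return 4
    if count <= 3:
        return 3
    if count <= 6:
        return 2
    if count <= 10:
        return 1
    return 0

print(score_load_time(6.0), score_dead_links(2))  # 3 3
```

Expressing each criterion as an explicit interval-to-score mapping is what keeps the rating reproducible across evaluators, which is the point of the operationalization step described in the methodology.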
•	Government and ministries
•	Municipalities and local government
•	Public insurance organizations
•	Chambers
•	Defense and security
•	Other public organizations, such as the National Documentation Centre, the Hellenic Organization of Small and Medium Sized Enterprises & Handicraft, the Supreme Council of Staff Selection, etc.
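The sampling step described in the methodology (60 sites drawn at random from 553 public organizations) can be sketched as follows; the numeric identifiers and the seed are placeholders, not the authors' actual sampling frame:

```python
# Illustrative sketch (not the authors' actual procedure): drawing a
# simple random sample of 60 sites from a frame of 553 organizations.
import random

FRAME_SIZE = 553    # public organizations in the "in.gr" categories
SAMPLE_SIZE = 60

random.seed(42)                    # fixed seed for reproducibility
frame = list(range(FRAME_SIZE))    # stand-in identifiers for the organizations
sample = random.sample(frame, SAMPLE_SIZE)

print(len(sample))  # 60
```

`random.sample` draws without replacement, so no organization can appear in the sample twice; a 60-of-553 draw gives roughly a 10.8 percent sampling fraction.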
Evaluation Results for System Design

•	Regarding the speed of loading, 58 percent of the sample achieve a "good" rate, meaning their sites load within 4-8 seconds. This is a promising result, as it allows users to read an organization's Web site quickly, without delays. Only 2 percent of the sample are in the "not at all" category.
•	With respect to outdated links, half of the surveyed Web sites (50 percent) have up-to-date Internet links. Moreover, 42 percent have one to three inactive links, and 8 percent have more than four.
•	Style and layout were consistent across all of a Web site's pages at a rate of 95 percent. A site's appearance makes the first impression on users, and if there is style consistency throughout the Web site they will not be confused.
•	There was no site map facility in 60 percent of the surveyed Web sites. This suggests that no consideration has been given to inexperienced users visiting these sites.
•	80 percent of the surveyed Web sites have not taken any measures to ensure correct and secure transactions and data exchange between the site and its users.
•	No direct payment service is available through the organization's Web site in 98 percent of the sample.
•	An overall assessment of the design features of the sample indicates an absence of integrated processes among government agencies, and a lack of information technology and communication infrastructure.
Evaluation Results for Usability

•	95 percent of Web sites were always accessible, regardless of the number of attempts or the time of day (night hours between 00:00 and 06:00 excepted). Incidentally, it is worth mentioning that no provision was made for handicapped users accessing the Web sites, that is, extra audio or vision features that would allow them to be informed as well.
•	Four popular search engines were used in order to estimate how easy it is for a user to find a governmental Web site or service (two search engines with worldwide range and two Greek ones). The ease of finding a Web site depends on the ranking that a search engine gives it. Across all four search engines, searches succeeded at a rate of 82 percent. Taken separately, the success rates were 90 percent for "Google.com," 88 percent for "Yahoo.com," 95 percent for "In.gr," and 88 percent for "Pathfinder.gr."
•	75 percent of Web sites support at least one foreign language, while 62 percent of the Web sites in the sample support only the English language. However, it is notable that 25 percent of Web sites have no information at all translated into any foreign language.
•	85 percent of surveyed sites have no help desk facilities, 78 percent have no FAQ section available, and 48 percent have no search engine available. The absence of help desk facilities discriminates against novice users in favor of expert ones.
•	In conclusion, Web sites in the sample are very easy for a user to find (according to search engine rankings) but not so easy to navigate (lack of extended help facilities, no FAQs), especially if the user is unfamiliar with ICT and Internet usage.
Evaluation Results for User's Motivation

•	Fora are absent from governmental Web sites in 90 percent of the study sample. Features motivating users to feel that they take part in the agency's Web site performance are absent in 85 percent of cases.
•	Although there are some features such as newsletter subscriptions, promotion activities for users to join a site scored "not at all" in 80 percent of cases.
•	It can be argued that the Web sites in the sample have been designed and developed without considering users' opinions or users' potential to participate in open discussions or debates about governmental or other matters.
•	The Web sites' status indicates that basic e-government dimensions such as e-administration and e-democracy are immature, and the potentially innovative force of e-government best practice has not yet been exploited.
Evaluation Results for Communication

•	Results show that 78 percent of the Web sites offer some contact information, including at least a postal address and telephone number and, on some occasions, e-mail lists, fax numbers, and a variety of supplementary information. The remaining 22 percent of Web sites make no contact information available.
•	78 percent of the Web sites have operational catalogs of departments. Eighty-two percent of the Web sites have e-mail facilities for communicating with their stakeholders.
•	47 percent of the Web sites provide no reference to a Web administrator or a person responsible for the site's content.
Evaluation Results for Content

•	The study shows that 53 percent of the sample Web sites include adequate links to government sites, while 48 percent also include adequate links to non-government sites. This indicates that, in general, the sites' visitors have the opportunity to access other sites of similar interest in order to find additional information about a given issue or to confirm the site's content by comparison.
•	In 63 percent of the investigated Web sites there were no advertisements for products or services, either commercial or noncommercial.
•	77 percent of the surveyed Web sites offer company profile information at an assessment ranking of "Good" or "Very Good."
•	Information on employment opportunities was lacking in 75 percent of cases.
•	Eighty-three percent of the investigated Web sites offer, to some extent, legal information related to the site's subject index. This is because governmental sites aim to familiarize users and citizens with national legislation; in fact, this is one of the strategic objectives of e-government development in general.
Evaluation Results for Currency

•	Seventy percent of the surveyed Web sites published information whose content had been updated within the last month, while 12 percent published information updated every day.
Evaluation Results for Accuracy

A Web site's accuracy was assessed by measuring the degree to which the site enables users to verify and validate the information they have accessed.

•	Fifty-three percent of the surveyed Web sites offer links to files in PDF format, enabling users to track and keep copies of legislation or other documents of interest, while 20 percent of Web sites offer database search facilities that give users access to accurate information.
Evaluation Results for Services

•	In terms of the four-stage scoring framework (Gupta & Debashish, 2003; Layne & Lee, 2001) regarding the availability of online public services, 78 percent of the investigated Web sites were found to be in stage one (information), 5 percent in stage two (one-way interaction), and 13 percent in stage three (two-way interaction). A very limited 3 percent of the sample is in stage four (transactional), and thus able to offer complete service fulfillment.
•	Twenty-two percent of the surveyed Web sites had already launched at least one service online.
•	Furthermore, 39 percent of e-services are very easy for the user to locate (according to the required number of clicks), while 61 percent require some additional effort.
Evaluation Results for Service Support

•	Concerning reply times to e-mails requesting information, the study indicates that 50 percent of Web site administrators did not send a reply message at all, 17 percent responded within 24 hours, 13 percent within two days, and 20 percent within five working days at most.
•	Follow-up services were provided by 61 percent of the Web sites that offer e-services online.
Evaluation Results for Security

•	As far as security issues are concerned, this study's findings are alarming. In the surveyed Web sites there was a complete absence of any reference to public key infrastructure. Web sites with secure server connections accounted for 46 percent of those providing online e-services, whereas 77 percent of Web sites providing online e-services were accessible through a password.
Future Trends

E-government is a great opportunity for governments to improve their relationships with citizens and to create public value, as defined by social priority goals. This study suggests a set of evaluation criteria that comprehensively cover aspects of e-government development. Further research could investigate the evaluation criteria used in this study, towards developing a model of metrics for e-government evaluation. Furthermore, service customization and Web site personalization should be the direction forward. Coupling e-government with other technologies, such as customer relationship management systems, will assist in delivering value-adding services. Multicultural issues should be investigated. Research could also focus on building models of factors that may affect e-government, such as infrastructure, staff requirements, access to the Internet, and so forth. Another research opportunity is to investigate how technology acceptance and adoption models and theories can guide the development of e-government services and their management.
Conclusion

This chapter investigates the status of e-government development in Greece. It draws on previous research in order to consider and amalgamate a set of evaluation criteria for e-government assessment. The results of this exploratory study indicate that Greece is falling behind in terms of e-government services development. It is not only the delays in offering an attractive and secure applications portfolio to citizens and businesses, but also the lack of a mentality that would invite citizens to participate in e-government development. Bearing in mind that only 3 percent of the sample offer horizontal integration across organizations in order to provide full service support and fulfillment (i.e., stage four of e-government development), this study suggests the necessity of a joint strategic development of e-government across the public sector. Such an initiative would ensure the modernization and integration of applications at the administrative, economic, and social levels. In order to achieve a full implementation of e-government applications, back-office operations should be reorganized and simplified. This study indicates that it is only in terms of Web site design that the Greek public sector achieves a good performance, as opposed to user motivation, participation, and communication. Citizens' involvement in the design of e-government services has been utterly neglected. This can be attributed to the fact that e-government in Greece, as depicted in the present study, is at the early stages of its evolution, providing simple general information to citizens about public administration issues. In addition, Internet use for purchasing in Greece is very limited, since the percentage of households with Internet access is 22 percent and the percentage of individuals having purchased or ordered online in the last three months is lower still, at 2 percent (eGovernment Factsheet: Greece, 2006). Therefore interaction with the public, which is a critical success factor for e-government development, remains underdeveloped. Future initiatives should therefore focus on creating an environment that supports the use of the Internet and the development and adoption of citizen-oriented policies.
Future Research Directions

E-government is a great opportunity for public administrations to improve their efficiency and to offer better-quality service to their citizens. E-government is not just about the delivery mechanisms themselves, but about a total rethinking of the services. Research-wise, there are interesting opportunities to investigate e-government from different perspectives, since it is an interdisciplinary domain. From a managerial point of view, strategic planning is at the top of the list. Methodologies and models have been developed for IS strategic planning, but there is a need to study the strategic implications of e-government and to evaluate its impact on structures, processes, employees, citizens, and other stakeholders. Strategic planning studies may focus on the non-profit nature of public administration organizations and investigate the development of new methods and tools. Further, the restructuring of processes could be a promising area for research, since business process management has been a topic of research in private-sector companies for more than 15 years, as could the relationships between e-government and management styles and culture in public administration. Research could then focus on linking public-sector strategies with e-government services and processes. Along this line, the importance and role of stakeholders, such as politicians and political parties as well as the business establishment, in developing e-government could be investigated. The views of citizens and the study of their needs and requirements can also be a topic for further investigation. Issues such as Internet penetration in the population, the level of computer literacy, and social, educational, and legal issues as part of the necessary infrastructure may shed light on the critical success factors for e-government development. In addition, developing methods and performance indicators to assess the services and standards of e-government is an area needing more research attention, as it would help in managing e-government development. E-government is related to delivering services. An important area of research is to investigate the quality of e-government services by incorporating and extending approaches, tools, and factors that measure service and e-service quality. From a technological point of view, the development of the Semantic Web opens up new opportunities for developing advanced services for citizens and companies. Along that line, research may focus on service customization and Web service composition. There is a lot of research currently under way in these areas, but it would be interesting to apply these technologies and methods in e-government. Collaboration platforms may be designed and developed in order to provide seamless services across public administration and support process integration. Modeling citizens' satisfaction, along with input from the user cognition domain, may lead to the development of new frameworks for e-government requirements analysis and specification, and could be of value in both research and practice.
A promising area of research is also the design and development of recommender systems that may advise citizens on the documentation required by the bureaucracy and on the available services that would be of value to a citizen or a company, as well as provide customized information on issues such as funding opportunities, job offers, legal advice with respect to the public sector, or even e-government service customization.
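As a minimal illustration of the recommender idea sketched above, a content-based filter might match a citizen's profile against tags describing available public services. All service names, tags, and profile attributes here are invented for illustration and do not correspond to any deployed system:

```python
# Hypothetical content-based recommender for public e-services.
# The service catalogue and citizen profile tags are invented examples.

SERVICES = {
    "income_tax_filing": {"taxpayer"},
    "business_startup_grant": {"entrepreneur", "funding"},
    "job_vacancy_alerts": {"job_seeker"},
    "vat_declaration": {"entrepreneur", "taxpayer"},
}

def recommend(profile_tags: set, top_n: int = 2) -> list:
    """Rank services by Jaccard similarity between profile and service tags."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    ranked = sorted(
        SERVICES, key=lambda s: jaccard(profile_tags, SERVICES[s]), reverse=True
    )
    return ranked[:top_n]

# A small-business owner who also files personal taxes:
print(recommend({"entrepreneur", "taxpayer"}))
# -> ['vat_declaration', 'income_tax_filing']
```

A production system would of course draw on richer profiles and interaction histories, but the same matching principle underlies the personalized funding, job, and legal information services envisaged above.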
Terms and Definitions

E-Commerce: This concept is linked to the business side of government interactions. In e-commerce, the exchange of money for goods and services is conducted over the Internet: for example, citizens paying taxes, renewing vehicle registrations, and auctioning surplus equipment (through online purchasing and e-procurement).

E-Services: Describes the use of electronic delivery for government information, programs, and services, available online 24 hours a day, 7 days a week. The term also covers electronic service delivery (ESD) and expressions such as "one-stop service centre," meaning that citizen needs are met through a single contact with the government. The strategic challenge is to deliver quality services to users cost-effectively.

E-Management: While e-services focus on so-called "front-office" relations, e-management (e-administration) refers to the so-called "back-office" organizational systems. E-government initiatives within this domain deal particularly with improving management, from streamlining internal processes to cross-departmental flow of information.

E-Democracy: This is the most difficult feature of e-government to generate and sustain. It refers to activities that increase citizen involvement, including virtual town meetings, open meetings, cyber campaigns, feedback polls, public surveys, and community forums, as well as e-voting.
Government-to-Business (G2B): Involves the sale of government services and goods along with procurement facilities, and has benefits for both businesses and governments. For businesses, G2B interactions can result in increased awareness of opportunities to work with the government, as well as cost savings and improved efficiency in performing transactions. For governments, G2B interactions offer benefits in reducing costs and increasing efficiency in procurement processes, and provide new avenues for selling surplus items.

Government-to-Citizen (G2C): Can facilitate involvement and interaction with the government, enhancing the quantity and quality of public participation in government. G2C interactions can allow citizens to be better informed about government laws, regulations, policies, and services. For the citizen, e-government can offer a huge range of information and services, including government forms and services, public policy information, employment and business opportunities, voting information, tax filing, license registration or renewal, payment of fines, and submission of comments to government officials.

Government-to-Government (G2G): Facilitates increased efficiency and communication between parts of a government. G2G initiatives can improve transaction speed and consistency while reducing the time employees have to spend on tasks. G2G initiatives enable the sharing and integration of federal, state, and local data, helping to better leverage investments in IT systems, to improve grant management capabilities, and to support "vertical" (that is, intergovernmental) integration requirements.
Chapter XVII
E-Government's Barriers and Opportunities in Greece

Giorgos Laskaridis, University of Athens, Greece
Konstantinos Markellos, University of Patras, Greece
Penelope Markellou, University of Patras, Greece
Angeliki Panayiotaki, University of Patras, Greece
Athanasios Tsakalidis, University of Patras, Greece
Introduction

In recent years, we have witnessed the rapid evolution of the World Wide Web. This development allows millions of people all over the world to access, share, interchange, and publish information. In this context, many governments have realized that their information resources are not only valuable to themselves, but are also valuable economic assets that fuel the knowledge economy. Making sure the information they hold can be readily located and passed between the public and private sectors, while taking account of privacy and security obligations, helps to make the most of this asset, thereby driving and stimulating the national and international economy. Governments have taken advantage of information and communication technologies (ICT) and the continuing expansion of the Web, and have started e-government strategies to renew the public administration and eliminate existing bureaucracy, thereby reducing costs (Riedl, 2003; Tambouris et al., 2001). In Greece, ICT was at first explored and then exploited in order to help e-government grow. The main boost towards e-government came from EU funding of the respective actions. The Greek approach towards e-government and the information society has undergone, in terms of
top-level planning, a radical change between the second (1994-1999) and third (2000-2006) community support framework (CSF) periods. The efforts during the second period concentrated mainly on informational e-government Web portals and on supplying the public administration with technological infrastructure, so that employees could become familiar with technology and abandon the traditional paperwork. During the third period some, though not many, transactional e-services have been provided by the public administration (Hahamis et al., 2005). This chapter presents the efforts made so far in Greece as far as e-government is concerned. Its aim is to point out the necessity of designing and implementing efficient e-government applications. The vision of an electronically modernized Greek public administration will be realized only if a series of key strategic aspects is considered, together with international best practices and experiences. Moreover, the chapter demonstrates the arising opportunities and the key challenges.
Background

Although the literature relating to this area proliferates, the definition and the various models of e-government are still unclear among researchers and practitioners of public administration. According to the E-governance Institute (2004), "E-governance involves new channels for accessing government, new styles of leadership, new methods of transacting business, and new systems for organizing and delivering information and services. Its potential for enhancing the governing process is immeasurable." Another quite broad definition, which incorporates the four key dimensions that reflect the functions of government (e-services, e-democracy, e-commerce, and e-management), is the following: "E-government is the use of information technology to support government operations, engage citizens, and provide government services" (Dawes, 2002). E-government can be distinguished into three basic categories: (a) Government-to-Citizen (G2C), which relates to the relationships between governments and citizens; (b) Government-to-Business (G2B), which relates to the relationships between governments and businesses; and (c) Government-to-Government (G2G), which relates to the activities that improve and upgrade governments' services (Egov, 2003). Recently, a fourth category has been added, that of Government-to-Employees (G2E) (Ndou, 2004).

E-government is not a one-step process, nor is it implemented as a single project. It is evolutionary in nature, involving multiple stages or phases of development. According to the Gartner Group, an international consultancy firm (Baum & Di Maio, 2000), e-government matures through the following four phases:

• Stage 1—Presence: The primary goal is to post information such as agency mission, addresses, opening hours, and possibly some official documents of relevance to the public.
• Stage 2—Interaction: This phase is characterized by Web sites that provide basic search capabilities, host forms to download, and linkages with other relevant sites, as well as e-mail addresses of offices or officials. This stage enables the public to access critical information online and receive forms that may previously have required a visit to a government office.
• Stage 3—Transaction: This phase is characterized by allowing constituents to conduct and complete entire tasks online. The focus of this stage is to build self-service applications for the public to access online, but also to use the Web as a complement to other delivery channels. Typical services migrated to this stage include tax filing and payment, driver's license renewal, and payment of fines, permits, and licenses. Additionally, many governments put requests for proposals and bidding regulations online as a precursor to e-procurement.
• Stage 4—Transformation: This phase is characterized by redefining the delivery of government services by providing a single point of contact to constituents that makes the government organization totally transparent to citizens. Examples of transformation include highly tailored Web sites, or "virtual agencies," where government information is pushed to citizens, and where they can pay local property taxes, renew state driver's licenses, and apply for federal passports all in one place, with seamless interfaces back to the respective agencies involved in the transactions.

The Greek endeavour for developing the information society (IS) began within the second community support framework, with an effort to integrate scattered actions into homogeneous sectoral operational programmes (Telecommunications, Industry, Public Administration, Education and Initial Professional Training). In 1994, "KLISTHENIS," an operational program for the modernization of public administration over the period 1994-1999, was launched. The "KLISTHENIS" program received EU funding in
the framework of the second CSF (IDABC eGovernment Observatory, 2005). Its main objective was to create the conditions for a continuous modernization of the public sector through interventions of a technical, organizational, and educational nature. The introduction of new technologies in public service delivery was among its main priorities. Among the most important projects included in the program were the development of electronic tax services (TAXISnet), launched at the end of the second CSF period, and the creation of a national public administration network (the pilot phase of SYZEFXIS). The improved condition of the Greek economy during the implementation period of the second CSF, the contribution of the second CSF itself, and the measures taken for macroeconomic stability and structural changes established the environment required for a more efficient growth endeavour, which is still being pursued in the period 2000-2006, the third CSF period. In 2000, the Op-
Table 1. Best e-government practices of EU countries

Austria:
• The "Help" portal | http://www.help.gv.at
• SBA-Online | http://www.schulbuchaktion.at
• Citizen Card | http://www.buergerkarte.at http://www.chipkarte.at
• Light Citizen Card | http://www.bdc.at/208.html http://www.mobilkomaustria.com http://www.a1.net/signatur http://www.atrust.at
• Official e-mail service for the public sector | http://www.zustellung.gv.at
• FINANZonline | https://finanzonline.bmf.gv.at

Belgium:
• "Kafka" Initiative: Simplification of Public Administration | http://www.kafka.be http://www.simplification.be
• e-ID Card | http://eid.belgium.be http://www.certipost.be/eid http://www.eid-shop.be
• Tax on Web | http://www.tax-on-web.be
• Federal Portal for Citizens and Business | http://www.belgium.be

France:
• Income Taxes Assessment, Declaration and Payment | http://www.ir.dgi.minefi.gouv.fr
• French eGovernment portal | http://www.service-public.fr
• eVoting | http://www.interieur.gouv.fr/rubriques/b/b3_elections/b31_actualites
• Online Declaration and Payment of VAT | http://www.tva.dgi.minefi.gouv.fr/index.jsp

Germany:
• BUND.DE | http://www.bund.de
• Public Purch@sing online | http://www.evergabe-online.de
• Arbeitsamt online (Employment Office) | http://www.arbeitsamt.de
• BaföG-Online (Student loans online) | http://www.bva.bund.de/aufgaben/bafoeg/index.html
• ELSTER: e-Tax Return | http://www.elster.de
• DIGANT | http://www.bundesdruckerei.de
• Customs Online 2005 & ATLAS | http://www.zoll-d.de
• BRN: Das Bayerisches Realschulnetz | http://www.realschule.bayern.de
• WEB for ALL | http://www.webforall-heidelberg.de
• Bremen On-line Services | http://www.bremen.de

Denmark:
• The Digital North Denmark | http://www.detdigitalenordjylland.dk
• Nordpol.dk: Democracy on the web | http://www.nordpol.dk
• The public procurement portal | http://www.doip.dk
• Digital Signatures | http://www.digitalsignatur.dk
• "e-Boks" | http://www.e-boks.dk
• NetCitizen: Portal for Digital Citizen Services | http://www.netborger.dk
• Electronic Tendering (SKI): National Procurement | http://www.ski.dk

United Kingdom:
• Info4local | http://www.info4local.gov.uk
• e-Government Unit | http://www.caimed.org
• e-Economy Unit | http://www.caimed.org
• e-Communications Unit | http://www.caimed.org
• e-Voting | http://www.caimed.org
• e-Participation | http://www.caimed.org
• National Planning Portal | http://www.planningportal.gov.uk
• Customer Handling of Import and Export Freight (CHIEF) | http://www.hmce.gov.uk
• Directgov.uk | http://www.directgov.uk
• Fife Direct | http://www.fifedirect.gov.uk

Ireland:
• Irish Public Procurement Portal | http://www.etenders.gov.ie
• OASIS & BASIS portals | http://www.oasis.gov.ie http://www.basis.gov.ie http://www.eforms.ie
• ROS (Revenue Online Services) | http://www.ros.ie
• REACH: Messaging infrastructure for intra-governmental cooperation | http://www.reach.ie
• FAS & PUBLIC JOBS portals | http://www.fas.ie http://www.publicjobs.ie
• Motor Tax Online | http://www.motortax.ie
• Irish Tax administration's SMS service | http://www.revenue.ie/wnew

Iceland:
• Government Offices | http://www.government.is
• GoPro: Electronic Records Management System (ERMS) | http://www.gopro.net
• eTax: electronic tax returns | http://www.brussels.rsk.is
• Customs declaration on the web | http://www.tollur.is

Spain:
• INFO XXI | http://enis.eun.org
• CAT 365 citizen's portal | http://www.cat365.net
• Utenet: ICT training for disabled people and welfare workers | http://www.utenet.com.ar

Italy:
• Citizen car registration and ownership | http://www.aci.it
• Italia.gov.it | http://www.italia.gov.it
• Electronic ID Card and National Services Card | http://www.cartaidentita.it
• eProcurement | http://www.acquistinretepa.it
• PolisWeb: Lawyer access to case information | http://www.tribunale.bologna.giustizia.it
• TELEMACO: Signed Electronic Filing for Business Entities | http://web.telemaco.infocamere.it

Luxembourg:
• Luxembourg e-Government strategy | http://www.eluxembourg.lu
• Public Websites Portal | http://www.etat.lu
• Online declaration and payment of VAT | http://saturn.etat.lu/etva/index.do
• Public Key Infrastructure | http://www.opengroup.org

Netherlands:
• Electronic Government Counters | http://www.eurovision.net http://www.caimed.org
• Electronic Vote | http://www.minbzk.nl http://www.caimed.org
• Biometric passports and ID cards | http://www.caimed.org

Portugal:
• Citizens Portal | http://www.portaldocidadao.pt
• Fiscal Electronic Declarations | http://www.e-financas.gov.pt http://www.dgci.min-financas.pt
• Portuguese Government Portal | http://www.portugal.gov.pt
• e-Voting | http://www.votoelectronico.pt

Sweden:
• Ministry "24-7" | http://www.24-timmarsmyndigheten.se
• Seniornet.se | http://www.seniornet.se http://www.sics.se

Finland:
• Finnish e-ID Card | http://www.fineid.fi
• Oodi & Web-Oody Systems: Enrollment in the University of Helsinki | http://www.oodi.fi
• The public sector portal | http://www.suomi.fi http://www.lomake.fi
• TYVI | http://www.tyvi.fi
• Tyoelake: Finnish Center for Pensions | http://www.tyoelake.fi
• The 'VERO' portal | http://www.vero.fi
• Citizen Certificate | http://www.sonera.fi http://www.vaestorekisterikeskus.fi
erational Programme for the Information Society (OPIS) was launched, aiming to implement the guidelines of the White Book "Greece in the Information Society: Strategy and Actions," published in 1999, and the EU's eEurope 2002 Action Plan approved by the Feira European Council of June 2000. E-government is one of the key priorities of the program, which sets the objective of improving the quality of public services through the development of online services (including public tendering and procurement procedures) and the use of ICT to streamline and re-engineer procedures and communications within and between government bodies. Within the framework of the elaboration of the OPIS, the agencies of the central and regional administration elaborated their own specialized operational programs. Most EU countries initiated their first operational program later than Greece did: France and Switzerland in 1998, Germany and Ireland in 1999, and Italy and the United Kingdom in 2000, while the Scandinavian countries (Sweden, Norway, Finland, and Denmark) became active in the information society during the mid-1990s without a dedicated operational program (Observatory for the Greek Information Society, 2006). EU countries in general are active in e-government projects. Table 1 summarizes the best e-government practices of EU countries (Observatory for the Greek Information Society, 2005).
Best Hellenic E-Government Practices

The Hellenic e-government strategy advocates that electronic services (e-services) should be characterized by ubiquity, uniqueness of reference (i.e., a single point of service), de-materialization, quality, and cost-effectiveness. E-services are seen as essential business infrastructures that should only be planned and deployed as such. The Greek government set out essential methodological steps for developing and implementing e-services, which included (State Services Commission, 2004):

• Identifying critical areas of service.
• Determining business priorities and critical success factors.
• Identifying business partners and building consensus.
• Determining the scope of a pilot application.
During the second CSF period, the first serious attempt, which proved successful, was TAXISnet (Taxation Information System, http://www.taxisnet.gr, available in Greek). It is the e-government portal enabling tax-related transactions, the issuing of electronic certificates, and document handling via the Internet. During its first years of operation it offered Gartner second-stage services; it has since evolved and now provides Gartner fourth-stage services to citizens and businesses. TAXISnet has simplified and improved the servicing of citizens and businesses across all taxation procedures. Furthermore, operation of the portal has reduced the corresponding manual procedures and manpower, to the benefit of the Hellenic Ministry of Economy and Finance. The TAXISnet service provides G2C, G2B, and G2G services, including electronic submission of income tax forms, personalized electronic notification of the results of the tax return clearance process, electronic issuing of certificates by fax, electronic submission of VAT forms, payment via banking system services, and validation of tax certifications, for example tax clearance certification (Gouscos et al., 2001). During the third CSF period more e-government projects were launched. In 2001, the government network SYZEFXIS (http://www.syzefxis.gov.gr) was launched as a pilot project, with the participation of 15 state organizations. SYZEFXIS is meant to become a nationwide intranet for the Greek public administration, ultimately connecting more than 1,700 organizations nationwide. The network provides advanced telecommunication and information services, including telephony, data, and video transmission through four virtual private networks (VPNs). It is complemented by the development of "metropolitan area network" (optical ring) infrastructures in approximately 50 municipalities across Greece,
aiming to interconnect "points" of public interest (such as public administration buildings, schools, tax offices, and administrations) through a broadband network. In 2002, the first 10 Citizens' Service Centres (KEP in Greek, http://www.kep.gov.kr) opened: one-stop administrative shops located in or near municipality and prefecture offices. The Citizens' Service Centres are meant to gradually integrate all administrative procedures through the use of ICT. Through these shops, citizens have access to public service information and to a number of standardized administrative procedures. There are currently more than 1,000 Citizens' Service Centres spread around Greece, and more than 850 administrative procedures can be accessed through them. The centres are linked together by an IP network and use a platform called "e-kep" to file citizens' requests, create a relevant e-directory, electronically register KEP mail, manage citizens' requests, and monitor their progress all the way through settlement. Accessible through the one-stop service centres across the country or through the Internet, the e-kep platform supports the use of certified digital signatures, enabling real-time online transactions between citizens and the public administration. The average service time usually does not exceed seven days. The Citizens' Service Centre Internet portal receives over nine million visits each month. According to i2010 (2006), Greece has only 30 services fully available online, placing the country 23rd among the 28 EU countries measured. Unfortunately, the projects presented above are the only noteworthy e-government projects launched in Greece that go beyond an informational profile and actually provide electronic G2B, G2C, and G2G services. The third CSF, and especially OPIS, has brought enough funding to reorganize the whole Greek public administration.
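The request-handling workflow attributed to the e-kep platform above (filing a request, registering it, and monitoring its progress through settlement) can be sketched as a minimal state machine. The status names and transition rules below are assumptions made for illustration; the platform's actual internals are not documented here:

```python
# Hypothetical sketch of a citizen-request lifecycle, loosely modelled on the
# filing/monitoring functions described for the e-kep platform in the text.
# Status names and allowed transitions are illustrative assumptions.

TRANSITIONS = {
    "filed": {"registered"},
    "registered": {"in_progress"},
    "in_progress": {"settled"},
    "settled": set(),  # terminal state
}

class CitizenRequest:
    def __init__(self, request_id: str, procedure: str):
        self.request_id = request_id
        self.procedure = procedure
        self.status = "filed"
        self.history = ["filed"]

    def advance(self, new_status: str) -> None:
        """Move to the next status, rejecting any out-of-order transition."""
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"cannot move from {self.status} to {new_status}")
        self.status = new_status
        self.history.append(new_status)

req = CitizenRequest("KEP-2006-001", "birth certificate")
for step in ("registered", "in_progress", "settled"):
    req.advance(step)
print(req.history)  # ['filed', 'registered', 'in_progress', 'settled']
```

Keeping an explicit transition table and history is what makes end-to-end progress monitoring of the kind described above straightforward to implement.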
The institutional and organizational obstacles of the Greek public administration, however, remain insurmountable. The most crucial problem in the preparation of ICT projects is the time delay caused by red tape and the bureaucratic processing of the calls for interest and biddings, which dramatically reduces the time available for implementation
and the use of the third CSF funds. That is why it is essential that the call-for-interest and bidding processes be accelerated, and that cooperation and coordination between the related actors be enhanced, in order to achieve a substantial and high-quality implementation (Boufeas, Halaris & Kokkinou, 2004). The main characteristics of the Greek public administration can be summarized in the following features (Boufeas et al., 2004):

• Low efficiency
• Difficulties in the introduction of organizational information architecture models
• Fragmented computerization efforts and a lack of standardization
• Insufficient technical infrastructure
• Lack of training and of personnel experienced in information technology
Hellenic E-Government Ranking

Many organizations and surveys attempt to measure the e-government progress of countries, at either the EU or the international level, according to different indicators and measurements. The European Commission (2005) measures the e-government policy indicator of the eEurope Action Plan for 28 EU countries annually. For these countries, the European Commission and the member states defined a list of twenty basic public services; for twelve of these services citizens are the target group, while for the other eight the target group is businesses. For the online-sophistication indicator, the European Commission (2005) reported an overall average score of 65 percent for the 20 public services in the 28 countries (53 percent for the 10 new member states and 72 percent for the other countries). This means that the online sophistication of public-service delivery in the EU lies between one-way and two-way interaction. Even the EU 15+ countries are, overall, not yet at the level of two-way online service delivery. As far as the newer fully-available-online indicator is concerned, the fifth measurement resulted in an overall average score of 40 percent for the 20 public services in the 28 countries (29 percent for the 10 new member states and 46 percent for the 18 other countries).

Figure 1. Overall results (European Commission, 2005)

These results are illustrated in Figure 1, while Figures 2 and 3 depict each participating country's indicator measurements. According to the European Commission (2005), over the last three years the online development of public services has improved by 27 percentage points, as depicted in Figure 4, while the 'fully available online' development of public services has
Figure 2. Country results—online sophistication (European Commission, 2005)
Figure 3. Country results—fully availability online (European Commission, 2005)
Figure 4. 2001-2004 overall progress—online sophistication (European Commission, 2005)
Figure 5. 2001-2004 overall progress—fully available online (European Commission, 2005)
Table 2. Country ranking (European Commission, 2005)
improved by 26 percentage points, as depicted in Figure 5. Greece (EL) stands in sixteenth place for online-sophistication progress and in fifteenth place for fully-available-online progress. Table 2 illustrates each participating country's progress on these two indicators during 2001-2004. At the international level, the World Economic Forum's Networked Readiness Index (NRI) measures the propensity of countries to exploit the opportunities offered by ICT and is published
annually. Table 3 illustrates the NRI rankings over the past four years for most EU countries, as well as for other countries with high NRI rankings.
Future Trends

One step toward solving the problems outlined above is the formation of dedicated e-government legislation in Greece. Additionally, there is currently no legislation governing the use of electronic means in public procurement in Greece.
Table 3. Greece NRI ranking (World Economic Forum, 2006)
Moreover, the reengineering of internal processes must certainly be achieved to the greatest possible degree, since it is the basic prerequisite for the modernization of the public administration according to the basic principles of e-government. On the other hand, the effective dissemination of information, both within the framework of organizational coordination and towards the improvement of the electronically provided services to end users, cannot be easily controlled from the very beginning. The general culture of Greek society is the main factor that will set, on the one hand, the level of inter-agency cooperation for the production of high-quality outputs and outcomes and, on the other hand, the degree of e-government acceptance by citizens as customers. The planners in each agency should take these critical factors into consideration in order to maximize the benefits resulting from e-government projects, and should readjust the strategic goals in accordance with changes in the preferences of end users and the speedy developments in the information technology sector.
Many opportunities arise, since several EU funding programs have been and will be launched. In 2005, a three-year program, "Politeia 2005-2007," was launched for the re-establishment of the public administration. The objectives of the program are to serve all citizens better by focusing on their real needs, increasing transparency in public administration, implementing e-government at all administrative levels (central and regional administration, municipalities), restructuring agencies and processes, protecting citizens' privacy, and consolidating the rule of law (IDABC eGovernment Observatory, 2005). At the end of 2006, the fourth CSF, for the period 2007-2013, and its operational programs will be launched. Additionally, in 2005, the Greek digital strategy for the period 2006-2013 was presented, aimed at enabling a "digital leap" to improve productivity and quality of life by 2013. The digital strategy includes more than 65 actions and is divided into two parts: the first part of the plan will be enacted by 2008, and the second by 2013. By 2008, the government will promote the development of electronic procurement, broadband connections, digital public services for citizens
and businesses, and the use of electronic signatures. After 2008, the proposed strategy includes creating one-stop e-points to serve companies, reorganizing the public administration, and incorporating new technologies into the education system. The digital strategy will involve possible public-private cooperation in e-government projects and will include three key government-wide projects: the development of a national e-services portal, "Hermes"; the implementation of a single authentication and transaction-security system; and the development of a single interoperability system for public services. These projects will help reduce administrative burdens for businesses and improve people's quality of life. Several projects can be implemented during the forthcoming years. There is currently no central e-identification infrastructure for e-government in Greece; in particular, no plans for e-ID cards have yet been issued. There is currently no central e-procurement infrastructure in Greece, and the government's objective is to introduce an operational electronic public procurement system by the end of 2007. Finally, there is currently no government-wide knowledge-management infrastructure in Greece.
Conclusion

The modernization of the Greek public administration is a necessity, imposed by the need to increase the quality of service delivery and to reduce transaction costs. The Greek transition to the information society, even if delayed, is in a stage of maturation, building on previous experience and correcting the weaknesses identified within the frame of the second CSF. The operational programs are henceforth based on the integrated planning of OPIS, eliminating the initial fragmentation of individual actions. Public agencies are no longer reserved towards outsourcing, and information technology companies have acquired adequate experience. The proposed interventions, finally, will be realized in the light of best practices, and particular effort will be made to train human capital.
The vision of an electronically modernized Greek public administration, as reflected in the business plans, should not take the form of another list of good intentions that fails to be actually implemented; it should be shaped within the body of the Greek public administration itself. The only way to succeed, apart from effective operational coordination at the organizational level, is strict implementation according to the time schedule and the budget, together with rational monitoring and evaluation of the process through clear quantitative and qualitative indicators.
Future Research Directions

E-government has evolved into a recognized research domain. A number of issues nevertheless remain unclear and require further investigation and discussion. Some of the most interesting open problems are the following:

• Privacy: The e-government technologies in use enable automatic data gathering on citizens and businesses (eGovRTD2020, 2007). The amount of personal information available to governments continues to increase tremendously. These data can be further used for user modeling and profiling, as well as for producing intelligent interfaces and interactions. In this framework, privacy protection is becoming more important than ever. The research questions that arise are: How can we ensure that privacy and personal data are secure, protected, and not misused? What are the risks, and what are the ways of preventing them? The threats to user privacy in an electronic environment are so numerous that no single solution exists. Although new technologies and products for protecting users' privacy on computers and networks are in use, none can guarantee absolutely secure communications. Electronic privacy issues will become highly crucial in the foreseeable future. The future challenges and research in the direction of delivering e-services without jeopardizing, but in fact protecting, privacy relate to standards support, intelligible disclosure of data, disclosure of methods, the provision of organizational and technical means for users to modify their user-model entries, user-model servers that support a number of anonymization methods, and the adaptation of user-modeling methods to privacy preferences and legislation (Markellos et al., 2004; Teltzrow & Kobsa, 2004).

• Security: Another key research aspect relates to protecting e-government sites from attacks and misuse, and to securing interconnection and intercommunication (McIntyre, Taylor & McCabe, 2004). This concerns not only the advances in cryptography and protocols for secure Internet communication (e.g., SSL, S-HTTP) that have significantly contributed to securing information transfers within e-government infrastructures (Bouguettaya et al., 2004); it also encompasses a whole range of policies, legal processes, and operational guidelines. Specifically, the ability to share information and processes over a secure environment involves publication, support, maintenance, ownership, and access. In this context, the main research questions are: How can we create a secure government, and what technologies can be used?

• Trust: Trust between the parties to online electronic transactions (agencies, businesses, NGOs, citizens) is a key to the success of an e-government environment. It is a complicated issue, and it is not easy to understand how trust is built, used, destroyed, or abused. What is certain is that trust is not earned instantly and will continue to grow with each successful positive online interaction. Trust is closely linked to privacy and security issues; for example, without trust, citizens who may be suspicious of technology may avoid online services that ask for detailed personal information. The degree of trust required to build a partnership and a relationship between a government and its customers varies according to the relationship's strategic significance or risk (Al-Omari & Al-Omari, 2006). The research questions that need to be addressed are: How can we create a trusted e-government? What is trust, and how is it built? How does trust influence e-government efforts?

• Semantic and cultural interoperability: Interoperability in essence leads to extensive information sharing among and between governmental entities (IDABC, 2004). The increased use of ICT and the Internet has resulted in the disappearance of borders. However, the obstacles that prevent rapid progress in that direction are not merely technical. In fact, the technology side may prove the least difficult to address, while the organizational, legal, political, cultural, and social aspects may prove much more of a challenge. The research question is: How can a shared understanding and a seamless interoperability framework be created among different entities, cultures, and communities? Work should be done to define and agree upon government-sector-specific semantics and on the alignment of business processes. Many e-government services exist, such as taxation functions and social services, that require government agreement on their own semantics and processes. Likewise, there are frequently additional public-sector requirements in general business processes, such as procurement, that are not found in the private sector, for example specific competitive-bidding requirements and/or specific approval approaches. For e-government, business-process alignment in many cases requires an alignment of laws, regulations, and so on, which the EU, with its single-market approach, can leverage (Lueders, 2005).
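As a concrete illustration of the transport-level side of the security questions above, the sketch below uses Go's standard crypto/tls and crypto/x509 packages to run a TLS handshake over a loopback connection, with the client accepting only a certificate it explicitly trusts. The self-signed certificate and the "government"/"citizen" labels are illustrative assumptions, not part of any cited infrastructure; TLS is the successor of the SSL and S-HTTP protocols named in the text.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// selfSignedCert builds a throwaway certificate for "localhost" so the
// example runs without a real certification authority.
func selfSignedCert() (tls.Certificate, *x509.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return tls.Certificate{}, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "localhost"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(time.Hour),
		DNSNames:              []string{"localhost"},
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return tls.Certificate{}, nil, err
	}
	leaf, err := x509.ParseCertificate(der)
	return tls.Certificate{Certificate: [][]byte{der}, PrivateKey: key, Leaf: leaf}, leaf, err
}

func main() {
	cert, leaf, err := selfSignedCert()
	if err != nil {
		panic(err)
	}

	// "Government" side: a TLS listener presenting the certificate.
	ln, err := tls.Listen("tcp", "127.0.0.1:0", &tls.Config{
		Certificates: []tls.Certificate{cert},
	})
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	go func() {
		c, err := ln.Accept()
		if err != nil {
			return
		}
		c.Write([]byte("ok"))
		c.Close()
	}()

	// "Citizen" side: the handshake succeeds only if the presented
	// certificate chains to a trusted root (here, the cert itself).
	roots := x509.NewCertPool()
	roots.AddCert(leaf)
	conn, err := tls.Dial("tcp", ln.Addr().String(), &tls.Config{
		RootCAs:    roots,
		ServerName: "localhost",
		MinVersion: tls.VersionTLS12,
	})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	buf := make([]byte, 8)
	n, _ := conn.Read(buf)
	fmt.Println("received over TLS:", string(buf[:n]))
}
```

The design point the sketch makes is that cryptography alone does not create trust: the client must already hold (or chain to) a trusted root, which is exactly the certification-policy and governance question the research agenda above raises.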
References

Al-Omari, H., & Al-Omari, A. (2006). Building an e-government e-trust infrastructure. American Journal of Applied Sciences, 3(11), 2122-2130.

Baum, C., & Di Maio, A. (2000). Gartner's four phases of e-government model. GartnerGroup.

Boufeas, G., Halaris, I., & Kokkinou, A. (2004). Business plans for the development of e-government in Greece: An appraisal. UNTC Occasional Papers Series No. 5. Thessaloniki: United Nations Thessaloniki Centre for Public Service Professionalism. Retrieved March 22, 2007, from http://unpan1.un.org/intradoc/groups/public/documents/UNTC/UNPAN014633.pdf

Bouguettaya, A., Rezgui, A., Medjahed, B., & Ouzzani, M. (2004). Internet computing support for digital government. In M. P. Singh (Ed.), Practical handbook of Internet computing. CRC Press.

Dawes, S. (2002). The future of e-government. Center for Technology in Government, University at Albany/SUNY. Retrieved March 22, 2007, from http://www.ctg.albany.edu/publications/reports/future_of_egov/future_of_egov.pdf

Egov. (2003). E-government strategy: Implementing the president's management agenda for e-government. Retrieved March 22, 2007, from http://www.whitehouse.gov/omb/egov/2003egov_strat.pdf

E-governance Institute. (2004). Concepts and principles of e-governance. Rutgers University's National Center for Public Productivity. Retrieved March 22, 2007, from http://www.andromeda.rutgers.edu/~egovinst/Website/institutepg.htm

eGovRTD2020. (2007). The eGovernment RTD 2020 project: Visions and conceptions of European citizens. Retrieved March 22, 2007, from http://www.egovrtd2020.org

European Commission. (2005). Online availability of public services: How is Europe progressing? European Commission Directorate General for Information Society and Media. Retrieved March 22, 2007, from http://www.observatory.gr/files/meletes/egov_benchmarking_2005.pdf

Gouscos, D., Mentzas, G., & Georgiadis, P. (2001, November 8-10). Planning and implementing e-government service delivery: Achievements and learnings from on-line taxation in Greece. Workshop on e-Government at the 8th Panhellenic Conference on Informatics, Nicosia, Cyprus.

Hahamis, P., Iles, J., & Healy, M. (2005). E-government in Greece: Bridging the gap between need and reality. The Electronic Journal of e-Government, 3(4), 185-192.

i2010. (2006). Online availability of public services: How is Europe progressing? Web-based survey on electronic public services, report of the 6th measurement, June 2006. Retrieved March 22, 2007, from http://europa.eu.int/information_society/eeurope/i2010/docs/benchmarking/online_availability_2006.pdf

IDABC. (2004). European interoperability framework for pan-European eGovernment services. Luxembourg: Office for Official Publications of the European Communities. Retrieved March 22, 2007, from http://ec.europa.eu/idabc/servlets/Doc?id=19528

IDABC eGovernment Observatory. (2005). Factsheet: eGovernment in Greece, November 2005. Retrieved March 22, 2007, from http://ec.europa.eu/idabc/en/document/5094/254

Lueders, H. (2005, February 23-25). Interoperability and open standards for eGovernment services. In Proceedings of the 1st International Conference on Interoperability of Enterprise Software and Applications (eGOV INTEROP'05), Geneva, Switzerland. Retrieved March 22, 2007, from http://www.softwarechoice.org/download_files/eGovinterop05_paper.pdf

Markellos, K., Markellou, P., Rigou, M., Sirmakessis, S., & Tsakalidis, A. (2004, April 14-16). Web personalization and the privacy concern. In Proceedings of the 7th ETHICOMP International Conference on the Social and Ethical Impacts of Information and Communication Technologies: Challenges for the Citizen of the Information Society, Syros, Greece.

McIntyre, M., Taylor, G., & McCabe, C. (2004). Open in Europe: The importance of open source software and open standards to interoperability in Europe. Open Forum Europe. Retrieved March 22, 2007, from http://xml.coverpages.org/OpenIreland-OS.pdf

Ndou, V. (2004). E-government for developing countries: Opportunities and challenges. Electronic Journal on Information Systems in Developing Countries (EJISDC), 18(1), 1-24.

Observatory for the Greek Information Society. (2005). Best practices in the ICT sector, European countries. Retrieved March 22, 2007, from http://www.observatory.gr/files/meletes/Best%20Practices%20in%20the%20ICT%20sector_GR_11.10.05.pdf

Observatory for the Greek Information Society. (2006). Study to create infrastructure to observe Operational Programme for the Information Society and programming complement indicators. Deliverable: Operational programmes for the Information Society in other EU countries. Retrieved March 22, 2007, from http://www.observatory.gr/files/meletes/Π1b_final.pdf

Riedl, R. (2003). Design principles for e-government services. In Proceedings of the eGov Day. Retrieved March 22, 2007, from http://www.ifi.unizh.ch/egov/Wien03.pdf

State Services Commission. (2004). Channel strategy: Summary of international e-service initiatives, terms and definitions. E-government Unit. Retrieved March 22, 2007, from http://www.e.govt.nz/_NONarchive/policy/participation/participation-0305/participation-0305.pdf

Tambouris, E., Gorilas, S., & Boukis, G. (2001). Investigation of electronic government. In Proceedings of the 8th Panhellenic Conference on Informatics, 2, 367-376.

Teltzrow, M., & Kobsa, A. (2004). Impacts of user privacy preferences on personalized systems: A comparative study. In C.-M. Karat, J. Blom, & J. Karat (Eds.), Designing personalized user experiences for e-commerce. Dordrecht, Netherlands: Kluwer Academic Publishers.

World Economic Forum. (2006). Retrieved March 22, 2007, from http://www.weforum.org
Additional Reading

Abie, H., Foyn, B., Bing, J., Blobel, B., Pharow, P., Delgado, J., et al. (2004). The need for a digital rights management framework for the next generation of e-government services. Electronic Government International Journal, 1(1), 8-28.

Aldrich, D., Bertot, J. C., & McClure, C. R. (2002). E-government: Initiatives, developments, and issues. Government Information Quarterly, 19(4), 349-355.

Anderson, L., & Bishop, P. (2005). E-government to e-democracy: Communicative mechanisms of governance. Journal of E-government, 2(1), 5-26.

Barnes, S. J., & Vidgen, R. (2004). Interactive e-government: Evaluating the web site of the UK Inland Revenue. Journal of Electronic Commerce in Organizations, 2(1), 42-63.

Beynon-Davies, P., & Williams, M. D. (2003). Evaluating electronic local government in the UK. Journal of Information Technology, 18(2), 137-149.

Burn, J., & Robins, G. (2003). Moving towards e-government: A case study of organisational change processes. Logistics Information Management, 16(1), 25-35.

Carbo, T., & Williams, J. (2004). Models and metrics for evaluating local electronic government systems and services. Electronic Journal of e-Government, 2(2), 95-104.

Chau, D. (2005). Developing a generic framework for e-government. Journal of Global Information Management, 13(1), 1-30.

Dawes, S., Gregg, V., & Agouris, P. (2004). Digital government research: Investigations in social and information science. Social Science Computer Review, 22(1), 5-10.

Doukidis, G., Mylonopoulos, N., & Pouloudi, N. (2003). Social and economic transformation in the digital era. Hershey, PA: Idea Group Publishing.
Edmiston, K. D. (2003). State and local e-government: Prospects and challenges. American Review of Public Administration, 33(1), 20-45.

Galindo, F. (2005). Basic aspects of the regulation of e-government. Journal of Information, Law and Technology (University of Warwick), 2(3). Retrieved March 22, 2007, from http://www.vlex.us/lawjournals/Journal-of-Information-Law-and-Technology-University-of-Warwick/Basic-Aspects-of-the-Regulation-of-e-Government/2100321122,01.html

Garson, G. D. (2004). The promise of digital government. In A. Pavlichev & G. D. Garson (Eds.), Digital government: Principles and best practices (pp. 2-15). Hershey, PA: Idea Group Publishing.

Gortmaker, J., Janssen, M., & Wagenaar, R. W. (2007). Requirements on cross-agency processes in e-government: The need for the reference model. In A. Mitrakas, P. Hengeveld, D. Polemi, & J. Gamper (Eds.), Secure e-government web services (pp. 217-232). Hershey, PA: IGI Global.

Gronlund, A. (2003). Emerging electronic infrastructures: Exploring democratic components. Social Science Computer Review, 21(1), 55-72.

Heeks, R. (2001). Understanding digital government project failures. Computer, 34(2), 34-34.

Heeks, R. (2006, July 27-28). Understanding and measuring e-government: International benchmarking studies. Workshop on E-Participation and E-Government: Understanding the Present and Creating the Future (UNDESA), Budapest, Hungary. Retrieved March 22, 2007, from http://unpan1.un.org/intradoc/groups/public/documents/UN/UNPAN023686.pdf

Holzer, M., Carrizales, T., Kim, S. T., & Kim, C. G. (2006). Digital governance worldwide: A longitudinal assessment of municipal web sites. International Journal of Electronic Government Research, 2(4), 1-23.

Karyda, M., Balopoulos, T., Gymnopoulos, L., Kokolakis, S., Lambrinoudakis, C., Gritzalis, S., et al. (2006). An ontology for secure e-government applications. In First International Conference on Availability, Reliability and Security (ARES'06) (pp. 1033-1037). IEEE Computer Society.

Ke, W., & Kei Wei, K. (2004). Successful e-government in Singapore. Communications of the ACM, 47(6), 95-99.

Koh, C., & Prybutok, V. (2003). The three ring model and development of an instrument for measuring dimensions of e-government functions. Journal of Computer Information Systems, 43(3), 34-37.

Lambrinoudakis, C., Gritzalis, S., Dridi, F., & Pernul, G. (2003). Security requirements for e-government services: A methodological approach for developing a common PKI-based security policy. Computer Communications, 26(16), 1873-1883.

Layne, K., & Lee, J. W. (2001). Developing fully functional e-government: A four stage model. Government Information Quarterly, 18(2), 122-136.

Löfstedt, U. (2005). E-government: Assessment of current research and some proposals for future directions. International Journal of Public Information Systems, 1, 39-52.

Marche, S., & McNiven, J. D. (2003). E-government and e-governance: The future isn't what it used to be. Canadian Journal of Administrative Sciences, 20(1), 74-86.

Moon, M. J. (2002). The evolution of e-government among municipalities: Rhetoric or reality? Public Administration Review, 62(4), 424-433.

Muir, A., & Oppenheim, C. (2002). National information policy developments worldwide I: Electronic government. Journal of Information Science, 28(3), 173-186.

Mullen, P. R. (2003). The need for government-wide information capacity. Social Science Computer Review, 21(4), 456-463.

Snellen, I. (2002). Electronic governance: Implications for citizens, politicians and public servants. International Review of Administrative Sciences, 68(2), 183-198.
Sorrentino, M. (2004). The implementation of ICT in public sector organisations: Analysing selection criteria for e-government projects. In Proceedings of the 17th Bled eCommerce Conference, eGlobal, Bled, Slovenia. Retrieved November 5, 2007, from http://www.bledconference.org/proceedings.nsf/0/2cf98789c476bc16c1256ee000383c2b/$FILE/44Sorrentino.pdf

Stamoulis, D., Gouscos, D., Georgiadis, P., & Martakos, D. (2001). Revisiting public information management for effective e-government services. Information Management & Computer Security, 9(4), 146-153.

Steyaert, J. C. (2004). Measuring the performance of electronic government services. Information & Management, 41(3), 369-375.

Tan, C. W., & Pan, S. L. (2005). Managing stakeholder interests in e-government implementation: Lessons learned from a Singapore e-government project. Journal of Global Information Management, 13(1), 31-53.

Tolbert, C. J., & Mossberger, K. (2006). The effects of e-government on trust and confidence in government. Public Administration Review, 66(3), 354-369.

Van der Meer, A., & Van Winden, W. (2003). E-governance in cities: A comparison of urban information and communication technology policies. Regional Studies, 37(4), 407-419.

West, D. M. (2004). E-government and the transformation of service delivery and citizen attitudes. Public Administration Review, 64(1), 15-27.

Whitson, T. L., & Davis, L. (2001). Best practices in electronic government: Comprehensive electronic information dissemination for science and technology. Government Information Quarterly, 18(2), 79-91.
Terms and Definitions

Community Support Framework (CSF): The document that contains both the strategy and the priority actions of the structural funds in a given member state or region, defining the specific objectives of those actions together with the financial contribution of the structural funds and other financial resources. The CSF serves as a basis for drawing up the operational programs and for guaranteeing the coordination of all the Community structural support measures in the regions concerned by the various operational programs.

E-Government: Any government functions or processes that are carried out in digital form over the Internet. Local, state, and federal governments essentially set up central web sites from which the public (both citizens and businesses) can find public information, download government forms, and complete their transactions with government online through electronic services.

Government-to-Business (G2B): The objective of G2B is to reduce burdens on business, provide one-stop access to information, and enable digital communication using the language of e-business (XML). Moreover, the government should reuse the data reported appropriately and take advantage of commercial electronic transaction protocols.

Government-to-Citizen (G2C): The goal of G2C is to provide one-stop, online access to information and services for individuals. Citizens should be able to find and access what they need quickly and easily.

Government-to-Employees (G2E): The objective of G2E is for agencies to improve effectiveness and efficiency, eliminating delays in processing and improving employee satisfaction and retention. It targets internal efficiency and effectiveness, adopting commercial best practices in government operations in areas such as supply-chain management, financial management, and knowledge management.
Government-to-Government (G2G): The aim of G2G is to enable governments, and organizations related to them, to work together more easily and to better serve citizens within key lines of business.

Networked Readiness Index (NRI): The NRI seeks to better comprehend the impact of ICT on the competitiveness of nations. It is a composite of three components: the environment for ICT offered by a given country or community; the readiness of the community's key stakeholders (individuals, businesses, and governments) to use ICT; and the usage of ICT among these stakeholders.

Operational Program (OP): An OP specifies the long-term actions (measures) to be implemented in order to achieve one or more strategic priorities. OPs can be contracted to national (NOP) or regional (POR) authorities and exploit the resources of one or more structural funds.

Structural Funds: The main financial instruments by which the European Community strengthens economic and social cohesion, thus reducing the gap between the levels of development of the various regions.

Virtual Private Network (VPN): A network constructed by using public wires to connect nodes. For example, a number of systems enable the creation of networks using the Internet as the medium for transporting data. These systems use encryption and other security mechanisms to ensure that only authorized users can access the network and that the data cannot be intercepted.
Chapter XVIII
E-Lections in New Zealand Local Government Alex Dunayev AXI Web Solutions, New Zealand John Paynter AXI Web Solutions, New Zealand
Introduction

Worldwide, governments are investing in initiatives to open access to information, resources, communication, and services via channels typically used for electronic commerce. Government agencies are usually early adopters of communication technology, which is commonly developed first for military use and later adopted by the general public. Since its inception, the Internet has gained widespread usage, prompting governments to provide online services to the public. The broad category for this type of information and service provision is called "e-government": the general description of a way to provide better access to government information and services. According to the New Zealand e-government strategy (Clifford, 2003), the Internet will be used to improve the quality of services and to provide citizens with greater opportunities to participate in the democratic process. E-government is now emerging as a viable method of offering a good number of government services, from local to global. Central government now provides services such as immigration, social services, income protection, and student-loan applications through the Internet. Locally, city and regional authorities can arrange rubbish collection and traffic-fine payment, amongst other things, online. One of the services necessary to maintain this interaction still carries the stigma of being "not quite ready" for the Internet: online elections. Because elections govern the process of appointing government officials, they are an essential part of a democratic government (e-democracy). Compared with the larger central governments, the local-government segment has a better opportunity to innovate in the elections field. The process of online elections is, however, very similar between the two types of government: both require the same basic steps of registering, voting, counting votes, and presenting the election results. In local online elections there is higher potential for technical and political innovation, and a realistic possibility that technology developed for them could later be used for large-scale central-government elections.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
E-Lections in New Zealand Local Government
Background

Mahoney (2002) identified the goals of e-government as improving customer service, internal efficiency, and citizen engagement. E-government initiatives are gaining momentum worldwide; they are seen as an innovation tool of governments across the developed world (Ronaghan, 2002). As with other innovation drivers, there is an inherent risk in e-government implementations (Aichholzer, 2004) that is most commonly addressed in the strategic planning of initiatives. Ronaghan (2002) identifies five stages of e-government development and implementation: emerging, enhanced, interactive, transactional, and seamless. A United Nations study was conducted in 2001 across 190 U.N. member states to assess their progress; the results of this study are shown in Figure 1. Emerging nations have limited e-government capacity; where it is present, it is likely to consist of mostly static Internet sites with infrequently updated information. The majority of countries have reached the enhanced stage, with more up-to-date government information available on the Internet and links to other diverse sites that assist some form of government service. A slightly smaller but similar percentage of countries are at the interactive stage. These allow a greater level of sophistication in interaction, with the ability to process forms and applications online as well as access to specialized information. The current leaders in e-government initiatives have achieved the transactional stage, where complex transactions (such as passport and visa applications) can be processed through the Internet. Other online services range from payment of parking fines to complex taxation calculations and payments. The U.N. study found no country yet at the most advanced, seamless, stage of e-government adoption, at which full integration of e-services across administrative boundaries becomes possible (Szeremeta, 2002). The seamless stage of e-government, amongst other services, includes the online voting system. Since the U.N. study this trend has continued, and to date no central government has announced its readiness to provide complete election services via the Internet.
Figure 1. E-government stages of implementation worldwide: proportions of U.N. member states at the emerging, enhanced, interactive, and transactional stages (Source: Ronaghan, 2002)
Technology, Politics, and Research

The greatest impact made by the Internet in the last decade has been the increase in ease of communication. The possibility of online interaction between anyone anywhere in the world created a thriving worldwide marketplace. Electronic commerce (EC) has enabled people to conduct transactions via the Internet, ranging in scale from simple online purchases to complex major banking and insurance financial transactions. As more and more people use EC to conduct business on the Internet, citizens begin to expect governments to follow suit and offer continuous service at any time of the day. The concept of e-government was designed with the goal of bringing together a country's citizens, businesses, and government, as well as increasing the government's availability to its citizens (Backus, 2001). Given good penetration of the Internet through society, governments could offer seamless communication and service to the public. Conducting the election voting process online also has the potential to improve voter turnout, a known problem for New Zealand local council elections. A 2002 online election in Estonia showed voter participation increasing approximately 3.5 percent compared to the previous year's figures (Sharkey & Paynter, 2003).

In this chapter, we look at the interaction between voters and the government. This type of interaction is frequently accomplished through an all-in-one portal providing virtually all services to the public, from basic information to complex transaction processing. Research points to portal technology already deployed in the USA, Britain, Australia, and New Zealand (Paynter & Fung, 2006; Pons, 2004). The New Zealand e-government initiative is based around an all-encompassing online portal targeted at businesses and the general public, alongside specialized offerings by individual government departments, such as the IRD (tax collection) and the Companies Office (company registration). The main portal provides access to both information and services for mostly central and local government bodies.

The e-local government strategy was released in 2003 from collaborative work by Local Government New Zealand, the Society of Local Government Managers, the Association of Local Government Information Management, Local Government On-line, the State Services Commission, the Ministry of Economic Development, and representatives from city, district, and regional councils. Similarly to the five themes of the overall e-government strategy, the e-local government strategy is focused around four key themes:

1. Providing easy online access to information and services
2. Developing innovative products and services
3. Enhancing the people's participation in local democracy
4. Providing community leadership on e-business initiatives
Of these four, the third theme is of particular interest as it directly relates to our topic of online election information presentation and associated services. Its aim is to increase participation in local government democracy. Enhancing the people's participation in local democracy can be categorized into the key result areas of online interaction (increasing Internet-based communication between the public and local government bodies) and online voting (directly participating in the selection of future local government administrators). Online interaction (or participation) is mostly dependent on the level of technological acceptance by the general public. In one of the early online voting system trials, set up by St. Mary's Bank Credit Union in the USA, it was noted that "people who shop online, for example, would be more likely to use an online voting application than people who have a natural aversion to that" (Swedberg, 2004). Online voting faces more technological problems, relating to ensuring transactional security and fairness of the process.
The New Zealand e-local government timeline breaks down these key result areas into concrete deliverables with the aim of conducting the 2007 local government body elections via online means.
Results

Content Analysis

Our study goal is to compare existing research and government guidelines with real implementations by local government bodies; to achieve this goal we have used Web site content analysis (Neuendorf, 2002). The automated method used is described by Dunayev and Paynter (2006). Five unique areas of information and features related to online elections were initially identified: general information, voter registration, candidate information, online voting, and election results. Figure 2 shows the quality of content in the candidate information ("Candidate"), voter registration ("Voter"), general information ("Information"), and election results ("Results") areas.

The assessment of the sites revealed that no local government body in New Zealand supported online voting. Environment Bay of Plenty Regional Council, Otorohanga District Council, and Auckland Regional Council are the three distinguishable "best-of-breed" leader sites, seen as the outermost lines on the radar plot. There are a further six sites with a moderate online elections offering (score > 50), 21 sites with some online elections information (score ≥ 10), 31 sites with few election services (score between 1 and 10), and 18 council sites remain free of election-related information.
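The automated content analysis described above scores each council Web site on the presence of election-related content. The sketch below illustrates the general keyword-frequency approach; the category keywords and the unweighted counting are illustrative assumptions, not the actual factors used by Dunayev and Paynter (2006).

```python
import re

# Illustrative category keywords; the real study's factors, keywords, and
# weights are not reproduced here -- these are assumptions for demonstration.
CATEGORIES = {
    "Information": ["election", "polling"],
    "Candidate": ["candidate", "nomination"],
    "Voter": ["enrol", "register to vote"],
    "Results": ["results", "elected members"],
}

def score_page(text, categories=CATEGORIES):
    """Return a per-category score: naive substring-occurrence counts."""
    text = text.lower()
    scores = {}
    for category, keywords in categories.items():
        scores[category] = sum(len(re.findall(re.escape(kw), text))
                               for kw in keywords)
    return scores

page = "Election results 2004: elected members and candidate profiles. Enrol now."
print(score_page(page))  # {'Information': 1, 'Candidate': 1, 'Voter': 1, 'Results': 2}
```

A site's overall score would then be an aggregate over its pages and categories; matching is deliberately naive substring search here, whereas a production tool would also weight features (forms, downloads) rather than keywords alone.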
Discussion

There are four sectors of local government in New Zealand: regional, city, district, and unitary. Descriptive score statistics for each sector are shown in Table 1.
Figure 2. Overall council Web site scores across four categories (radar plot with axes Candidate, Voter, Information, and Results; one line per council Web site, with the council URLs listed in the legend)
Table 1. Score breakdown by sector

Sector                  Total no. of sites   Min score   Max score   Average score   Contribution %
Unitary                 5                    0           42          11.80           4.22
Regional                11                   0           169         41.55           32.69
Territorial: district   48                   0           143         13.45           47.14
Territorial: city       14                   0           50          15.93           15.95
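The per-sector statistics in Table 1 can be reproduced from per-site scores with a simple aggregation. The (site, sector, score) tuples below are hypothetical stand-ins, not the study's data, which covered all 78 council sites.

```python
from collections import defaultdict

# Hypothetical per-site scores; the real data set is not reproduced here.
sites = [
    ("www.envbop.govt.nz", "Regional", 169),
    ("www.arc.govt.nz", "Regional", 120),
    ("www.otodc.govt.nz", "Territorial: district", 143),
    ("www.kaipara.govt.nz", "Territorial: district", 0),
    ("www.ccc.govt.nz", "Territorial: city", 50),
    ("www.gdc.govt.nz", "Unitary", 42),
]

def sector_breakdown(sites):
    """Min, max, average score and contribution (%) per sector, as in Table 1."""
    by_sector = defaultdict(list)
    for _, sector, score in sites:
        by_sector[sector].append(score)
    total = sum(score for _, _, score in sites)
    return {
        sector: {
            "n": len(scores),
            "min": min(scores),
            "max": max(scores),
            "avg": sum(scores) / len(scores),
            # Share of the grand total contributed by this sector's sites
            "contribution_pct": 100 * sum(scores) / total,
        }
        for sector, scores in by_sector.items()
    }

for sector, stats in sector_breakdown(sites).items():
    print(sector, stats)
```

Note that "contribution" is a share of the grand total, so the contribution percentages across sectors sum to 100, as they do (up to rounding) in Table 1.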
Figure 3. Election-related Web site content: percentage scores per area (general information, candidate information, election results, voter registration), comparing the automated content analysis with the e-local government survey
Regional councils lead in both the average and the maximum scores, showing the presence of local government bodies with excellent election content. The other three sectors have similar average scores, and Web sites not offering any election-related information are present in every sector.
Comparison with Survey Data

Our content analysis and the e-local government survey present comparable results (Figure 3). The general information area is closely comparable between our findings and the councils' view of the field. Similarly comparable is the importance of offering election results, the second-highest-ranked area in our findings. However, the amount of candidate information found across council Web sites is lower than indicated in the survey, likely due to the quality of candidate information presented through local government body Web sites.

Voter registration is ranked low in both cases, as it is usually a function delegated to Elections NZ, and most local councils do not offer localized online voter registration services. Overall, the findings were roughly comparable to the results of the e-local government survey. This validated the research instrument and the automated content analysis software.
Comparison with Similar Research

Fung (2004) conducted extensive research on e-local government Web sites in New Zealand, performing manual content analysis of the available council Web sites across all feature-related categories. As these were the same Web sites as those investigated by our study, it is of interest to compare our findings with Fung's using Spearman's rank correlation, a technique used to test the direction and strength of the relationship between two ranked data sets. At the 90 percent confidence level, our findings show no correlation between the ranks of the Web sites determined by us and the ranks presented by Fung (rs = 0.18, n = 78, r-crit(0.10) = 0.22). This can be attributed to the difference in scope between the two studies. Our research instrument was targeted exclusively at election-related features of the sites, while Fung's analysis covered all areas of information that could be present on a local government body Web site. It is thus not possible to predict the election-related information based on the overall score of an e-local government Web site.
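Spearman's rank correlation as used above can be sketched in pure Python. This is a generic implementation of the statistic (with average ranks for ties), not the study's analysis code, and the two score lists in the example are hypothetical.

```python
def ranks(values):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values tied with values[order[i]]
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of sorted positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical election-feature scores vs. overall-site scores for five sites
ours = [120, 95, 80, 40, 10]
fung = [60, 110, 30, 90, 20]
print(spearman(ours, fung))  # 0.5
```

A value near zero (such as the study's rs = 0.18, below the critical value of 0.22) indicates the two rankings are not significantly related.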
Future Trends

Local Government Strategy Comparison

The State Services Commission has plans to move towards electronic voting in e-local government by the 2007 elections.1 To achieve this goal, the government requires operational directives on what constitutes an "online election" offering from local government bodies, and how it can be developed and deployed. This is not yet available from either of the two government agencies charged with implementing the e-government strategy, namely the State Services Commission and Local Government Online. A significant proportion of e-local government Web sites have no election-related content or services, potentially hindering the plans of conducting the 2007 elections exclusively through electronic means.
Overseas Comparison

E-government initiatives across the world rely heavily on the all-in-one portal framework created by government branches to offer exhaustive information and services to the public. While research points towards this as a trend across all major e-government adopters, such as the USA, Australia, and Britain (Forbes, 2002; Paynter & Fung, 2006; Pons, 2004), our study has found this not to be the case in New Zealand at the local government level. We discovered that while there is an e-local government portal, it is devoid of content and does not provide any services to the public. Instead, each of the local council bodies has elected to design and develop its own customized online offering, with varying levels of quality.

Sharkey and Paynter (2003) argue that a shift towards online elections can increase voter turnout. A recent local government election in Estonia is cited as an example of this trend; however, nothing similar has yet been held in New Zealand, so a comparison cannot be made. Further overseas comparison of e-local government Web sites is problematic due to the differences in political environments between nations. However, with appropriately chosen factors and keywords, the automated Web site content analysis software developed in this study can provide a means for comparison of the data collected.
Conclusion

The analysis of government strategy, results, and other research in the e-government field reveals the emergence of e-government offerings to be a global trend. The process of e-government rollout is firmly set: the strategy is established, and as the technology improves, more advanced services could become ready for access through the Internet and other communication channels. From existing research we have come to expect the e-government field to be a rapidly growing area of government innovation. The provision of online election information and services is essential for governments to reach the seamless stage of e-government development, and the New Zealand government has recognized this by including the area in both the central and the local e-government strategy.

Participation in democracy is often cited as one of the motivations for e-government. As voter turnout remains in a downward spiral below the 50 percent mark for New Zealand local government elections, improving it will help accomplish this goal. Overseas studies have shown that e-local government online elections have the potential to improve voter turnout and hence the capability to contribute directly towards an increase in the participation of citizens in democracy.

We have distinguished five factors related to online election services: general elections information, candidate information, voter registration, current and past election results, and online voting. Of these areas, three are well represented in top e-local government implementations: general elections information, election results, and candidate information, in order of quality of implementation. Voter registration proved to offer only a very small contribution to the scores of the Web sites; this is likely due to the excellent voter registration service provided by the central government in the form of Elections NZ, both online and via other means.
Nevertheless, local government election registration processes can differ from those used by the central government. An explanation on the council Web site of the enrolment process and eligibility to vote in a particular area provides a valuable service. However, none of the local council Web sites provides online voting services.

Local council Web sites have a low quality of content relating to online elections. This contrasts with the State Services Commission strategy to have voting in local government body elections electronically enabled by 2007. In order for this to happen, e-local government Web sites should already have an adequate foundation of services essential for online voting. The first steps towards online voting are offering appropriate content in election-related categories. This was found to be true in some cases; a number of local councils provided information across the four main election categories, most noticeably in the general information and election results areas, with Environment Bay of Plenty Regional Council, Otorohanga District Council, and Auckland Regional Council as the top three distinguishable "best-of-breed" leader sites. However, sites like these were few, and overall the e-local government field did not show signs of a foundation on which to build e-voting systems. Regional councils are more prepared for the shift towards e-government than their territorial and unitary counterparts. Nevertheless, there are still sites offering little or no election-related information in each of the local government sectors, leaving potential for improvement.

The current state of the field could be attributed to the fact that e-local government does not yet have a unified approach to online elections and voting. The e-government strategy does not yet specify an operational plan of the process and requirements for online voting at the local government level. Development of this plan is in progress, and it should be available in the future. Until it is released, the government cannot provide guidelines on what Internet-enabled services constitute a complete "online elections" offering from a local council body.
A comparison with an ongoing e-local authority survey shows that participating local councils have an accurate understanding of the general information, candidate information, and election results features they currently need to be offering, but are still behind in voter registration and online voting.

The implications for local council bodies point to areas of potential improvement. To achieve the strategy of being a world leader in e-government, New Zealand e-local government Web sites need to improve the level of election-related information and services offered. This can be aided by following the examples of "best-of-breed" sites identified in this study. Four relevant categories identified in this study, namely general election information, candidate information, voter registration services, and election results, are ready to be incorporated into e-local government Web sites using existing technology. Implementation of the final category, online voting, could then follow on the basis of this foundation.
Future Research Directions

Our research has focused on only one of the services seamless e-government needs to provide. The provision of electronic services is equally important to all three branches of government, and the services provided by the executive, judicial, and legislative branches are vast. Some of them are unsuitable for access through e-government, and the majority of the others do not necessarily share the same trends in development and adoption as online elections.

Furthermore, this research is focused exclusively on New Zealand e-local government, and a comparison across different nations at the local government level can be problematic. While similar in their goals and purposes, local government bodies in different countries have varied political structures and hierarchies. New Zealand follows a two-tier government hierarchy; other countries could have additional elements in the government system. For example, in Australia and the USA the local council structure is separately determined by each of the state governments, which function independently from the central, federal government. This could limit the applicability of this research to New Zealand.
E-government can include technology other than the Internet. Various other forms of telecommunication (telephones, facsimiles, and mobile phones, for example) are actively used by government to reach the public. While the Internet is currently accessed predominantly through desktop personal computers, we are experiencing a strong move to the mobile market. As other advanced technology becomes more available, the importance of the Internet communication channel, as it is used now, may change. One such example is the automated use of text messaging to cellphones to track electronic census forms in the 2006 New Zealand census (Paynter & Peko, 2006). An alternative is the use of interactive digital television to record votes (Forbes, 2006).
References

Aichholzer, G. (2004). Scenarios of e-government in 2010 and implications for strategy design. Electronic Journal of E-Government, 2(1), 1-10.

Backus, M. (2001). E-governance in developing countries: IICD research brief. Retrieved December 11, 2004, from http://www.cs.ucy.ac.cy/courses/EPL011/readings/brief1.doc

Clifford, P. (2003). Strategic plan for e-local government. Retrieved December 11, 2004, from http://www.lgnz.co.nz/library/files/store_005/eLocalGovernment_Strategy.pdf

Dunayev, A., & Paynter, J. (2006, July 6-10). Towards online local government elections. In The 19th Annual Conference of the National Advisory Committee on Computing Qualifications: Preparing for the Future, Capitalising on IT (pp. 79-88). Wellington, New Zealand.

Forbes, D. (2002). The impact of e-government on British local government. Centre-for-eGovernment.com. Retrieved December 11, 2004, from www.for-eGovernment.com

Forbes, D. (2006). E-voting: The new way forward for democracy? Retrieved November 24, 2006, from http://www.centre-for-egovernment.com/edemocracy.html

Fung, M. Y. L. (2004). E-government in New Zealand: Local body strategies. Unpublished master's thesis, The University of Auckland, Auckland.

Hacker, K. L., & Van Dijk, J. (Eds.). (2000). Digital democracy: Issues of theory and practice. London: SAGE Publications.

Mahoney, J. (2002). Local e-government now: A worldwide view. In Proceedings of the Socitm 2002 Spring Seminar. Retrieved December 11, 2004, from http://www.ival.com/zip/socitm.pdf

Neuendorf, K. (2002). The content analysis guidebook. Thousand Oaks, CA: Sage.

Paynter, J., & Peko, G. (2006). E-census 2006 in New Zealand. In Digital government (this volume). Idea Group Publishing.

Paynter, J. C., & Peko, G. (2007, November 29-30). Local government elections 2007. In Proceedings of the Operational Research Society Conference. Auckland, New Zealand.

Paynter, J. C., & Fung, Y. L. (2006). E-service provision by New Zealand local government. In A.-V. Anttiroiko & M. Malkia (Eds.), Encyclopedia of digital government (pp. 718-725). Hershey, PA: Idea Group Publishing.

Pons, A. (2004). E-government for Arab countries. Journal of Global Information Technology Management, 7(1), 30-47.

Ronaghan, S. A. (2002). Benchmarking e-government: A global perspective. Assessing the progress of the UN member states. Retrieved December 13, 2004, from http://www.unpan.org/egovernment2.asp#survey

Sharkey, E., & Paynter, J. (2003). Factors influencing the uptake of online voting in NZ. In Proceedings of CHINZ '03: The 4th Annual Conference of the ACM Special Interest Group on Computer-Human Interaction, New Zealand Chapter (pp. 121-122). Dunedin, New Zealand.

Swedberg, J. (2004). Electronic elections. Credit Union Management, 27(2), 44-48.
Szeremeta, J. (2002, November 25-27). Benchmarking e-government: A global perspective. In Proceedings of the International Congress on Government On Line (pp. 1-19). Ottawa, Canada.
Further Reading

Krimmer, R. (Ed.). (2006). Electronic voting 2006 (p. 86). GI Lecture Notes in Informatics. Bonn.

Watt, B. (2006). UK election law: A critical examination. Routledge-Cavendish.

Yee, G. (2006). Privacy protection for e-services. Hershey, PA: Idea Group Inc.
Terms and Definitions

Content Analysis: Analysis of a Web site or other medium using frequency ratings of key words and features. This may be done manually or electronically.

E-Democracy: An interactive facility provided on local authority Web sites for citizens to register and vote online. It also encompasses the use of ICT and computer-mediated communication, such as the Internet, interactive broadcasting, and digital telephony, to enhance political democracy or the participation of citizens (Hacker & van Dijk, 2000, p. 1).

E-Government: The general description of a way to provide better access to government information and services through electronic means such as the Internet and mobile communications.

E-Services: Online interaction (increasing Internet-based communication between the public and local government bodies) and online voting (directly participating in the selection of future local government administrators).

Emerging E-Government: In this phase (Ronaghan, 2002), an official government online presence is established.

Enhanced E-Government: In this phase (Ronaghan, 2002), government sites increase and information becomes more dynamic.

Interactive E-Government: Users can download forms, e-mail officials, and interact through the Web. Online enrolment and voting would be examples of such interactions.

New Zealand Government Web Guidelines (NZGWG): A set of guidelines and standards provided by the E-Government Committee of the State Services Commission to help public sector agencies develop their Web presence.

Portals: A portal is a point of entry that enables citizens to access a full range of services without any consciousness of movement between Internet sites, and where those services may be tailored to the user's profile.

Seamless: The full integration of e-services across administrative boundaries.

Transactional: Users can pay for goods and services online.
Endnote

1. This was not achieved (Paynter & Peko, 2007).
Chapter XIX
E-Census 2006 in New Zealand John Paynter University of Auckland, New Zealand Gabrielle Peko University of Auckland, New Zealand
Introduction

A census is an official count. It can be contrasted with sampling, in which information is obtained from only a subset of a population. As such, it is a method used for accumulating statistical data, and it is also vital to democracy (voting). Census data is also commonly used for research, business marketing, and planning purposes. In New Zealand a census is held every five years. It is a snapshot of the chosen day, on which the number of people and dwellings (houses, flats, apartments) are counted. Everyone in the country on that day is asked to complete census forms. There are two census forms: the blue individual form, which must be completed by every person in a household on census day, and the brown dwelling form, which must be completed by one person in each household. For the 2006 census an option was introduced to complete the forms on the Internet. Other initiatives included sending text messages about this process to, amongst others, the enumerators (collectors), whose job it is to collate the information in the field.
Information technology, especially the Internet, opens possibilities of using methods to distribute information and deliver services on a much grander scale (Paynter & Fung, 2006). It can deliver government services and encourage greater democracy and engagement from citizens. Governments around the world are exploring the use of Web-based information technology (Grönlund, 2002). Since the mid-1990s governments have been tapping the potential of the Internet to improve governance and service provision. "In 2001, it was estimated that globally there were well over 50,000 official government Web sites with more coming online daily. In 1996, less than 50 official government homepages could be found on the World Wide Web" (Ronaghan, 2002). Along with the rapid growth of technological developments, people demand high-quality services that reflect their lifestyles and are accessible after normal office hours from home or work. Thus, the goals of delivering electronic government services are to simplify procedures and documentation; eliminate interactions that fail to yield outcomes; extend contact opportunities (i.e., access) beyond office hours; and improve relationships with the public (Grönlund, 2002).
Background

Census-taking began in China and the Middle East. One of the earliest recorded censuses took place in the Babylonian Empire nearly 6,000 years ago. Early censuses are mentioned widely in early Middle Eastern literature, with references to them in a number of places in the Bible. Censuses of population were first taken in England and Scotland in March 1801, in Ireland in 1811, and in Australia in 1828. In the USA the census is undertaken every 10 years. The US Census 2000 project spanned 13 years at a cost of $65 billion (Gido & Clements, 2006, p. 147). It is largely based on mail-backs, with census employees personally visiting non-respondents.

The first New Zealand census was undertaken in 1851, although this census excluded Māori (Statistics New Zealand, 2006a). In New Zealand, several acts of parliament have formed the legal basis for the collection of statistical data and the census-taking that has developed over the years, the most recent of which is the Statistics Act 1975. It clarified that the information contained in returns is to be used for statistical purposes only. It also specified which particulars are mandatory to collect in the census and which particulars may be collected if the government statistician considers it in the public interest to do so. It also guaranteed the census to be free of government influence.
The Use of Technology

The 1921 census marked the first occasion on which automatic sorting and counting machines were employed in New Zealand, enabling the major portion of census compilation to be carried out mechanically. The system installed for this census was purchased from the United States, which had been employing mechanical tabulation for census work since 1870.
For the 1966 census, sorting machines were replaced by computers. Statistical tables were also produced by computer for the first time, and results became available much earlier, with a large number of additional cross-classifications of the census data being possible. The use of punch cards for each individual and dwelling continued until 1976, when an automatic, electronically based system was introduced. Mechanical tabulation has been replaced by electronic data capture and handling as the speed and capacity of computing technology have improved. In 1996, the scanning and imaging of census forms was introduced, further demonstrating that Statistics New Zealand was now fully immersed in the era of information technology, with analytical tools and information at a level incomprehensible to the department of earlier years.
Enumeration
In the 2001 census, though, the process of distributing and collecting forms (enumeration) had hardly changed. Enumerators within each district would hand-deliver forms to each household (one dwelling form and one individual form for each person expected to be present on census night). Each form would be coded with an identifier made up of district, sub-district, meshblock and dwelling (the individual forms had a Person ID added on collection). This ID was recorded in a field book along with any comments, including the address and best pick-up time. After census night the households would be visited again to collect the completed forms. Up to three visits would be made in each of the delivery and collection phases. On the third unsuccessful visit prior to census night a default number of forms (one dwelling, three individual) would be left. After census night, an envelope would be left on the third unsuccessful collection visit. At the end of the enumeration phase the district supervisors would send follow-up letters to, or visit, non-respondents. Once district offices were closed, five weeks after the census, the central census office would follow up non-respondents.
E-Census 2006 in New Zealand
Census 2006
Census 2006 saw the introduction of two technological innovations. The first was the adoption of Internet-based census forms (Statistics New Zealand, 2004). This enabled the dwelling and individual forms to be submitted electronically via the Internet by the householder and individuals. The second, in part necessitated by the first, was to automate the flow of information about the forms submitted either electronically or by post to the enumerators (collectors). On the basis of the Internet ID on the census form, text messages were sent via Webmail to the enumerators' census cell phones with the details of forms submitted.
Online Census Forms
On delivery of the census forms to the household the enumerator would ask the "Hi-Five" questions. These included whether or not members of the household would like to submit their forms online. If one or more individuals indicated that they might want to do this, they were given an Internet PIN for everyone in the household to use. The household ID (district, sub-district, meshblock and dwelling) forms the Internet ID for the entire household. A sealed slip containing the PIN was given out to the household. Although everyone within the household used this one combination of ID and password, their information would still be secure, as each individual's information had to be entered within a single session. That is, a session could not be saved and resumed. However, the different individuals could enter their information at different times. Once each dwelling or individual form was completed, the system would batch the submissions and text the enumerator on a thrice-daily schedule. The enumerator would get a cumulative report of the number of forms of each type submitted for each household.
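The Internet ID layout just described (district, sub-district, meshblock and dwelling, specified in the Terms and Definitions section as 3, 2, 2 and 3 digits plus a check letter) can be sketched in code. The check-letter computation below is a hypothetical stand-in, since the actual algorithm is not documented here; the function names are likewise illustrative.

```python
# Sketch of the household Internet ID: district (3 digits), subdistrict (2),
# meshblock (2), dwelling (3), plus a trailing check letter. The check-letter
# rule below (digit sum modulo 26) is hypothetical, not the real scheme.

def check_letter(digits: str) -> str:
    """Hypothetical check letter: sum of digits modulo 26, mapped to A-Z."""
    return chr(ord("A") + sum(int(d) for d in digits) % 26)

def make_internet_id(district: int, subdistrict: int,
                     meshblock: int, dwelling: int) -> str:
    """Compose the 10-digit household ID and append a check letter."""
    digits = f"{district:03d}{subdistrict:02d}{meshblock:02d}{dwelling:03d}"
    return digits + check_letter(digits)

def is_well_formed(internet_id: str) -> bool:
    """Check length, digit/letter layout, and the (hypothetical) check letter."""
    if len(internet_id) != 11:
        return False
    digits, letter = internet_id[:10], internet_id[10]
    return digits.isdigit() and letter == check_letter(digits)
```

A check letter of this kind lets a data-entry system reject mistyped IDs immediately, which matters here because miscoded or truncated IDs were one of the reported sources of enumeration error.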
Text Messaging
The computer field management system (FMS) records the contract and contact details of each of the enumerators. With the exception of some
of the high-density apartment buildings in the central business districts (CBDs), each subdistrict is assigned a single enumerator. The census cell phone number of each enumerator is recorded, whether it be their own phone (for which an allowance is paid) or one provided for the census. During the census period FMS can send text messages to the enumerators. This is done on the basis of the Internet ID (district + subdistrict) where the message refers to a pre-coded form, or on the basis of the address when initiated by an individual who perhaps does not have a form. These messages comprise one of four types. An example of each type looks like this:

0010498,6320201001W,3 Bombay,NOTIFY,(I:1e),dispatched more forms
0010499,6320201,5 Bombay,ACTION,,housesitters staying here; deliver forms
0010503,6320201006F,11 Bombay,INFO,,Destiny Child will need assistance.
6320201004P,9 Bombay,OFFICE,(D:1,I:3,A:1),

The first part is the unique message number. This is followed by the Internet ID (632 = district, 02 = subdistrict, 01 = meshblock, 001 = dwelling, W = check letter); this is followed by the address; the message type (NOTIFY, ACTION, INFO, OFFICE); the type (D = dwelling, I = individual) and number of forms; and lastly any textual information. The forms may be either in English (e) or Māori (m)—the two official languages in New Zealand. The enumerators were required to enter any messages in their field book against the line for that particular dwelling. In the case of ACTION messages they were to deliver the required number of forms to the dwelling. OFFICE messages denote that the forms have been received via the Internet (online submission) or have been mailed. The enumerator would update the forms received in the OFFICE column of the field book and could check this against those delivered to see if any further forms needed collection for that address. These messages were also available in the action log.
The district supervisor could print these to check against the enumerator’s field book when
they made field checks and when the enumerator brought the field book in with the boxes containing the forms they had picked up at the end of the field phase of the census.
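The message layout described above can be processed mechanically. The sketch below splits a message into its documented fields; the field names and the tolerant comma-separated reading are my assumptions drawn from the printed examples, not the actual FMS implementation.

```python
import re

# Parses the four FMS text-message types described above. Field names and the
# comma-separated reading are assumptions from the printed examples.
MESSAGE_RE = re.compile(
    r"(?:(?P<number>\d+),)?"          # message number (absent in one example)
    r"(?P<key>[^,]*),"                # Internet ID, or an address-based key
    r"(?P<address>[^,]*),"            # street address
    r"(?P<type>NOTIFY|ACTION|INFO|OFFICE),"
    r"(?:\((?P<forms>[^)]*)\))?,?"    # optional form counts, e.g. (D:1,I:3,A:1)
    r"(?P<text>.*)")                  # free-text remainder

def parse_message(raw: str) -> dict:
    m = MESSAGE_RE.match(raw.strip())
    if not m:
        raise ValueError(f"unrecognised message: {raw!r}")
    forms = {}
    for item in (m.group("forms") or "").split(","):
        fm = re.match(r"\s*([DIA]):(\d+)([em]?)", item)
        if fm:  # D = dwelling, I = individual; e/m marks English or Maori form
            forms[fm.group(1)] = (int(fm.group(2)), fm.group(3) or None)
    return {"number": m.group("number"), "key": m.group("key"),
            "address": m.group("address").strip(), "type": m.group("type"),
            "forms": forms, "text": m.group("text").strip()}
```

For example, parsing the NOTIFY message above yields form counts of one individual English form; parsing the OFFICE message yields one dwelling, three individual and one administrative form with no free text.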
Results
Trials conducted a year before suggested a 20 percent uptake of the electronic census form option (Statistics New Zealand, 2005). Field staff were instructed not to oversell this option, as there were fears that a 30 percent uptake would be too much for the computer system. During the delivery phase staff reported a high uptake in some areas (as much as 80 percent, particularly in inner-city suburbs and the CBD). Although the forms were to be completed as of census night (March 7, 2006), the system was available from February 20 (the beginning of the delivery phase) until the end of March. The peak period was census night, and few problems1 were encountered with the use of the Internet forms. However, some of the OFFICE messages to the enumerators were not received during this time. It should be emphasised that the forms themselves were received in the census system. There were also performance and other problems with the FMS, especially considering that access to it from the district offices scattered throughout the country was via dial-up lines (Jackson, 2006). Exacerbating this, many more mailback forms than expected were received at the central office in Christchurch and the district offices. Each of these was to be entered in the system so that FMS could text the enumerator with the corresponding OFFICE message. This was not feasible in the district offices, and the information entered at the central office lacked addresses (omitted to cut down the entry time), so it was difficult to determine the correct dwelling when the forms had been miscoded. At the end of the field phase only eight percent of the completed forms had been received via the Internet option. The author questioned people
during the collection phase of the census (when doing doorstep checks). Many of those who had requested PINs so that they could do the census online stated that, as they had the paper-based form (given out both to record the Internet ID and as a backup in case the electronic version failed), they found it easy to complete the paper form, particularly with family members sitting around the table. Conversely, in the large flatting situations typical of some suburbs, having only a single PIN made it difficult for those who would elect to complete the forms at work and other places where they had Internet access. A different situation again arose in the case of non-private dwellings (NPDs) such as hostels, hospitals and hotels, where each individual was given a separate PIN. However, the uptake in such places tended to be low. In contrast, over 10 percent of census forms had been mailed in. These, and the ones collected by the enumerators and district supervisors, had to be dispatched to Christchurch to be scanned. Some preliminary results for the census (e.g., the overall population count, based on analysis of the field books) were available at the end of May (Statistics New Zealand, 2006b), but it would take three months for the forms to be scanned, and the data were to be made available on December 6. The use of the field books, too, was error prone. Enumerators could wander out of a meshblock and even into another subdistrict or district. Such recording errors would have to be corrected in later phases of the census. This would be very hard to do if it did not occur at the district level. For instance, if the online option was taken there was no mechanism for the District Supervisor to correct the return—it could only be logged as an enumeration event. The enumerators texted, e-mailed or phoned their daily delivery and collection statistics to the district supervisors. Although these were to be entered in FMS, problems arose and a largely manual system was used.
This resulted in further time delays in receiving information. Thus it would be hard to realise, say, that more staff were needed when delivery and collection were encountering problems.
Discussion
E-government
It has been claimed that the low penetration of broadband technology in New Zealand is a limitation on the spread of electronic services. However, the unbundling of Telecom's local-loop monopoly was announced just prior to the 2006 Budget (Budget, 2006), so the infrastructure for the participation of New Zealand citizens in e-government is being put in place. However, uptake by both citizens, as shown in the census, and government agencies is low. For instance, only one council, Auckland Regional Council, provides an online forum for discussion and sharing of ideas (Paynter & Fung, 2006). None of the local government sites provides any e-democracy, although some sites have put up information about the 2004 elections. Dunayev (2005) used an automated tool to analyse all the local government Web sites. He concluded that the sites did not appear to have matured sufficiently to meet the goal of online local government elections in the next cycle (2007). Some of the obstacles to e-voting, such as trust, are outlined in Sharkey and Paynter (2003), and steps towards an e-voting transition in Paynter and Peko (2005). The latter included the use of e-services in the census and in the local body elections—both potentially less risky and lower profile than a general election. Other countries that have an e-census option include Switzerland, where 4.2 percent of the population took part in the e-census. Eleven percent of those who began to fill out the form online interrupted the process before reaching the end. The e-census homepage received 238,000 visits. Of those households that took a look at the Web site, only half actually filled out and submitted their questionnaires online. This shows the wide gap between simply searching for information over the Internet and carrying out a complex transaction online (Swiss Statistics, 2001).

E-census
Lessons were learnt that will improve participation in the next census. The risks of the online version were recognized in the trial held a year earlier. Emphasis was placed on training the field supervisors and the collectors. The collectors were given access to the online census facility to enable them to become familiar with the online option before it went live. This was invaluable as an aid to their understanding of how the option would be viewed by respondents, prior to standing at their doors and offering that option (Statistics New Zealand, 2006). Additional training was also introduced for the field collectors to ensure they understood what was required of the respondent, regardless of which option was chosen. This was to ensure, firstly, that the online option was communicated correctly to the respondent on the doorstep and, secondly, that the field communication systems integrating the online option with the paper option were correctly operated. The training theme continued with the field supervisors, who needed to be able to use the field operations monitoring systems. These are the mechanisms used to monitor how the team in the field responds to the text messages they receive about the lodgement of Internet forms and related helpline actions. The complexity of these systems, and the need to integrate with existing systems for the paper form collection process, posed a real challenge to ensuring seamless operations. Unfortunately the failure of FMS (Jackson, 2006) meant that this integration was not a success. The field team became distrustful of the accuracy of the information provided about forms collected via the online and mailback options. This made their jobs harder in the collection phase and then when the collectors returned the forms. Information about mailbacks, privacy envelopes, e-forms and hand-collected forms had to be combined.
Subsequently, follow-up letters were sent to apparent non-respondents, but again the possibility existed that the forms had been received by mailback or electronically and that this had not been communicated to field staff.
Future Trends
The information and services provided online by governments are constantly evolving, driven partly by innovations in information
technology (IT), partly by governments wishing to leverage this tool and, to a minor extent, by user demand. Although citizen uptake of the online census option was poor, this can be improved as the Internet becomes more pervasive and lessons learnt from the 2006 census are incorporated into the training and testing for 2011. Given the problems in contacting households, in particular in high-rise apartments and walled communities, the Internet offers a viable option. Use of the field books is cumbersome and error prone, and there are delays in communicating the information captured on paper. It would be advantageous to use an electronic notepad allied with a global positioning system (GPS) to minimize recording errors (such as failing to give the complete Internet ID or moving out of the correct meshblock) and to maximize the responsiveness to the delivery and collection statistics recorded. Surveylab (www.surveylab.co.nz) is an example of such a system used for geographic information system (GIS) data capture (Surveylab, 2006).
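The meshblock-wandering errors mentioned above could be caught at capture time if the electronic notepad held meshblock boundaries. This sketch, using an invented sample boundary, applies a standard ray-casting point-in-polygon test to a GPS fix; the data shapes and names are illustrative.

```python
# Sketch: flag a GPS fix recorded outside the enumerator's assigned meshblock.
# Meshblock boundaries are held as (x, y) polygons; real boundaries would come
# from GIS data. Uses the standard ray-casting point-in-polygon test.

def inside(point, polygon):
    """Return True if point (x, y) lies inside the polygon (ray casting)."""
    x, y = point
    result = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray at y
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                result = not result
    return result

def check_dwelling(gps_fix, assigned_meshblock, boundaries):
    """Warn if a dwelling's GPS fix falls outside the assigned meshblock."""
    if not inside(gps_fix, boundaries[assigned_meshblock]):
        return f"warning: fix {gps_fix} is outside meshblock {assigned_meshblock}"
    return "ok"
```

A check like this, run on the notepad at the moment a dwelling is recorded, would surface the error while the enumerator is still standing at the door rather than in a later processing phase.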
Conclusion
Although the technical side of the electronic census largely worked, its uptake by citizens was low. This would suggest that further moves towards e-democracy in the form of electronic voting would be premature in terms of public acceptance and uptake.
Future Research Directions
We intend to continue this research as part of a more general program in the area of e-government. For instance, we will continue to survey and interview local bodies in New Zealand and organizations such as LGOL (Local Government On Line). The aim was for the next round of elections (2007) to be electronic, at least in pilot, although it is clear that this will not happen. We are currently investigating the processes involved in running the local government elections. As part of the move to e-government we look at other government
initiatives such as the general election (and by-elections, if any) and the census. Other related areas include how government departments interface with people (e.g., the Inland Revenue Department, Internal Affairs and Foreign Affairs—including how Web sites are configured for the target population) and the impact of technology on people's lives, for example road tolling and congestion pricing, electronic passports, and the opening up of court decisions to be posted online. As part of the evaluation of online election strategies we are looking at how Web sites can be made more interactive and accessible to serve the needs of a diverse populace, whilst maintaining security, privacy and legal compliance. In parallel with this we are also reviewing the content of the Web sites of the political parties to see how they promote the democratic process.
References
Australian Bureau of Statistics. (2006). Welcome to eCensus. Retrieved September 26, 2006, from http://www.census.abs.gov.au/eCensusWeb/
Budget. (2006). Government moves fast to improve broadband. Retrieved May 3, 2006, from http://www.beehive.govt.nz/ViewDocument.aspx?DocumentID=25636
Dunayev, A. (2005). Electronic local government elections in New Zealand. Unpublished BCom (Hons) dissertation, The University of Auckland, New Zealand.
Gido, J., & Clements, J. (2006). Successful project management (3rd ed., p. 147). Mason, OH: Thomson.
Grönlund, A. (2002). Electronic government: Design, applications & management. Hershey, PA: Idea Group Publishing.
Hacker, K. L., & van Dijk, J. (Eds.). (2000). Internet activism—digital democracy. M/Cyclopedia of New Media. Retrieved November 9, 2007, from http://wiki.media-culture.org.au/index.php/Internet_activism_-_digital_democracy
Jackson, R. (2006, March 20). Census system overload but no data lost. Computerworld. Retrieved May 3, 2006, from http://computerworld.co.nz/news.nsf/UNID/A98536DAE494843CC257133007AEE8B?OpenDocument&Highlight=2,census
Paynter, J., & Fung, M. (2006). E-service provision by New Zealand local government. In A.-V. Anttiroiko & M. Malkia (Eds.), Encyclopedia of digital government. Hershey, PA: Idea Group Publishing.
Paynter, J., & Peko, G. (2005, December 1-3). E-lections and the price of democracy. In Proceedings of the 40th Operations Research Conference of New Zealand (pp. 145-154), Wellington.
Ronaghan, S. A. (2002). Benchmarking e-government: A global perspective. Retrieved May 29, 2004, from http://www.unpan.org/egovernment2.asp#survey
Sharkey, E., & Paynter, J. (2003, July 3-4). Factors influencing the uptake of online voting in NZ. In CHINZ '03: The 4th Annual Conference of the ACM Special Interest Group on Computer-Human Interaction, New Zealand Chapter (pp. 121-122), Dunedin.
Statistics New Zealand. (2004). On-line option for 2006 Census. Retrieved June 22, 2006, from http://www2.stats.govt.nz/domino/external/pasfull/pasfull.nsf/Web/Media+Release+2006+Census:+Online+forms+October+2004?open
Statistics New Zealand. (2005). Using both Internet and field collection methods for the 2006 Census of Population and Dwellings. Retrieved September 19, 2006, from http://www.stats.govt.nz/census/2006-census/methodology-papers/internet-and-field-collections.htm
Statistics New Zealand. (2006a). Introduction to the Census (2001)—reference report. Retrieved June 20, 2006, from http://www.stats.govt.nz/census/2001-census-technical-info/2001-introduction/default.htm
Statistics New Zealand. (2006b). 2006 Census of Population and Dwellings—provisional counts. Retrieved May 29, 2006, from http://www.stats.govt.nz/products-and-services/hot-off-the-press/2006-census/2006-census-provisional-counts-2006-hotp.htm
Surveylab. (2006). Customizing applications for ike. Retrieved September 27, 2006, from http://www.surveylab.co.nz/~downloads/CUSTOMIZING%20IKE.pdf
Swiss Statistics. (2001). Experience with the e-census. Retrieved September 26, 2006, from http://www.bfs.admin.ch/bfs/portal/en/index/themen/volkszaehlung/uebersicht/blank/zur_erhebung0/erfahrungen_mit_e-census.html
Further Reading
Statistics New Zealand. (2007). Census 2006 data. Retrieved April 19, 2007, from http://www.stats.govt.nz/census/2006-census-data
US Census Bureau. (2007). Retrieved April 19, 2007, from http://www.census.gov/
Terms and Definitions
Census: A snapshot taken on a chosen day, counting how many people and dwellings (houses, flats, apartments) there are.
District/Sub-District: An administrative area used in the census. There were 412 districts in the country. Each district had up to 25 subdistricts, each assigned to an enumerator.
E-Consultation: An interactive facility provided by local authority Web sites that allows citizens to register and communicate by e-mail with their councillors.
E-Democracy: An interactive facility provided on local authority Web sites for citizens to register and vote online. It also encompasses the use of ICT and computer-mediated communication, such as the Internet, interactive broadcasting and
digital telephony, to enhance political democracy or the participation of citizens (Hacker & van Dijk, 2000, p. 1).
E-Government Strategy: E-government is about using new technology (e.g., computers and the Internet) to improve the way central and local government deliver their services, communicate, consult and work with others. Under this strategy, the government's aim is to create a public sector that is structured, resourced and managed to perform in a manner that meets the needs of citizens in the information age and which increasingly delivers information and services using online capabilities.
Meshblock: The lowest unit at which aggregate statistical information is recorded. PIN: A personal identification number (PIN) is a secret shared between a user and a system that can be used to authenticate the user to the system. Typically, the user is required to provide a non-confidential user identifier or token (such as a banking card) and a confidential PIN to gain access to the system. Upon receiving the User ID and PIN, the system looks up the PIN based upon the User ID and compares the looked-up PIN with the received PIN. If they match, then the user is granted access. If they do not match, then the user is not granted access.
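The lookup-and-compare flow in the PIN definition above can be sketched briefly. Real systems store a salted hash rather than the PIN itself; the sketch assumes that, and all names and the storage layout are illustrative rather than drawn from any actual census system.

```python
import hashlib
import hmac
import os

# Sketch of the PIN check described above: look up the stored record by user
# ID and compare PINs. A salted hash stands in for the PIN; names and the
# in-memory storage layout are illustrative.

_store = {}  # user_id -> (salt, pin_hash)

def enroll(user_id: str, pin: str) -> None:
    """Store a salted hash of the PIN, never the PIN itself."""
    salt = os.urandom(16)
    _store[user_id] = (salt, hashlib.sha256(salt + pin.encode()).digest())

def verify(user_id: str, pin: str) -> bool:
    """Look up the record for user_id and compare the supplied PIN."""
    record = _store.get(user_id)
    if record is None:
        return False  # unknown user: access denied
    salt, stored = record
    candidate = hashlib.sha256(salt + pin.encode()).digest()
    return hmac.compare_digest(candidate, stored)  # constant-time compare
```

The constant-time comparison avoids leaking, through timing differences, how much of a guessed PIN was correct.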
Enumerator: Also called a "collector." Each enumerator is assigned to a subdistrict comprising several meshblocks. The process of delivering, collecting and collating the forms is referred to as "enumeration."
Sampling: Information is only obtained from a subset of a population, in comparison with a census that attempts to obtain a 100 percent ‘sample.’
Global Positioning System: A GPS receiver can be used to determine one's location. In conjunction with an electronic notebook, the position of dwellings can thus be recorded accurately, minimising mapping errors in the census.
Hi-Five: During the census delivery phase enumerators had cue cards to prompt them to ask five questions of the householder: their name, the address, how many dwellings there were on the property, whether they wanted the English or Māori/English census form, and whether they wanted to complete the census using the Internet option.
Internet ID: The household ID (district, subdistrict, meshblock and dwelling) forms the Internet ID for the entire household. The components are 3, 2, 2 and 3 digits long respectively. A check letter is appended to this code.

Endnote
1. If pop-ups were disabled, the forms could not be completed, as a pop-up confirmation message appeared at the end of the submission stage. Some collectors had either not coded the forms at all or had omitted the check letter. Where there were more dwellings in a meshblock than expected (although an extra 20 percent loading was catered for), an overflow book had to be used and there was no corresponding Internet ID; that is, all the check letters were coded as 'X.' In these circumstances the respondents could not use the Internet option for completing the forms.
Chapter XX
Security Challenges in Distributed Web Based Transactions:
An Overview on the Italian Employment Information System Mirko Cesarini Università degli Studi di Milano Bicocca, Italy Mariagrazia Fugini Politecnico di Milano, Italy Mario Mezzanzanica Università degli Studi di Milano Bicocca, Italy Krysnaia Nanini Politecnico di Milano, Italy
Introduction
Over the last few years, public administrations have modernized public service delivery. In particular, services have been digitalized and automated thanks to the widespread adoption of information and communication technologies (ICT) in public offices. This has paved the way for internal and external organizational and technological changes, since a new approach is required to leverage the new technologies. Moreover, Internet technologies have come to play an important role in public service delivery, and many transactions are now Web-based.
In this perspective, several governments in Europe (Liikanen, 2003), and others all over the world, started their own e-government plans with the goal of increasing the quantity and quality of the services offered to their customers (citizens, enterprises, for-profit and non-profit organizations) via the Internet. Along these lines, one of the most fertile fields for e-government is public employment services: due to their social implications (e.g., sustainability, workforce mobility, workers' re-qualification paths, training for recent graduates and students), they are becoming more and more important. Consequently, employment information systems
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Table 1. Goals of the Italian plan of e-government
1. Management of the public administrations' autonomy: this concerns both national and local administrative aspects of financial, social and infrastructural problems.
2. Proposal of innovative services for citizens, business, private and public entities, based on the most widespread information and communication technologies (customer relationship management, local institutional Web sites and multi-channel advice).
3. Proposal of local information services targeted to the creation of statistical databases and data warehouses available for citizens, business, private and public entities.
4. Development of local government functions in order to make the whole national administrative infrastructure more responsive to the information needs of the customers.
5. Transition to e-governance: this means interaction among public administrations, populations and politicians, both with the single citizen and with wider groups (communities, associations), for the methodological solution and control of problems.
deserve special attention in public employment service provision. In Italy, the job workfair has been conceived as a distributed and cooperative network and is regulated by a law dated October 23, 2003. The job workfair plays an important role in the plan of e-government (CNIPA, 2002) launched in 2002. Table 1 sums up the objectives of the Italian plan of e-government. As a general trend, we observe that public administrations opted for the creation of local portals at the beginning of the project. Each portal was directly managed by the local public administration and had no relationships with other local portals. In a second phase, each local portal joined a federation created to establish relationships among the local portals. The federation was conceived as glue for the local nodes (more back-end oriented than front-end oriented) and not as a replacement of the local portals. As a result, all the local information systems are connected into a distributed and cooperative information system (Coulouris, Dollimore & Kindberg, 1994). The local portals and the corresponding local information systems are very different from each other and could not easily have been replaced by a single centralized version.

The federative approach, although exploiting the local peculiarities, raises some issues concerning the management of global or distributed functions (e.g., a customer may look for a service provided by a portal different from the one where he or she authenticated). Before exploring this topic more deeply, we would like to introduce some more information about the job workfair. The next subsection aims not only to explore the peculiarities of the job marketplace but also to provide the rationale that drove the developers of the Italian job workfair (or "Borsa Continua Nazionale del Lavoro," BCNL) to build a distributed and cooperative information system.
Background
Services to employment were, in past years, generally offered by local private work agencies and public centers for employment. These entities are autonomous; therefore, federation is the only way to share information and thus to create a job workfair where data concerning curricula and job vacancies can circulate freely among the participants. The inclusion of information and communication technologies, both in public administrations and in private organizations, heavily changed the way of collecting, storing and managing data. Previously, data collection and management were based on paper documents and huge physical archives. The availability of electronic data management is the premise for matching curricula vitae and vacancies coming from different entities. Furthermore, the advent of Web services allows the creation of non-invasive federations of legacy information systems. Nowadays, enormous quantities of data are received from many sources and stored in digital archives (databases and data warehouses) where they can be easily retrieved or updated. Web services allow easy sharing of the information content managed by each actor. Many attempts have been made to design and execute the job requests and offers matching process in an electronic environment. To give an overview of some employment-oriented information systems: Ergasiognomon was implemented to support the Greek job marketplace (Milis, Symeonidis & Mitkas, 2003), while a second example is the Job Network, a system developed for Internet job search and recruiting in Australia (Dockery, 2002; Webster & Harding, 2001). In Australia, the old Commonwealth Employment Services system was abolished in 1998 and replaced by the Job Network, whose goal is to deliver all public employment services through private or community organizations. From a European perspective, the EURES (EURopean Employment Services) portal was born to couple job offers and requests loaded from 29 different territories among European countries and cross-border regions (EURES, 2006). From the beginning of the century, many provincial or regional authorities autonomously developed local job workfairs. We report two examples: the former is the SINTESI (integrated system for employment services) project (SINTESI, 2006) developed by the Municipality of Milan, and the latter is the Borsa Lavoro Lombardia project (Borsa Lavoro Lombardia, 2006), targeted at job matches in Lombardia, a region in the north of Italy. Both local employment networks have been included in the Italian job workfair. The employment information system behind the Italian job workfair was conceived to federate the existing local entities (both private and public) providing services within the job marketplace and to create a unique network of distributed and cooperative domains (Mylopoulos & Papazoglou, 1997). The large set of existing and independent entities participating in the job workfair made the federated architecture the only feasible solution.
The Italian Job Workfair Service Model
The Italian job workfair, named Borsa Continua Nazionale del Lavoro, is a federation of local nodes (also referred to as domains) linked together in a virtual network whose nodes can be accessed online from any (local) portal of the system (Fugini, Maggiolini & Nanini, 2006). This network is peculiar in that it is distributed and cooperative. Information enters the system through a local node (e.g., a job seeker submits a curriculum to a private job agency); ownership of the information remains with the local node, but it is accessible from any other federated access point. Furthermore, the job seeker can update the data from any federated access point. Consequently, the end-user does not have to worry about the physical location of the information, since it is always available from any connection. To achieve these functionalities, several technical challenges had to be addressed; this topic is further explored in the next section. The federation is a multilayer structure organized on a geographical basis (as a general reference, in Italy towns and cities are grouped into provinces, provinces are grouped into regions, and the regions make up the country). The bottom layer is composed of the local domains, which are grouped and mutually linked under the direct control of a provincial domain. The middle layer is composed of the regional nodes (each regional node is the reference node for the provincial domains present in the region); each regional portal is mutually connected to the other regional portals, and all of them refer to the national domain. This multilayer structure exploits Web service technology, which is used for exchanging data between applications and systems running on different platforms. Web services are described through the WSDL (Web Service Description Language) standard. The distributed information system is fed by information provided by the end-users, specifically job seekers and employers. Private and public job agencies (which act as mediators in the job matching procedures) are the front-end of the system to the final users.
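The behaviour described above, data submitted at one node yet retrievable from any access point, can be sketched as a federation layer that fans a query out to the local nodes owning the data. The node interface and record shape here are simplified assumptions for illustration, not the actual BCNL web-service protocol.

```python
# Minimal sketch of the federated lookup described above: data stays in the
# local node that owns it, but any access point can query the federation.
# Interfaces and record shapes are simplified assumptions (the real system
# exchanges data via WSDL-described web services).

class LocalNode:
    def __init__(self, name):
        self.name = name
        self._curricula = {}  # curriculum_id -> public curricular data

    def submit(self, curriculum_id, data):
        self._curricula[curriculum_id] = data  # ownership stays local

    def query(self, curriculum_id):
        return self._curricula.get(curriculum_id)

class Federation:
    def __init__(self, nodes):
        self.nodes = nodes

    def lookup(self, curriculum_id):
        """Fan the query out; return the record and the node that owns it."""
        for node in self.nodes:
            record = node.query(curriculum_id)
            if record is not None:
                return node.name, record
        return None, None

milan = LocalNode("Milan")
rome = LocalNode("Rome")
federation = Federation([milan, rome])
milan.submit("cv-42", {"skills": ["java"], "region": "Lombardia"})
# Accessible from any access point even though it is stored only in Milan:
owner, record = federation.lookup("cv-42")
```

This is the essence of the "glue" role of the federation noted earlier: it routes queries rather than replacing the local stores.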
Besides job agencies, since the network also covers education, training, and job re-qualification programs, schools and universities, as well as public administrations (such as pension registries, social security offices, and the Ministry of Welfare), join the distributed information system.
Security Challenges in Distributed Web Based Transactions
We outline the job seeker's typical interaction with the system in order to show the service model provided. The job seeker inserts his or her personal data and curriculum in the node of the network with which he or she wants to be registered. After registering and receiving credentials (usually a user identification and a password), the user fills in all the mandatory personal data (which are confidential, and therefore electronically protected and visible only with the consent of the owner). The job seeker completes the curriculum section by loading the curricular data. The curricular section is split in two: a public part and a private part. Public curricular data can be shared across the network and are used to compute matches and for statistical purposes. Contact data are not present in the public curricular data but are the main content of the private part; they are shared among nodes according to business rules and with the consent of the owner. The public curricular data are mainly intended as information to be shared among different entities of the federation (even different private job agencies). The contact data are not freely shared but are exchanged according to business rules in order to protect the private agencies' businesses. We now outline the employer's typical interaction. Registration is also required for the employer, who loads confidential data and the job profile requested for the vacancy. The local system archives both job offers and requests in a local database. The distributed information system queries the local databases to return the best matches according to job search and vacancy criteria. Once an applicant is considered suitable for a vacancy, his or her public curricular data are sent to the employer by means of notification systems (generally publish-and-subscribe mechanisms, e-mails, short message systems in the future, and so on).
The employer chooses the best candidates and decides to invite them to a selection; afterwards, the mediators contact the job seeker to propose the selection to him or her. If the job seeker accepts and explicitly acknowledges it, his or her personal data are communicated to the employer for the selection. If the selection ends successfully with recruitment,
the system communicates the mandatory information about the worker's contract to the pension registries, which update his or her job status to the actual position. End-users may also search for training opportunities and job re-qualification courses for the long-term unemployed. In that case, the user browses the federated portals, fills in the required fields with the search criteria, and starts the search. Schools and universities, after registering, are in charge of inserting both the training opportunities and the job re-qualification courses, and of providing all the information needed to apply.
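As a loose illustration of the local matching step described above, the sketch below stores curricula in a node's local database and returns the candidates whose public curricular data satisfy the vacancy criteria. All field names and the flat-dictionary schema are illustrative assumptions, not the actual Borsa Lavoro data model.

```python
# Minimal sketch of local job matching on public curricular data only.
# Contact data are kept out of the records returned to the employer.

def matches(curriculum, criteria):
    """A curriculum matches when every vacancy criterion is satisfied."""
    return all(curriculum.get(k) == v for k, v in criteria.items())

local_db = [
    {"id": "js-01", "skill": "welding", "region": "Lombardia"},
    {"id": "js-02", "skill": "nursing", "region": "Lazio"},
]
vacancy_criteria = {"skill": "welding"}

# Only identifiers of matching public profiles are forwarded for notification.
candidates = [c["id"] for c in local_db if matches(c, vacancy_criteria)]
```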
Distributed and Secure Web Based Transactions in the Italian Job Workfair

Having outlined the service model, this section focuses on the security issues addressed within the Italian employment information system. Two topics are introduced: security in federated authentication and security in exchanged messages. Concerning user authentication mechanisms in the Italian job workfair, security is assured through the use of SAML (Security Assertion Markup Language) tokens (Barbir, 2003). In a SAML-oriented architecture, end-users authenticate themselves to a specific node of the federation; that node releases a set of tokens that can be used at the remaining nodes of the federation as proof of authentication. The authentication activities, the exchange of tokens among the components of the federation, and the token exchange with the user's Web browser are managed transparently from the user's point of view. In the Italian job workfair the tokens are generated by two different entities: the national node, managed by the Ministry of Welfare, and the main pension registry. The national node is in charge of authenticating all the end-users except for the enterprises, which are the responsibility of the pension registry node. The architectural behavior is explained below through two scenarios. The first one concerns the federated authentication of an enterprise, registered in the pension registry
Table 2. First scenario of federated authentication

Hypotheses:
1. The enterprise accesses the national node of the network and asks for a service
2. Authentication is required to access the service
3. The enterprise is registered in the pension registry archives

Phases:
1. The enterprise requests the service, specifying that it is registered in the pension registry archives
2. The national node, unable to identify the user, redirects the user to the pension registry portal, which presents the log-in mask where the user should insert the credentials
3. The user can verify that he or she is on the correct Website of the pension registry through SSL-3 (Secure Sockets Layer 3) and then inserts the credentials
4. The pension registry node verifies the credentials and, if they are correct, generates a token and hands it to the national node
5. Upon receiving this entry token, the national node validates it and, if it is correct, generates its own SAML token and communicates it to the user's browser
6. The user can access the service requested at the beginning through the SAML token
archives, which requests a service available on the national node (first scenario). Table 2 summarizes the authentication phases of the first scenario. The second scenario shows what happens when, within a Web session (similar to that of the first scenario), the user asks for one more service (for which authentication is required) on another portal of the federation. The hypotheses are the same as in the first scenario, and the first six phases are similar; however, the process does not end after the sixth phase but continues. For example, the user asks for a service on a different domain, say the regional domain A. The user does not need to be authenticated twice in the work session to access the second service: in fact, thanks to the single sign-on mechanism (Gross, 2003), the user is identified once per session and is then recognized as a user of the entire information system, not of a single domain. Similar authentication mechanisms are used to recognize the identities of job seekers and other actors in the employment information system.
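The token exchange summarized in Table 2 can be sketched as follows. This is a hypothetical toy model: node names, the credential store, and hex-string tokens are illustrative assumptions, whereas the real system exchanges SAML assertions between the pension registry and the national node.

```python
import secrets
import time

class Node:
    """Toy federation node that issues and validates authentication tokens."""

    def __init__(self, name):
        self.name = name
        self.issued = {}      # token -> (subject, expiry timestamp)
        self.trusted = set()  # names of trusted federation nodes

    def authenticate(self, user, password, credential_store):
        # Phases 3-4: verify credentials and issue an entry token.
        if credential_store.get(user) != password:
            return None
        token = secrets.token_hex(8)
        self.issued[token] = (user, time.time() + 3600)
        return token

    def validate(self, token, issuer):
        # Phase 5: accept a token only if it comes from a trusted node
        # and has not expired.
        if issuer.name not in self.trusted:
            return None
        entry = issuer.issued.get(token)
        if entry and entry[1] > time.time():
            return entry[0]
        return None

# First scenario: the enterprise authenticates at the pension registry;
# the national node validates the entry token before issuing its own.
pension = Node("pension-registry")
national = Node("national")
national.trusted.add("pension-registry")

store = {"acme-spa": "s3cret"}
entry_token = pension.authenticate("acme-spa", "s3cret", store)
subject = national.validate(entry_token, pension)
```

Once `subject` is recovered, the national node would issue its own SAML token to the user's browser, which is what makes the subsequent single sign-on across domains possible.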
Since the federation is composed of many nodes and many end-user processes involve different nodes, the single sign-on mechanism proves to be a powerful instrument. Concerning the security issues related to the messages exchanged in the network, the message structure must adhere to the e-government envelope standards (Curbera, Duftler, Khalaf, et al., 2002). In particular, to guarantee message security, messages have to be structured according to the SOAP (Simple Object Access Protocol) v1.1 with attachments protocol (Barton, Thatte & Nielsen, 2000; Jepsen, 2001). The SOAP message is divided into a Header and a Body, each of which is divided into modules. On the one hand, the header of the SOAP message contains, alongside the fields carrying information for message management, the WS-Security (Web Services Security) module. The WS-Security module, even if optional, introduces into the SOAP message protection mechanisms for integrity, confidentiality, and authentication. WS-Security associates the security token with the messages using profiles designed for interoperability and cooperation in a federated environment. On the other hand, the Body of the SOAP message describes the message in all its fields, from the descriptive ones to the management of possible attachments.
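To make the Header/Body split concrete, the sketch below builds a minimal SOAP 1.1 envelope whose header carries a WS-Security element wrapping a security token. The namespaces are the standard SOAP and WSS ones; the token value and body payload are placeholders, and a real message would also sign and possibly encrypt parts of the envelope.

```python
from xml.etree import ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
        "oasis-200401-wss-wssecurity-secext-1.0.xsd")

def build_envelope(token, body_text):
    """Assemble Envelope -> Header/Security/Token and Body/Message."""
    env = ET.Element(f"{{{SOAP}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP}}}Header")
    security = ET.SubElement(header, f"{{{WSSE}}}Security")
    bst = ET.SubElement(security, f"{{{WSSE}}}BinarySecurityToken")
    bst.text = token
    body = ET.SubElement(env, f"{{{SOAP}}}Body")
    ET.SubElement(body, "Message").text = body_text
    return ET.tostring(env, encoding="unicode")

xml = build_envelope("SAML-TOKEN-123", "curriculum update")
```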
Security in Statistical Information Provisioning

As previously outlined, the Italian information system is fed by the data stored by users. Apart from the confidential data, the public data (those used for the job matching procedures) are also processed for statistical purposes. This means that one of the core components of the architecture is a statistical information system (Cesarini, Fugini, Maggiolini et al., 2006). Data collection and information flow analysis in the statistical information system are useful for the construction of synthetic indicators and for returning information on the current status of the system. This enables continuous monitoring of the system and
gives the possibility to tune the system according to the end-users' needs (Cesarini, Fugini & Mezzanzanica, 2006). The different classification systems present in the nodes of the federation raise some problems, in that data coming from different sources hardly match if used exactly as they were loaded. Furthermore, some data quality operations are needed before the data are used for statistical purposes. Specifically, data cleansing (syntactic and semantic cleaning) is managed both at the single-archive level and at the multi-archive level. All the archives are then integrated into a single archive from which statistical information is derived. The statistical information is made available to end-users according to their user profiles; that is, the information provided to job seekers differs from the information provided to employers or to private job agencies. The statistical information distribution mechanism relies on user authentication to select which information to provide.
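A loose illustration of profile-dependent statistics delivery is sketched below: the same pool of aggregated indicators is filtered by the authenticated user's profile before being returned. The profile names, indicator names, and values are invented for the example; the chapter does not publish the real system's indicators or access rules.

```python
# Hypothetical mapping from user profile to the indicators that profile
# is allowed to see after authentication.
VIEWS = {
    "job_seeker": {"open_vacancies", "training_courses"},
    "employer":   {"available_profiles", "wage_statistics"},
}

def statistics_for(profile, all_indicators):
    """Return only the indicators visible to the given user profile."""
    allowed = VIEWS.get(profile, set())
    return {k: v for k, v in all_indicators.items() if k in allowed}

stats = {"open_vacancies": 120, "available_profiles": 45,
         "training_courses": 12, "wage_statistics": 30.5}
seeker_view = statistics_for("job_seeker", stats)
```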
Future Trends

Employment information systems and, more generally, public employment support services are increasingly important within many countries' e-government plans. As cited in the previous sections, EURES, launched in 1993, was the first attempt to create a centralized European portal devoted to employment issues. But the future perspectives and research fields of the employment themes are evolving rapidly. In particular, another European project, SEEMP (Single European Employment Market Place), started in January 2006. The SEEMP project aims at creating, out of the various different local employment information systems and using distributed and cooperative applications, a unique platform able to support the matching of job offers and requests in the European environment. Consequently, the matching procedures have to access the information and data contained in a huge set of local databases built on diverse classification systems and ontologies. The development of the
SEEMP network will bring new data management approaches (using, for instance, Semantic Web services and ontologies). SEEMP may be seen as an additional ring in the described Italian multilayer structure: besides the provincial, regional, and national nodes, the new platform will add a European level to the entire system, where the coupling procedures between curricula vitae and job vacancies may be executed and more training and educational opportunities may be found in a wider environment.
Conclusion

This chapter discusses how security challenges have been addressed in the Italian employment information system. The result is a case study of secure transactions in a distributed and cooperative environment. Security has been implemented not only on the data stored in the local databases of the network (as required by national privacy laws), but also on the messages exchanged across the entire network through channel encryption mechanisms (such as the SSL protocol) and, above all, on user authentication. User authentication relies both on security credentials (user ID and password) and on security tokens that are generated and shared among the different (and independent) nodes of the network. The tokens let the user authenticate once to a node of the network and then use all the (independent) nodes of the network without authenticating twice. Moreover, the work has shown that employment information systems are a growing field in the delivery of digital services and statistical information to citizens, businesses, non-profit entities, and public administrations themselves. Many issues should be further studied, for example those concerning matching operations in a European environment, where reconciliation mechanisms have to preserve the features of the information flows and the autonomy of the organizations involved in the information exchange.
Future Research Directions

Concerning future work, we are investigating security issues in coopetitive information scenarios, where several competitors agree to exchange data among their information systems. The term coopetition is used in the management literature to refer to a hybrid behavior comprising competition and cooperation. Coopetition takes place when actors cooperate in some areas (e.g., data sharing for market analysis) while competing in others. A coopetitive model can be used to model the interaction taking place among the information systems of different entities (e.g., companies and public administrations) that partially collaborate (Cesarini & Mezzanzanica, 2006).
References

Barbir, A. (2003, June 14). Web services security: An enabler of semantic Web services. In Proceedings of the Workshop on Business Agents and the Semantic Web. Halifax, Canada.

Barton, J. J., Thatte, S., & Nielsen, H. F. (2000, December 11). SOAP messages with attachments. W3C Note. Retrieved May 20, 2006, from http://www.w3.org/TR/SOAP-attachments

Borsa Lavoro Lombardia. (2006). The network for services to employment in Lombardia. Retrieved April 20, 2006, from http://www.borsalavorolombardia.net

Cesarini, M., Fugini, M., Maggiolini, P., Mezzanzanica, M., & Nanini, K. (2006). The Italian e-government plans: Experiences in the job marketplace and in statistical information systems. In Proceedings of the European Conference on E-Government (pp. 57-66). Marburg, Germany.

Cesarini, M., Fugini, M., & Mezzanzanica, M. (2006). Analysis-sensitive conversion of administrative data into statistical information systems. In Proceedings of the International Conference on Enterprise Information Systems (pp. 293-296). Paphos, Cyprus.
Cesarini, M., & Mezzanzanica, M. (2006). Policy making for coopetitive information systems. In Proceedings of the International Conference on Information Quality. Boston.

CNIPA National Centre for Informatics in Public Administrations. (2002). Handbooks on networked services offered on Websites by Italian regions and autonomous provinces. Retrieved January 15, 2005, from http://www.cnipa.gov.it

Coulouris, G., Dollimore, J., & Kindberg, T. (1994). Distributed systems: Concepts and design. Addison-Wesley.

Curbera, F., Duftler, M., Khalaf, R., Mukhi, N., Nagy, W., & Weerawarana, S. (2002). Unraveling the Web services Web: An introduction to SOAP, WSDL, and UDDI. IEEE Internet Computing, 6(2), 86-93.

Dockery, A. M. (2002). The new enterprise incentive scheme: An evaluation and a test of the job network. Australian Journal of Labour Economics, 5(3), 351-372.

EURES European Employment Services. (2006). A network to help workers cross borders. Retrieved June 15, 2006, from http://www.europa.eu.int/eures

Fugini, M., Maggiolini, P., & Nanini, K. (2006). Supporting e-placement: Achievements in the Italian workfare project. In Proceedings of the International Conference on Enterprise Information Systems (pp. 245-250). Paphos, Cyprus.

Gross, T. (2003, December 8-12). Security analysis of the SAML single sign-on browser/artifact profile. In Proceedings of the 19th Annual Computer Security Applications Conference (pp. 298-307).

Jepsen, T. (2001). SOAP cleans up interoperability problems on the Web. IT Professional, 3(1), 52-55.

Kant, K., Iyer, R., & Mohapatra, P. (2000, September 17-20). Architectural impact of secure socket layer on Internet servers. In Proceedings of the International Conference on Computer Design (pp. 7-14). Austin, Texas.
Liikanen, E. (2003, April). E-government and the European Union. UPGrade, 4(2), 7-11.

Milis, G. M., Symeonidis, A. L., & Mitkas, P. A. (2003, November 21-23). Ergasiognomon: A model system of advanced digital services designed and developed to support the job marketplace. In Proceedings of the 9th Pan-Hellenic Conference on Informatics (pp. 388-401). Thessalonica, Greece.

Mylopoulos, J., & Papazoglou, M. (1997). Cooperative information systems. IEEE Expert.

SINTESI Integrated System for Services to Employment. (2004). SINTESI: The e-government for employment. Retrieved October 20 from http://www.sintesi.provincia.milano.it/portalemilano/pdf

Webster, E., & Harding, G. (2001). Outsourcing public employment services: The Australian experience. The Australian Economic Review, 34(2), 23.
Further Reading

CNIPA National Centre for Informatics in Public Administrations. (2005, January). Handbooks on networked services offered on Websites by Italian regions and autonomous provinces. Retrieved January 15, 2005, from http://www.cnipa.gov.it

Curbera, F., Duftler, M., Khalaf, R., Nagy, W., Mukhi, N., & Weerawarana, S. (2002). Unraveling the Web services Web: An introduction to SOAP, WSDL, and UDDI. IEEE Internet Computing, 6(2).

Hoffmann, E. (1995). We must use administrative data for official statistics—but how should we use them? Statistical Journal of the United Nations/ECE, 12, 41-48.

Rosenberg, J., & Remy, D. (2004). Securing Web services with WS-Security: Demystifying WS-Security, WS-Policy, SAML, XML signature, and XML encryption. Sams Publishing.
Statistics Denmark. (2000). The use of administrative sources for statistics and international comparability (invited paper). In Conference of European Statisticians, 48th plenary session. Paris.

Webster, E., & Harding, G. (2001). Outsourcing public employment services: The Australian experience. The Australian Economic Review, 34(2), 23.

Welch, V., Siebenlist, F., Foster, I., Bresnahan, J., Czajkowski, K., Gawor, J., Kesselman, C., et al. (2003). Security for grid services. In Proceedings of the 12th IEEE International Symposium on High Performance Distributed Computing.
Terms and Definitions

Cooperative Information System: A system that correlates pre-existing and autonomous information and computing resources belonging to different organizational subjects.

Distributed Information System: A system in which applications (cooperating with one another) reside on different computing nodes and the information, whose ownership is unique, is hosted on different computing nodes.

SSL: Secure Sockets Layer is a protocol used to guarantee the privacy and security of communication on the Internet. It allows client/server applications to communicate safely, using encryption mechanisms that protect the transactions from intrusion, manipulation, and falsification of the messages.

SSO: Single sign-on is a specialized form of software authentication that enables a user to authenticate once and gain access to the resources of multiple software systems.

Statistical Information Systems: Information systems designed to collect, store, manage, and distribute statistical information.

Token: A tracer or tag attached by the receiving server to the address (URL) of a page
requested by a user. A token lasts only through a continuous series of requests by a user, regardless of the length of the interval between requests.

Web Service: A collection of protocols and standards used for exchanging data between applications or systems. Software applications
written in various programming languages and running on various platforms can use Web services to exchange data over computer networks like the Internet in a manner similar to inter-process communication on a single computer.
Chapter XXI
Interactive Personalized Catalogue for M-Commerce Sheng-Uei Guan Brunel University, UK Yuan Sherng Tay University of Singapore, Singapore
Introduction

M-commerce possesses two distinctive characteristics that distinguish it from traditional e-commerce: the mobile setting and the small form factor of mobile devices. Of these, the size of a mobile device is likely to remain largely unchanged due to the tradeoff between size and portability. Small screen size and limited input capabilities pose a great challenge for developers to conceptualize user interfaces that offer good usability while working within the size constraints of the device. In response to the limited screen size of mobile devices, there has been an unspoken consensus that certain tools must be made available to aid users in coping with the relatively large volume of information. Recommender systems have been proposed to narrow down choices before presenting them to the user (Feldman, 2000). We propose a product catalogue in which browsing is directed by an integrated recommender system. The recommender system takes incremental feedback in return for browsing assistance. Product appearance in the catalogue
will be dynamically determined at runtime based on user preference detected by the recommender system. The design of our hybrid m-commerce catalogue recommender system investigated the typical constraints of m-commerce applications to conceptualize a suitable catalogue interface. The scope was restricted to the case of a personal digital assistant (PDA) as the mobile device. Thereafter, a preference detection technique was developed to serve as the recommender layer of the system.
Background

In a study conducted by Bryan and Gershman (1999), a new user behavior termed opportunistic exploration was identified, in which users have multiple, ill-defined, overlapping interests. Throughout the course of browsing, exposure to items affects interests, and interest may evolve due to exposure or whim. Tateson and Bonsma (2003) emphasized that the paradigm of online shopping is fundamentally different from that of information retrieval.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Despite the importance of having a well-designed online catalogue that supports the shopping behavior of users, the challenge of including such browsing capabilities in m-commerce is great, given that the small screen size of mobile devices severely limits the number of products that may be presented on-screen. The predominant strategy of organizing products into narrow categories has many problems (Lee, Lee & Wang, 2004). The alternative solution of interactive catalogues (Tateson & Bonsma, 2003) allows for fluid navigation in the product space, whereby users are given the freedom to shape the browsing process and redirect it when their interests change. Instead of having the user browse painstakingly through the product catalogue of a retailer, a recommender system can be used as a subsystem of the interactive catalogue to narrow down the choices before presenting them to the user. Recommender systems perform the role of sales agents by first understanding a user's preferences through querying and profiling, and subsequently presenting information or products of relevance to the user (Schafer, Konstan & Riedl, 2001). Recommender systems have long been regarded as a highly desirable feature of e-commerce, and there are numerous ongoing studies to improve recommender technology in the context of e-commerce (Montaner, Lopez & Lluis, 2003). However, the approaches of such studies are seldom directly applicable to the domain of m-commerce. With respect to the m-commerce constraints, a "best effort" recommender system that makes do with whatever information is available serves as an interesting alternative to the "best quality" emphasis of current recommendation technology. A preference detection technique was developed to serve as the recommender layer of the proposed system.
Description of Interactive Catalogue

The interface of the catalogue is divided into three components: visual presentation, browsing process, and feedback mechanism.
Figure 1. Screenshot
Presentation

Given the constraint of a PDA screen, the main concern of our design is to maximize emphasis on product presentation while simplifying the control elements. Human cognition is better adapted to processing visual images than textual information (Lee et al., 2004). Visual elements are thus useful mechanisms to improve the usability of a catalogue. To save space while facilitating easy examination of products, we incorporate a product information panel. Figure 1 shows a screenshot of the implemented user interface.
Browsing Process

Browsing naturally induces a sense of flow, which may be imagined as a navigation process through the product space. The main challenge in the design of such a navigation system is to define the relation of products with respect to one another. The differing viewpoints of people dictate that each individual sees the product relations from a different perspective. One method of custom-defining product relations is through interactive
critiquing of products (Burke, 2002). Interactive critiquing allows a user to express the goals that are not satisfied by the current items. Another method to understand the preference of a user is through clustering. In our case, clustering may be used to group items that receive similar feedback from a user, in an attempt to identify the underlying pattern that matches the preference of the user. The sharp focus on a single point in the product space, a feature of interactive critiquing, makes it unsuitable for expansive browsing; in our catalogue, one desirable feature is an adaptable focus that allows the user to glance at the entire product range as well as zoom in on a few products of interest. We define two parameters in our browsing: breadth and preference. Breadth is a measure of diversity in the product presentation, whereas preference is the inferred interest of the user. Breadth needs to change according to the state of browsing. As the user increasingly grasps the available choices, breadth should be narrowed down to focus on products recommended on the basis of the user's preference, allowing the user to discover products of increasing interest and, at the same time, facilitating a comparison of close alternatives to aid the purchase decision. At any time, should a shift be detected in the user's interest, breadth has to be relaxed accordingly to allow the user the possibility to explore again products of a differing nature. To implement such a mechanism, we divide each page of the catalogue into two portions, the first containing products recommended based on the detected preference of the user and the second containing randomly sampled products. Breadth is defined as the size of the latter portion.
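The page-composition mechanism just described can be sketched as follows: a page of size n holds (n − breadth) recommended products and `breadth` randomly sampled ones. Function and variable names are illustrative, and the actual recommendation scoring is elided behind a precomputed `recommended` list.

```python
import random

def compose_page(recommended, catalogue, page_size, breadth):
    """Fill a catalogue page: recommended items first, random items after."""
    picks = recommended[: page_size - breadth]
    # Sample the breadth portion from items not already on the page.
    pool = [p for p in catalogue if p not in picks]
    picks += random.sample(pool, min(breadth, len(pool)))
    return picks

catalogue = list(range(20))
recommended = [3, 7, 11, 15]
page = compose_page(recommended, catalogue, page_size=6, breadth=2)
```

Relaxing breadth toward `page_size` yields an exploratory page of mostly random products; narrowing it toward zero focuses the page on the detected preference.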
Feedback Mechanism

In our case, we note that the most intuitive and compact feedback method is for a user to comment directly on the products on display, as proposed by Burke, Hammond and Young (1997) in their
case-based critiquing approach; for simplification we adopt a bipolar rating system. Using the bipolar rating system, we obtain a set of selected products and its complement. The selected set is derived through explicit feedback by the user, which establishes it as a strong indicator of user interest. The converse, however, is not necessarily true for the complementary set of non-selected products. The difficulty with non-selected products is the relativistic nature of product selection: a user initially selects what appears to be the best available option, and with greater exposure to relevant products, it is natural for a user to become more discerning in making a choice. It is thus inaccurate to conclude that non-selected products are disliked by the user. In view of the ambiguity in interpreting the set of non-selected products, the approach adopted in this paper is to analyze only the selected set.
Preference Detection

Clustering is the conceptual grouping of similar products. In our case, we seek to identify a few dominant areas of interest associated with a user so as to find relevant products for recommendation. To do so, we perform clustering on the set of positive examples volunteered by the user.
Product Ontology

Products are represented through the specification of an encoding scheme that maps products from the same category into a conceptual product space. The encoding scheme is responsible for the enumeration of product attributes and, in so doing, determines the relationship between products. We adopted a static encoding scheme in the form of a product ontology (Guan & Zhu, 2004; Smith, 2003). In our context, a product ontology is simply a descriptive tree that defines the key attributes of each product category as well as their relevant enumeration schemes. Figure 2 shows an example of a product ontology.
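A minimal sketch of such a static encoding scheme is shown below: the ontology enumerates each attribute's values, and a product is mapped to an ordered tuple of attribute codes. The Digital Camera category loosely mirrors Figure 2, but the exact attributes and enumerations are illustrative assumptions.

```python
# Hypothetical product ontology: category -> attribute -> enumeration.
ONTOLOGY = {
    "digital_camera": {
        "price":      {"low": 1, "mid": 2, "high": 3},
        "resolution": {"low": 1, "mid": 2, "high": 3},
    }
}

def encode(category, product):
    """Map a product's attribute values to an ordered tuple of codes."""
    schema = ONTOLOGY[category]
    return tuple(schema[attr][product[attr]] for attr in schema)

p = encode("digital_camera", {"price": "mid", "resolution": "high"})
```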
Figure 2. Product ontology (a descriptive tree spanning categories such as PDA and Digital Camera; the Digital Camera category defines attributes such as Price and Resolution, enumerated Low (1), Mid (2), High (3))

Product Definition

Let p denote a product and P the product space such that p ∈ P. A product is characterized by a set of attributes as well as their associated values. We define an attribute as a particular aspect of a product's characteristics (e.g., weight, color), while an attribute instance is a value taken by a product attribute (e.g., 100 g, red). Let α denote an attribute instance and A the domain that the attribute belongs to, such that α ∈ A. A product space P is defined as a vector space of η dimensions, where η is the total number of unique attributes possessed by products in P:

P : A1 × A2 × ⋯ × Aη

Products are mapped into the product space through a predefined product ontology. Products may then be represented by ordered η-tuples, with the ith value representing the attribute instance for the ith attribute of the product. We shall refer to this η-tuple as the product characteristic:

p : {α1, α2, …, αη}, αi ∈ Ai

A product is assumed to be entirely characterized by the set of ordered attribute instances it is associated with.

Cluster Definition
To facilitate the clustering of products, we adopt the concept of a schema proposed by John Holland (1975) in his Schema Theorem. In our context, a schema is a template that partially specifies a set of product characteristics. This is made possible by the introduction of wildcards (∗), which match any value. A schema effectively defines the subset of the product space containing all products that match the schema. Let χ denote a schema and Χ the schematic domain such that χ ∈ Χ:

Χ : G1 × G2 × ⋯ × Gη, where Gj = Aj ∪ {∗}

χ : {γ1, γ2, …, γη}, where γj ∈ Gj

To determine whether a product p matches a schema χ, we define the following functions:

δ(α, γ) = 1 if α = γ or γ = ∗, and 0 otherwise    (1)

δmatch(p, χ) = ∏_{j=1}^{η} δ(αj, γj)    (2)

A schema serves as a useful means to define a cluster, providing both a signature to determine membership in the cluster and a definition of product similarity. Products within a cluster are similar in the sense that they match the schema representative of the cluster. In this paper, we adopt the schema as the sole definition of a cluster, χ ≡ C, an approach we term schematic clustering:

p ∈ C ⇔ δmatch(p, χ) = 1

p ∉ C ⇔ δmatch(p, χ) = 0
Scoring

With the definition of a cluster in place, the best cluster that generalizes a sequence of user selections S has to be found. For this purpose, we need to be able to evaluate the relative quality of each possible cluster as a generalization of S.
Span

Let S be mapped into an n×η matrix {αij}, such that αij denotes the jth attribute instance of the ith product. Adapting the match function (2) for use on a matrix:

δmatch(pi, χ) = ∏_{j=1}^{η} δ(αij, γj)    (3)

We define span as the number of matches a schema has on a set of products:

σ(S, χ) = Σ_{i=1}^{n} δmatch(pi, χ)    (4)
Given two clusters with different spans, we derive greater confidence in the cluster with the larger span as an area of interest with greater significance. For example, if a user selected six products, of which five belong to cluster A while only one belongs to cluster B, we naturally conclude that cluster A serves as a better representation of the user's area of interest. Span thus serves as an important measure of quality.
Order

Given a schema, we define order as the number of non-wildcard values present in the schema:

δwildcard(γ) = 1 if γ ≠ ∗; 0 if γ = ∗    (5)

d(χ) = Σ_{j=1}^{η} δwildcard(γj)    (6)
Considering the definition of span, it is clear that the number of wildcards present in a schema is proportionate to the chances of the schema having a large span. However, having too many wildcards may not be desirable, because it dilutes the interpretation of the area of interest. For example, the null schema [∗,∗,…,∗] is undoubtedly the schema with the largest span in any situation, for it encompasses the entire product space. However, the null schema gives no inference as to where the actual area of interest may lie. Assuming that a product fits the cluster [1,∗,∗,∗,∗] as well as the cluster [1,2,3,∗,∗], we see that the latter is a more precise interpretation of the area of interest because it has a more exclusive membership. Order thus serves as a measure of quality equally important as span.
Span-Order Tradeoff

Having established that span and order are two competing objectives, it is not possible to maximize both measures simultaneously. To better distinguish the quality of one schema from another, we introduce another measure called coverage:

κ(S, χ) = σ(S, χ) · d(χ)    (7)

Γ2(S, χ) = κ(S, χ)    (8)

Coverage eliminates schemas with extreme span or order, giving preference to those with a balance of the two. However, in certain cases it is still not possible to discern between schemas with equally good balance. To do so, we have to decide whether to give greater priority to span or to order. Since span represents the level of confidence in an area of interest, we adopt a prudent approach by giving it the higher priority:

Γ3(S, χ) = κ(S, χ) + μ · σ(S, χ), where 0 < μ < 1    (9)
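The quality measures (4)-(9) can be sketched as follows. The block is self-contained; the wildcard-as-None convention and the function names are illustrative:

```python
WILDCARD = None  # stands in for the * symbol

def delta_match(product, schema):
    # Eqs. (2)/(3): 1 iff every non-wildcard schema value matches exactly
    return int(all(g is WILDCARD or a == g for a, g in zip(product, schema)))

def span(S, schema):
    # Eq. (4): number of selected products matching the schema
    return sum(delta_match(p, schema) for p in S)

def order(schema):
    # Eq. (6): number of non-wildcard values in the schema
    return sum(1 for g in schema if g is not WILDCARD)

def coverage(S, schema):
    # Eq. (7): span x order, balancing the two competing objectives
    return span(S, schema) * order(schema)

def score(S, schema, mu=0.5):
    # Eq. (9): coverage plus a span tie-breaker, with 0 < mu < 1
    return coverage(S, schema) + mu * span(S, schema)

S = [(1, 2, 3), (1, 2, 4), (1, 5, 6)]
print(score(S, (1, 2, WILDCARD)))        # span 2, order 2: 2*2 + 0.5*2 = 5.0
print(score(S, (1, WILDCARD, WILDCARD))) # span 3, order 1: 3*1 + 0.5*3 = 4.5
```

Note how the more specific schema wins here even though the null-leaning schema has the larger span, exactly the tradeoff coverage is meant to arbitrate.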
Noise Correction

In the context of data processing, it is usually inevitable that the data be distorted by a certain level of noise due to uncontrollable factors. In our case, noise may be introduced either through ignorance on the part of the user, or through the lack of appropriate choices for the user to express a preference freely. To overcome this limitation, we introduce a noise threshold K to relax the condition for a match between a schema and a product:

γ(pi, χ) = Σ_{j=1}^{η} (1 − δ(αij, γj))    (10)

δ′match(pi, χ) = 1 if γ(pi, χ) ≤ K; 0 if γ(pi, χ) > K, where 0 ≤ K < η    (11)

Span′

σ′(S, χ) = Σ_{i=1}^{n} δ′match(pi, χ)    (12)

With such an allowance for noise, the scoring system is able to pick up the optimum schema that matches the user preference, because the noise threshold allows schemas to be credited for partial matches with the selected products. However, the noise threshold also introduces ambiguity into the assessment of schemas: a schema that takes advantage of the threshold in an unwarranted context stands to gain a higher coverage. One main reason is the simple definition of coverage as a product of span and order, which gives unnecessary credit to schema values that do not match the actual attribute instance value. We therefore credit only exact matches of non-wildcard values:

δpt(α, γ) = 1 if α = γ and γ ≠ ∗; 0 otherwise    (13)

Coverage′

κ′(S, χ) = Σ_{i=1}^{n} Σ_{j=1}^{η} δ′match(pi, χ) δpt(αij, γj)    (14)

With this redefinition of coverage, the score improves by giving less emphasis to matches that make use of the noise threshold. Despite being more equitable, the redefined coverage is still incapable of differentiating between the sensible use of the noise threshold to accommodate noise and the abuse of it to increase coverage. To correct this, we include a penalty term that penalizes usage of the noise threshold:

Penalty

π(S, χ) = −Σ_{i=1}^{n} δ′match(pi, χ) γ(pi, χ)    (15)

Score

Γ(S, χ) = σ′(S, χ) + μ · κ′(S, χ) + λ · π(S, χ), where 0 < μ < 1, λ > 1    (16)

Global Optimization

Having defined a scoring function to evaluate the relative superiority of each schema, we seek an algorithm to search for the best schema given a sequence of user selections. An evolutionary algorithm (EA) was found to be an appropriate choice in our context. In particular, we chose the genetic algorithm (GA), a form of EA, for the optimization of our scoring function.

Genetic Algorithm

By assigning the value zero to the wildcard, the η-tuple of positive integer values of a schema is encoded directly into a chromosome as an array of integers. Figure 3 illustrates the encoding process.

Figure 3. Genetic encoding: the schema {∗, ∗, 1, 2, 3, ∗} is encoded as the chromosome [0, 0, 1, 2, 3, 0].
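The noise-tolerant measures (10)-(16) can be sketched in Python as follows (the wildcard-as-None convention, function names, and the parameter defaults mu=0.5 and lam=2.0 are illustrative assumptions; the chapter only constrains 0 < μ < 1 and λ > 1):

```python
WILDCARD = None  # stands in for the * symbol

def mismatches(p, schema):
    # Eq. (10): non-wildcard positions where the product disagrees with the schema
    return sum(1 for a, g in zip(p, schema) if g is not WILDCARD and a != g)

def match_noisy(p, schema, K):
    # Eq. (11): relaxed match, tolerating up to K mismatches (0 <= K < eta)
    return 1 if mismatches(p, schema) <= K else 0

def span_noisy(S, schema, K):
    # Eq. (12): span under the relaxed match
    return sum(match_noisy(p, schema, K) for p in S)

def coverage_noisy(S, schema, K):
    # Eqs. (13)-(14): credit only exact non-wildcard agreements of matching products
    return sum(match_noisy(p, schema, K)
               * sum(1 for a, g in zip(p, schema) if g is not WILDCARD and a == g)
               for p in S)

def penalty(S, schema, K):
    # Eq. (15): negative, proportional to how much the threshold was actually used
    return -sum(match_noisy(p, schema, K) * mismatches(p, schema) for p in S)

def score(S, schema, K, mu=0.5, lam=2.0):
    # Eq. (16): 0 < mu < 1, lambda > 1
    return (span_noisy(S, schema, K)
            + mu * coverage_noisy(S, schema, K)
            + lam * penalty(S, schema, K))

S = [(1, 2, 3), (1, 2, 9), (7, 8, 9)]
print(score(S, (1, 2, 3), K=1))  # 2 + 0.5*5 + 2.0*(-1) = 2.5
```

The second product matches only via the threshold, so it contributes to span but also triggers the penalty, which is the behavior the chapter motivates.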
Figure 4. GA pseudocode

INITIALIZE population with random candidate solutions
REPEAT until TERMINATION CONDITION:
  1. EVALUATE chromosomes
  2. SELECT parents
  3. RECOMBINE pairs of parents
  4. MUTATE offspring
  5. EVALUATE offspring
  6. SELECT survivors to next generation
Having defined the chromosomes, we apply the typical genetic algorithm as summarized in Figure 4. Evaluation is done using the scoring function defined in the previous section.
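The steps of Figure 4 can be sketched as a minimal GA in Python. Population size, rates, tournament selection, and elitist survivor selection are illustrative choices not fixed by the text, and the toy fitness stands in for the scoring function Γ:

```python
import random

def run_ga(fitness, eta, values, pop_size=30, generations=50,
           cx_rate=0.9, mut_rate=0.1):
    """Minimal GA following Figure 4. A chromosome is a list of eta integers;
    0 encodes the wildcard and positive integers encode attribute instances."""
    alleles = [0] + list(values)
    pop = [[random.choice(alleles) for _ in range(eta)] for _ in range(pop_size)]
    for _ in range(generations):
        # Steps 1-2: evaluate chromosomes and select parents (binary tournament)
        def pick():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = pick(), pick()
            # Step 3: recombine pairs of parents (one-point crossover)
            if random.random() < cx_rate and eta > 1:
                cut = random.randrange(1, eta)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            # Step 4: mutate offspring (reset one random gene)
            for c in (c1, c2):
                if random.random() < mut_rate:
                    c[random.randrange(eta)] = random.choice(alleles)
            offspring += [c1, c2]
        # Steps 5-6: evaluate offspring and select survivors (elitist truncation)
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

# Toy fitness: prefer chromosomes close to the target schema [1, 2, *, *]
target = [1, 2, 0, 0]
best = run_ga(lambda c: sum(a == b for a, b in zip(c, target)),
              eta=4, values=[1, 2, 3])
print(best)  # best chromosome found; 0 decodes back to the wildcard *
```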
Performance

To determine the performance of the algorithm, we define accuracy and efficiency as the performance measures. Accuracy is the frequency with which the results produced by the genetic algorithm match the actual global optimum; we calculate it as the average percentage of such matches. Efficiency is the amount of computational effort required to execute the algorithm; we calculate it as the average number of generations.
Prototype

A prototype of the catalogue was developed for testing purposes. Though fully implemented in Java, the interface was designed to be easily presentable in HTML format. In an actual implementation, the catalogue software is intended to reside on a Web server and be accessed remotely via a PDA.
Evaluation

To evaluate the effectiveness of our catalogue, a test module was developed that simulates the responses of users. For each simulated user, a preference is generated based on a random subset of attributes from a random product in the database. The simulated user then browses through the catalogue, selecting products that bear resemblance to the preference. For the purpose of testing, a product database was created as a replica of three categories of products available from an online shopping search engine, BizRate (www.bizrate.com, 2003). The categories are digital cameras (249 entries), MP3 players (209 entries), and PDAs (95 entries). To assess the performance of the recommender, we measured the likelihood of the recommender arriving at a reasonable interpretation of a user's preference. This likelihood depends on the number of pages that a user has browsed through: the longer a user interacts with the catalogue, the more information is available for understanding the user. We thus define accuracy as the probability of a successful preference interpretation at a given browsing depth, where browsing depth is the number of catalogue pages a user has browsed through. The results proved promising. The PDA category attained the best overall performance (accuracy above 90 percent), while the MP3 category proved definitively more difficult to browse (accuracy just above 80 percent) compared to the other categories. The digital camera category displayed good performance at low browsing depth but lagged behind in the later part of the test, with final accuracy around 85 percent. The initial lead may be attributed to a large pool of products with similar counterparts, while the eventual lag implies the presence of unique products that bear no resemblance to others.
Future Trends

The proposed system can be useful when deployed not only in e- or m-commerce devices for product browsing, but also in public information systems. For example, such an interactive, personalized system, when incorporated into public library catalogue systems, will enhance the searching process by allowing the systems to learn user interests and reflect them during the search procedure to make it more effective. Similarly, the proposed system will complement the function of many public information portals where search engines are used. Search engines that are capable of learning specific user preferences will reduce irrelevant search results within a short time frame while focusing on the specific relevant items that would interest the user. The potential social implications of personalized systems such as the one proposed in this article could be wide-ranging and beneficial to the general public. For example, computer user interfaces can be improved if such an intelligent preference-learning engine is embedded; this will break down the human-computer barrier further, leading to wider usage of computers in society. People who previously resisted the use of computers in their work may change their minds when computers change their ways of interaction and become smarter and easier to use. New professions such as online psychology may even arise: psychologists with access to smart preference-learning systems will be able to learn and analyze online user behavior better and conduct business remotely.
Conclusion

The approach in this study focused on realizing the possibility of a more complete m-commerce environment. This outlook is shared by other researchers who attempt to tackle the same problem with different strategies (Guan, Ngoo, & Zhu, 2000; Guan, Tan, & Chan, 2004; Guan, Chan, & Zhu, 2005). Our approach differs in the absence of a passive viewing mode, as the context of m-commerce makes it unfeasible for users to concentrate on the screen for an extended period of time. Interaction control was greatly simplified in our catalogue. Through the use of recommender technology, we streamlined the browsing process by using a reduced form of feedback. In summary, this paper highlighted the need for specialized applications in the domain of m-commerce. In particular, a novel method of product catalogue navigation with the aid of a recommender system has been proposed. This approach emphasizes a minimal-attention user interface that allows users to browse through a catalogue quickly with as little cognitive effort as possible. The associated recommender system adopts a best-effort strategy that accommodates any level of user participation. It has been shown to be capable of detecting non-linear preferences in a set of incremental feedback, as well as tolerating noisy input produced by a user.

Future Research Directions

One drawback of this design is the danger of using a predefined product ontology in the enumeration of attribute instances. This leads to stereotypic preference interpretation whose relevance depends largely on how the product ontology is defined. A solution is to use an adaptive product ontology, where product attributes may evolve dynamically. Preference interpretation will then be aligned to the up-to-date product ontology and will need corrections when inconsistencies between the old and new product ontologies are detected.
References

Bryan, D., & Gershman, A. (1999). Opportunistic exploration of large consumer product spaces. In Proceedings of the 1st ACM Conference on Electronic Commerce (pp. 41-47).

Burke, R. D. (2002). Interactive critiquing for catalog navigation in e-commerce. Artificial Intelligence Review, 18, 245-267.

Burke, R. D., Hammond, K. J., & Young, B. C. (1997). The FindMe approach to assisted browsing. IEEE Expert: Intelligent Systems and Their Applications, 12(4), 32-40.

Feldman, S. (2000). Mobile commerce for the masses. IEEE Internet Computing, 4, 75-76.

Guan, S.-U., Chan, T. K., & Zhu, F. (2005). Evolutionary intelligent agents for e-commerce: Generic preference detection with feature analysis. Electronic Commerce Research and Applications, 4(4), 377-394.

Guan, S.-U., Ngoo, C. S., & Zhu, F. (2000). Handy broker: An intelligent product-brokering agent for m-commerce applications with user preference tracking. Electronic Commerce Research and Applications, 1, 314-330.

Guan, S.-U., Tan, P. C., & Chan, T. K. (2004). Intelligent product brokering for e-commerce: An incremental approach to unaccounted attribute detection. Electronic Commerce Research and Applications, 3(3), 232-252.

Guan, S.-U., & Zhu, F. (2004). Ontology acquisition and exchange of evolutionary product-brokering agent. Journal of Research and Practice in Information Technology, 36(1), 35-46.

Holland, J. H. (1975). Adaptation in natural and artificial systems. Ann Arbor: The University of Michigan Press.

Lee, J. Y., Lee, H. S., & Wang, P. (2004). An interactive visual interface for online product catalogs. Electronic Commerce Research, 4, 335-358.

Montaner, M., Lopez, B., & Lluis, J. (2003). A taxonomy of recommender agents on the Internet. Artificial Intelligence Review, 19, 285-330.

Schafer, J. B., Konstan, J., & Riedl, J. (2001). E-commerce recommendation applications. Data Mining and Knowledge Discovery, 5, 115-153.

Smith, B. (2003). Ontology. In Blackwell guide to the philosophy of computing and information (pp. 155-166). Oxford: Blackwell.

Tateson, R., & Bonsma, E. (2003). ShoppingGarden: Improving the customer experience with online catalogues. BT Technology Journal, 21(4), 84-91.

www.bizrate.com (2004).
Further Reading

Amoroso, D. L., & Reinig, B. A. (2004). Personalization management systems. In Proceedings of the 37th Annual Hawaii International Conference on System Sciences.

Babaguchi, N., Ohara, K., & Ogura, T. (2003). Effect of personalization on retrieval and summarization of sports video. In Proceedings of the 2003 Joint Conference of the Fourth International Conference on Information, Communications and Signal Processing, and the Fourth Pacific Rim Conference on Multimedia, Vol. 2 (pp. 940-944).

Boll, S. (2002). Modular content personalization service architecture for e-commerce applications. In Proceedings of the Fourth IEEE International Workshop on Advanced Issues of E-Commerce and Web-Based Information Systems (WECWIS 2002) (pp. 213-220).

Chen, Q., & Guan, S. U. (2004). Incremental multiple objective genetic algorithms. IEEE Transactions on Systems, Man and Cybernetics, Part B, 34(3), 1325-1334.

Dezhi, W., Il, I., Tremaine, M., Instone, K., & Turoff, M. (2003). A framework for classifying personalization schemes used on e-commerce Web sites. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences, 12-23.

Evans, A., Fernandez, M., Vallet, D., & Castells, P. (2006). Adaptive multimedia access: From user needs to semantic personalization. In Proceedings of the 2006 IEEE International Symposium on Circuits and Systems (ISCAS 2006).

Guan, S.-U., & Hua, F. (2003). A multi-agent architecture for electronic payment. International Journal of Information Technology and Decision Making (IJITDM), 2(3), 497-522.

Guan, S.-U., Tan, S. L., & Hua, F. (2004). A modularized electronic payment system for agent-based e-commerce. Journal of Research and Practice in Information Technology, 36(2), 67-87.
Guan, S.-U., Wang, T., & Ong, S.-H. (2003). Migration control for mobile agents based on passport and visa. Future Generation Computer Systems, 19(12), 173-186.

Guan, S.-U., & Yang, Y. (1999). SAFE: Secure-roaming agent for e-commerce. 26th International Conference on Computers & Industrial Engineering, Australia.

Guan, S.-U., & Yang, Y. (2004). Secure agent data integrity shield. Electronic Commerce Research and Applications, 3(3), 311-326.

Guan, S.-U., & Zhu, F. (2002). Agent fabrication and its implementation for agent-based electronic commerce. International Journal of Information Technology and Decision Making (IJITDM), 1(3), 473-489.

Guan, S. U., Zhu, F. M., & Ko, C. C. (2000). Agent fabrication and authorization in agent-based electronic commerce. In Proceedings of the International ICSC Symposium on Multi-Agents and Mobile Agents in Virtual Organizations and E-Commerce, Wollongong, Australia, 528-534.

Guan, S.-U., Zhu, F., & Maung, M. T. (2004). A factory-based approach to support e-commerce agent fabrication. Electronic Commerce Research and Applications, 3(1), 39-53.

Jorstad, I., Van Thanh, D., & Dustdar, S. (2005). The personalization of mobile services. IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob 2005), Vol. 4, 59-65.

Kar, Y. T., & Shuk, Y. H. (2003). Web personalization: Is it effective? IT Professional, 5(5), 53-57.

Kibum, K., Carroll, J. M., & Rosson, M. B. (2002). An empirical study of Web personalization assistants supporting end-users in Web information systems. In Proceedings of the IEEE 2002 Symposia on Human Centric Computing Languages and Environments, 60-62.

Koutrika, G., & Ioannidis, Y. (2004). Personalization of queries in database systems. In Proceedings of the 20th International Conference on Data Engineering, 597-608.

Panayiotou, C., Andreou, M., Samaras, G., & Pitsillides, A. (2005). Time-based personalization for the moving user. International Conference on Mobile Business (ICMB 2005), 128-136.

Poh, T. K., & Guan, S. U. (2000). Internet-enabled smart card agent environment and applications. In S. M. Rahman & M. Raisinghani (Eds.), Electronic commerce: Opportunities and challenges. Hershey: Idea Group Publishing.

Sim, L. W., & Guan, S.-U. (2002). An agent-based architecture for product selection and evaluation under e-commerce. In S. Nansi (Ed.), Architectural issues of Web-enabled electronic business (pp. 333-346). Hershey: Idea Group Publishing.

Specht, G., & Kahabka, T. (2000). Information filtering and personalisation in databases using Gaussian curves. 2000 International Database Engineering and Applications Symposium, 16-24.

Tan, X., Yao, M., & Xu, M. (2006). An effective technique for personalization recommendation based on access sequential patterns. IEEE Asia-Pacific Conference on Services Computing (APSCC '06), 42-46.

Treiblmaier, H., Madlberger, M., Knotzer, N., & Pollach, I. (2004). Evaluating personalization and customization from an ethical point of view: An empirical study. In Proceedings of the 37th Annual Hawaii International Conference on System Sciences.

Tseng, B. L., Lin, C.-Y., & Smith, J. R. (2002). Video personalization and summarization system. IEEE Workshop on Multimedia Signal Processing, 424-427.

Wang, Y., Kobsa, A., Van der Hoek, A., & White, J. (2006). PLA-based runtime dynamism in support of privacy-enhanced Web personalization. 10th International Software Product Line Conference.

Yang, Y., & Guan, S. U. (2000). Intelligent mobile agents for e-commerce: Security issues and agent transport. In S. M. Rahman & M. Raisinghani (Eds.), Electronic commerce: Opportunities and challenges. Hershey: Idea Group Publishing.
Yang, Y. (2006). Provisioning of personalized pervasive services: Daidalos personalization functions. 2006 1st International Symposium on Pervasive Computing and Applications, 110-115.

Yee, G. (2006). Personalized security for e-services. The First International Conference on Availability, Reliability and Security (ARES 2006).

Yu, P. S. (1999). Data mining and personalization technologies. In Proceedings of the 6th International Conference on Database Systems for Advanced Applications, 6-13.

Zhao, Y., Yao, Y., & Zhong, N. (2005). Multilevel Web personalization. In Proceedings of the 2005 IEEE/WIC/ACM International Conference on Web Intelligence, 649-652.

Zhu, F. M., & Guan, S. U. (2001). Towards evolution of software agents in electronic commerce. In Proceedings of the Congress on Evolutionary Computation 2001 (CEC2001), 1303-1308, Seoul, Korea.

Terms and Definitions

E-Commerce: Electronic commerce (e-commerce) consists of the buying, selling, distributing, marketing, and servicing of products or services over computer networks such as the Internet.

Agents: A piece of software that acts to accomplish tasks on behalf of its user.

Genetic Algorithms: Methods used in computer science, engineering, and other fields to search for optimal solutions to optimization problems. Genetic algorithms use techniques inspired by evolutionary biology, such as mutation, selection, and crossover.

Information Retrieval: The science of searching for information in documents, whether text, sound, images, or data.

M-Commerce: Mobile commerce (m-commerce) stands for electronic commerce conducted through mobile devices and wireless networks.

Server: A powerful computer or process dedicated to managing files, data, or network traffic.

World Wide Web: The World Wide Web ("WWW" or simply "Web") is an information space in which resources are identified by global identifiers called Uniform Resource Identifiers (URIs).
Chapter XXII
Trust Based E-Commerce Decisions Vesile Evrim University of Southern California, USA Dennis McLeod University of Southern California, USA
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.

Introduction

Over the years, trust has been extensively studied in many fields such as sociology, psychology, and economics. The sociologist Gambetta (2000) states that trust is one of the most important social concepts present in all human interaction, and that without it there is no cooperation or society. Berscheid (1994) also claims that trust is central to how we interact with each other and is thus a key to positive interrelationships. Social psychologists use the notion of trust to predict the acceptance of behaviors by others and institutions (e.g., government agencies). In the literature, trust is defined in so many ways that it becomes more elusive than the physical dimensions of space and time. Over time, due to the increase in human-computer interaction, trust has become one of the most challenging topics in computer science. Similar to the definitions of trust given by sociologists and psychologists, computer scientists have also defined trust in their own ways (McKnight & Chervany, 1996; Falcone & Castelfranchi, 2001; Wang & Vassileva, 2003). How much we can trust a source, a piece of information, or an agent has become one of the hardest questions to answer. As computer technology advances, the need for trust between multiple parties in communication-based systems increases. The Internet is one of the best examples of an application in which trust needs to be investigated in depth. Although it was conceived as a military and academic project, over the years the number of non-academic users has increased, gaining popularity among the business community. With the introduction of the World Wide Web (WWW) (Berners-Lee & Cailliau, 1994), the Internet has become a collaborative medium that allows people anywhere in the world to add and retrieve information. Today, Internet services are increasingly being used in business-to-consumer (B2C) e-commerce applications (Liao & Cheung, 2001). E-commerce provides a new way of shopping for customers by offering more choices and transforming economic activity into a digital medium. It also provides an opportunity for businesses to extend their sales to a larger community. However, the success of achieving higher profits and improved services is based on better communication. As in the real world, a critical understanding of users' behavior in cyberspace cannot be achieved without an analysis of the factors affecting purchase decisions (Limayem, Khalifa, & Frini, 2000). Having many options in an environment that lacks face-to-face interaction forces users to make trust-aware decisions to better protect their privacy and satisfy their expectations, such as quality of service.
Background

One of the first works that tried to give a formal treatment of trust usable in computer science was introduced by Marsh (1994). This model is based on the social properties of trust and presents an attempt to integrate all the aspects of trust taken from sociology and psychology. However, the model is too complex to be implemented in today's e-commerce applications (Aberer & Despotovic, 2001). McKnight and Chervany (1996) also draw on the social sciences in their work. They defined three kinds of trust: impersonal/structural trust is trust in a social or institutional structure in the situation; dispositional trust is based on the personality attributes of the trusting party and develops across a broad spectrum of situations and persons; and personal/interpersonal trust is trust in a person or a group of people in a specific situation. Falcone and Castelfranchi (2001) have presented a cognitive model of trust in terms of mental ingredients such as beliefs and goals. Their cognitive analysis of trust distinguishes internal and external attributes that predict different strategies for building and increasing trust. In order to quantify the degree of trust, different models use various representations of trust values. In some models, trust values are represented as intervals (-1, +1), as done by Jonker (1999), or as probabilities (0, 1), as done by Jøsang and Ismail (2002). Others, such as Abdul-Rahman and Hailes (2000), have proposed discrete values such as very trustworthy, trustworthy, and not trustworthy. In addition, to provide a common understanding of the meaning of a given trust statement, Kinateder (2005) has proposed a generic trust model to combine all the representations of trust into a common model. However, these studies focused on the syntactic representation of trust and did not answer the question of what it means to trust a source to a certain degree (e.g., 30 percent). The subjectivity and context dependency of trust (O'Hara, Alani, Kalfoglou, et al., 2004) make it harder to judge the trustworthiness of sources. Recently, the increasing amount of data and sources has made it necessary to analyze trust in recommender systems (Golbeck, Parsia, & Hendler, 2003; Massa & Bhattacharjee, 2004; O'Donovan & Smyth, 2005). O'Donovan and Smyth (2005) inferred trust relationships from rating-based data and used these relationships to influence the recommendation process. Similarly, Massa and Bhattacharjee (2004) used the popular consumer review site Epinions.com to create a trust graph and used it to compare users according to their degree of connectedness. Seigneur and Jensen (2004) incorporated security, privacy, and risk into their trust models. Intuitively, it is not possible to talk about trust if there is no security. But even if an environment is secure, it cannot necessarily be trusted. For example, a security guard may make sure that all attendees at a school party have a student ID, but this does not mean that a student can trust everybody attending the party. To provide a better idea about the security of systems and the privacy of our information, there are currently many services helping people make better trust-based decisions. For example, VeriSign (2001) provides valid SSL certificates to confirm that businesses exist. TRUSTe provides assurance to users that a site is following its stated privacy practices through initial and periodic reviews, seeding, and compliance reviews (Benassi, 1999).
Trust in Internet Applications

The variety of application domains and users, together with uncertainty, makes trust one of the most important parameters of decision making (Jøsang, 1999) in e-commerce, gaming, health care, virtual environments/organizations, and peer-to-peer (P2P) applications (Aberer & Despotovic, 2001; Abdul-Rahman & Hailes, 2000; Wang & Vassileva, 2003). In order to have a better strategy, peers in multi-user games need to consider trust in their interactions to better decide on the actions they will take. In P2P applications such as Gnutella, it is important to choose trusted peers for downloading files in order to prevent malicious intent (e.g., files with viruses). Currently, a significant percentage of the public is already using the Internet to access health information, and it is important to have trusted users of the system to protect the privacy of that information. In a single e-commerce application (e.g., Amazon), trust is a degree that measures the product quality and the source that will best satisfy our expectations.
Trust and Decision Making

Although in the real world trust might be used to express feelings without the intention of taking any action, such as "I trust that my son will pass the exam" or "I trust that the football team will win the game," in the cyber world trust is mainly used as a measurement to help the decision-making process (Jøsang, 1999). When we make decisions, or choose between options, we try to obtain as good an outcome as possible, according to some standard of what is good or bad (Sven, 1994). In the virtual world, we are claiming that trust is the standard determining good or bad for the users. However, the decision-making strategies of our minds are not so simple and are adapted to specific situations and environments (Payne, 1982; Brehm, 1966). For many users, e-commerce comes with many advantages: it saves time and money, and offers more options. Nevertheless, the uncertainty and lack of information caused by the virtual environment bring hidden problems:

• It is almost impossible to know who the other party really is during the interaction.
• It is not always possible to analyze the quality of the products bought.
• The intermediate parties in communication may not always be known.
• Unknown parties can easily steal the data provided during the transaction.
• Since confidential information (e.g., name, credit card number, and social security number) is stored electronically, it may be hacked by third parties.
As uncertainty increases risk, it becomes harder for users to make decisions (Janis & Mann, 1977). Witte (1972, p. 180) states, "we believe that human beings cannot gather information without in some way simultaneously developing alternatives. They cannot avoid evaluating these alternatives immediately, and in doing this they are forced to a decision." Similarly, e-commerce provides numerous alternatives for users, with different levels of risk, and thus forces them to decide. Nevertheless, users want to know more about the seller and the quality of the product in order to choose. Most auction or retail sites use reviews or ratings, derived from past transactions, to present the trustworthiness of sellers or items. Table 1 shows part of the survey conducted by O'Donovan, Evrim, Smyth, and McLeod (2006). In order to help people decide, different applications use various tools. Here, we take e-commerce as an example application and analyze the role of the most commonly used tool, namely recommender systems, in people's decision-making processes. We then state the necessity of trust-aware recommender systems to help people better adapt to the decentralized structure of the Internet.
Recommender Systems

The development of the Internet delivers an avalanche of information to our doors, which increases the difficulty of finding what we want in a manner that best meets our requirements. For example, a user searching with keywords for a particular item would like the system to offer suggestions based on the user's profile preferences
Trust Based E-Commerce Decisions
Table 1. Comparative survey of popular e-commerce sites

| Site Name | User Roles (Buyer/Seller) | Review Types (Item/Person) | Existing Trust Value (y/n) | Is Trust Value Personalized (y/n) | Requirements for Rating (Purchase, Registration, None) | URL |
|---|---|---|---|---|---|---|
| eBay | Both | People | Yes | No | Purchase | eBay.com |
| Amazon | Both | Both | Yes | No | Item: Registration | www.amazon.com |
| Epinions | Buyer | Item | No | No | Person: Purchase | epinions.com |
| TradeMe | Both | People | Yes | No | None | trademe.co.nz |
| Overstock | Both | People | Yes | No | Purchase | overstock.com |
| BidEra | Both | People | Yes | No | Purchase | bidera.com |
| Yahoo Auctions | Both | People | Yes | No | Registration | auctions.yahoo.com |
| Auctionfire | Both | People | Yes | No | Purchase | auctionfire.com |
| MSN Auctions | Buyer | Both | Yes | No | Purchase | auctions.msn.com |
to make better decisions. Recommender systems are programs that suggest items and ideas matched to a user's specific way of thinking in order to support further decision making (Burke, 2002). Although the selection of the sources providing the items is at least as important as the items themselves, until recently almost all recommender systems focused on item recommendation. Many of the top e-commerce sites, such as Amazon, use recommender systems to improve their sales. Once a user adds an item to his or her shopping cart or checks out an item, he or she is presented with a list of other items he or she might be interested in buying. In these kinds of applications, good recommendations are important not only to the user but to the seller as well, in order to increase sales.
Evolution of Recommender Systems

Over the years, many methods have been used to provide recommendations to users. The most important ones include searching, clustering, classifiers, association rules, content-based filtering (CBF), collaborative filtering (CF), and hybrid systems (Burke, 2002; Lin, Alvarez & Ruiz, 2002). Among these methods, CBF and especially CF are considered the most popular methods used for recommendations.
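To make the collaborative filtering idea concrete, here is a minimal user-based CF sketch: a user's predicted rating for an unseen item is the similarity-weighted average of other users' ratings. The user names, items, and ratings are invented for illustration.

```python
import math

# Hypothetical user-item ratings on a 1-5 scale.
ratings = {
    "alice": {"book": 5, "camera": 3, "phone": 4},
    "bob":   {"book": 4, "camera": 3, "laptop": 5},
    "carol": {"camera": 1, "phone": 2, "laptop": 4},
}

def cosine_sim(u: str, v: str) -> float:
    """Cosine similarity computed over the items both users rated."""
    common = ratings[u].keys() & ratings[v].keys()
    if not common:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in common)
    nu = math.sqrt(sum(ratings[u][i] ** 2 for i in common))
    nv = math.sqrt(sum(ratings[v][i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(user: str, item: str) -> float:
    """Similarity-weighted average of other users' ratings for the item."""
    pairs = [(cosine_sim(user, v), r[item])
             for v, r in ratings.items() if v != user and item in r]
    total = sum(s for s, _ in pairs)
    return sum(s * r for s, r in pairs) / total if total else 0.0

print(predict("alice", "laptop"))  # a value between bob's 5 and carol's 4
```

Note that the prediction uses only ratings, never item content, which is exactly why CF works for aesthetic or multimedia items but fails for brand-new users or items with no ratings.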
A content-based filtering technique is based on a user profile: the user's history is used to find items similar to the ones the user liked before. However, this method is difficult to apply in situations where the desirability of an item, for example a Web page, is determined in part by multimedia content or aesthetic qualities (Paulson & Tzanavari, 2003). These types of materials are generally not amenable to content analysis and require other recommendation methods. In collaborative filtering, recommendations are based on the opinions of similar-minded users rather than on content data, and the performance of the system is determined by those similar users. Collaborative filtering systems do not require any content information; the similarities are based on the ratings given to the products (Bruyn, Giles & Pennock, 2004). The advantage of CF over CBF is that the recommended items are not restricted to the user's existing interests, so recommendations are open to the exploration of new topics and items (Paulson & Tzanavari, 2003). Table 2 summarizes the comparison. Gerald and Trifts (2000) found that using a recommendation agent reduces the time spent searching for a product and increases the quality of the set of products offered. Besides recommendation, they also described the comparison matrix, a tool used to make an in-depth comparison of the recommended products. So far, all the mentioned recommender systems are controlled by
Table 2. Comparison of content-based filtering and collaborative filtering methods

| Content-Based Filtering | Collaborative Filtering |
|---|---|
| Based on the profile of the user. | Based on user similarity. |
| Recommends items similar to the ones the user liked before. | Recommends items that similar users checked out before. |
| Does not offer new variety and is closed to new discoveries. | Gives the user an opportunity to find items in different interest areas; open to new discoveries. |
| No need for ratings or reviews in the recommendation process. | Ratings are the main information used for recommendations. |
| Highly suitable for users who have a specific interest and for applications with rich content information. | Good for applications without much content information. |
| Independent of the other users of the application. | Depends on the other users of the application. |
| Good at recommendations with rich content; for example, Google personalized search uses content-based filtering to offer more personalized search results. | Good at item recommendation; most e-commerce sites, such as Amazon, use this method for item recommendation. |
the centralized systems. Thus, users do not have much control over the recommendations or their user profiles. This gives application sites an opportunity to promote products in exchange for money, in which case the relevance and ordering of the recommended products for a user become questionable. The increasing number of decentralized systems makes such recommender systems inefficient in many respects. In decentralized environments everyone is free to create content and there is no centralized quality-control entity (Massa & Bhattacharjee, 2004). Users of these systems can easily create malicious ratings, reviews, and auctions (e.g., on eBay) that are then used as a measure of user similarity in CF systems. This increases the necessity of giving users more control over their selection process. To decrease the number of malicious attacks, users can use trust to select the peers that will affect their recommendation process. Traditional recommender systems such as CF do not provide any recommendations for new users of the system who do not have any ratings (the cold-start problem) (Schein, Popescul, Ungar, & Pennock, 2002). Selecting trusted peers also helps solve cold-start problems by providing a seed set of users at the beginning. The decentralization of the Internet and the increasing number of users also make recommender systems computationally expensive. In the case of CF, each record needs to
be compared with every other record, which is usually done offline rather than in real time (Schein et al., 2002). Using trust to select a smaller set of peers in the recommendation process decreases the computational complexity and increases the scalability of the systems (Massa & Bhattacharjee, 2004). Social networks are among the best examples of incorporating trust models into recommender systems, synthesizing recommendations from the opinions of trusted peers rather than the most similar ones. Golbeck (2003) applied trust in social networks (FilmTrust) to collaborative filtering. She used the FOAF (friend of a friend) trust model (Brickley & Miller, 2004), which deals with personal information and the relationships that connect people, to assign trust in Web-based social networks. She also investigated how trust information can be mined and integrated into applications: she assigned ratings to describe direct connections between people in social networks and composed this information to infer the trust that may exist between two people who are not directly connected. Massa and Avesani (2004) combined trust-related information (either direct or indirect) with user similarity to enhance collaborative filtering. Incorporating trust into collaborative filtering has solved some of the above-mentioned problems of CF, such as malicious attacks, the cold-start problem, and computational complexity. However, trust-
awareness will not solve the sparseness problem (Bruyn et al., 2004) of large domains, such as the book domain, where the number of items is high compared to the number of users.
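The trust-aware approach described above can be sketched in a few lines. This is an illustrative reading of the idea in Massa and Avesani (2004), not their actual algorithm: neighbors are weighted by direct trust statements, trust is propagated one hop along the network for indirectly connected users, and low-trust peers are filtered out, which is how malicious raters are excluded. All trust values, user names, and the threshold are invented.

```python
# Direct trust statements on a 0-1 scale (invented for illustration).
direct_trust = {
    "alice": {"bob": 0.9, "carol": 0.4},
    "bob":   {"dave": 0.8},
}

def propagated_trust(source: str, target: str) -> float:
    """Direct trust if stated; otherwise the best one-hop chain,
    multiplying trust along the path (a common propagation heuristic)."""
    edges = direct_trust.get(source, {})
    if target in edges:
        return edges[target]
    return max((t * direct_trust.get(mid, {}).get(target, 0.0)
                for mid, t in edges.items()), default=0.0)

def trust_weighted_prediction(user, item, ratings, min_trust=0.5):
    """Average neighbors' ratings weighted by (possibly propagated) trust,
    ignoring peers below the trust threshold."""
    pairs = [(propagated_trust(user, v), r[item])
             for v, r in ratings.items() if v != user and item in r]
    pairs = [(t, r) for t, r in pairs if t >= min_trust]
    total = sum(t for t, _ in pairs)
    return sum(t * r for t, r in pairs) / total if total else None

ratings = {"bob": {"phone": 5}, "carol": {"phone": 1}, "dave": {"phone": 4}}
print(trust_weighted_prediction("alice", "phone", ratings))
```

Here "carol" falls below the threshold, so her possibly malicious rating of 1 is ignored, while "dave" contributes through trust propagated via "bob" (0.9 × 0.8 = 0.72). Restricting the neighborhood to trusted peers is also what reduces the computational cost mentioned above.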
Future Trends

Although applying trust-aware collaborative systems to social networks is promising, the relation between trust and user similarity is still not exactly clear. Currently, all the available applications of trust-based CF are in high-interest groups that share a single context and trust value (e.g., movies, music). Since every application has a different group of users, goals, context, and notion of trustworthiness, it will be interesting to analyze the effects of trust-aware collaborative systems in e-commerce applications. Trust is a subjective term; thus, different users base their decisions on different attributes (Marsh, 1994). For example, some users consider risk, security, and privacy the main attributes for making trustworthy decisions, whereas others might give more importance to the user interface in their decision making. Analyzing the effects of the user interface on recommendations is therefore an interesting topic.
Conclusion

Today, the Internet is an environment we use for many purposes, from buying and selling products, chatting, and sharing files to getting an idea of other people's reviews. As the use of the Internet increases, we need trust more in order to make better decisions about the privacy and security of our information. Currently, most e-commerce applications use collaborative filtering techniques that take ratings as a degree of trust to calculate user similarity and recommend items to other users. However, CF automates the process too much, forgetting the social dimension of the environment. It does not take into account people's actual opinions of others; it simply tries to predict them from similarities in how they rate items. Besides, these techniques are open to malicious attacks and are not sufficient to make suggestions for newcomers to the system. This shows the need for more personalized systems that give users better control over their profiles by letting them know more about what and whom to trust.
Future Research Directions

The definition of trust varies along multiple dimensions based on its context. The multidimensionality of trust has been mentioned by many researchers but, because of its complexity, has not been analyzed in detail. Trust is not only multidimensional; its dimensions change with the application domain. The dimensions of trust therefore need to be analyzed in different application domains to gain a better understanding of trust and its definitions. Once the different dimensions are identified, it becomes possible to talk about the interpretation of trust across domains. E-commerce is one of the domains in need of trust analysis. More accurate trust analysis may be achieved by analyzing e-commerce types in detail. Since users are one of the most important dimensions of trust, understanding e-commerce users and categorizing them into user types is a big step toward obtaining better trust models. Integrating trustworthiness into recommender systems is key to the success of future e-commerce applications. Current recommender systems focus on recommending items that match the user's interests. The increase in the number of sources and the products they provide forces users to make trust-based decisions about whom to interact with. Thus, integrating trust-based analysis of user behavior into recommender systems should have a big impact on the success of recommendations. One of the most important elements of trustworthiness is the credibility of sources. It is important
for users to feel confident about the privacy and security aspects of their transactions. Since e-commerce users need not be technical individuals, in addition to the challenge of providing a secure environment, researchers need to work on ways of presenting the capabilities of their systems to users in order to convince them. Overall, a user's decision making is a complicated process affected by a combination of many factors, such as the user interface or presentation, the product or information type, the credibility of the source, and the user's expectations. More successful decision-making estimations are expected to come from the collaboration of researchers from multiple disciplines, such as psychology, philosophy, and computer science.
References

Abdul-Rahman, A., & Hailes, S. (2000). Supporting trust in virtual communities. In Proceedings of the 33rd Hawaii International Conference on System Sciences, Maui, Hawaii.

Aberer, K., & Despotovic, Z. (2001). Managing trust in a peer-to-peer information system. In Proceedings of the 10th International Conference on Information and Knowledge Management (CIKM-01) (pp. 310-317). ACM Press.

Benassi, P. (1999). TRUSTe: An online privacy seal program. Communications of the ACM, 42(2), 56-59.

Berners-Lee, T., & Cailliau, R. (1994). The world-wide web. Communications of the ACM, 37(8).

Berscheid, E. (1994). Interpersonal relationships. Annual Review of Psychology, 45, 79-129.

Brehm, J. (1966). A theory of psychological reactance. New York, NY: Academic Press.

Brickley, D., & Miller, L. (2004). FOAF vocabulary specification 0.1. http://xmlns.com/foaf/0.1/

Bruyn, A. D., Giles, C. L., & Pennock, D. M. (2004). Offering collaborative-like recommendations when data is sparse: The case of attraction-weighted information filtering. In AH 2004 (LNCS 3177, pp. 393-396).

Burke, R. (2002). Hybrid recommender systems: Survey and experiments. User Modeling and User-Adapted Interaction, 12(4), 331-370.

Falcone, R., & Castelfranchi, C. (2001). Social trust: A cognitive approach. In C. Castelfranchi & Y. Tan (Eds.), Trust and deception in virtual societies (pp. 55-90). Kluwer Academic Publishers.

Gambetta, D. (2000). Extract of Trust: Making and breaking cooperative relations (pp. 213-237). Department of Sociology, University of Oxford.

Gerald, H., & Trifts, V. (2000). Consumer decision making in online shopping environments: The effects of interactive decision aids. Marketing Science, 19(1), 4-21.

Golbeck, J., Parsia, B., & Hendler, J. (2003). Trust networks on the semantic web. In Proceedings of Cooperative Intelligent Agents, Helsinki, Finland.

Janis, I. L., & Mann, L. (1977). Decision making: A psychological analysis of conflict, choice, and commitment. New York, NY: Free Press.

Jonker, C. M., & Treur, J. (1999). Formal analysis of models for the dynamics of trust based on experiences (Vol. 1647, pp. 221-231). Springer-Verlag.

Jøsang, A. (1999). Trust-based decision making for electronic transactions. In L. Yngström & T. Svensson (Eds.), Proceedings of the 4th Nordic Workshop on Secure Computer Systems (NORDSEC'99).

Jøsang, A., & Ismail, R. (2002). The beta reputation system. In Proceedings of the 15th Bled Conference on Electronic Commerce, Bled, Slovenia.

Kinateder, M., Baschny, E., & Rothermel, K. (2005). Comparison of various trust update algorithms. In P. Herrmann, V. Issarny, & S. Shiu (Eds.), Proceedings of iTrust (LNCS 3477, pp. 177-192). Springer-Verlag.

Liao, Z., & Cheung, M. T. (2001). Internet-based e-shopping and consumer attitudes: An empirical study. Information and Management, 38(5), 299-306.

Limayem, M., Khalifa, M., & Frini, A. (2000). What makes consumers buy from the Internet? A longitudinal study of online shopping. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 30, 421-432.

Lin, W., Alvarez, S., & Ruiz, C. (2002). Efficient adaptive-support association rule mining for recommender systems. Data Mining and Knowledge Discovery, 6, 83-105.

Marsh, S. (1994). Formalising trust as a computational concept. Ph.D. thesis, University of Stirling.

Massa, P., & Avesani, P. (2004). Trust-aware collaborative filtering for recommender systems. In Proceedings of the International Conference on Cooperative Information Systems (CoopIS).

Massa, P., & Bhattacharjee, B. (2004). Using trust in recommender systems: An experimental analysis. In Proceedings of the 2nd International Conference on Trust Management, Oxford, England.

McKnight, D., & Chervany, N. (1996). The meanings of trust (Tech. Rep. MISRC Working Paper Series 96-04). University of Minnesota, Management Information Systems Research Center.

O'Donovan, J., & Smyth, B. (2005). Trust in recommender systems. In IUI '05: Proceedings of the 10th International Conference on Intelligent User Interfaces (pp. 167-174). ACM Press.

O'Donovan, J., Evrim, V., Smyth, B., & McLeod, D. (2006). Personalizing trust in online auctions. In Proceedings of STAIRS 2006, Trentino, Italy.

O'Hara, K., Alani, H., Kalfoglou, Y., & Shadbolt, N. (2004). Trust strategies for the semantic web. In ISWC'04 Workshop on Trust, Security and Reputation on the Semantic Web.

Paulson, P., & Tzanavari, A. (2003). Combining collaborative and content-based filtering using conceptual graphs. In J. Lawry, J. G. Shanahan & A. Ralescu (Eds.), Modeling with words: Learning, fusion, and reasoning within a formal linguistic representation framework (pp. 168-185). Berlin: Springer-Verlag.

Payne, J. W. (1982). Contingent decision behavior. Psychological Bulletin, 92, 382-402.

Schein, A. I., Popescul, A., Ungar, L. H., & Pennock, D. M. (2002). Methods and metrics for cold-start recommendations. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Tampere, Finland.

Seigneur, J.-M., & Jensen, C. D. (2004). Trading privacy for trust. In Proceedings of the Conference on Trust Management. Springer-Verlag.

Sven, H. O. (1994). Changes in preference. Theory and Decision.

VeriSign, Inc. (2001). VeriSign certification practice statement, version 2.0. http://www.verisign.com/repository/cps20/cps20.pdf

Wang, Y., & Vassileva, J. (2003). Trust and reputation model in peer-to-peer networks. In Peer-to-Peer Computing (pp. 150-157). Linköping, Sweden: IEEE Press.

Witte, E. (1972). Field research on complex decision-making processes—the phase theorem. International Studies of Management and Organization, 2, 156-182.
Further Reading

Abdul-Rahman, A., & Hailes, S. (1997, September). A distributed trust model. In Proceedings of the Workshop on New Security Paradigms (pp. 48-60). Langdale, Cumbria, United Kingdom.

Basu, C., Hirsh, H., & Cohen, W. (1998). Recommendation as classification: Using social and content-based information in recommendation. In Proceedings of the 15th National Conference on Artificial Intelligence (pp. 714-720). Madison, WI.

Chadwick, S. A. (2001). Communicating trust in e-commerce interactions. Management Communication Quarterly, 14(4), 653-658.

Chen, M., & Singh, J. P. (2001, October). Computing and using reputations for Internet ratings. In Proceedings of the Third ACM Conference on Electronic Commerce (EC'01). ACM.

Cheung, C. M. K., & Lee, M. K. O. (2001). Trust in Internet shopping: Instrument development and validation through classical and modern approaches. Journal of Global Information Management, 9(3), 23-35.

Cho, Y. H., Kim, J. K., & Kim, S. H. (2002). A personalized recommendation system based on web usage mining and decision tree induction. Expert Systems with Applications, 23, 329-342.

Damiani, E. (2002). A reputation-based approach for choosing reliable resources in peer-to-peer networks. In Proceedings of the 9th ACM Conference on Computer and Communications Security (CCS'02) (pp. 207-216). ACM.

Gefen, D., Rao, V. S., & Tractinsky, N. (2003). The conceptualization of trust, risk and their relationship in electronic commerce: The need for clarifications. In Proceedings of the 36th Hawaii International Conference on System Sciences (pp. 1-10).

Goecks, J., & Mynatt, E. D. (2004). Leveraging social networks for information sharing. In Proceedings of the ACM Conference on Computer Supported Cooperative Work.

Golbeck, J. (2005). Computing and applying trust in web-based social networks. Ph.D. dissertation, University of Maryland, College Park.

Grandison, T., & Sloman, M. (2000). A survey of trust in Internet applications. IEEE Communications Surveys and Tutorials, 3.

Haubl, G., & Trifts, V. (2000). Consumer decision making in online shopping environments: The effects of interactive decision aids. Marketing Science, 19(1), 4-21.

Jarvenpaa, S. L., & Todd, P. A. (1996-1997). Consumer reactions to electronic shopping on the world wide web. International Journal of Electronic Commerce, 1(2), 59-88.

Jøsang, A., Ismail, R., & Boyd, C. (2005). A survey of trust and reputation systems for online service provision. Decision Support Systems (in press).

Lee, M. K. O., & Turban, E. (2001). A trust model for consumer Internet shopping. International Journal of Electronic Commerce, 6(1), 75-91.

Manchala, D. W. (1998). Trust metrics, models and protocols for electronic commerce transactions. In Proceedings of the 18th International Conference on Distributed Computing Systems.

O'Donovan, J., & Dunnion, J. (2004). A framework for evaluation of collaborative recommendation algorithms in an adaptive recommender system. In Proceedings of the International Conference on Computational Linguistics (CICLing-04) (pp. 502-506). Seoul, Korea: Springer-Verlag.

Ratnasingham, P. (1998). The importance of trust in electronic commerce. Internet Research: Electronic Networking Applications and Policy, 8(4), 313-321.

Sarwar, B. M., Karypis, G., Konstan, J. A., & Riedl, J. (2000). Application of dimensionality reduction in recommender systems—A case study. In ACM WebKDD Web Mining for E-Commerce Workshop, Boston, MA.

Sarwar, B. M., Karypis, G., Konstan, J. A., & Riedl, J. T. (2001). Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International WWW Conference (pp. 285-295). Hong Kong.

Schafer, J. B., Konstan, J., & Riedl, J. (1999). Recommender systems in e-commerce. In ACM Conference on Electronic Commerce. http://www.cs.umn.edu/Research/GroupLens/ec-99.pdf

Shahabi, C., Banaei-Kashani, F., Chen, Y.-S., & McLeod, D. (2001). Yoda: An accurate and scalable web-based recommendation system. In Proceedings of the 6th International Conference on Cooperative Information Systems.

Tan, Y.-H., & Thoen, W. (2000-2001). Toward a generic model of trust for electronic commerce. International Journal of Electronic Commerce, 5(2), 61-74.

Yang, Y., Brown, L., Newmarch, J., & Lewis, E. (1999, April). A trusted W3 model: Transitivity of trust in a heterogeneous web environment. In Proceedings of the Fifth Australian World Wide Web Conference (pp. 59-73).

Ziegler, C.-N., & Golbeck, J. (2006). Investigating correlations of trust and interest similarity—Do birds of a feather really flock together? Decision Support Systems. http://trust.mindswap.org/papers/zieglergolbeck.pdf

Ziegler, C.-N., & Lausen, G. (2004). Analyzing correlation between trust and user similarity in online communities. In Proceedings of the Second International Conference on Trust Management.

Terms and Definitions

Centralized System: A centralized system has a major hub or hubs with which peers communicate. This arrangement frees peers from certain efforts by delegating them to the central body; maintaining easily accessed and accurately updated lists of information on all clients is one task at which a central body excels.

Cognitive: Pertaining to cognition, the process of being aware, knowing, thinking, learning, and judging.

Context: Any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and application themselves.

Domain: The part of the external world, including users and inmates of the system, that affects and is affected by the system.

E-Commerce: E-commerce (electronic commerce or EC) is the buying and selling of goods and services on the Internet, especially the World Wide Web.

P2P: A peer-to-peer (P2P) computer network relies on computing power at the edges (ends) of a connection rather than in the network itself. P2P networks are used for sharing content such as audio, video, data, or anything else in digital format.

Sensor: A device that converts physical conditions into information so that the control system can understand the commands.

Social Network: A map of the relationships between individuals, indicating the ways in which they are connected through various social familiarities, ranging from casual acquaintance to close familial bonds.

Trust: A level of expectancy regarding a particular action of a person or agent in a context in which it affects our own action.

User Profile: A user description that includes account ID, user ID, and password information, and the characteristics that designate how a user works with an information exchange.
Chapter XXIII
Using Partial Least Squares in Digital Government Research J. Ramon Gil-Garcia Centro de Investigación y Docencia Económicas, Mexico
Introduction

Digital government is a complex socio-technical phenomenon, which is affected by technical, managerial, institutional, and environmental factors (Dawes & Pardo, 2002; Fountain, 2001; Gant, 2003; Garson, 2000; Heeks, 2005; Kraemer, King, Dunkle et al., 1989; Landsbergen & Wolken, 2001; Laudon, 1985; Rocheleau, 2003). Recent studies have greatly contributed to developing the necessary knowledge about e-government benefits and success factors (Barrett & Greene, 2000; Dawes, 1996; Gil-García & Pardo, 2005; Heeks, 2003; Holmes, 2001; O'Looney, 2002; Rocheleau, 1999; West, 2005; Zhang et al., 2002). However, an important portion of this research has used a single measure of e-government and relatively simple assumptions about the relationships between information technologies and organizational, institutional, and contextual factors (Gil-García, 2005b).
With important exceptions, previous research has mostly hypothesized models in which all variables are at the same level of importance, limiting understanding of the complex relationships among different categories of factors (e.g., organizational and institutional). In fact, in most of the academic work done so far, all the factors are hypothesized to have a direct relationship to information technology success, and few hypotheses have been made about the relationships among the different factors themselves and their potential indirect effects (see Figure 1). In addition, many of these studies do not integrate and evaluate multiple measures of e-government, but instead use a single measure and, therefore, need to assume no measurement error.1 Therefore, a more reliable method of measuring e-government has not been developed and, as a consequence, the comparability of findings among different studies can become problematic. These two conditions are, at least in part, the result of
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
two important limitations related to the statistical method that many of these previous studies have used: linear regression analysis (see Table 1).

Figure 1. Mostly direct effects have been hypothesized (Gil-García, 2005b). [Diagram: Factor 1, Factor 2, …, Factor n, each linked directly to Information Technology Success.]

Table 1. Two limitations of linear regression

| Limitation | Description |
|---|---|
| Assumes no measurement error | • Normally, constructs are measured using a single indicator. • This indicator is assumed to perfectly capture the essence of the theoretical construct. • Examples of constructs used in social sciences: social status, organizational capability, adequate institutional environment, job satisfaction, policy effectiveness, etc. |
| Assumes no indirect effects | • Does not systematically test relationships among independent variables. • Therefore, indirect effects are not represented. • Causes are assumed to be independent from each other. |

This chapter shows how to use partial least squares (PLS) and argues that this could help to incorporate more realistic assumptions and better measurements into digital government research.2 It does so through a commented example of a digital government research study (Gil-García, 2005b). It is important to clarify that the intention is not to suggest that every research project should use PLS, but to encourage scholars and practitioners to seriously consider this technique as an alternative when designing and carrying out their research.3 PLS is a structural equation modeling (SEM) technique similar to covariance-based SEM as implemented in LISREL, EQS, or AMOS.4 Therefore, PLS can simultaneously test the measurement model (relationships between indicators and their corresponding constructs) and the structural model (relationships between constructs) (see Figure 2).

Figure 2. Testing the measurement and structural models with PLS. [Path diagram: Indicators 1-9 load on Exogenous Construct 1, Endogenous Construct 1, and Endogenous Construct 2, which are connected by structural paths.]

This chapter is organized in six sections, including the foregoing introduction. Section two shows an example of how to present the theoretical model and hypotheses to be tested. Since PLS uses multiple indicators for each variable, section three highlights the importance of including the operationalization of the constructs. Section four offers an example of how to present the findings, including both the measurement and the structural model. Section five suggests potential future trends and section six provides some final comments. Throughout sections two, three, and four, comments about how to use PLS
and how to present each of the different sections are incorporated. It is important to clarify that only relevant elements were presented in this example, and not the complete text of an article. Most of the material included in this article comes from a previous study about the success of e-government, based on an analysis of the 50 statewide Websites in the United States (Gil-García, 2005a, 2005b, 2005c, 2006).
Background: Theoretical Model and Hypotheses

Comments

The researcher should develop a theoretical framework and present a group of hypotheses to be tested. When using PLS, the model and the hypotheses can include relationships among the independent variables and, therefore, test for indirect effects. This section suggests that a diagram can be used to summarize the theoretical framework. From this diagram, the author can derive and present the corresponding hypotheses. Here is a very brief example of the type of text, diagram, and hypotheses that could be presented.
Table 2. Hypotheses derived from the theoretical model

H1: Organizational factors and strategies are directly associated with the success of electronic government.
H2: The institutional framework is directly associated with the success of electronic government.
H3: The institutional framework is directly associated with organizational factors and strategies.
H4: Contextual factors are directly associated with the success of electronic government.
H5: Contextual factors are directly associated with organizational factors and strategies.
H6: Contextual factors are directly associated with the institutional framework.
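Hypotheses H3, H5, and H6 relate the explanatory constructs to one another, which implies indirect effects on e-government success. A toy sketch with synthetic data (the variable names and path coefficients are invented, not taken from the study) shows how an indirect effect is estimated as the product of two path coefficients, something a single linear regression cannot represent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data following a chain in the spirit of H6 -> H3 -> H1:
# context -> institutions -> organization -> success.
context = rng.normal(size=n)
institutions = 0.6 * context + rng.normal(scale=0.5, size=n)
organization = 0.7 * institutions + rng.normal(scale=0.5, size=n)
success = 0.5 * organization + rng.normal(scale=0.5, size=n)

def slope(x, y):
    """OLS slope of y on a single predictor x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(institutions, organization)  # path: institutions -> organization
b = slope(organization, success)       # path: organization -> success
indirect = a * b                       # indirect effect of institutions on success
print(round(a, 2), round(b, 2), round(indirect, 2))  # roughly 0.7, 0.5, 0.35
```

A regression of success on institutions alone would fold this product into a single coefficient and hide the mediating role of the organizational factors, which is exactly the second limitation of linear regression discussed in this chapter; PLS estimates the separate paths simultaneously.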
Example

The research model presented in this paper is based on Fountain's technology enactment theory (Fountain, 1995, 2001) and refined through a review of the IT implementation, IT success, social informatics, and e-government literature (Gil-García, 2005b) (see Figure 3). Based on an institutional tradition (Brinton & Nee, 1998; Powell & DiMaggio, 1991; Scott, 2001), the technology enactment theory attempts to explain the effects of organizational forms and the institutional framework on the information technology selected, designed, implemented, and used by government agencies (Fountain, 2001; Gil-García, 2005b). Table 2 shows the hypotheses to be tested.

Operationalization of Constructs

Comments

PLS allows using multiple indicators to measure each construct or latent variable. Therefore, the
Figure 3. Theoretical model of electronic government success (Adapted from Gil-Garcia, 2005b) Organizational Factors and Strategies Contextual Factors Institutional Framework
E-Government Success Enacted Technology Organizational Outputs
Figure 3. Theoretical Model of Electronic Government Success (Adapted from Gil-Garcia, 2005b)
241
Using Partial Least Squares in Digital Government Research
researcher should make clear to the reader how exogenous constructs (independent variables) and endogenous constructs (dependent variables) were operationalized. If the author is using a high-level theory, such as institutional theory, it may also be useful to operationalize the constructs at the structural level (see Figure 4). Following is an example of how the operationalization of the constructs could be presented. For this example, only e-government success and the contextual factors are operationalized, but all relevant constructs should follow a similar logic.
Example

According to the theoretical model presented earlier and the hypotheses suggested, the relevant constructs are electronic government success, organizational factors and strategies, the institutional framework, and contextual factors (Gil-García, 2005b). Based on previous studies and a careful analysis of each potential indicator, the constructs were operationalized as follows.
Electronic Government Success

Web site functionality and technological sophistication have been used as measures of local e-government success in previous studies (Ho, 2002; Holden, Norris, & Fletcher, 2003; Moon, 2002). At the state level, several studies assess e-government success by examining different aspects of functionality such as the number of online services, the degree of customization, and e-mail responsiveness, among others (Gant, Gant, & Johnson, 2002; West, 2000, 2001, 2002, 2003, 2004). Following this convention, this study operationalizes this construct as state Web site functionality measured with four indicators (Gil-García, 2005b): overall state e-government ranking (score), digital state e-commerce score, number of e-commerce systems, and number of online services. The first two are composite scales, and they include important elements such as number of online services, electronic payments, types of online information, availability of specific government forms, usability assessment, e-mail responsiveness, privacy and security, foreign language access, and democratic outreach.
Contextual Factors

The context or environment of organizations includes multiple factors, and it is practically impossible to include all of them in a single research project. This study selected three of the most important factors to represent the environment in which organizations are embedded (Gil-García, 2005b): voting preferences as a proxy for a more general political orientation, demographic factors as a proxy for potential e-government demand (Ho, 2002; La Porte, Demchak, & DeJong, 2002), and overall size of the state economy as a proxy for availability of resources for state government agencies. Political orientation is represented by the percentage of votes for the Democratic and Republican candidates in the previous gubernatorial election (1997-2000) and by whether the governor was a Democrat or a Republican in 2000. E-government demand includes several measures of income, education, computer ownership, and Internet access. Availability of financial resources was operationalized using government gross state product, total state revenue, total state debt, and the number of jobs and private earnings in several industries such as state government, local government, education, communication, electronic and other electric equipment, and engineering and management services.

Figure 4. Operationalization of the constructs: structural model (Adapted from Gil-García, 2005b). [Diagram: the contextual factors (political orientation, e-government demand, availability of resources), the institutional framework, and organizational factors and strategies (organizational factors, Web management strategies) lead to e-government success (state Web site quality).]
PLS Data Analysis and Results

Comments

PLS is not as commonly used as linear regression and other statistical techniques. Therefore, it is useful to provide a brief introduction to the PLS approach and to discuss some of the advantages and limitations of the technique. This can be done either in the "research method" section or at the beginning of the "analysis and results" section. The level of detail of this explanation depends on the journal, discipline, and conventions of the author's academic community, among other factors. PLS allows for representing the constructs as formative or reflective. Therefore, the researcher should clarify which constructs are formative and which ones are reflective (see Table 3). PLS-Graph provides results for the measurement and structural models, and there are some tests that need to be performed. For the measurement model, besides the loadings for the reflective constructs and the
weights for the formative constructs, the following tests and measures should be reported: composite reliability, average variance extracted, and cross-loadings. For the structural model, besides the standardized coefficients (paths) connecting two constructs, the following results could be reported: a diagram showing the paths and coefficients of multiple determination (R-square) for endogenous constructs, the relative impact of independent variables (the f² test), and indirect and total effects.
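The f² test mentioned above measures the relative impact of one exogenous construct by comparing the R-square of the endogenous construct with and without that predictor. A minimal sketch in plain Python (the R-square values are hypothetical, chosen only to illustrate the arithmetic; Cohen's conventional thresholds of 0.02, 0.15, and 0.35 mark small, medium, and large effects):

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's f-squared: the change in R-square when one exogenous
    construct is dropped, scaled by the unexplained variance."""
    return (r2_included - r2_excluded) / (1.0 - r2_included)

# Hypothetical R-squares for an endogenous construct, estimated with
# and without one predictor in the structural model:
effect = f_squared(0.46, 0.40)
print(round(effect, 3))  # 0.111 -> a small-to-medium relative impact
```

Because the statistic depends only on the two R-squares, it can be computed by hand from any pair of nested PLS runs.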
Example

Partial least squares (PLS) was used to empirically evaluate the theoretical model (Gil-García, 2005b). As mentioned earlier, PLS is a structural equation modeling (SEM) technique similar to covariance-based SEM as implemented in LISREL (Joreskog, 1978), EQS (Bentler, 1985), or AMOS. Therefore, PLS can simultaneously test the measurement model (relationships between indicators and their corresponding constructs) and the structural model (relationships between constructs) (Barclay, Thompson, & Higgins, 1995; Hulland, 1999). It produces loadings from reflective constructs to their indicators, weights from indicators to formative constructs (see below), standardized regression coefficients between constructs, and coefficients of multiple determination (R-squared) for endogenous constructs (dependent variables) (Gefen, Straub, & Boudreau, 2000). PLS allows for small sample sizes and makes less strict assumptions about the distribution of the data (Hair, Anderson, Tatham, & Black, 1998). Small samples do not always meet normality and homogeneity assumptions. Similarly, categorical variables may also fail to satisfy the distributional assumptions of covariance-based SEM. According to Chin (1998), the sample size should be 10 times whichever is greater: (1) the largest number of indicators in a formative construct, or (2) the largest number of structural paths going to an endogenous construct (Barclay et al., 1995).

In PLS, the relationship between a construct and its indicators can be modeled as either formative or reflective (Barclay et al., 1995; Gefen et al., 2000). Formative indicators are also known as cause or induced indicators, and reflective indicators are also known as effect indicators (Bollen, 1989). Reflective indicators are widely used in the social sciences. They are expected to measure the same underlying phenomenon and to be one-dimensional and correlated with each other (Chin, 1998; Gefen et al., 2000). In contrast, formative indicators are conceived as causes of the underlying construct, and they represent different dimensions of the construct (Gefen et al., 2000). In this study, e-government success, political orientation, e-government demand, and availability of financial resources are reflective constructs with four, four, 14, and 17 indicators, respectively. Web management strategies, organizational factors, and institutional framework are formative constructs with four indicators each. Table 3 presents a summary of the constructs used in this study.

PLS does not directly provide significance tests. Significance levels for loadings, weights, and paths were calculated through bootstrapping. Two hundred (200) bootstrap samples were used to empirically calculate standard errors and evaluate statistical significance.
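The bootstrapping step just described can be illustrated with a small, self-contained sketch in plain Python. The data and estimator are hypothetical; the point is only the mechanics PLS software automates: resample the cases with replacement 200 times, re-estimate the statistic each time, and use the standard deviation of the bootstrap estimates as an empirical standard error for a pseudo t-test:

```python
import random
import statistics

def bootstrap_se(values, estimator, n_boot=200, seed=1):
    """Empirical standard error of `estimator`: resample the cases
    with replacement n_boot times and take the standard deviation
    of the re-estimated statistics."""
    rng = random.Random(seed)
    estimates = [estimator([rng.choice(values) for _ in values])
                 for _ in range(n_boot)]
    return statistics.stdev(estimates)

# Hypothetical sample of 50 observations (e.g., one per state).
data_rng = random.Random(0)
sample = [data_rng.gauss(0.5, 0.2) for _ in range(50)]

point = statistics.mean(sample)              # point estimate
se = bootstrap_se(sample, statistics.mean)   # bootstrap standard error
t_stat = point / se                          # pseudo t-value
# A |t| above roughly 2.6 would be significant at the 1 percent level.
print(round(point, 3), round(se, 3), round(t_stat, 1))
```

In real PLS runs the estimator is the whole loading, weight, or path estimate rather than a simple mean, but the resampling logic is the same.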
Measurement Model (Outer Model)

Table 3. Constructs and number of indicators (Adapted from Gil-García, 2005b)

Construct                               Type        Number of Indicators
E-Government Success                    Reflective  4
Web Management Strategies               Formative   4
Organizational Factors                  Formative   4
Institutional Framework                 Formative   4
Political Orientation                   Reflective  4
E-Government Demand                     Reflective  14
Availability of Financial Resources     Reflective  17

Reflective and formative indicators must be treated differently. For reflective indicators, there are two important aspects of the measurement model that should be evaluated: convergent and discriminant
validity (Gefen et al., 2000). Convergent validity can be assessed by the examination of indicator reliability, composite reliability, and average variance extracted (Fornell, 1982). Table 4 shows that all loadings but one were above the 0.7 threshold (for their respective constructs), suggesting good indicator reliability (Fornell & Larcker, 1981). Similarly, composite reliabilities were all greater than 0.7 (Gil-García, 2005b). Significance tests were conducted using bootstrapping (200 samples), and all loadings are statistically significant at the 1 percent level (p-value < 0.01).

Table 5 compares the square root of the average variance extracted (AVE) with the correlations among reflective constructs. All constructs were more strongly correlated with their own measures than with any of the other constructs, suggesting good convergent and discriminant validity. Finally, as suggested by Chin (1998), cross-loadings were calculated, and all indicators showed higher loadings on their respective construct than on any other reflective construct.

Formative indicators are not expected to correlate with each other. Therefore, traditional measures of validity are not appropriate (Chin, 1998). However, Bollen (1989) contends that validity is "the strength of the direct structural relation between a measure and a latent variable" (p. 222), and therefore the validity of formative constructs can be evaluated by looking at the size and significance of the weights. Table 6 shows the weights of formative indicators in their respective constructs.
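Composite reliability and AVE are simple functions of the standardized loadings, so the figures reported here can be checked directly. A minimal sketch in plain Python, using the four e-government success loadings from Table 4; the formulas are CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = Σλ² / n (Fornell & Larcker, 1981):

```python
import math

def composite_reliability(loadings):
    """Composite reliability from standardized loadings:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1.0 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def ave(loadings):
    """Average variance extracted: the mean squared loading."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Loadings of the four e-government success indicators (Table 4).
egov_success = [0.8315, 0.8267, 0.7108, 0.7048]

cr = composite_reliability(egov_success)
sqrt_ave = math.sqrt(ave(egov_success))
print(round(cr, 3), round(sqrt_ave, 3))  # 0.853 0.771, as reported
```

The square root of the AVE (about 0.771) also exceeds the construct's largest correlation with any other reflective construct (0.473 in Table 5), which is the Fornell-Larcker discriminant validity criterion in action.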
Table 4. Loadings of reflective constructs (CR = composite reliability; adapted from Gil-García, 2005b)

E-government success (CR = 0.853)
  Overall quality of e-government services (2001): 0.8315
  Number of online services provided by states (2002): 0.8267
  Number of e-commerce systems developed by states (2001): 0.7108
  Quality of e-commerce in state governments (2001): 0.7048

Political orientation (CR = 0.911)
  Governor was Democrat in 2000: 0.9069
  Governor was Republican in 2000: 0.8545
  Percentage of votes for the Democratic party (1997-2000): 0.8461
  Percentage of votes for the Republican party (1997-2000): 0.7794

E-government demand (CR = 0.968)
  Median income per family (1999): 0.9332
  Median income per household (1999): 0.9217
  Percentage of population for whom poverty status is determined (1999): 0.8844
  Percentage of households with Internet access (2000): 0.8838
  Percentage of families below poverty level (1999): 0.8673
  Percentage of population 25 years and over with bachelor's degree or higher education (2000): 0.8609
  Percentage of households with computers (2000): 0.8393
  Personal income per capita (1999): 0.8253
  Percentage of households with Internet access (1998): 0.8206
  Gross state product per capita (2000): 0.7994
  Percentage of households with computers (1998): 0.7784
  Percentage of population with bachelor's degree or higher (2000): 0.7734
  Percentage of population with high school or higher education (2000): 0.7275
  Percentage of population with less than 9th grade education (2000): 0.6457

Availability of financial resources (CR = 0.994)
  Local government private earnings (2000): 0.9910
  Government and government enterprises private earnings (2000): 0.9903
  State government private earnings (2000): 0.9895
  Government gross state product (2000): 0.9877
  Number of local government jobs (2000): 0.9864
  Number of engineering and management services jobs (2000): 0.9839
  State total revenue (2000): 0.9838
  Number of government and government enterprises jobs (2000): 0.9828
  Engineering and management services private earnings (2000): 0.9778
  Number of state government jobs (2000): 0.9643
  Number of jobs in the communications industry (2000): 0.9554
  Communications industry private earnings (2000): 0.9366
  Number of educational services jobs (2000): 0.9341
  Number of electronic and other electric equipment jobs (2000): 0.9201
  Electronic and other electric equipment private earnings (2000): 0.8888
  Educational services private earnings (2000): 0.8790
  Total state debt (2000): 0.8586

Note: Significance tests were conducted using bootstrapping (200 samples); all loadings are statistically significant at the 1 percent level (p < 0.01).

Table 5. Reflective construct correlations and square root of AVE (Adapted from Gil-García, 2005b)

                                        (1)      (2)      (3)      (4)
(1) E-government success                0.771
(2) Political orientation              -0.152    0.848
(3) E-government demand                 0.034    0.081    0.829
(4) Availability of financial           0.473   -0.024    0.124    0.954
    resources

Note: Diagonal elements are the square root of the average variance extracted (AVE).

Table 6. Weights of formative indicators (Adapted from Gil-García, 2005b)

Web management strategies
  Website services are outsourced only: -0.7843***
  Number of marketing media and intensity of marketing: 0.7203***
  Only the IT organization directly provides website services: -0.4836**
  IT organization directly manages portal development for agencies: 0.2701

Organizational factors
  Number of people working for the IT organization (size): 0.7087***
  Percentage of the IT budget revenue sourced from federal funds: 0.5387**
  State provides accessibility training for IT professionals: 0.4406*
  Percentage of the IT office budget devoted to maintenance: -0.2919

Institutional framework
  State IT professionals are members of the civil service only: 0.7185***
  State has executive orders/directives as the only way to establish authority for CIO offices: -0.5883***
  State has an IT-specific legislative committee (Senate): -0.3712*
  State has mandatory accessibility standards for state web sites: -0.2904

Note: Significance tests were conducted using bootstrapping (200 samples); weights with *** are significant at the 1 percent level, those with ** at the 5 percent level, and those with * at the 10 percent level.

Structural Model (Inner Model)

The structural model represents the relationships between constructs that were hypothesized in the research model (see Figure 5). Significant paths are represented with bold arrows. In PLS there are no well-established overall fit measures. Paths (statistical and practical significance) and coefficients of determination (R-squares) together indicate overall model goodness of fit. R-squares measure the variance in endogenous constructs accounted for by the constructs hypothesized to have an effect on them, and they can be interpreted like R-squares in regression analysis.

Figure 5. PLS structural parameters (Adapted from Gil-García, 2005b). [Diagram: significant paths among the contextual factors, the institutional framework, organizational factors, Web management strategies, and e-government success, with R-squares of 0.459 for e-government success, 0.536 for Web management strategies, 0.290 for organizational factors, and 0.228 for the institutional framework.]

Hypotheses 1, 3, 4, 5, and 6 were supported. Hypothesis 2 was not supported (see Table 7). This study did not find a significant direct relationship between the institutional framework and e-government success represented by state Web site functionality. The existence of a direct link was tested and found to be neither statistically nor practically significant (at least 0.2). Similar to multiple regression analysis, all interpretations should take into consideration that all other direct variables in the respective equation are held constant.

Table 7. Structural results (Adapted from Gil-García, 2005b)

Direct effect: path coefficient

Effect on e-government success
  Web management strategies: 0.300*
  Organizational factors: 0.311*
  Institutional framework: 0.009
  Political orientation: 0.021
  E-government demand: 0.024
  Availability of financial resources: 0.352***

Effect on Web management strategies
  Organizational factors: 0.203
  Institutional framework: -0.451**
  Political orientation: -0.276**
  E-government demand: -0.391***
  Availability of financial resources: -0.017

Effect on organizational factors
  Institutional framework: -0.504**
  Political orientation: 0.008
  E-government demand: -0.004
  Availability of financial resources: 0.092

Effect on institutional framework
  Political orientation: 0.259*
  E-government demand: -0.262*
  Availability of financial resources: -0.285*

Note: Significance tests were conducted using bootstrapping (200 samples); path coefficients with *** are significant at the 1 percent level, those with ** at the 5 percent level, and those with * at the 10 percent level.

Three factors have a significant direct relationship to state Web site functionality: (1) availability of financial resources, (2) organizational factors, and (3) Web management strategies. The institutional framework is the only construct with a significant direct relationship to organizational factors. The institutional framework, e-government demand, and political orientation have significant direct relationships with Web management strategies. Finally, all three contextual factors are directly associated with the institutional framework (p < 0.1). States with greater potential demand for e-government (higher incomes, higher levels of education, and higher percentages of computer ownership and Internet access) are predicted to have a more enabling institutional framework. These states with greater demand are also predicted to be less likely to have all IT employees as members of the civil service.

About 46 percent of the variance in state Web site functionality was accounted for by its explanatory constructs. Similarly, the model explained about 54 percent of the variance in Web management strategies, 29 percent of the variance in organizational factors, and 23 percent of the variance in the institutional framework. The average explanatory power of endogenous constructs in the model was about 38 percent (R² = 0.3783). Falk and Miller (1992) suggest a way to test the significance of these squared multiple correlations; all coefficients of multiple determination were significant at the 1 percent level (p < 0.01). There is also a way to calculate direct, indirect, and total effects.
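The direct, indirect, and total effects just mentioned follow mechanically from the path coefficients: an indirect effect is the product of the coefficients along a route through mediating constructs, and the total effect is the direct path plus the sum over all indirect routes. A sketch in plain Python using a subset of the Table 7 coefficients (the construct abbreviations are mine, and the chapter itself does not report total effects, so the computed value is illustrative):

```python
# Path coefficients from Table 7: (source, target) -> coefficient.
# IF = institutional framework, OF = organizational factors,
# WMS = Web management strategies, EGS = e-government success.
paths = {
    ("IF", "OF"): -0.504, ("IF", "WMS"): -0.451, ("IF", "EGS"): 0.009,
    ("OF", "WMS"): 0.203, ("OF", "EGS"): 0.311,
    ("WMS", "EGS"): 0.300,
}

def total_effect(source, target):
    """Direct effect plus the sum, over every indirect route, of the
    products of coefficients along that route (the model is acyclic)."""
    if source == target:
        return 1.0
    return sum(coef * total_effect(mid, target)
               for (src, mid), coef in paths.items() if src == source)

# Total effect of the institutional framework on e-government success:
# a tiny direct path (0.009) plus indirect routes through OF and WMS.
print(round(total_effect("IF", "EGS"), 3))  # -0.314

# The average explanatory power reported in the text can also be
# checked from the four R-squares:
r2 = [0.459, 0.536, 0.290, 0.228]
print(round(sum(r2) / len(r2), 3))  # 0.378, close to the reported 0.3783
```

This kind of path tracing shows why the institutional framework can matter substantially for e-government success even though its direct path is not significant: its influence runs through organizational factors and Web management strategies.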
Future Trends

Currently, some social sciences are increasingly using structural equation modeling (SEM) in general, and partial least squares (PLS) in particular. For instance, Gefen et al. (2000) show how in information systems (IS) research the number of articles using SEM in three of the main journals represented about 18 percent of all articles published from 1994 to 1997. The most dramatic case was the journal Information Systems Research, in which 45 percent of the published articles used an SEM technique and 19 percent used PLS. We can expect this trend to continue, since SEM techniques have several advantages in
comparison to linear regression and other first-generation statistical techniques. In addition, there are certain conditions under which PLS is more appropriate than its covariance-based counterpart, and this might increase the use of PLS in social science research. Falk and Miller (1992) classify these conditions into four groups: theoretical conditions, measurement conditions, distributional conditions, and practical conditions. Theoretical conditions refer to whether a strong theory about the phenomenon exists. Measurement conditions are related to the characteristics of the data and their relationships with both latent and manifest variables. In addition, Garson (2004) explains that "PLS is a predictive technique which can handle many independent variables, even when these display multi-collinearity." One very important distributional condition indicates when PLS can be a more appropriate technique: "data come from non-normal or unknown distributions" (Falk & Miller, 1992: 6). Finally, practical conditions deal with the design and limitations of the research in which SEM will be used.
Conclusion

The partial least squares (PLS) approach offers several advantages and allows researchers to incorporate more realistic assumptions into their studies. Based on a digital government study, this chapter provides some guidance about how to use PLS and present the results. Considering PLS when designing and conducting research is important not only because of its advantages as an SEM approach, but also because it works in situations that are more realistic in social science research settings (Falk & Miller, 1992: 5-6):

• "Hypotheses are derived from macro-level theory in which all relevant variables are not known."
• "Relationships between theoretical constructs and their manifestations are vague."
• "Relationships between constructs are conjectural."
• "Some of the manifest variables are categorical or they represent different levels of measurement."
• "Manifest variables have some degree of unreliability."
• "Residuals on manifest and latent variables are correlated."
• "Data come from non-normal or unknown distributions."
• "Non-experimental research designs are used."
• "A large number of manifest and latent variables are modeled."
• "Too many or too few cases are available."
These conditions are commonly encountered in digital government research and, as shown in this chapter, PLS could be an important alternative to other statistical techniques, including linear regression and covariance-based SEM.
Future Research Directions

There are several opportunities for future research in this area. These opportunities can be divided into two distinct themes. First, the use of structural equation modeling (SEM) in general and partial least squares (PLS) in particular in digital government research is still in its initial stages. It seems that they have the potential to generate new knowledge in this field, but more research is needed in order to assess the real contribution of these methodologies and accompanying statistical techniques. In this regard, there are opportunities for studies comparing the results of SEM analysis and multiple regression analysis. Studies looking at some statistical properties and limitations of PLS in social science research are also needed. Research dealing with non-linearity or heteroscedasticity represents examples of these more technical studies. In addition, there are also opportunities for future research on the substantive topic of this example: electronic government success. Due to the different conceptualizations of electronic government, few results are comparable. SEM and
PLS could help to develop reliable measurements for e-government and other relevant constructs such as organizational strategies, institutional arrangements, political orientation, e-government demand, and economic conditions. PLS provides the advantage of representing these constructs as either scales or indices, which are modeled as reflective and formative constructs, respectively. Finally, given the importance of indirect effects and the interrelationships among constructs, SEM and PLS can potentially generate more complete explanations of e-government success and other relevant social phenomena. Future research should explore and assess if this is in fact the case.
Acknowledgment

The author wants to thank Sharon S. Dawes, R. Karl Rethemeyer, Jon P. Gant, Richard H. Hall, Jane E. Fountain, and Luis F. Luna-Reyes for their valuable suggestions throughout the development of this study. The author is also thankful to Wynne Chin and Shobha Chengalur-Smith for their helpful guidance and suggestions when conducting the PLS analysis. Any mistakes or omissions are the sole responsibility of the author. This work was partially supported by the National Science Foundation under Grant No. 0131923. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
References

Barclay, D., Thompson, R., & Higgins, C. (1995). The partial least squares (PLS) approach to causal modeling: Personal computer adoption and use as an illustration. Technology Studies, 2(2), 285-309.

Barrett, K., & Greene, R. (2000). Powering up: How public managers can take control of information technology. Washington, DC: Congressional Quarterly Press.
Bentler, P. M. (1985). Theory and implementation of EQS: A structural equations program. Los Angeles, CA: BMDP Statistical Software.

Bollen, K. A. (1989). Structural equations with latent variables. New York: John Wiley & Sons.

Brinton, M. C., & Nee, V. (1998). The new institutionalism in sociology. Stanford, CA: Stanford University Press.

Chin, W. W. (1998). The partial least squares approach for structural equation modeling. In G. A. Marcoulides (Ed.), Modern methods for business research. Mahwah, NJ: Lawrence Erlbaum Associates.

Dawes, S. S. (1996). Interagency information sharing: Expected benefits, manageable risks. Journal of Policy Analysis and Management, 15(3), 377-394.

Dawes, S. S., & Pardo, T. A. (2002). Building collaborative digital government systems: Systemic constraints and effective practices. In W. J. McIver & A. K. Elmagarmid (Eds.), Advances in digital government: Technology, human factors, and policy (pp. 259-273). Norwell, MA: Kluwer Academic Publishers.

Falk, R. F., & Miller, N. B. (1992). A primer for soft modeling. Akron, OH: The University of Akron.

Fornell, C. (1982). A second generation of multivariate analysis: Methods (Vol. 1). New York: Praeger.

Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18, 39-50.

Fountain, J. E. (1995). Enacting technology: An institutional perspective. Cambridge, MA: John F. Kennedy School of Government, Harvard University.

Fountain, J. E. (2001). Building the virtual state: Information technology and institutional change. Washington, DC: Brookings Institution Press.
Using Partial Least Squares in Digital Government Research
Gant, D. B., Gant, J. P., & Johnson, C. L. (2002). State Web portals: Delivering and financing e-service. Arlington, VA: The PricewaterhouseCoopers Endowment for the Business of Government.

Gant, J. P. (2003). Information sharing, communications, and coordination in e-government collaborations. Albany, NY: Center for Technology in Government, University at Albany, SUNY.

Garson, G. D. (2000). Information systems, politics, and government: Leading theoretical perspectives. In G. D. Garson (Ed.), Handbook of public information systems. New York: Marcel Dekker.

Garson, G. D. (2004). Partial least squares regression (PLS). Retrieved December 8, 2006, from http://www2.chass.ncsu.edu/garson/PA765/pls.htm

Gefen, D., Straub, D. W., & Boudreau, M.-C. (2000). Structural equation modeling and regression: Guidelines for research practice. Communications of the AIS, 4, Article 7.

Gil-García, J. R. (2005a, September 1-4). Do citizens' expectations matter for e-government? Exploring the determinants of the functionality of state Web sites. Paper presented at the 2005 APSA Annual Meeting, Washington, DC.

Gil-García, J. R. (2005b). Enacting state websites: A mixed method study exploring e-government success in multi-organizational settings. Unpublished doctoral dissertation, University at Albany, State University of New York, Albany, NY.

Gil-García, J. R. (2005c). Exploring the success factors of state website functionality: An empirical investigation. Paper presented at the National Conference on Digital Government Research, Atlanta, GA.

Gil-García, J. R. (2006, January 4-7). Enacting state websites: A mixed method study exploring e-government success in multi-organizational settings. Paper presented at the 39th Hawaii International Conference on System Sciences (HICSS), Hawaii, USA.

Gil-García, J. R., & Pardo, T. A. (2005). E-government success factors: Mapping practical tools to theoretical foundations. Government Information Quarterly, 22(2), 187-216.

Hair, J. F., Jr., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate data analysis. Upper Saddle River, NJ: Prentice Hall.

Heeks, R. (2003). Success and failure rates of e-government in developing/transitional countries: Overview. Retrieved from www.egov4dev.org/sfoverview.htm

Heeks, R. (2005). Implementing and managing e-government: An international text. Thousand Oaks, CA: Sage Publications.

Ho, A. T.-K. (2002). Reinventing local governments and the e-government initiative. Public Administration Review, 62(4), 434-444.

Holden, S. H., Norris, D. F., & Fletcher, P. D. (2003). Electronic government at the local level: Progress to date and future issues. Public Performance and Management Review, 26(4), 325-344.

Holmes, D. (2001). e.Gov: E-business strategies for government. London: Nicholas Brealey Publishing.

Hulland, J. (1999). Use of partial least squares in strategic management research: A review of four recent studies. Strategic Management Journal, 20(2), 195-204.

Joreskog, K. G. (1978). Structural analysis of covariance and correlation matrices. Psychometrika, 43, 443-477.

Kraemer, K. L., King, J. L., Dunkle, D. E., & Lane, J. P. (1989). Managing information systems: Change and control in organizational computing. San Francisco, CA: Jossey-Bass.

La Porte, T. M., Demchak, C. C., & DeJong, M. (2002). Democracy and bureaucracy in the age of the web: Empirical findings and theoretical speculations. Administration and Society, 34(4), 411-446.
Landsbergen, D., Jr., & Wolken, G., Jr. (2001). Realizing the promise: Government information systems and the fourth generation of information technology. Public Administration Review, 61(2), 206-220.

Laudon, K. C. (1985). Environmental and institutional models of system development: A national criminal history system. Communications of the ACM, 28(7), 728-740.

Moon, M. J. (2002). The evolution of e-government among municipalities: Rhetoric or reality? Public Administration Review, 62(4), 424-433.

O'Looney, J. A. (2002). Wiring governments: Challenges and possibilities for public managers. Westport, CT: Quorum Books.

Powell, W. W., & DiMaggio, P. J. (1991). The new institutionalism in organizational analysis. Chicago, IL: University of Chicago Press.

Rocheleau, B. (1999). Building successful public management information systems: Critical stages and success factors. Paper presented at the American Society for Public Administration's 60th National Conference, Orlando, FL.

Rocheleau, B. (2003). Politics, accountability, and governmental information systems. In G. D. Garson (Ed.), Public information technology: Policy and management issues (pp. 20-52). Hershey, PA: Idea Group Publishing.

Scott, W. R. (2001). Institutions and organizations (2nd ed.). Thousand Oaks, CA: Sage.

West, D. M. (2000). State and federal e-government in the United States, 2000. Providence, RI: Brown University.

West, D. M. (2001). State and federal e-government in the United States, 2001. Providence, RI: Brown University.

West, D. M. (2002). State and federal e-government in the United States, 2002. Providence, RI: Brown University.

West, D. M. (2003). State and federal e-government in the United States, 2003. Providence, RI: Brown University.
West, D. M. (2004). State and federal e-government in the united states, 2004. Providence, RI: Brown University. West, D. M. (2005). Digital government. Technology and public sector performance. Princeton, NJ: Princeton University Press. Zhang, J., Cresswell, A. M., & Thompson, F. (2002). Participant’s expectations and the success of knowledge networking in the public sector. Paper presented at the AMCIS Conference, Texas.
Further Reading

Bajjaly, S. T. (1999). Managing emerging information systems in the public sector. Public Performance & Management Review, 23(1), 40-47.

Baroudi, J., & Orlikowski, W. (1989). The problem of statistical power in MIS research. MIS Quarterly, 13(1), 87-106.

Chengalur-Smith, I., & Duchessi, P. (1999). The initiation and adoption of client-server technology in organizations. Information & Management, 35, 77-88.

Chin, W. W., & Newsted, P. R. (1999). Structural equation modeling analysis with small samples using partial least squares. In R. H. Hoyle (Ed.), Statistical strategies for small sample research. Thousand Oaks, CA: Sage Publications.

Chin, W. W., & Todd, P. A. (1995). On the use, usefulness, and ease of use of structural equation modeling in MIS research: A note of caution. MIS Quarterly, 19(2), 237-246.

Chin, W. W., Marcolin, B. L., & Newsted, P. R. (2003). A partial least squares latent variable modeling approach for measuring interaction effects: Results from a Monte Carlo simulation study and an electronic-mail emotion/adoption study. Information Systems Research, 14(2), 189-217.

Cresswell, A. M. (2004). Return on investment in information technology: A guide for managers. Albany, NY: Center for Technology in Government, University at Albany, SUNY.
Cresswell, A. M., & Pardo, T. A. (2001). Implications of legal and organizational issues for urban digital government development. Government Information Quarterly, 18, 269-278.

Cresswell, A. M., Pardo, T. A., Canestraro, D. S., Dawes, S. S., & Juraga, D. (2005). Sharing justice information: A capability assessment toolkit. Albany, NY: Center for Technology in Government, University at Albany, SUNY.

Cushing, J., & Pardo, T. A. (2005). Research in the digital government realm. IEEE Computer, 38(12), 26-32.

Dawes, S. S., Gregg, V., & Agouris, P. (2004). Digital government research: Investigations at the crossroads of social and information science. Social Science Computer Review, 22(1), 5-10.

Dawes, S. S., Pardo, T., & DiCaterino, A. (1999). Crossing the threshold: Practical foundations for government services on the World Wide Web. Journal of the American Society for Information Science, 50(4), 346-353.

Dawes, S. S., Pardo, T. A., Simon, S., Cresswell, A. M., LaVigne, M., Andersen, D., et al. (2004). Making smart IT choices: Understanding value and risk in government IT investments. Albany, NY: Center for Technology in Government.

DeLone, W., & McLean, E. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60-95.

DeLone, W., & McLean, E. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9-30.

Esteves, J., Pastor, J. A., & Casanovas, J. (2002). Using the partial least squares (PLS) method to establish critical success factors interdependence in ERP implementation projects. Barcelona: Universidad Politécnica de Catalunya.

Flynn, D., & Arce, E. (1997). A CASE tool to support critical success factors analysis in IT planning and requirements determination. Information and Software Technology, 39, 311-321.

Gil-Garcia, J. R., & Helbig, N. (2006). Exploring e-government benefits and success factors. In A.-V. Anttiroiko & M. Malkia (Eds.), Encyclopedia of digital government. Hershey, PA: Idea Group Inc.

Gil-Garcia, J. R., & Luna-Reyes, L. F. (2006). Integrating conceptual approaches to e-government. In M. Khosrow-Pour (Ed.), Encyclopedia of e-commerce, e-government and mobile commerce. Hershey, PA: Idea Group Inc.

Jenster, P. V. (1987). Firm performance and monitoring of critical success factors in different strategic contexts. Journal of Management Information Systems, 3(3), 17-33.

Lohmöller, J.-B. (1989). Latent variable path modeling with partial least squares. Heidelberg: Physica-Verlag.

Luna-Reyes, L. F., Zhang, J., Gil-Garcia, J. R., & Cresswell, A. M. (2005). Information systems development as emergent socio-technical change: A practice approach. European Journal of Information Systems, 14(1), 93-105.

Maruyama, G. M. (1998). Basics of structural equation modeling. Thousand Oaks, CA: Sage Publications.

Mooney, C. Z., & Duval, R. D. (1993). Bootstrapping: A nonparametric approach to statistical inference. Newbury Park, CA: Sage Publications.

Pardo, T. A., Cresswell, A. M., Thompson, F., & Zhang, J. (2006). Knowledge sharing in cross-boundary information system development in the public sector. Information Technology and Management, 7(4), 293-313.

Rocheleau, B. (2000). Prescriptions for public-sector information management: A review, analysis, and critique. American Review of Public Administration, 30(4), 414-435.

Seddon, P. B. (1997). A re-specification and extension of the DeLone and McLean model of IS success. Information Systems Research, 8(3), 240-253.
Seddon, P. B., Staples, S., Patnayakuni, R., & Bowtell, M. (1999). Dimensions of information system success. Communications of the AIS, 2(20).

Umble, E. J., Haft, R. R., & Umble, M. M. (2003). Enterprise resource planning: Implementation procedures and critical success factors. European Journal of Operational Research, 146, 241-257.
Terms and Definitions [5]

Construct: A variable that cannot be observed or measured directly but that has theoretical relevance. In a SEM framework, multiple indicators are used to form or reflect a construct, or latent variable.

Covariance-Based SEM: SEM techniques that attempt to minimize the differences between the observed and predicted covariance matrices.

Direct Effect: In a causal model, the direct impact of one variable on another, represented by an arrow from the independent to the dependent variable.

Indicator: A variable that can be directly observed and measured; in a SEM framework, it is used to form or reflect a construct or latent variable.

Indirect Effect: In a causal model, the impact of one variable on another through the impact of the former on a third variable, called the mediating variable. In complex models, several indirect-effect paths may exist between two variables.

Measurement Error: This type of error derives from the quality of the indicators used to operationalize a variable. SEM techniques allow researchers to model measurement error, while other statistical techniques, such as linear regression, assume no measurement error.

Partial Least Squares (PLS): A SEM technique that attempts to minimize the differences between the observed and predicted values of endogenous constructs (dependent variables).

Structural Equation Modeling (SEM): A statistical technique that allows simultaneously testing the measurement and structural models. Two of the main types of SEM are covariance-based and variance-based [6].

Endnotes

1. Structural equation modeling (SEM) is capable of dealing with measurement error, but it is not the only technique that can; there are other ways to deal with these problems. See Carroll, R. J., Ruppert, D., & Stefanski, L. A. (2006). Measurement error in nonlinear models: A modern perspective. Boca Raton, FL: Chapman & Hall/CRC.
2. For an overview of PLS, see www2.chass.ncsu.edu/garson/PA765/pls.htm
3. The objective of this chapter is not to compare PLS with other statistical techniques. For one example of such comparisons, see Gefen, D., Straub, D. W., & Boudreau, M.-C. (2000). Structural equation modeling and regression: Guidelines for research practice. Communications of the AIS, 4, 7.
4. For an overview of covariance-based structural equation modeling, see www2.chass.ncsu.edu/garson/PA765/structur.htm
5. The definitions of these terms are those of the author and have been useful to convey the characteristics of SEM techniques and PLS to audiences without extensive statistical knowledge. For more formal definitions, readers should consult the books and journal articles cited in this chapter.
6. For an overview of the topic, see Kline, R. B. (1998). Principles and practice of structural equation modeling. New York: Guilford Press.
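The direct/indirect-effect vocabulary above can be made concrete with a tiny numeric sketch. This is an illustration added here, not code from the chapter; the variable names and the noiseless toy data are hypothetical, and real PLS estimation is far more involved than the single-predictor OLS slopes used below.

```python
# Illustrative sketch of direct vs. indirect effects in a simple path
# model X -> M -> Y (M is the mediating variable). Names and data are
# hypothetical; this is not the chapter's method, only the terminology.

def slope(xs, ys):
    """OLS slope of ys regressed on xs (single predictor, with intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

# Noiseless toy data: M = 2*X and Y = 3*M, so X affects Y only through M.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
m = [2.0 * v for v in x]
y = [3.0 * v for v in m]

a = slope(x, m)        # path coefficient X -> M
b = slope(m, y)        # path coefficient M -> Y
indirect = a * b       # indirect effect of X on Y via the mediator M
total = slope(x, y)    # total effect of X on Y

print(a, b, indirect, total)  # with no direct path, total equals indirect
```

In this fully mediated toy model the indirect effect (the product of the two path coefficients, 2 × 3 = 6) accounts for the entire total effect; a nonzero direct path would add to it.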
Section II
Privacy, Access, Ethics, and Theory
Internet technology has the potential to bring about democratic transparency in the way government conducts its business. Transparency in turn is thought to promote governmental responsiveness, which in turn is thought to increase public demand for yet more online access in a self-reinforcing cycle. At the same time, it is believed that public agencies that fail to embrace online openness will come to be seen as inefficient, unresponsive, and unworthy of being rewarded in competition for shrinking public-sector budgets. That is, the argument is made that the Internet will promote transparency, and transparency will create a revolution in governmental efficiency and responsiveness. Even a transparent government, however, must support individual privacy rights. As noted in a 2006 report from the Security and Privacy Committee of the National Association of State Chief Information Officers (NASCIO), privacy is a particularly daunting challenge for government because citizens expect openness, yet at the same time, government must foster citizen trust by ensuring the privacy of information about individual citizens. Privacy issues are pervasive, arising in arenas ranging from e-government services to IT governance to enterprise architecture. Privacy is an issue because people have good reason to believe that data collected on them for one purpose may be appropriated and used for altogether different purposes than the ones about which they were informed (if at all) at the time the data were collected. Some 49 million Americans (22%) reported in a 2006 national Harris poll that they had been informed in the previous three years that their personal information had been lost, stolen, or compromised. The Privacy Rights Clearinghouse reported that, as of 2006, some 100 million Americans had had their personal information compromised, over one third of the U.S. population. Some of the worst instances of compromise of personal data have occurred in the governmental sector.
Beyond this there is the routine sharing of data with other government agencies without permission from the citizen who volunteered personal data, in violation of long-standing fair data-use principles. In theoretical terms, some seeking to understand these issues have turned to structuration theory, which is a variant of institutional theory growing out of the work of Anthony Giddens. Giddens held that individual actions both shape and are constrained by social structures. This two-way cause-effect
relationship Giddens called “the duality of structure.” In studying e-democracy, structuration theory opposes both the technological-determinist view that a technological imperative will override human will and the opposite human-design view that information technology is infinitely malleable to human intentions. Rather, it predicts that human behavior will shape information technology, but that technology will reshape human behavior as well. This prediction is very similar to that made about information technology by institutional theorists such as Jane Fountain in her theory of technology enactment: existing social structures combine with the spirit of a technology to generate the structural potential of a given technology or innovation. In matters of access, privacy, and ethics, technology does not preclude governments doing the right thing, nor does it make ethical outcomes inevitable, for better or for worse. G. David Garson, September 2007
Chapter XXIV
Privacy Issues in Public Web Sites Eleutherios A. Papathanassiou Athens University of Economics and Business, Greece Xenia J. Mamakou Athens University of Economics and Business, Greece
Introduction

The advent of the Internet has altered the way individuals find information and has changed how they engage with many organizations, such as government, health care, and commercial enterprises. The emergence of the World Wide Web has also resulted in a significant increase in the electronic collection and processing of individuals’ information, which has led to consumer concerns about privacy. Many studies have reported customers’ worries about possible misuse of their personal data during transactions on the Internet (Earp & Baumer, 2003; Furnell & Karweni, 1999), and research has also measured individuals’ concerns about organizational information privacy practices (Smith, Milberg & Burke, 1996). Information privacy, which “concerns an individual’s control over the processing, that is the acquisition, disclosure, and use, of personal information” (Kang, 1998), has been called one of the most important “ethical issues of the information age” (Mason, 1986).

Background

Since 1973, a number of guidelines, regulations, acts, and conventions have been introduced, in both the EU and the USA, in order to form a framework to be followed by organizations that collect personal information, both offline and online. The “HEW report” of 1973 advocated a set of “fair information practices” addressing data collection, usage, and security, as well as the individual’s rights of data access and correction (HEW, 1973). The report, created by the Secretary’s Advisory Committee on Automated Personal Data Systems within the Department of Health, Education, and Welfare, concluded that an individual’s personal privacy is directly affected by the kind of disclosure and use made of identifiable information about him in a record. In 1980, the Organization for Economic Cooperation and Development (OECD) adopted the “Guidelines on the Protection of Privacy and
Transborder Flows of Personal Data.” The OECD Guidelines included eight privacy principles concerning collection, processing, quality, and security issues (OECD, 1980). More specifically, these guidelines, which supported the three principles binding the member states of the OECD (pluralistic democracy, respect for human rights, and open market economies), comprised the collection limitation principle, the data quality principle, the purpose specification principle, the use limitation principle, the security safeguards principle, the openness principle, the individual participation principle, and the accountability principle. One year later, in 1981, the Council of Europe elaborated the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data (Council of Europe, 1981) in order to reconcile two fundamental rights, the right to private life and the right to information (Council of Europe, 1950), and to ensure the same level of protection of these rights beyond national borders by legally binding the member states that ratify it. The convention designates a number of principles for the fair and legal collection and use of data. More specifically, it declares that personal data can only be legally collected and processed and can only be used for a specific and known reason. It states that the data must be accurate and secure and must be kept safely only for the necessary time period. The convention also establishes the right of an individual to know whether his or her personal information is stored, as well as the identity and the physical address of the person or organization responsible for the data processing. Nine years later, in 1990, the United Nations General Assembly adopted the Guidelines Concerning Computerized Personal Data Files (United Nations, 1990), and in June 1995, U.S. Secretary of Commerce Ronald H. Brown, president of the White House Information Infrastructure Task Force (IITF), announced a report on privacy and the national information infrastructure. The report, prepared by the IITF Working Group on Privacy, explains the principles for providing and using personal information (IITF, 1995).
On 24 October 1995, the European Parliament and the Council adopted Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data. The goal of this directive was the protection of the fundamental freedoms and rights of natural persons, and in particular their right to privacy with respect to the processing of personal data, by laying down guidelines determining when this processing is lawful. These guidelines relate to the quality of the data, the legitimacy of the data processing, the information to be given to the data subject, the data subject’s right of access to data, the data subject’s right to object to the processing of data, the confidentiality and security of the data processing, and the notification of the processing to a supervisory authority (Directive 95/46/EC, 1995). In 1998, the Federal Trade Commission, an independent agency whose goal is to protect consumers from unfair and deceptive commercial practices, published a report entitled “Privacy Online: A Report to Congress.” This report summarized the Fair Information Practice Principles, the five core principles of privacy protection common to all the previous documents: notice/awareness, choice/consent, access/participation, integrity/security, and enforcement/redress (FTC, 1998). On 12 July 2002, the European Parliament and the Council adopted Directive 2002/58/EC concerning the processing of personal data and the protection of privacy in the electronic communications sector, known as the Directive on privacy and electronic communications. This directive was adopted as a new legislative framework designed to regulate the electronic communications sector.
It contains provisions on a number of topics, such as the sending of unsolicited e-mail, the use of cookies, the retention of connection data by member states for police surveillance purposes, and the inclusion of personal data in public directories (Directive 2002/58/EC, 2002). Finally, the Online Privacy Protection Act of 2003, which was presented to the U.S. Congress on January 7, 2003, prohibits the collection, use, or disclosure of personal information without
notification from the Web site, with regard to the identity of the operator, what personal information is collected by the operator, how the operator uses such information, and what information may be shared with other organizations (OPPA, 2003).
Public Sector Legal and Ethical Web Site Guidelines

Taking into consideration all these regulations, a list of guidelines has been created that should be implemented by all public Web sites that collect PI, in order for them to be considered legitimate by their visitors in both legal and ethical terms (Papathanassiou, Mamakou & Kardaras, 2006a). These Legal and Ethical Web site Guidelines include the following principles:

1. Collection of personal data principle: Personal information must be collected only with the consent of the data subject. Moreover, the data subject must be informed through the Web site about the kind of personal data that is collected and the ways used for the data collection.

2. Purpose of collection principle: The personal information must be collected only for concrete, specific, and legal purposes, and any further processing of this data must comply with these purposes.

3. Data accuracy principle: The personal data must be accurate, current, and in accordance with the purpose of its collection.

4. Consent principle: The PI can be processed and used only after the data subject’s consent. The Web sites must provide choices about whether and how the personal data can be used for purposes other than the initial ones.

5. Awareness principle: The data subject must have the right to know whether his or her personal data has been collected and processed and whether any PI has been sold, disclosed, or transferred to third parties.

6. Access-modification principle: The data subject must have the right to access the personal information about him or her that has been collected and to correct or modify the file that contains this data.

7. Security principle: The public organization that collects personal information must take reasonable steps to safeguard the integrity and security of the personal data, both during the input and transfer of the data and during its storage.

8. Notice principle: The data subject must be informed through the Web site about the identity and the physical address of the public organization that collects personal data, as well as about ways of communicating about privacy issues.

Table 1. Introduction of the proposed guidelines

Table 1 shows the introduction of the proposed “Legal and Ethical Web site Guidelines” by each privacy principle and regulation discussed in the Background (Papathanassiou, Mamakou & Kardaras, 2006b).

Privacy Policy

All public-sector Web sites that collect personal information need to post a privacy policy both on the Web site’s home page and on the pages that collect PI, in order to present all the necessary information in a simple, sufficient, and comprehensible way. The privacy policy, which is a form of communication between the organization and its Web site’s visitors, describes the way in which the organization uses personal information, and a uniform approach among public Web sites prevents users’ difficulties in finding information concerning the above guidelines. Privacy policies reflect the ethical views of the organization and indicate its perceived trustworthiness to those who make transactions with it (Earp, Anton & Jarvinen, 2002). More specifically, according to the Legal and Ethical Web site Guidelines, the privacy policy of a public Web site should include the following information (Papathanassiou, Mamakou & Kardaras, 2006b):

• Type of the data collected and way of collection: Personal data can be collected via a public Web site both actively and passively. Active collection is made through a form completed by the user, whereas passive collection takes place without the user’s participation, through the use of technology such as cookies.

• Use of the personal information: The privacy policy must clearly state all the possible ways in which the users’ personal data could be used. This includes both internal and external use of personal data, such as marketing techniques, transfer of the data to third parties, and so forth.

• Ways of the user’s consent: The privacy policy must specify the form of the user’s consent regarding the use and disclosure of the user’s personal information. Consent can take the form of choice (opt-in) or the form of withdrawal (opt-out). The opt-in form requires the user’s active consent before personal information is shared, whereas the opt-out form assumes that the user accepts the disclosure of PI by default unless he or she states an objection.

• Ways of notice: The privacy policy must inform the user about the ways he or she can communicate with the public organization, in order to learn whether his or her personal data have been collected, processed, or transferred to third parties.

• Access and modification of the personal information: The privacy policy must specify the ways in which the user can access his or her data in order to modify, renew, or delete them. In case direct access to this information is not possible, there must be a statement about how the user can communicate with the organization to have his or her personal information modified.

• Safeguarding the file of the users’ personal information: The privacy policy must state the ways of safeguarding the data transmitted online as well as the file that contains the users’ personal information.

• Identity and physical address: Through the privacy policy, the user must be informed about the identity and the physical address of the public organization that collects personal data, as well as about the ways of communicating about privacy issues.
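Several of these policy elements map directly onto enforcement logic in software. The following is a minimal, hypothetical sketch (the class and method names are this rewrite’s illustration, not part of the chapter or of any real system) of how the consent and purpose-of-collection ideas could be enforced at the point where personal data is stored and used:

```python
# Hypothetical sketch of the consent and purpose-limitation principles.
# All names (PersonalDataStore, ConsentError, ...) are illustrative only.

class ConsentError(Exception):
    pass

class PersonalDataStore:
    def __init__(self):
        self._records = {}  # user_id -> (data, declared purposes)

    def collect(self, user_id, data, purposes, opt_in):
        # Consent principle: require an explicit opt-in, never a default.
        if not opt_in:
            raise ConsentError("no explicit consent given")
        # Purpose principle: purposes must be declared at collection time.
        self._records[user_id] = (data, frozenset(purposes))

    def use(self, user_id, purpose):
        data, purposes = self._records[user_id]
        # Use limitation: processing must match a declared purpose.
        if purpose not in purposes:
            raise ConsentError(f"purpose {purpose!r} was not consented to")
        return data

store = PersonalDataStore()
store.collect("u1", {"email": "a@example.org"}, {"service-delivery"}, opt_in=True)
print(store.use("u1", "service-delivery"))  # allowed: declared purpose
try:
    store.use("u1", "marketing")            # undeclared purpose is rejected
except ConsentError as e:
    print("rejected:", e)
```

A real system would of course add the remaining principles (accuracy, awareness logs, access and modification interfaces, storage security), but the pattern of checking consent and declared purpose before every use is the core of the guidelines above.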
Future Trends

For the last several years, many public organizations have been trying to transform themselves into more efficient and effective organizations by adopting new technologies, especially the Internet (Lee & Rao, 2005). Like e-commerce companies, public organizations use Internet technology to collect individuals’ personal information in order to fulfil their requests for certain products and services, to send them information, to customize the content of the Web site, and so forth. Although personal data was collected by traditional public enterprises even prior to the World Wide Web, the emergence of the Internet has increased the flow of personal data and, almost simultaneously, the level of individuals’ concern about the protection of their personal information. Recent events have created the need to implement new laws and acts to deal with the new conditions. In the United States, for example, the September 11, 2001, terrorist attacks prompted congressional action on many fronts. The result was the USA Patriot Act, signed into law by President George W. Bush on October 26, 2001. The act’s focus on improving data collection and information sharing could contribute to the establishment of technical standards that may facilitate the implementation of public Web site initiatives and help ensure data integrity in transactions between government and citizens. However, the act could have a negative effect as well, since it expands the types of data that can be collected by law enforcement officials and increases concern about possible abuse of that data (Smith, Seifert, McLoughlin et al., 2002).
Likewise, EU Directive 2002/58/EC permits member states to adopt legislative measures to restrict the scope of the rights and obligations provided for the confidentiality of communications, when such a restriction constitutes a necessary measure to safeguard national security, defense, public security and the prevention, investigation, detection and prosecution of criminal offences or of unauthorized use of the electronic communication system (Directive 2002/58/EC, 2002).
Since individuals’ dependency on the Internet for government information and services, especially those related to national emergencies, has greatly increased (Horrigan & Rainie, 2002), it is important that all public organizations follow a common framework regarding the collection and use of users’ PI. If public organizations want to prevail in the online world, they need to comply with the guidelines and to convince individuals that they care about the protection of their privacy.
Conclusion

The dissemination of the Internet has created new opportunities for consumers, who now have access to a wide range of products, services, and information much faster than in the past. Both the private and public sectors use Internet technologies to offer their services without time limits, making transactions much easier for everyone. At the same time, the collection and use of personal information by Web sites worries individuals, who show high concern about the possible misuse of their personal data. Thus, all public organizations must comply with the Legal and Ethical Web site Guidelines and must include all the necessary information in their Web site’s privacy policy.
Future Research Directions

In order to examine whether public-sector Web sites comply with the Legal and Ethical Web site Guidelines presented in this chapter, dedicated research must take place. This research should test a sample of public Web sites to find the degree to which they implement each of the Guidelines. A similar study has already been carried out (Papathanassiou et al., 2006a) on a sample of commercial Web sites, focusing on the number of Web sites that collect personal information from their visitors, the type of PI collected, the number of Web sites that post a privacy policy, and the degree to which they comply with the Legal and Ethical
Web site Guidelines. The results showed that the purpose of collection principle and the data accuracy principle were the most widely implemented guidelines, leaving the security principle and the awareness principle at the bottom level of implementation. Thus, it would be interesting to check whether research focusing on public Web sites would give equivalent results. A further step in researching public Web sites’ compliance with the guidelines would be their classification into different categories (e.g., public schools, libraries, the health sector, etc.), followed by an analysis of the degree to which each public sector implements the Legal and Ethical Web site Guidelines. Approaching the research from a different angle, it would also be interesting to test Web site visitors’ point of view regarding the Legal and Ethical Web site Guidelines. Surfers could state their opinion on the level of importance of each guideline, in order to see whether they favour or reject a Web site according to the use it makes of their personal data. This research could be conducted using both questionnaires and monitoring of users’ activities on a number of sample public Web sites. The findings of the two different studies proposed above could be correlated to find whether the guidelines that Web site visitors consider most important are the same ones most implemented by the public Web sites. The results of such a study would be very beneficial and could help improve the way Web sites deal with personal information.
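As a starting point for the compliance survey proposed above, one could automate the simplest check: whether a page links to a privacy policy at all. The following is a hypothetical illustration using only the Python standard library; the keyword list and sample HTML are assumptions of this sketch, and a real survey would fetch live pages and assess far more than link text.

```python
# Hypothetical sketch: detect whether a page's HTML links to a privacy
# policy. The KEYWORDS list and sample markup are illustrative only.
from html.parser import HTMLParser

KEYWORDS = ("privacy policy", "privacy statement", "data protection")

class PrivacyLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self._in_a = False
        self.found = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._in_a = True

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_a = False

    def handle_data(self, data):
        # Flag the page if any anchor text mentions a privacy keyword.
        if self._in_a and any(k in data.lower() for k in KEYWORDS):
            self.found = True

def has_privacy_policy_link(html: str) -> bool:
    finder = PrivacyLinkFinder()
    finder.feed(html)
    return finder.found

sample = '<html><body><a href="/privacy">Privacy Policy</a></body></html>'
print(has_privacy_policy_link(sample))                  # True
print(has_privacy_policy_link("<p>No links here</p>"))  # False
```

Running such a check over a sampled list of public Web site URLs would yield the presence-of-policy counts the proposed study needs; the degree of compliance with each individual guideline would still require manual content analysis of the policies found.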
Privacy Issues in Public Web Sites

REFERENCES

Council of Europe. (1950). European Convention on Human Rights.

Council of Europe. (1981). Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data.

Directive 95/46/EC. (1995). Directive on the protection of individuals with regard to the processing of personal data and on the free movement of such data. Directive 95/46/EC of the European Parliament and of the Council, October 24, 1995. Official Journal of the European Communities, L 281/31-95.

Directive 2002/58/EC. (2002). Directive on privacy and electronic communications. Directive 2002/58/EC of the European Parliament and of the Council, July 12, 2002. Official Journal of the European Communities, L 201/37-47.

Earp, J. B., Anton, A. I., & Jarvinen, O. (2002). A social, technical and legal framework for privacy management and policies. In Proceedings of the 10th Anniversary IEEE Joint Requirements Engineering Conference (RE '02), Essen, Germany.

Earp, J. B., & Baumer, D. L. (2003). Innovative web use to learn about consumer behavior and online privacy. Communications of the ACM, 46(12), 81-83.

FTC. (1998). Privacy online: A report to Congress. Federal Trade Commission.

Furnell, S. M., & Karweni, T. (1999). Security implications of electronic commerce: A survey of consumers and businesses. Internet Research, 9(5), 372-382.

HEW. (1973). Records, computers and the rights of citizens. Report of the Secretary's Advisory Committee on Automated Personal Data Systems, United States Department of Health, Education and Welfare.

Horrigan, J. B., & Rainie, L. (2002). Counting on the internet. Pew Internet & American Life Project.

IITF. (1995). Privacy and the national information infrastructure: Principles for providing and using personal information. Information Infrastructure Task Force, Information Policy Committee, Privacy Working Group.

Kang, J. (1998). Information privacy in cyberspace transactions. Stanford Law Review, 50, 1193-1294.

Lee, J., & Rao, H. R. (2005). Risk of terrorism, trust in government, and e-government services: An exploratory study of citizens' intention to use e-government services in a turbulent environment. YCISS Working Paper No. 30.

Mason, R. O. (1986). Four ethical issues of the information age. MIS Quarterly, 10(1), 4-12.

OECD. (1980). Guidelines on the protection of privacy and transborder flows of personal data. Organisation for Economic Co-operation and Development.

OECD. (2003). Privacy online: OECD guidance on policy and practice. Organisation for Economic Co-operation and Development.

OPPA. (2003). Online Privacy Protection Act of 2003. 108th Congress of the United States of America, 1st Session, H.R. 69.

Papathanassiou, E. A., Mamakou, X. J., & Kardaras, D. K. (2006a). Privacy online: Research and recommendations. In Proceedings of the 5th WSEAS International Conference on Telecommunications and Informatics (pp. 309-314). Istanbul, Turkey.

Papathanassiou, E. A., Mamakou, X. J., & Kardaras, D. K. (2006b). Legal and ethical Web site guidelines. WSEAS Transactions on Computers, 5(7), 1502-1509.

Smith, H. J., Milberg, S. J., & Burke, S. J. (1996). Information privacy: Measuring individuals' concerns about organizational practices. MIS Quarterly, 20(2), 167-195.

Smith, M. S., Seifert, J. W., McLoughlin, G. J., & Moteff, J. D. (2002). The Internet and the USA PATRIOT Act: Potential implications for electronic privacy, security, commerce, and government. Congressional Research Service, The Library of Congress.

United Nations. (1990). Guidelines concerning computerized personal data files. Adopted by the General Assembly.

FURTHER READING

Ackerman, M., & Cranor, L. (1999). Privacy critics: UI components to safeguard users' privacy. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '99), short papers (Vol. 2, pp. 258-259).

Anton, A., Earp, J., & Reese, A. (2001). Strategies for developing policies and requirements for secure electronic commerce systems. In A. Ghosh (Ed.), E-commerce security and privacy (pp. 29-46). Kluwer Academic Publishers.

Anton, A., Earp, J., & Reese, A. (2002). Analyzing Web site privacy requirements using a privacy goal taxonomy. In Proceedings of the IEEE Joint International Requirements Engineering Conference 2002 (pp. 605-612).

Benessi, P. (1999). TRUSTe: An online privacy seal program. Communications of the ACM, 42(2), 56-59.

Budnitz, M. (1998). Privacy protection for consumer transactions in electronic commerce: Why self-regulation is inadequate. South Carolina Law Review, 49, 847.

Carnevale, D., & Wechsler, B. (1992). Trust in the public sector: Individual and organizational determinants. Administration & Society, 23, 471.

Cheung, C., & Lee, M. (2001). Trust in internet shopping: Instrument development and validation through classical and modern approaches. Journal of Global Information Management, 9, 23.

Cranor, L., & Reagle, J. (1998). Designing a social protocol: Lessons learned from the Platform for Privacy Preferences project. In J. K. MacKie-Mason & D. Waterman (Eds.), Telephony, the internet, and the media. Mahwah, NJ: Lawrence Erlbaum Associates.

Cranor, L., Reagle, J., & Ackerman, M. (1999). Beyond concern: Understanding net users' attitudes about online privacy. AT&T Labs-Research Technical Report TR99.4.3.

Culnan, M. (1993). "How did they get my name?" An exploratory investigation of consumer attitudes toward secondary information use. MIS Quarterly, 17, 341-364.

Culnan, M. (2000). Protecting privacy online: Is self-regulation working? Journal of Public Policy & Marketing, 19, 20.

Culnan, M., & Armstrong, P. (1999). Information privacy concerns, procedural fairness, and impersonal trust: An empirical investigation. Organization Science, 10(1), 104.

Goodwin, C. (1991). Privacy: Recognition of a consumer right. Journal of Public Policy & Marketing, 10, 149.

Hine, Ch., & Eve, J. (1998). Privacy in the marketplace. The Information Society, 14(4), 253-262.

Hoffman, D., Novak, T., & Peralta, M. (1999). Building consumer trust online. Communications of the ACM, 42, 80.

Kelvin, P. (1973). A social-psychological examination of privacy. British Journal of Social and Clinical Psychology, 12, 248.

Laudon, K. (1995). Ethical concepts and information technology. Communications of the ACM, 38(12), 33-39.

McArthur, R. (2001). Reasonable expectations of privacy. Ethics and Information Technology, 3, 123-128.

Milberg, S., Burke, S., Smith, J., & Kallman, E. (1995). Values, personal information privacy and regulatory approaches. Communications of the ACM, 38(12), 65-74.

Parker, D. (1968). Rules of ethics in information processing. Communications of the ACM, 11(3), 198-201.

Raab, Ch., & Bennett, C. (1998). The distribution of privacy risks: Who needs protection? The Information Society, 14(4), 253-262.

Reagle, J., & Cranor, L. (1999). The Platform for Privacy Preferences. Communications of the ACM, 42(2), 48-55.

Sewell, G., & Barker, J. (2001). Neither good, nor bad, but dangerous: Surveillance as an ethical paradox. Ethics and Information Technology, 3, 183-196.

Thompson, P. (2001). Privacy, secrecy and security. Ethics and Information Technology, 3, 13-19.

Volkman, R. (2003). Privacy as life, liberty, property. Ethics and Information Technology, 5, 199-210.

Volokh, E. (2000). Personalization and privacy. Communications of the ACM, 43(8), 84-88.

Wang, H., Lee, M., & Wang, Ch. (1998). Consumer privacy concerns about internet marketing. Communications of the ACM, 41(3), 63-70.

Wolinsky, C., & Sylvester, J. (1992). Privacy in the telecommunications age. Communications of the ACM, 35(2), 23-26.
TERMS AND DEFINITIONS

Data Collection: The process of gathering information about a user.

Data Processing: Any process that converts data into information. The processing is usually assumed to be automated and running on an information system.

Data Subject: A person who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his or her physical, physiological, mental, economic, cultural or social identity.

Data Subject's Consent: Any freely given, specific and informed indication of the data subject's wishes by which the data subject signifies his or her agreement to personal data relating to him or her being processed.

Personal Information: Any information relating to an identified or identifiable natural person. Such information includes home address and telephone number, date of birth, social insurance number,
age, marital and financial status, race, national or ethnic origin, and religion.

Privacy: The claim of individuals, groups or institutions to determine for themselves when, how and to what extent information about them is communicated to others.
Privacy Policy: A form of communication between the organization and its Web site's visitors regarding the way in which the organization uses personal information.

User: Any natural person using a publicly available electronic communication service, for private or business purposes, without necessarily having subscribed to this service.
Chapter XXV
A Framework for Accessible and Usable Web Applications

Lourdes Moreno, Carlos III University of Madrid, Spain
Elena Castro, Carlos III University of Madrid, Spain
Dolores Cuadra, Carlos III University of Madrid, Spain
Paloma Martinez, Carlos III University of Madrid, Spain
INTRODUCTION

The growth of the Internet makes its use feasible for an increasing number of people around the world. For this reason, several approaches must be taken in order to achieve universal access for all kinds of users, independently of their capabilities. Nowadays it is difficult for disabled people to use the Web in the same way as non-disabled people, even though the use of this technology is a right for everybody, all the more so in the public administration scope, in which many services must be available to users, and in a correct way.
Universal access may be obtained by integrating the usability and accessibility concepts into the software engineering discipline. This integration requires a user-centered design (UCD) (ISO 13407, 1999) that rests on inclusive design (Newell & Gregor, 2000). These design methodologies give every user, whether or not they have disabilities, the possibility of participating in all phases of Web application development. These techniques imply that accessibility is taken into account from the first development phases, so that disabled people will be able to access each and
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
every information technology. In addition, accessible application design does not only benefit disabled people but also all users and developers, because by following accessibility guidelines, contributing to scalability, and using standards that ease application growth and evolution (Moreno et al., 2005), we obtain robust applications that are easily reusable. Traditional methodologies in Web application design must be adapted to include accessibility and usability. The integration of these concepts should be transparent to developers and must be seen as usual practice in Web application development.
BACKGROUND

Adapting new technologies to provide access for all users, independently of their capabilities and context of use, involves several actors: public administrations, enterprises, universities, society, and so on. To minimize digital barriers on the Web, development activities require collaboration among different expert groups and organizations. Administrations play a main role in this process; they must promote universal accessibility with the aim of ensuring an equal basis of access to information for all citizens. To do this, they own several normalization and legislation tools (W3C, 2006). Enterprises play an essential role because, if their products do not fulfill the norms, universal access will not be possible. Some of the leading enterprises in the technological market, such as IBM, Sun (Java), Adobe, Microsoft, or Macromedia, have their own initiatives, but the use of de facto standards instead of universal standards is a mistake, owing to the diversity of criteria. The lack of accessibility standards hinders the proliferation of products and applications that would include disabled people, causing in some cases undesirable market segmentation. To solve such accessibility issues, various efforts are underway worldwide. The W3C (W3C, 1994)
has promoted the Web Accessibility Initiative (WAI) (WAI, 2006), which published the Web Content Accessibility Guidelines 1.0 (WCAG 1.0) (WCAG, 2006a) in 1999. WAI is working on version 2.0, hoping it will be available at the end of 2006; this new version will be a more easily applicable standard and includes several technological profiles, called baselines (WCAG, 2006b), for Web sites and the level of accessibility of each one. The WAI helps coordinate international Web accessibility efforts to bring together technical and human considerations. WAI includes working groups that produce technical specifications supporting accessibility: the user agent accessibility guidelines (UAAG) (UAAG, 2006), the authoring tool accessibility guidelines (ATAG) (ATAG, 2006), and evaluation tools. WCAG is widely seen as a standard to which legislation and policy can refer, directly or indirectly. Nevertheless, these guidelines do not cover all situations, and resources may be inaccessible even when they conform fully to the guidelines (DDC, 2004). Regarding legislation, several countries provide examples that promote the accessibility of Web sites. Australia and the UK each have a Disability Discrimination Act (DDA) (ADA, 1990). Web sites are not mentioned explicitly in this legislation, but it mentions the need to make services accessible and usable for disabled people, and initiatives point toward site developers following the WCAG. In addition, after a formal investigation in 2004, the UK's Disability Rights Commission examined the accessibility of Web sites and concluded that the WCAG need to be extended because, in some cases, pages that passed the WAI tests were inaccessible in other ways and almost certainly would have failed usability tests.
In the United States, Section 508 of the Rehabilitation Act, as amended (Department of Justice USA, 1998), establishes a set of rules for federal agencies under which technology has to provide accessibility to disabled people. In simple terms, the legislation requires compliance with the Section 508 standards. The standards are not part of the
legislation by themselves, but they are a set of technical requirements, some of which specifically concern Web accessibility. These requirements are very similar to, but not identical to, and not as extensive as, the WCAG. In Italy, laws have introduced requirements for the accessibility of computer systems, with specific provision for Web sites (Italian Parlament, 2004). The legislation, like Section 508, provides a set of technical requirements that serve as a standard to be adhered to by Web site developers in order to ensure legal compliance; these technical requirements reference the WCAG. In Spain, the Law on Services of the Information Society and Electronic Commerce (LSSICE) (LSSICE, 2002) states that administrations have to meet the WCAG "Double-A" level. These examples show the diversity of approaches to the issue of Web accessibility and the protection of disabled people's rights. Universities and legislative bodies should promote research in communication and information technologies (CIT) to support this integration, including accessibility subjects in study programs. If universities train developers in these subjects and include teaching cases with disabled users, the shortage of professionals able to develop accessible technologies will be combated.
A METHODOLOGY FOR BUILDING ACCESSIBLE WEB APPLICATIONS

Accessibility and usability are two closely related concepts, but their meanings clearly differ: usability concerns the quality and effectiveness of use, while accessibility concerns the possibility of use. Both are necessary to achieve universal Web access. We say that a Web page is accessible when it is designed and implemented in such a way that its content and services are available to every user, independently of the context of use. When a Web application is created, designers must take into account that users may want to operate it in several contexts. Some users may be blind, others may be deaf or have motor impairments, and others may be unable to process some kinds of data easily because they have difficulty reading or comprehending text. This is why the accessibility concept must be allowed for.

Concerning usability, the International Organization for Standardization (ISO) has promoted the standard Ergonomic Requirements for Office Work with Visual Display Terminals, Part 11: Guidance on Usability (ISO 9241-11, 1998), which defines usability as the "extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use." Usability implies a user interface design that is effective, efficient, and satisfactory. In this context, accessibility ensures that the user interface is effective, efficient, and satisfactory for more people, especially people with disabilities. Currently, the standard recognizes that usability plays two roles: a detailed software design activity implied by the definition of usability, and the overall goal that the software meets user needs (Bevan, 1999). The second role covers the case of a Web site that is usable but whose contents cannot be reached because of its inaccessibility. Taking into account usability criteria and the Web Content Accessibility Guidelines, we can notice some common aspects:

• Use of colors: Accessibility requires effective color contrast to ease access for people with impaired vision. Usability criteria include the color factor, indicating that poor color contrast strains vision and makes navigating the Web difficult.
• Device independence: Both accessibility and usability aim to avoid the fragmentation of the Web into a limited space because of device or network limitations, including small screens, restricted keyboards, and so on.
• Navigation: Navigation flow must be simple and intuitive for the user. Clear navigation mechanisms identifying the objective of each link must be included in guidelines and standards.
• Visibility: Usability and accessibility criteria about visibility consist of displaying only data that is relevant and necessary.
• Simplicity: If a Web site has a simple design, redundancy is avoided and users may navigate more easily.
• Shortcuts: Keyboard shortcuts are special features that must be designed to help disabled users find answers quickly.
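The use-of-colors criterion lends itself to automation. The following sketch computes the contrast ratio between two sRGB colors using the relative-luminance formula from WCAG 2.0 (still a draft at the time this chapter was written):

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB color given as 0-255 components (WCAG 2.0 formula)."""
    def linear(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between foreground and background; ranges from 1:1 to 21:1."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background yields the maximum ratio of 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # prints 21.0
```

An evaluation tool can compare such ratios against the conformance thresholds of the guidelines (the final WCAG 2.0 recommendation later fixed 4.5:1 for normal text at level AA).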
Accessibility and usability must be taken into account not only in technical aspects but also in the communication of contents: texts must be clear and comprehensible for the user. Web applications must provide access for all users, without excluding those with limitations or disabilities, in a usable way, so both concepts (usability and accessibility) must lead toward user-centered design (UCD) with user inclusion, that is, an inclusive design framework. The difference between UCD and inclusive design is that the former achieves usability but not always accessibility, because it does not

Figure 1. Methodological framework for an inclusive design
take into account all kinds of users. The inclusive design framework guarantees both usability and accessibility, but the number and heterogeneity of users make its introduction difficult, so a number of techniques must be proposed. Four phases must be considered in the UCD (Figure 1): analysis; an iterative design-prototype-evaluation process; implementation; and maintenance. The iterative process allows evaluation and debugging throughout the development life cycle, minimizing costs. First, in the analysis phase, a requirements definition must be produced, confronting the objectives of the product to be developed with user needs. The user must be modeled with inclusion. This step consists of defining user classes (Mayhew, 1999), grouping users by common characteristics, information needs, access limitations, and so on. The problem with this modeling technique arises when the audience is extensive and has many non-common characteristics, because total categorization is impossible or very costly. In these cases, it is convenient to use the persona approach (Cooper & Reimann, 2003), which does not cover all user groups but supports the decision-making process. In addition, using this approach, we may obtain an inclusive design that allows for accessibility and usability. Second, the design phase must use the requirements definition of the previous step together with inclusive usability techniques such as scenarios (Carroll, 1997). Inclusive usability techniques help designers understand what they are designing and which access features are involved, and user context scenarios describe Web use cases in which human-computer interaction can be studied (Rosson & Carroll, 2002). The document "How People with Disabilities Use the Web" (W3C, 2005) shows several examples of possible scenarios in which disabled people need to access the Web with different technologies.
Scenarios may be represented through natural language, storyboards, videos, or UML use case diagrams. Besides, in the design phase the expert must define navigation models, graphical elements, the Web look and feel, and the information architecture in order to achieve effectiveness and
ease of use (Granollers, 2004). For defining the architecture there are techniques such as inclusive card sorting (Robertson, 2001), which yields a conceptualization close to the user's mental model. After the design phase, prototypes must be built: at first these may be paper models, later evolving into software solutions. For the evaluation, we may use heuristic evaluation, carried out by an expert based on his or her own experience and knowledge, as well as user tests. There are also automatic evaluation tools for Web pages, such as Bobby (Bobby, 2006), TAW (TAW, 2006), Cynthia Says (Cynthia Says, 2006), and HERA (HERA, 2006), that check the degree of conformity of a Web site with general accessibility criteria. In addition, software prototypes have to be evaluated in an exhaustive, heuristic, manual process by human experts (Nielsen, 1993) to minimize errors. Evaluation results must be fed back into the design until all requirements are achieved. Once the design phase is completed, the implementation step and, at the end, the maintenance must be carried out to obtain a good tool. This methodology integrates usability and accessibility from the first steps of Web application development. The main objective is to provide efficient methods and techniques that achieve this while reducing development cost.
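To make the automated evaluation step concrete, here is a deliberately simplified sketch of one check such tools perform: flagging images that lack a text equivalent (WCAG 1.0 checkpoint 1.1). Real evaluators such as TAW or HERA test many more checkpoints.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects a report line for every <img> tag that has no alt attribute."""

    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            line, _ = self.getpos()
            self.problems.append(f"line {line}: <img> without alt text")

page = """<html><body>
<img src="logo.png" alt="City council logo">
<img src="banner.png">
</body></html>"""

checker = MissingAltChecker()
checker.feed(page)
for problem in checker.problems:
    print(problem)  # flags only the second image
```

Note that the presence of an alt attribute is only a syntactic check; whether the text equivalent is meaningful still requires the manual expert evaluation described above.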
FUTURE TRENDS

The methodological framework proposed in this chapter falls within inclusive design. It is currently being put into practice in a real Web site development (CESyA, 2006). We are applying inclusive usability techniques to reach a wide audience. The design phase takes into account several heterogeneous user groups, building imaginary scenarios and studying the user's mental model in depth. Experience with the iterative process shows that the high initial cost of the design may be compensated by the low cost of the final phases. With this experience and these results, it is desirable to formalize a complete
methodology that, following standards, is as simple as possible and at the same time efficient in development time. This methodology should be formalized and standardized so that public administrations can demand its fulfillment. Putting the methodology into practice in a real Web site development involves a high cost in training the development team and in the analysis and design phases. Fortunately, the following steps (implementation and evaluation) are cheaper, because changes in the design remain feasible and errors are relatively few. We also ensure that the resulting software applications have a high degree of accessibility and usability.
CONCLUSION

Public administrations are looking at emerging Web technology as a useful platform to reach the population. The main problem is that tools are usually developed without taking into account all the interaction aspects that involve end users. This technology should provide a complete system that does not exclude any user, independently of their capabilities or disabilities. In addition, some accessibility and usability problems are due to the inherent complexity of human-computer interaction and concern us all, for example the visualization of a Web site in several browsers. To achieve this goal, standards and specific legislation must be created. In this sense, the W3C is promoting a set of initiatives. In addition, methodologies such as the one shown in this chapter, which considers accessibility and usability aspects from the beginning of software development, are essential to achieve good practice. A user-centered design with user inclusion and inclusive usability techniques are further elements that must be included in any methodology for Web application development. The main objective should be to have methodologies and techniques that integrate all user needs and provide development teams with a clear way to implement an accessible
and usable Web without excessive cost in human or machine resources.
FUTURE RESEARCH DIRECTIONS

Accessibility is an interesting topic, widely discussed nowadays. Despite the vast documentation on the topic, there is a gap in the integration of accessibility into software engineering development processes. It is necessary to help professionals design and develop accessible Webs efficiently. Making a Web application accessible should be transparent for the developer and should not require any extra effort. New research lines must be opened in order to find out when and how these accessibility criteria must be included. Although the technology exists, there seems to be a lack of harmony among the different actors who take part in the development of a Web application (content suppliers, software developers, rehabilitation engineers, etc.). To advance in the right direction, standards must be developed and effective policies made to overcome accessibility barriers, which are surprisingly increasing even as technology evolves; this is the other face of progress. Given the high amount and diversity of information found on the net, it is essential that roles change. Just as the user's role is changing in Web 2.0, the perspective on how to design a Web application should also evolve. Effective information retrieval by means of semantic labeling of Web resources can help, not only from the point of view of query optimization and interoperability, but also from the accessibility point of view. XML technologies provide the chance to label all resources according to their access characteristics and context of use, addressing individual situations where accessibility barriers exist, so as to offer the user accessible contents adapted to his or her needs or preferences.
In this way, a blind user could be offered all the contents accessible with a screen reader or available as audio,
or, if a user has a specific player, he or she could be offered the audiovisual contents in a format compatible with that player. In the accessibility area there are many open research lines, and they should converge to provide precise and simple results that can then be put into practice in a clear way.
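The semantic-labeling idea sketched above can be illustrated in a few lines. The element names and the access-mode vocabulary below are invented for this sketch and are not taken from any standard:

```python
import xml.etree.ElementTree as ET

# Hypothetical catalog: each variant of a resource is labeled with the
# sensory access mode it requires (vocabulary invented for illustration).
catalog = ET.fromstring("""
<resources>
  <resource href="news.html" access-mode="visual"/>
  <resource href="news-audio.mp3" access-mode="auditory"/>
  <resource href="news-transcript.html" access-mode="textual"/>
</resources>
""")

def select_variant(catalog, preferred_mode):
    """Return the href of the first variant matching the user's preferred access mode."""
    for resource in catalog.findall("resource"):
        if resource.get("access-mode") == preferred_mode:
            return resource.get("href")
    return None

# A blind user with audio playback could be served the auditory variant.
print(select_variant(catalog, "auditory"))  # prints news-audio.mp3
```

A real system would draw the vocabulary from a shared metadata standard so that user agents and servers agree on the labels.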
REFERENCES

ADA. (1990). Americans with Disabilities Act. Retrieved January 10, 2006, from http://www.webaim.org/coordination/law/us/ada/

ATAG. (2006). Authoring tool accessibility guidelines. Retrieved January 10, 2006, from http://www.w3.org/WAI/AU/

Bevan, N. (1999). Quality in use: Meeting user needs for quality. Journal of Systems and Software, 49(1), 89-96.

Bobby. (2006). Watchfire® WebXACT™. Watchfire Corporation. Retrieved January 10, 2006, from http://webxact.watchfire.com/

Carroll, J. M. (1997). Scenario-based design. In M. Helander, T. Landauer, & P. Prabhu (Eds.), Handbook of human-computer interaction (2nd ed.). North-Holland.

CESyA. (2006). Centro Español de Subtitulado y Audiodescripción. Retrieved January 10, 2006, from http://www.cesya.es

Cooper, A., & Reimann, R. (2003). About face 2.0: The essentials of interaction design. Wiley Publishing.

Cynthia Says. (2006). HiSoftware® Cynthia Says™. Retrieved January 10, 2006, from http://www.cynthiasays.com/

DDC. (2004). The Web: Access and inclusion for disabled people. Retrieved January 10, 2006, from http://www.drc.gov.uk/library/formal_investigation_report_w.aspx
Department of Justice USA. (1998). Section 508 of the Rehabilitation Act. Retrieved January 10, 2006, from http://www.access-board.gov/sec508/guide/act.htm

Granollers, T. (2004). MPIu+a. Una metodología que integra la Ingeniería del Software, la Interacción Persona-Ordenador y la Accesibilidad en el contexto de equipos de desarrollo multidisciplinares. Doctoral thesis, Universitat de Lleida. Retrieved January 10, 2006, from http://griho.udl.es/publicacions/2004/Tesis_Toni/TesiToniGranollers.pdf

HERA. (2006). HERA 2.0. Retrieved January 10, 2006, from http://www.sidar.org/hera/

ISO 13407. (1999). Human-centred design processes for interactive systems. International standard.

ISO 9241-11. (1998). Ergonomic requirements for office work with visual display terminals (VDTs). Part 11: Guidance on usability. International standard.

Italian Parlament. (2004). Disposizioni per favorire l'accesso dei soggetti disabili agli strumenti informatici. Retrieved January 10, 2006, from http://www.camera.it/parlam/leggi/04004l.htm

LSSICE. (2002). Law 34/2002, of 11 July, on services of the information society and electronic commerce. Spanish Congress, serie A, núm. 68-13. Retrieved January 10, 2006, from http://www.congreso.es/docu/publicaciones/l7/ind_a_68.html

Mayhew, D. J. (1999). The usability engineering lifecycle: A practitioner's handbook for user interface design. San Francisco: Morgan Kaufmann.

Moreno, L., et al. (2005). Towards accessible semantic web applications. In Proceedings of the International Conference on Dublin Core and Metadata Applications (pp. 87-95).

Newell, A. F., & Gregor, P. (2000). User sensitive inclusive design: In search of a new paradigm. In Proceedings of the First ACM Conference on Universal Usability (CUU 2000) (pp. 39-44).
Nielsen, J. (1993). Usability engineering. Boston: AP Professional.

Robertson, J. (2001). Information design using card sorting. Retrieved January 10, 2006, from http://www.steptwo.com.au/papers/cardsorting/

Rosson, M. B., & Carroll, J. M. (2002). Usability engineering: Scenario-based development of HCI. Morgan Kaufmann.

TAW. (2006). Web accessibility test. Retrieved January 10, 2006, from http://www.tawdis.net/taw3/cms/en

UAAG. (2006). User agent accessibility guidelines (UAAG). W3C. Retrieved January 10, 2006, from http://www.w3.org/WAI/intro/uaag.php

W3C. (1994). World Wide Web Consortium. Retrieved January 10, 2006, from http://www.w3.org

W3C. (2005). How people with disabilities use the web. Retrieved January 10, 2006, from http://www.w3.org/WAI/EO/Drafts/PWD-Use-Web/

W3C. (2006). Policies relating to web accessibility. Retrieved January 10, 2006, from http://www.w3.org/WAI/Policy/

WAI. (2006). Web accessibility initiative (WAI). Retrieved January 10, 2006, from http://www.w3.org/WAI/

WCAG. (2006a). Web content accessibility guidelines (WCAG). Retrieved January 10, 2006, from http://www.w3.org/WAI/intro/wcag.php

WCAG. (2006b). Web content accessibility guidelines (WCAG) 2.0 baselines. Retrieved January 10, 2006, from http://www.w3.org/WAI/WCAG20/baseline/
FURTHER READING

ACM Transactions on Accessible Computing. http://www.is.umbc.edu/taccess/index.html

Behaviour & Information Technology. http://www.tandf.co.uk/journals/tf/0144929X.html
Bevan, N. (1999). Quality in use: Meeting user needs for quality. Journal of Systems and Software.

Bevan, N. (2003). UsabilityNet methods for user centred design. In Human-computer interaction: Theory and practice (Vol. 1). Retrieved from http://www.usabilitynet.org/tools/13407stds.htm

Booth, P., et al. (2000). Evaluating web resources for disability access. In Proceedings of the ACM SIGCAPH Conference on Assistive Technologies (pp. 80-84).

Clark, J. (2002). Building accessible websites. New Riders Publishing.

Connell, B. R., et al. (1997). What is universal design? Retrieved from http://www.design.ncsu.edu:8120/cud/univ_design/princ_overview.htm

Henry, S., et al. (2002). Constructing accessible websites.

Henry, S., et al. (2006). Web accessibility: Web standards and regulatory compliance.

Horton, S. (2005). Access by design: A guide to universal usability for web designers.

IETF Journal. Retrieved from http://www.isoc.org/ietfjournal/

International Journal of Human-Computer Studies (IJHCS). Retrieved from http://www.hcirn.com/res/period/ijhcs.php

Newell, A. F., & Gregor, P. (2000). User sensitive inclusive design: In search of a new paradigm. In Proceedings of the First ACM Conference on Universal Usability (CUU 2000).

Nicolle, C., & Abascal, J. (2001). Inclusive design guidelines for HCI. CRC Press.

Nielsen, J. (1999). Disabled accessibility: The pragmatic approach. Alertbox. Retrieved from http://www.useit.com/alertbox/990613.html

Nielsen, J. (2003). Alternative interfaces for accessibility. Alertbox. Retrieved from http://www.useit.com/alertbox/20030407.html
Olshavsky R. (2002). Bridging the gap with requirements definition. Retrieved from http:// www.cooper.com/newsletters/2002_07/requirements_definition.htm Paciello, M. (2000). Web accessibility for people with disabilities. CMP Books. Perlman, G. (2000). The FirstSearch user interface architecture: Universal access for any user, in many languages, on any platform.In The Proceedings of the 2000 International Conference on Intelligent User Interfaces. Slatin, J. & Rush, S. (2002). Maximum accessibility: Making the web more usable for everyone. Addison Wesley Professional. Sloan, D. et. al. (2006). Contextual web accessibility: Maximizing the benefit of accessibility guidelines, In The Proceedings of the 2006 international cross-disciplinary workshop on Web accessibility (W4A). Stephanidis, C. (2001). Universal access in the information society: A retrospective of recent activities. In The Proceedings of The ACM Conference on Human Factors in Computing Systems (CHI 2001). Stephanidis, C. (2001). User interfaces for all: Concepts, methods, and tools. Mahwah, NJ: Lawrence Erlbaum Associates. Stephanidis, C. et al. (1998). Universal Accessibility in HCI: Process oriented design guidelines and tool requirements. Troyer, L. (1998). WSDM: A user centered design method for Web sites. Computer Networds and ISDN Systems, 30, 85-94. Universal Access in the Information Society, International Journal. Retrieved from http://www.springer.com/east/home/computer/ user+interfaces?SGWID=5-154-70-1167458-0 Waters, C. (1997). Universal web design. New Riders Publishing. Web Accessibility Initiative (WAI). Retrieved from http://www.w3.org/WAI/ World Wide Web, Internet and Web Information Systems. Retrieved from http://www.springer.com/
A Framework for Accessible and Usable Web Applications
east/home/computer/user+interfaces?SGWID=5154-70-35553315-0
Terms and Definitions

Adaptive Strategies: Techniques that people with disabilities use to improve interaction with the Web, such as increasing the font size in a common browser. Adaptive strategies include techniques used with mainstream browsers or with assistive technologies.

Assistive Technologies: Software or equipment that people with disabilities use to improve interaction with the Web, such as screen readers that read Web pages aloud for people who cannot read text, screen magnifiers for people with some types of low vision, and voice recognition software and selection switches for people who cannot use a keyboard or mouse.

Information Architecture: A description or specification of how information is to be organized and processed. In Web design, the concept applies to organizing content into categories and developing an interface to display those categories.
User-Centered Design Process (UCD): ISO 13407 (1999), Human-centered design processes for interactive systems, states: "Human-centered design is an approach to interactive system development that focuses specifically on making systems usable. It is a multi-disciplinary activity." In UCD, all "development proceeds with the user as the center of focus."

User Characteristics: Include factors such as age, job responsibilities, software, hardware, environment (for example, home, shared office, private office, shared public terminal), computer experience, and Web experience. User characteristics can also include type of disability, adaptive strategies used, and experience with specific assistive technologies.

Universal Design: The process of creating products (devices, environments, systems, and processes) that are usable by people with the widest possible range of abilities, operating within the widest possible range of situations (environments, conditions, and circumstances), as is commercially practical.

Web Content: Generally refers to the information in a Web page or Web application, including text, images, forms, sounds, and such.
Chapter XXVI
Intelligent User-Centric Access to Public Information Giovanni Maria Sacco Università di Torino, Italy
Introduction

The quantity and diversity of information available from public government sources are now quite large. Most people associate governmental information exclusively with prescriptive information such as laws and regulations. However, governments, especially local ones, are using the Web to provide a number of services that are mainly informative and aim at improving the quality of life of citizens and at promoting the local community: for example, job placement services, tourist information, and so forth. Finally, government e-services available to citizens represent one of the most frequent and critical points of contact between public administrations and citizens. In addition to common services such as ID cards and permits, e-services represent the only practical way of providing incentives and support to specific classes of citizens.
The key problem is that information must be findable (Morville, 2002). Easy and effective user-centric access to complex information is therefore one of the most critical functionalities of e-government. Without timely and accurate information, the participation of citizens in government is likely to be an illusion: in short, no democracy without knowledge. Traditional access paradigms are not suited to most search tasks, which are inherently exploratory and imprecise. The user needs to explore the information base, find relationships among concepts, and thin out alternatives in a guided way. New access paradigms supporting exploration are needed. Since the goal is interactive end-user access, a holistic approach must be used, in which modeling, interface, and interaction issues are considered together; this approach is discussed in this chapter.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Background

Public information is usually managed by four retrieval techniques, which are frequently used at the same time for different subsets of the information base: (a) information retrieval (IR) techniques (Van Rijsbergen, 1979), recently dubbed "search engines"; (b) queries on structured databases; (c) hypertext or hypermedia links; and (d) static taxonomies, such as Yahoo! IR techniques are the obvious choice for laws and regulations, since these are essentially textual in nature. However, their limitations, especially in the legal domain, are well known: Blair and Maron (1985) reported that only 20 percent of the relevant documents in a legal database were actually retrieved. Such a significant loss of information is due to the extremely wide semantic gap between the user model (concepts) and the model used by commercial retrieval systems (words). Other problems include poor user interaction, because the user has to formulate his or her query with little or no assistance, and no exploration capabilities, since results are presented as a flat list with no systematic organization. Recently, clustering techniques have been used to support some sort of exploration by clustering the documents retrieved by an IR query according to "significant" terms or phrases that occur in them. This approach provides a summary of query results and is used, for instance, in the U.S. government portal, firstgov.gov. Cluster summaries do not address the query problems inherent in IR and do not increase the recall, but rather the precision of the result, because they allow users to quickly skip clusters that are not relevant. In addition, the exploratory capabilities offered by text clustering are quite limited (Hearst, 2006; Sacco, 2000). Database queries require structured data and are not easily applicable to situations in which most information is textual and either unstructured or loosely structured.
They are extensively used for informative, promotional services (e.g., job placement services, tourist information, etc.) because in most cases, they rely on structured information. Like IR, database queries do not support exploration.
Hypermedia (see Groenbaek & Trigg, 1994) is quite flexible, but it gives no systematic picture of the relationships among documents; exploration is performed one document at a time, which is quite time consuming; and building and maintaining complex hypermedia networks is very expensive. Hypermedia links are currently used in public information portals to manage e-services, because the number of e-services is reasonably small. Traditional taxonomies are based on a hierarchy of concepts that can be used to select areas of interest and restrict the portion of the infobase to be retrieved. Taxonomies support abstraction and are easily understood by end users. However, they are not scalable to large information bases (Sacco, 2006), and the average number of documents retrieved rapidly becomes too large for manual inspection. Solutions based on semantic networks, ontologies, and the Semantic Web (Berners-Lee, Hendler & Lassila, 2001) are more powerful than plain taxonomies. However, general semantic schemata are intended for programmatic access and are known to be difficult for casual users to understand and manipulate. User interaction must be mediated by specialized agents, which increases costs and time to deployment and decreases the transparency and flexibility of user access.
Dynamic Taxonomies

Dynamic taxonomies (Sacco, 1987, 2000), also called faceted search systems, are a general knowledge management model based on a multidimensional classification of heterogeneous data items; they are used to explore complex information bases in a guided yet unconstrained way through a visual interface. The intension of a dynamic taxonomy is a taxonomy: a concept hierarchy going from the most general to the most specific concepts. Directed acyclic graph taxonomies modeling multiple inheritance are supported, but rarely required. A dynamic taxonomy does not require any relationships other than subsumptions (e.g., IS-A and PART-OF relationships).
In the extension, items can be freely classified under n (n>1) topics at any level of abstraction (i.e., at any level in the conceptual tree). This multidimensional classification is a generalization of the one-dimensional classification scheme used in conventional taxonomies and models common real-life situations. First, items are very often about different concepts. Second, items to be classified usually have different features, "perspectives," or facets (e.g., time, location, etc.), each of which can be described by an independent taxonomy. In dynamic taxonomies, a concept C is just a label that identifies all the items classified under C. Because of the subsumption relationship between a concept and its descendants, the items classified under C (items(C)) are all those in the deep extension of C: that is, the set of items identified by C is the shallow extension of C (all the items directly classified under C) union the deep extensions of C's sons. By construction, the shallow and the deep extension of a terminal concept are the same. There are two important immediate consequences of this approach. First, since concepts identify sets of items, logical operations on concepts can be performed by the corresponding set operations on their extensions. This means that the user is able to restrict the information base (and to create derived concepts) by combining concepts through the normal logical operations (and, or, not). Second, dynamic taxonomies can find all the concepts related to a given concept C: these concepts represent the conceptual summary of C. Concept relationships other than subsumptions are inferred through the extension only, according to the following extensional inference rule: two concepts A and B are related if there is at least one item d in the knowledge base that is classified at the same time under A (or under one of A's descendants) and under B (or under one of B's descendants).
For example, we can infer an unnamed relationship between terrorism and New York if an item classified under both terrorism and New York exists. At the same time, since New York is a descendant of the United States, a relationship between terrorism and the United States can also be inferred. The extensional inference
rule can be seen as a device to infer relationships on the basis of empirical evidence. It can be easily extended to cover the relationship between a given concept C and a concept expressed by an arbitrary subset S of the universe: C is related to S if there is at least one item d in S that is also in items(C). Hence, the extensional inference rule can produce conceptual summaries not only for base concepts, but also for any logical combination of concepts. Since it is immaterial how S is produced, dynamic taxonomies can produce summaries for sets of items produced by other retrieval methods (IR queries, database queries, shape retrieval, etc.), and therefore access through dynamic taxonomies can be easily combined with any other retrieval method. Dynamic taxonomies work on conceptual descriptions of items, so that heterogeneous items of any type and format can be managed in a single, coherent framework. Finally, since a concept C is just a label that identifies the set of items classified under C, concepts are language-invariant, and multilingual access can be easily supported by maintaining different language directories holding language-specific labels for each concept in the taxonomy. If the metadata descriptors used to describe an item use concepts from the taxonomy, then the actual description of an item can also be translated on the fly into different languages. This feature is extremely important for multilingual governments such as the European Union.
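The extensional inference rule can be sketched in a few lines of code, using the chapter's own terrorism/New York example. This is an illustrative toy only: the item, its classifications, and the two-concept hierarchy are invented for the sketch.

```python
# Extensional inference rule: concepts A and B are related if at least one
# item is classified (directly or via a descendant) under both.
# Toy data modeled on the chapter's example; the classifications are invented.

parent = {"New York": "United States"}  # subsumption: New York under the USA

# Multidimensional classification: item -> set of concepts
classified = {"news-item-1": {"terrorism", "New York"}}

def with_ancestors(concept):
    """A concept together with all its ancestors in the taxonomy."""
    cs = {concept}
    while concept in parent:
        concept = parent[concept]
        cs.add(concept)
    return cs

def related(a, b):
    """True if some item is classified under a (or a descendant of a) and
    under b (or a descendant of b). Expanding each item's classifications
    upward to ancestors is equivalent to checking descendants of a and b."""
    for concepts in classified.values():
        expanded = set().union(*(with_ancestors(c) for c in concepts))
        if a in expanded and b in expanded:
            return True
    return False

print(related("terrorism", "New York"))       # True
print(related("terrorism", "United States"))  # True: inferred via New York
```

Because the relationship with the United States is derived purely from the extension, no explicit terrorism–United States link ever has to be modeled.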
Exploration

The user is initially presented with a tree representation of the initial taxonomy for the entire knowledge base. Each concept label also shows a count of all the items classified under it (i.e., the cardinality of items(C) for each concept C). The initial user focus F is the universe, that is, all the items in the information base. In the simplest case, the user selects a concept C in the taxonomy and zooms on it. The zoom operation changes the current state in two ways. First, concept C is used to refine the current user
focus F, which becomes F ∩ items(C). Items not in the focus are discarded. Second, the tree representation of the taxonomy is modified in order to summarize the new focus. All and only the concepts related to F are retained, and the count for each retained concept C' is updated to reflect the number of items in the focus F that are classified under C'. The reduced taxonomy is derived from the initial taxonomy by pruning all the concepts not related to F, and it is a conceptual summary of the set of documents identified by F, in exactly the same way as the original taxonomy was a conceptual summary of the universe. In fact, the term dynamic taxonomy indicates that the taxonomy can dynamically adapt to the subset of the universe on which the user is focusing, whereas traditional, static taxonomies can only describe the entire universe. The retrieval process can be seen as an iterative thinning of the information base: the user selects a focus, which restricts the information base by discarding all the items not in the current focus. Only the concepts used to classify the items in the focus and their ancestors are retained. These concepts, which summarize the current focus, are those and only those concepts that can be used for further refinements. From the human-computer interaction point of view, the user is effectively guided to reach his or her goal by a clear and consistent listing of all possible alternatives, and, in fact, this type of interaction is often called guided thinning or guided navigation. Figures 1 through 5 show how the zoom operation works. Figure 1 shows a dynamic taxonomy: the upper half represents the intension with circles representing concepts; the lower half is the extension, and documents are represented by rectangles. Arcs going down represent subsumptions; arcs going up represent classifications.
In order to compute all the concepts related to H, we first find, in Figure 2, all the documents classified under H (that is, the deep extension of H, items(H)) by following all the arcs incident to H (and, in general, its descendants): items(H)={ b, c, d }. All the items not in the deep extension of H (Figure 3) are removed from the extension. In Figure 4, the set of all the concepts under which
Figure 1. A dynamic taxonomy: the intension is above, the extension below. Arrows going down denote subsumptions; arrows going up denote classifications.

Figure 2. Focusing on concept H: finding all the items classified under H.

Figure 3. All the items not classified under H are removed.

Figure 4. All the concepts under which the items in the focus are classified (and, because of subsumptions, their ancestors) are related to H.

Figure 5. The reduced taxonomy: all concepts not related to the current focus are pruned.
the documents in items(H) are classified, B(H), is found by following all the arcs leaving each element in the set: B(H) = {F, G, H, I}. The inclusion constraint implied by subsumption states that if items(C) denotes the set of documents classified under C and C' is a descendant of C in the taxonomy, then items(C') ⊆ items(C) (Sacco, 2000). This is
equivalent to saying that a document classified under C' is also classified under C. Hence, the set of concepts related to H is given by B(H) union all the ancestors of all the concepts in B(H); that is, the set of all concepts related to H is {F, G, H, I, B, C, A}. Finally, in Figure 5, all the concepts not related to H are removed from the intension, thus producing a reduced taxonomy that fully describes all and only the items in the current focus. A sample interaction on e-services is reported in Figures 6 and 7. Figure 6 shows the initial dynamic taxonomy, in which every topic is followed by a number indicating how many items are classified under it. As the Type facet shows, items include online and offline services, but also guides. It must be stressed that the final goal in using dynamic taxonomies in e-government portals is to make all information available and findable in an integrated way.
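The walkthrough in Figures 1 through 5 can be sketched in code. The taxonomy below is one consistent reconstruction (the exact arcs in the figures may differ), chosen so that it reproduces the sets derived in the text: items(H) = {b, c, d} and a related-concept set of {F, G, H, I, B, C, A}.

```python
# A small dynamic taxonomy: intension (subsumptions) plus extension
# (multidimensional classifications). Hypothetical reconstruction of
# the chapter's Figures 1-5.

parent = {"B": "A", "C": "A", "D": "A",      # subsumption arcs
          "E": "B", "F": "B",
          "G": "C", "H": "C", "I": "C",
          "L": "D", "M": "D"}

classified = {"a": {"E"}, "b": {"F", "H"},   # item -> direct concepts
              "c": {"G", "H"}, "d": {"H", "I"},
              "e": {"M"}}

def descendants(c):
    """Concept c plus every concept below it in the taxonomy."""
    ds = {c}
    for child, p in parent.items():
        if p == c:
            ds |= descendants(child)
    return ds

def items(c):
    """Deep extension of c: items classified under c or a descendant."""
    ds = descendants(c)
    return {i for i, cs in classified.items() if cs & ds}

def ancestors(c):
    while c in parent:
        c = parent[c]
        yield c

def related_concepts(focus):
    """Conceptual summary of a set of items: their direct concepts
    plus, by subsumption, all ancestors of those concepts."""
    rel = set()
    for i in focus:
        for c in classified[i]:
            rel.add(c)
            rel.update(ancestors(c))
    return rel

focus = items("H")                       # zoom on H
print(sorted(focus))                     # ['b', 'c', 'd']
print(sorted(related_concepts(focus)))   # ['A', 'B', 'C', 'F', 'G', 'H', 'I']
```

The reduced taxonomy of Figure 5 is obtained by pruning every concept outside `related_concepts(focus)`; a further zoom simply intersects `focus` with the deep extension of the next selected concept.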
Figure 6. E-government portal with a dynamic taxonomy on seven facets (© Knowledge Processors, 2006, www.knowledgeprocessors.com)
Figure 7. Reduced taxonomy after a zoom on senior citizens: characterization of the 67 items specifically targeted to senior citizens (© Knowledge Processors, 2006, www.knowledgeprocessors.com)
A zoom on senior citizens produces the reduced taxonomy in Figure 7, which shows all and only the topics classified under senior citizens, that is, those services, guides, and so forth that specifically apply to them. Drill-down can be iterated as required: for instance, the user might focus on housing, thereby reducing both the number of items and the number of topics that apply.
Advantages

The single largest advantage of dynamic taxonomies over competing techniques is that they can support exploratory access to any type of public information and therefore provide a single, uniform, coherent, and easily understood interface to end users. Dynamic taxonomies require a very light theoretical background: namely, the concept of a taxonomic organization and the zoom operation, both of which seem to be very quickly understood by end users. Hearst et al. (2002) and Yee et al. (2003) conducted usability tests on a corpus of art images. Despite slow response times, access through a dynamic taxonomy was shown to produce a faster overall interaction and a significantly better recall than access through text retrieval. Perhaps more important are the intangibles: the feeling that one has actually considered all the alternatives in reaching a result. Although few usability studies exist, the widespread adoption by e-commerce portals, such as Yahoo!, Lycos, Bizrate, and so forth, empirically supports this initial evidence. The advantages of dynamic taxonomies are also dramatic in terms of convergence of exploratory patterns. The analysis by Sacco (2002, 2006) shows that three zoom operations on terminal concepts are sufficient to reduce information bases of up to 10,000,000 items, described by a compact taxonomy with 1,000 concepts, to an average of 10 items. Experimental data on a real newspaper corpus of over 110,000 articles, classified through a taxonomy of 1,100 concepts, report an average of 1,246 documents to be inspected by the user of a static taxonomy vs. an average of 27 documents after a single zoom on a dynamic taxonomy.
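A back-of-envelope calculation illustrates why convergence is so fast. Assume (this uniform-selectivity model is a simplification for illustration, not Sacco's published analysis) that the 1,000 concepts are arranged as roughly 10 facets of 100 terminal concepts each and that classifications are spread evenly; then a zoom on a terminal concept retains about 1/100 of the current focus:

```python
# Three zooms under a uniform-selectivity assumption:
# each zoom on a terminal concept keeps ~1/100 of the focus.
focus_size = 10_000_000          # items in the information base
terminals_per_facet = 100        # assumed: 1,000 concepts ~ 10 facets x 100
for _ in range(3):               # three successive zooms
    focus_size //= terminals_per_facet
print(focus_size)  # 10
```

Real corpora are skewed, of course, which is why the text reports averages rather than this idealized figure.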
The derivation of concept relationships through the extensional inference rule has important implications for conceptual modeling. First, it simplifies taxonomy creation and maintenance. In traditional approaches, only the relationships among concepts explicitly described in the conceptual schema are available to the user for browsing and retrieval. Therefore, all possible relationships must be anticipated and described: a very difficult if not hopeless task. In dynamic taxonomies, no relationships other than subsumptions are required, because concept relationships are automatically derived from the actual classification. For this reason, dynamic taxonomies easily adapt to new relationships and are able to discover new, unexpected ones. Second, since dynamic taxonomies synthesize compound concepts, these usually need not be represented explicitly. This removes the main cause of the combinatorial growth of traditional taxonomies. Sacco (2000) developed guidelines that produce taxonomies that are compact and easily understood by users. Some are similar to faceted classification (Hearst, 2002; Ranganathan, 1965), at least in its basic form: the taxonomy is organized as a set of independent, "orthogonal" subtaxonomies (facets or perspectives) to be used to describe data. As an example of faceted design guidelines, consider a compound concept such as "housing grants for single parents in New York." It can be split into its facets: a location taxonomy (of which New York is a descendant), a citizen taxonomy (of which single parent is a descendant), a needs taxonomy (including housing), and an action taxonomy (of which grant is a descendant). The items to be classified under the compound concept will instead be classified under Location>New York, Citizen>single parent, Needs>housing, and Action>grant. The extensional inference rule establishes a relationship among these concepts, and the compound concept can be recovered by zooming on any permutation of them.
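The faceted decomposition above can be sketched as follows; the two catalog items (one matching the compound concept, one distractor) are invented for illustration. Because zooming is just set intersection, any permutation of the four facet values converges to the same item set:

```python
from itertools import permutations

# Items classified under facet values instead of one compound concept.
# (Invented catalog for illustration.)
items = {
    "grant-42": {"Location>New York", "Citizen>single parent",
                 "Needs>housing", "Action>grant"},
    "grant-77": {"Location>Boston", "Citizen>single parent",
                 "Needs>education", "Action>grant"},
}

def zoom(focus, concept):
    """Refine the focus to the items classified under the concept."""
    return {i for i in focus if concept in items[i]}

concepts = ["Location>New York", "Citizen>single parent",
            "Needs>housing", "Action>grant"]

# Recover the compound concept by zooming in every possible order.
results = set()
for order in permutations(concepts):
    focus = set(items)
    for c in order:
        focus = zoom(focus, c)
    results.add(frozenset(focus))

print(results)  # {frozenset({'grant-42'})}: order never matters
```

Four facets of, say, 100 values each would otherwise require up to 100^4 compound concepts in a one-dimensional scheme; here, four independent classifications per item suffice.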
In a conventional classification scheme, in which every item is classified under a single concept, a number of different concepts equal to the Cartesian product of the terminals in the four taxonomies has to be defined. Such combinatorial growth either results in extremely large conceptual
taxonomies or in a gross conceptual granularity (Sacco, 2000). In addition, faceted design coupled with dynamic taxonomies makes it simple to focus on a concept, for example, single parent, and immediately see all related concepts, such as grants, locations, and so on, which are recovered through the extensional inference rule. In the compound-concept approach, these correlations are unavailable because they are hidden inside the concept label. This feature is extremely important in assisting users to discover information relevant to them: for instance, by zooming on single parent and then on grants, the user may discover that, in addition to the housing grants in which he or she is currently interested, there are also grants for education, and so on. Additional advantages include the uniform management of heterogeneous items of any type and format, easy multilingual access, and easy integration with other retrieval methods. Dynamic taxonomies do not support reasoning beyond the extensional inference rule and are therefore less powerful than general ontologies. However, they can be directly manipulated by users without the mediation of specialized agents and represent a quicker, less costly, and more transparent alternative.
Applications

Dynamic taxonomies have an extremely wide application range, and a growing body of literature indicates that their adoption benefits most portal applications, although the main industrial application is currently e-commerce (Sacco, 2003). With respect to public information, there are studies of applications of dynamic taxonomies to key areas such as laws and regulations (Sacco, 2005a); informative, promotional services (Sacco, 2005c); human resources and job placement (Berio, Harzallah & Sacco, 2006); and e-services (Sacco, 2006a). Additional areas include cultural heritage and museum portals (Hyvönen, Saarela & Viljanen, 2004; Yee et al., 2003), but also medical guideline portals (Wollersheim & Rahayu, 2002), medical diagnosis (Sacco, 2005b), and customer relationship management (CRM).
A growing number of Web-based commercial systems based on dynamic taxonomies exist. Among these are Knowledge Processors, Endeca, i411 and Siderean Software.
Future Trends

Following the quick and widespread adoption by e-commerce portals that has made this new search paradigm quite well known on the Internet, we expect dynamic taxonomies to become pervasive in public information portals as well, and to replace or integrate traditional techniques. As previously noted, the application areas of dynamic taxonomies in public information systems cover a significant and very diverse range, so it is conceivable that dynamic taxonomies can be used as the search backbone in complex public information portals. However, access through a dynamic taxonomy is user-friendly, natural, and efficient only if the taxonomies used are well designed and easily understood. While there are general guidelines for the design of dynamic taxonomies (Ranganathan, 1965; Sacco, 2000), the design phase is still critical and labor-intensive. A significant design effort might be required especially for those areas whose semantics are not clearly defined. In addition, human factors have to inform the entire design cycle, so that the conceptual organization as well as the concept labels are clearly understood by users with different backgrounds, education, and capabilities. Compared with static taxonomies, dynamic taxonomies allow the designer to describe the same concept in different ways, so that the taxonomic structure can be personalized for different users.
Conclusion

Exploratory browsing applies to most practical situations and search tasks in public information portals. In this context, dynamic taxonomies represent a dramatic improvement over other search and browsing methods, both in terms of convergence and in terms of full feedback on alternatives and complete guidance toward the user's goal.
Future Research Directions

Current research is focused on three broad areas:

1. Automatic classification and schema design: Dynamic taxonomies do not define how documents are actually classified. Current research focuses on automatic text classification (Dakka, Ipeirotis & Wood, 2005) and automatic classification from structured data (Sacco, 1998). Recent investigations (Sacco, 2005d) suggest that dynamic taxonomies can be automatically derived from semantically rich conceptual schemata and used as a user-centered front end to complex information. Other research addresses the problem of specifying valid term compositions in faceted taxonomies for textual information (Tzitzikas, Analyti & Spyratos, 2005).
2. Extensions to the model and human factors: A fuzzy (Zadeh, 1965) classification, in which a document can be classified under several concepts with different probabilities, can sometimes be more appropriate than the Boolean classification currently used (Sacco, 2004). Because of the holistic approach, human factors play a paramount role in devising extensions to the model and in critical issues such as the presentation and manipulation of the taxonomy, where several alternatives exist (see Yee et al., 2003 vs. Sacco, 2000).
3. Centralized, distributed, federated architectures: The zoom operation and the subsequent reduction of the corpus taxonomy must be performed in real time, because a slower execution would severely impair the sense of free exploration that the user of dynamic taxonomy systems experiences. Special data structures and evaluation strategies must be used (Sacco, 1998). In addition, distributed and federated architectures need to be investigated, since centralized architectures are not always appropriate, because of organizational needs and of performance and reliability bottlenecks.

References

Berio, G., Harzallah, M., & Sacco, G.M. (2007). Portals for integrated competence management. In A. Tatnall (Ed.), Encyclopedia of portal technology and applications. Hershey: Idea Group Inc.
Berners-Lee, T., Hendler, J., & Lassila, O. (2001). The Semantic Web. Scientific American, May 17, 35-43.

Blair, D. C., & Maron, M. E. (1985). An evaluation of retrieval effectiveness for a full-text document-retrieval system. Communications of the ACM, 28(3), 289-299.

Dakka, W., Ipeirotis, P. G., & Wood, K. R. (2005). Automatic construction of multifaceted browsing interfaces. In Proceedings of the 14th ACM International Conference on Information and Knowledge Management (CIKM '05) (pp. 768-775).

Groenbaek, K., & Trigg, R. (Eds.) (1994). Hypermedia. Communications of the ACM, 37(2).

Hearst, M. (2006). Clustering versus faceted categories for information exploration. Communications of the ACM, 49(4), 59-61.

Hearst, M. et al. (2002). Finding the flow in web site search. Communications of the ACM, 45(9), 42-49.

Hyvönen, E., Saarela, S., & Viljanen, K. (2004). Application of ontology techniques to view-based semantic search and browsing. In Proceedings of the First European Semantic Web Symposium (ESWS 2004), Springer LNCS 3053 (pp. 92-106).

Morville, P. (2002). The age of findability. Boxes and Arrows. Retrieved April 29, 2002, from http://www.boxesandarrows.com/archives/002595.php

Ranganathan, S. R. (1965). The Colon Classification. In S. Artandi (Ed.), Rutgers series on systems for the intellectual organization of information (Vol. 4). New Jersey: Rutgers University Press.

Sacco, G. M. (1987). Navigating the CD-ROM. In Proceedings of the Int. Conf. Business of CD-ROM.
Sacco, G. M. (1998). Dynamic taxonomy process for browsing and retrieving information in large heterogeneous data bases. US Patent 6,763,349; also Italian Patent 01303603.

Sacco, G. M. (2000). Dynamic taxonomies: A model for large information bases. IEEE Transactions on Knowledge and Data Engineering, 12(2), 468-479.

Sacco, G. M. (2002). Analysis and validation of information access through mono, multidimensional and dynamic taxonomies. Tech. Report, Dept. of Informatica, Univ. of Torino.

Sacco, G. M. (2003). The intelligent e-sales clerk: The basic ideas. In Proceedings of INTERACT '03, the Ninth IFIP TC13 International Conference on Human-Computer Interaction (pp. 876-879).

Sacco, G. M. (2004). Uniform access to multimedia information bases through dynamic taxonomies. In IEEE Sixth International Symposium on Multimedia Software Engineering (ISMSE '04) (pp. 320-328).

Sacco, G. M. (2005a). No (e-)democracy without (e-)knowledge. In E-government: Towards electronic democracy, Int. Conf. IFIP TCGOV 2005, Bolzano, Springer Lecture Notes in Computer Science 3416, 147-156.

Sacco, G. M. (2005b). Guided interactive diagnostic systems. In 18th IEEE International Symposium on Computer-Based Medical Systems (CBMS '05) (pp. 117-122).

Sacco, G. M. (2005c). Guided interactive information access for e-citizens. In EGOV05, Int. Conf. on E-Government, within the DEXA Conf. Framework, Springer Lecture Notes in Computer Science 3591, 261-268.

Sacco, G. M. (2005d). Discount semantics: Modeling complex data with dynamic taxonomies. Tech. Rep., Università di Torino.

Sacco, G. M. (2006). Analysis and validation of information access through mono, multidimensional and dynamic taxonomies. In FQAS 2006, 7th International Conference on Flexible Query Answering Systems, Milano, June 2006, Springer Lecture Notes in Artificial Intelligence.

Sacco, G. M. (2006a). User-centric access to e-government information: E-citizen discovery of e-services. In 2006 AAAI Spring Symposium Series, Stanford University.

Tzitzikas, Y., Analyti, A., & Spyratos, N. (2005). Compound term composition algebra: The semantics. J. Data Semantics, 2, 58-84.

Van Rijsbergen, C. J. (1979). Information retrieval. London: Butterworths.

Wollersheim, D., & Rahayu, W. (2002). Methodology for creating a sample subset of dynamic taxonomy to use in navigating medical text databases. In Proceedings of the IDEAS 2002 Conference (pp. 276-284).

Yee, K.-P. et al. (2003). Faceted metadata for image search and browsing. In Proceedings of ACM CHI 2003 (pp. 401-408).

Zadeh, L. (1965). Fuzzy sets. Information and Control, 8, 338-353.
Terms and Definitions

Facet: One of several top-level (most general) concepts in a multidimensional taxonomy. In general, facets are independent and define a set of "orthogonal" conceptual coordinates.

Extension, deep: Of a concept C, denotes the shallow extension of C union the deep extensions of C's sons.

Extension, shallow: Of a concept C, denotes the set of documents classified directly under C.

Extensional Inference Rule: Two concepts A and B are related if there is at least one item d in the knowledge base which is classified at the same time under A (or under one of A's descendants) and under B (or under one of B's descendants).

Subsumption: A subsumes B if the set denoted by B is a subset of the set denoted by A (B ⊆ A).
Taxonomy: A hierarchical organization of concepts going from the most general (topmost) to the most specific. A taxonomy supports abstraction and models subsumption (IS-A and/or PART-OF) relations between a concept and its father. Tree taxonomies can be extended to support multiple inheritance (i.e., a concept having several fathers).

Taxonomy, one-dimensional: A taxonomy in which an item can be classified under a single concept only.

Taxonomy, multidimensional: A taxonomy in which an item can be classified under several concepts.
Taxonomy, reduced: In a dynamic taxonomy, a taxonomy describing the current user focus set F, derived from the original taxonomy by pruning from it all the concepts not related to F.

User Focus: The set of documents corresponding to a user-defined composition of concepts; initially, the entire knowledge base.

Zoom: A user interface operation that defines a new user focus by OR'ing user-selected concepts and AND'ing them with the previous focus; a reduced taxonomy is then computed and shown to the user.
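The definitions above (deep extension, extensional inference, zoom, reduced taxonomy) can be sketched in a few lines of code. The following is a minimal illustration only, not Sacco's implementation; the taxonomy, documents, and concept names are invented for the example.

```python
# Toy dynamic taxonomy (all data hypothetical). A concept's deep extension is
# its shallow extension plus the deep extensions of its sons; by the
# extensional inference rule, a concept is related to the current focus iff
# its deep extension intersects the focus.

taxonomy = {                      # concept -> list of sons
    "Topic": ["Health", "Transport"],
    "Region": ["North", "South"],
}
classification = {                # document -> concepts it is classified under
    "d1": {"Health", "North"},
    "d2": {"Transport", "South"},
    "d3": {"Health", "South"},
}

def deep_extension(concept):
    """Documents classified under `concept` or any of its descendants."""
    docs = {d for d, cs in classification.items() if concept in cs}
    for son in taxonomy.get(concept, []):
        docs |= deep_extension(son)
    return docs

def zoom(focus, selected):
    """OR the user-selected concepts, AND the result with the previous focus."""
    selection = set()
    for c in selected:
        selection |= deep_extension(c)
    return focus & selection

def reduced_taxonomy(focus):
    """Concepts related to the focus, i.e., with a non-empty extension in it."""
    all_concepts = set(taxonomy) | {s for sons in taxonomy.values() for s in sons}
    return {c for c in all_concepts if deep_extension(c) & focus}

focus = set(classification)             # initially, the entire knowledge base
focus = zoom(focus, ["Health"])         # focus is now {"d1", "d3"}
print(sorted(reduced_taxonomy(focus)))  # "Transport" is pruned away
```

Note how "Transport" disappears from the reduced taxonomy after zooming on "Health", while "North" and "South" survive: each still classifies at least one document in the focus, so the user can refine by region.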
Chapter XXVII
Open Access to Scholarly Publications and Public Policies Jean-Philippe Rennard Grenoble Graduate School of Business, France
Introduction

"If I have seen further it is by standing upon the shoulders of giants." This famous statement by Sir Isaac Newton illustrates that the progress of science relies on the dissemination of discoveries and scientific knowledge. Even though scientific progress is not strictly cumulative (Kuhn, 1970), information sharing is at the heart of this progress. Nowadays, scientific knowledge is mainly spread through scholarly journals, that is, highly specialized journals where quality control and certification are achieved through peer review. The first section of this chapter presents the specificity of the current economic model of scientific publications. The second section introduces the open access movement and its emerging economic model. The third section shows the growing involvement of governments in that movement.
Background: The Economic Model of Scientific Publications

The growing complexity of modern science induces a growing need for knowledge dissemination media. The number of academic journals is very difficult to estimate, but according to Ulrich's International Periodicals Directory (www.ulrichsweb.com), there were about 164,000 scientific periodicals in 2001 in all disciplines (see Figure 1). The largest publishers, such as Reed Elsevier, Blackwell, and Wiley, own most of these journals. Over the last 20 years, commercial firms, especially the largest ones, have raised prices at a rate that cannot be justified by cost or quality increases (McCabe, 2000). According to the Association of Research Libraries (ARL, 2005), the mean serial
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Figure 1. Number of periodicals published worldwide ('000s), 1998-2001. (Source: Ulrich's International Periodicals Directory)
Figure 2. Monograph and serial costs in ARL Libraries, 1986-2004. (Source: ARL, 2005)
unit cost of 89.77 USD in 1986 reached 258.73 USD in 2004. A former president of the University of California recently stated: "University librarians are now being forced to work with faculty members to choose more of the publications they can do without" (Atkinson, 2003, p. 1, original italics). As a consequence, Figure 2 shows that, in the USA, acquisition expenditures have grown tremendously and that part of the budgets had to be reallocated from monographs to journals.

The rise of journal prices has multiple origins, one of the most important being provisions to invest in electronic publications (Chartron & Salaun, 2000). These provisions are nevertheless insufficient to explain current prices. Reed Elsevier's operating profit is estimated at 34 percent (17 percent net profit), and Wiley had an operating profit of 29 percent in the first half of 2000 (House of Commons Science and Technology Committee, 2004). Such "Microsoft-like" margins are very unusual and demonstrate the inefficiency of the scientific publication market. There are four main reasons for this inefficiency:

• Researchers publish to popularize their work and to improve peer recognition (which has a great impact on their careers). They are "give-away authors" (Harnad, 2001) and do not receive any royalties or fees.
• Authors do not have to pay to access scientific information, since all the expenses are paid by academic libraries. They are therefore not concerned with the price of journals and only consider the reputation and the citation impact of the journals they publish in. Demand is thus price-inelastic (that is, prices have little impact on the volume of demand), since prices do not matter to researchers and journals are not easily substitutable.
• Libraries operate in a commercial market but do not have any commercial approach. They buy up to their budget limit and not according to any price equilibrium.
• The multiplication of mergers among publishers has strongly contributed to the increase of prices (McCabe, 2000).
The growing conflict between researchers, who aim at disseminating their works as widely as possible, and libraries, which have limited budgets, on the one hand, and publishers, who mainly have financial objectives, on the other, gave rise to an accelerated development of the practice of open access to electronic publications.
The Open Access Movement

In the Gutenberg era, researchers had no alternative: publishers were the only way to reach readers. In the post-Gutenberg era, digital networks offer a powerful alternative that can lead, in the long term, to a new organization of scientific publications (Harnad, 1999). While preserving quality control and certification through peer review, this organization should be based on open access to electronic publications. Beginning with self-archiving and repositories, the open access movement is now moving towards free electronic publications.

Self-Archiving

From the very beginning, scientists have exchanged information, consulted peers about a given idea, or tested colleagues' reactions to an innovative concept. Up to the second half of the last century, the main transmission tool was private correspondence via postal mail. With the development of the Internet and electronic communications, informal exchanges have exploded, since it is now easy and very common to contact a researcher by e-mail to ask for a copy of a given work. In order to ease informal exchanges and to increase their visibility, many researchers have long used the Internet to self-archive their works, that is, to make either preprints (before refereeing) or postprints (after refereeing) available on their own (personal or institutional) Web sites. Due to the pressure of the open access movement, the copyright policy of journals and publishers has changed considerably in recent years. The Project RoMEO (rights metadata for open archiving) lists publishers' copyright transfer agreements. Figure 3 shows that 83 percent of the 10,673 journals listed in September 2004 accepted at least preprint archiving; this percentage was only 55 percent in 2003. Self-archiving undoubtedly increases visibility, but since self-archived works can only be found through general-purpose search engines, access is difficult without prior knowledge that a given work exists.

Repositories

The success of self-archiving and the difficulty of finding self-archived works led Paul Ginsparg, then a physicist at the Los Alamos National Laboratory, to initiate the arXiv archives in 1991. arXiv aimed at centralizing and easing access to free electronic publications; researchers were asked to deposit their works directly in the repository. With such tools, publications are no longer dispersed among many Web sites and are available at once. There are now more than 450,000 articles in arXiv, with a submission rate of about 5,000 papers per month. Following this pioneer, other high-level archives emerged, some of the most important being:
Figure 3. Evolution of journals’ self-archiving policies, 2003-2004. (Source: RoMEO)
• Cogprints (http://cogprints.ecs.soton.ac.uk), specialized in cognitive sciences.
• PubMed Central (http://www.pubmedcentral.gov/), specialized in life sciences.
• RePEc (http://www.repec.org/) and WoPEc (http://netec.mcc.ac.uk/WoPEc.html), specialized in economics.
• Math-Net (http://www.math-net.org/), specialized in mathematics.
• NCSTRL (http://www.ncstrl.org/) and CiteSeer (http://citeseer.ist.psu.edu/), specialized in computer science.
The development of repositories and self-archiving led to a need for standardization, notably to
build services that permit searching across multiple repositories. Repositories also needed capabilities to properly identify and copy articles stored in other repositories (Lynch, 2001). These needs led to the Open Archives Initiative (http://www.openarchives.org), initiated by Ginsparg in 1999 with "The Santa Fe Convention of the Open Archives Initiative." The Open Archives Initiative designed specific metadata tagging standards (standard formats of keywords) to make archives easily harvestable. Even though the Open Archives Metadata Harvesting Protocol is mainly used by free repositories, it is also employed by servers housing commercial products (the term open refers to the technical architecture, not to the fact that the content should be free). Specific directories like OAIster or Eprints.org now provide lists of OAI-compliant archives. This initiative has met with tremendous success: in July 2006, OAIster managed more than 8.5 million records originating from more than 660 institutions.
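An OAI-PMH server returns records as XML, typically carrying simple Dublin Core metadata. As a sketch of what a harvester does with such a response, the following parses a hand-made minimal ListRecords document; the sample XML, record identifier, and title are invented for illustration, while the namespace URIs are the standard OAI-PMH and Dublin Core ones.

```python
# Sketch: extracting Dublin Core titles from an OAI-PMH ListRecords response.
# A real harvester would first fetch the XML over HTTP from a repository's
# OAI-PMH endpoint (verb=ListRecords&metadataPrefix=oai_dc).
import xml.etree.ElementTree as ET

SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:example.org:1</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>A sample preprint</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def harvest_titles(xml_text):
    """Return (identifier, title) pairs from a ListRecords response."""
    root = ET.fromstring(xml_text)
    out = []
    for record in root.iter("{http://www.openarchives.org/OAI/2.0/}record"):
        ident = record.find("oai:header/oai:identifier", NS).text
        title = record.find(".//dc:title", NS)
        out.append((ident, title.text if title is not None else None))
    return out

print(harvest_titles(SAMPLE))  # [('oai:example.org:1', 'A sample preprint')]
```

Because every OAI-compliant archive exposes the same verbs and the same minimal metadata format, one such loop can aggregate records from hundreds of repositories, which is essentially what directories like OAIster do.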
Online Journals

Publishers could not ignore the progress of electronic publication and distribution. Considering the quick development of knowledge dissemination through the Internet, many of them have decided to make their journals available online.
In addition to their usual paper editions, these journals thus try to improve their diffusion and reputation. Some publishers or institutions have also decided to adopt a more radical solution: purely electronic journals. Considering the prices of printing and postal distribution, electronic publication can reduce the cost of journals (Wellcome Trust, 2003); publishers only have to support the organization of the review process and the cost of diffusion tools (software and hardware). Access to the electronic articles of classical journals is usually reserved for subscribers, but a growing number of them are now freely available online under certain conditions (such as time-delayed release). In July 2006, the Directory of Open Access Journals listed more than 2,300 journals in all disciplines. One of the reasons for the growing success of open access journals is that open access articles have a greater citation impact than others. Studying 119,924 conference articles in computer science and related disciplines, Lawrence (2001) found that the number of citations of open access articles was 2.6 times greater than that of offline articles. A recent study based on the ISI CD-ROM citation database concluded that, for the year 2001, the citation impact in all physics fields was 5.5 times higher for open access articles (Brody, Stamerjohanns, Vallières, et al., 2004). These results will undoubtedly contribute to improving the perception of open access journals
Table 1. Estimates of journal costs, in USD per article (Source: Wellcome Trust, 2004)

Cost element                    Subscriber-pays journal        Author-pays journal
                                High (a)      Medium (b)       High (a)      Medium (b)
First-copy costs per article    1,500         750              1,500         750
Fixed costs per article         1,650         825              1,850         925
Variable costs per article      1,100         600              100           100
Total costs per article         2,750         1,425            1,950         1,025

a: good to high-quality journal; eight articles reviewed for each article accepted.
b: medium-quality journal; two articles reviewed for each article accepted.
by researchers, who consider them a last-resort option.
The Search for a New Economic Model

The transition to electronic journals reduces costs but is of course insufficient to economically validate the open access model. Apart from subsidy-based free journals, a growing economic model is based on payment by the authors' institutions: an author-pays model is substituted for the classical subscriber-pays system. A study by the Wellcome Trust compares the costs of classical subscriber-pays journals and of electronic author-pays journals (Wellcome Trust, 2004); the results are summarized in Table 1. The structure of fixed costs is similar for both types of journals (editorial costs, review costs, article preparation, etc.), but fixed costs are estimated higher for author-pays journals because they have to cover the administration of the author charging system. Variable costs differ, since the marginal cost of electronic distribution is very low. According to the Wellcome Trust, "In terms of costs of production, system costs and the implication of those for levels of fees, the author-pays model is a viable option. Open-access author-pays models appear to be less costly and to have the potential to serve the scientific community successfully" (Wellcome Trust, 2004).

One of the first online author-funded journals was the New Journal of Physics (NJP), launched at the end of 1998 (Haynes, 1999). This journal requires authors of published papers to pay a publication fee of £300 (about 550 USD). The beginnings were difficult, since online journals were not considered "100 percent serious," but NJP is now ranked 14th of 68 titles in the physics, multidisciplinary category of ISI's Journal Citation Reports (Haynes, 2004). The most prestigious initiative yet is that of the Public Library of Science, founded in October 2000 by Nobel Prize recipient Harold E. Varmus, Patrick O. Brown of Stanford University, and Michael Eisen of the University of California, Berkeley. They received a nine million USD grant
from the Gordon and Betty Moore Foundation and launched a high-level journal, PLoS Biology, in October 2003. PLoS Biology charged authors about 1,500 USD per accepted article but, thanks to an equalization system, publication in PLoS Biology remained affordable for laboratories in developing countries (Delbecq, 2004). Neither the NJP nor PLoS Biology yet covers its direct costs with authors' fees; both strongly rely on subsidies. To cover its costs, the NJP would have to increase the number of published articles by 150 percent, raise the proportion of paying authors from the present 60 percent to 95 percent, and raise the fee from the present £400 (about 750 USD) to £600 (about 1,100 USD) (Haynes, 2004). The economic model of free publications thus remains to be constructed. A pure author-pays system cannot be implemented immediately. Prosser (2003) proposes a transition model in which journals would give authors two options:

• To pay for publication, in which case the article will be freely available.
• Not to pay for publication, in which case the article will only be available to subscribers.
According to Prosser, the numerous advantages of open access, particularly in terms of visibility and citation frequency (Harnad, 2004), should lead to a growing share of author-pays articles. Prosser's model, as well as the propositions of the Open Society Institute (Crow & Goldstein, 2004), remains to be validated. No open-access journal covers its fixed costs yet, and the solutions to bring them to financial equilibrium are still to be invented. Furthermore, the open-access model undoubtedly has undesired effects:

• Many scientific societies live by their publications. These non-profit organizations use publication incomes to finance conferences or scholarships. The development of open access could threaten their activities.
• By succeeding, the open-access movement will threaten the largest publishers, who may be tempted to concentrate their publications on core collections. Losing economies of scale from successful publications, the cost of marginal, highly specialized journals could explode (Okerson, 2003).
• The author-pays model could result in a simple shift from library subscriptions to research budgets. In 2003, Duke University published about 4,500 papers. If authors had paid 1,500 USD per article, the total cost of 6.75 million USD would have been close to the current journals budget of about 6.6 million USD (Guterman, 2004).
• Author-pays journals will inevitably be tempted to accept a growing number of articles in order to cover their fixed costs; the global quality of these publications could then decrease.
• Authors who do not have the budget to finance a publication might look to think tanks and corporations for extra funding. These scientific works would paradoxically be more influenced by political and commercial agendas (Wellen, 2004).
Public Policies

The importance of education, research, and innovation for economic growth has been well known at least since Schumpeter (1912). According to Jones (2000), between 1965 and 1990, 35 percent of United States growth can be attributed to the rise in educational attainment and 40 percent to the rise in worldwide research intensity. The science, technology, and medicine (STM) publishing market is estimated at between seven and 11 billion USD (European Commission, 2006). Currently, governments pay three times during the publishing process (House of Commons Science and Technology Committee, 2004):

• They fund the research projects.
• They pay the salaries of the academics who carry out peer review.
• They fund libraries to purchase publications.
"What other business receives the goods that it sells to its customers from those same customers, a quality control mechanism provided by its customers, and a tremendous fee from those same customers?" (House of Commons Science and Technology Committee, 2004, p. 37)

This context explains why governments concerned with research budgets show a growing interest in the open access movement and try to support it. A UK government committee published a report in 2004 notably declaring (House of Commons Science and Technology Committee, 2004):

• We are convinced that the amount of public money invested in scientific research and its outputs is sufficient to merit Government involvement in the publishing process.
• It is in society's interest that public understanding of science should increase. Increased public access to research findings should be encouraged by publishers, academics and Government alike.
The report advocates two main recommendations:

• UK education institutions should establish repositories to store their publications, from which they could be read online free of charge, and government-funded researchers should have to deposit their publications in their institutions' repositories.
• The author-pays model could be viable and should be studied further.
In the United States, following the 2003 Public Access to Science Act proposed by Representative M. Sabo, Senator John Cornyn proposed the bipartisan Federal Research Public Access Act (FRPAA) in May 2006. Considering that more than 55 billion USD are invested each year in basic and applied research, the proposal requires free access to publicly funded publications in peer-reviewed journals within six months. Europe launched a study on the economic and technical evolution of the scientific publication
market in June 2004. The resulting report, released in April 2006, notably recommends guaranteeing public access to publicly funded research shortly after publication (European Commission, 2006). At the end of January 2004, OECD ministers "…recognized that fostering broader, open access to and wide use of research data will enhance the quality and productivity of science systems worldwide. They therefore adopted a Declaration on Access to Research Data from Public Funding" (OECD, 2004). One of the principles of this declaration is to promote "openness," that is, open access to publicly funded research. Developing countries, notably in Latin America, also try to stimulate open access, the most successful initiative being the Scientific Electronic Library Online project, which now gathers Brazil, Chile, Cuba, and Spain. In Africa, African Journals OnLine promotes African publications and already offers free abstracts. Generally speaking, governments are now deeply involved in the open-access movement. Considering the weight of investments in research and the importance of the quick diffusion of knowledge, there is no doubt that government involvement will be reinforced in the coming years.
Future Trends

Open access is by no means a panacea: it is not economically viable yet and could have important undesired effects. Nevertheless, the pressure exerted on commercial publishers is now very high, and they can no longer ignore this movement. It is very difficult to imagine that, within a decade or more, commercial publications will disappear and be replaced by free publications, but the open-access movement will undoubtedly slow down the explosive dynamic of prices. The future equilibrium will inevitably associate commercial and open-access publications, opening the way toward a more efficient market for scholarly publications. The growing interest of governments will significantly contribute to this dynamic.
Conclusion

The Journal of Comparative Neurology costs $18,000 a year; Brain Research costs about $21,000; and Nuclear Physics A and B more than $23,000 (Guterman, 2004). Such exploding prices explain the growing conflict between academics, government research agencies, and publishers. The development of the open-access movement is thus not merely a consequence of the diffusion of the Internet, but also a clear symptom of the inefficiency of the current market. The debate on free publications remains very passionate and is not always rational, but its great merit is to raise an important issue. By modifying the balance of power between researchers and publishers, the success of the open access movement and the development of e-commerce and e-distribution will ease scientific knowledge dissemination, reduce the information gap between wealthy and low-budget institutions, and help the advent of an efficient market. There is no doubt that government involvement will accelerate this movement.
Future Research Directions

States' and international organizations' keen interest in open access should stimulate research in the field. Three main issues need to be explored in greater depth:

1. The analysis of open access publication costs remains insufficient. The decrease of these costs, along with the spread of free editing tools, should strengthen the appeal of open access in the coming years. A wide longitudinal study is necessary to embrace the whole economic potential of open access.
2. The global economic impact of open access for states and international organizations needs to be quantified. Open access should allow reducing the overcharges of the current system, in which public research pays twice for scholarly publications.
3. The impact of the enlarged diffusion of research works associated with open access also needs to be analyzed, notably
concerning developing countries, which have low acquisition budgets.
References

ARL. (2005). ARL statistics 2003-04. Washington, DC: Association of Research Libraries.
Atkinson, R. C. (2003). A new world of scholarly communication. The Chronicle of Higher Education, 50(11), B16.
Brody, T., Stamerjohanns, H., Vallières, F., Harnad, S., Gingras, Y., & Oppenheim, C. (2004). The effect of open access on citation impact. Paper presented at the National Policies on Open Access (OA) Provision for University Research Output: An International Meeting, Southampton University.
Chartron, G., & Salaun, J.-M. (2000). La reconstruction de l'économie politique des publications scientifiques [The reconstruction of the political economy of scientific publications]. BBF, 45(2), 32-42.
Crow, R., & Goldstein, H. (2004). Guide to business planning for converting a subscription-based journal to open access. Retrieved September 20, 2004, from http://www.soros.org/openaccess//oajguides/
Delbecq, D. (2004, June 5). La revue en ligne qui fait trembler les payantes [The online journal that makes the paid ones tremble]. Libération, 16.
European Commission. (2006). Study on the economic and technical evolution of the scientific publication markets in Europe. Available at http://www.infotoday.com/newsbreaks/nb0604101.shtml
Guterman, L. (2004). The promise and peril of 'open access.' The Chronicle of Higher Education, 50(21).
Harnad, S. (1999). Advancing science by self-archiving refereed research. Retrieved August 10, 2004, from http://www.sciencemag.org/cgi/eletters/285/5425/197#EL12
Harnad, S. (2001). The self-archiving initiative. Nature, 410, 1024-1025.
Harnad, S. (2004). Comparing the impact of open access (OA) vs. non-OA articles in the same journals. D-Lib Magazine, 10(6).
Haynes, J. (1999). New Journal of Physics: A web-based and author-funded journal. Learned Publishing, 12(4), 265-269.
Haynes, J. (2004). Can open access be viable? The Institute of Physics' experience. Retrieved September 20, 2004, from http://www.nature.com/nature/focus/accessdebate/20.html
House of Commons Science and Technology Committee. (2004). Scientific publications: Free for all? London: House of Commons. Available at http://www.escholarlypub.com/oab/keyoaconcepts.htm
Jones, C. I. (2000). Sources of U.S. economic growth in a world of ideas (No. 99-29). New York: United Nations World Employment Programme.
Kuhn, T. S. (1970). The structure of scientific revolutions. Chicago: University of Chicago Press.
Lawrence, S. (2001). Online or invisible? Nature, 411(6837), 521.
Lynch, C. A. (2001). Metadata harvesting and the open archives initiative. ARL Bimonthly Report, 217.
McCabe, M. J. (2000). Academic journal pricing and market power: A portfolio approach (working paper). School of Economics, Georgia Institute of Technology.
OECD. (2004). Science, technology and innovation for the 21st century. Meeting of the OECD Committee for Scientific and Technological Policy at ministerial level, 29-30 January 2004. Final communiqué. Retrieved September 20, 2004, from http://www.oecd.org/document/0,2340,en_2649_34487_25998799_1_1_1_1,00.html
Okerson, A. (2003). Towards a vision of inexpensive scholarly journal publication. Retrieved September 20, 2004, from http://www.library.yale.edu/~okerson/Libri.html
Prosser, D. C. (2003). From here to there: A proposed mechanism for transforming journals from closed to open access. Learned Publishing, 16(3), 163-166.
Schumpeter, J. A. (1912). Théorie de l'évolution économique [The theory of economic development]. Paris: Dalloz.
Wellcome Trust (Ed.). (2003). Economic analysis of scientific research publishing. Histon: Wellcome Trust.
Wellcome Trust (Ed.). (2004). Costs and business models in scientific research publishing. Histon: Wellcome Trust.
Wellen, R. (2004). Taking on commercial scholarly journals: Reflections on the 'open access' movement. Journal of Academic Ethics, 2(1), 101-118.
Further Reading

Anderson, B. (2004). Open access journals. Behavioral & Social Sciences Librarian, 22(2), 93-99.
Anderson, R. (2004). Open access in the real world: Confronting economic and legal reality. College & Research Libraries News, 65(4), 206-208.
Awre, C. (2003). Open access and the impact on publishing and purchasing. Serials, 16(2), 205-208.
Butler, D. (2003). Open-access row leads paper to shed authors. Nature, 425, 334.
Dryburgh, A. (2004). Open access — time to stop preaching to the converted? Learned Publishing, 17(1), 69-70.
Franklin, J. (2003). Open access to scientific and technical information: The state of the art. Information Services & Use, 23(2), 67-86.
Friend, F. J. (2004). How can there be open access to journal articles? Serials, 17(1), 37-40.
Grivell, L. (2004). Access for all? EMBO Reports, 5(3), 222-225.
Guédon, J.-C. (2003). Open access archives: From scientific plutocracy to the republic of science. IFLA Journal, 23(2), 129-140.
Harnad, S. (2000). Freeing the refereed journal corpus online. Computer Law & Security Report, 16(2), 87-87.
Harnad, S. (2003). The research-impact cycle. Information Services & Use, 23(2), 139-142.
Lamb, C. (2004). Open access publishing models: Opportunity or threat to scholarly and academic publishers? Learned Publishing, 12(2), 143-150.
McKiernan, G. (2000). ResearchIndex: Autonomous citation indexing on the Web. International Journal on Grey Literature, 1(1), 41-46.
Peek, R. (2004). Open access expands its reach. Information Today, 21(1), 17-18.
Velterop, J. (2003). Open access publishing. Information Services & Use, 23(2), 113-115.

Terms and Definitions

DOAJ (Directory of Open Access Journals): A portal listing more than 2,100 open access journals in all disciplines.

OAI-PMH: The Open Archives Initiative Protocol for Metadata Harvesting provides a standard framework for metadata harvesting.

Open Access Journal: A scholarly journal freely available online. Some are purely electronic journals; others are classical journals offering a free electronic version.

Open Archives Initiative: Initiated by the American physicist P. Ginsparg in 1999, the OAI designed metadata tagging standards.

Preprint: A scientific work before peer review.

Postprint: A scientific work modified after peer review.

Public Library of Science: An organization founded in October 2000 committed to making scientific literature a freely available resource. Nobel Prize recipient Harold E. Varmus is co-founder and chairman of the board of PLoS.

Repository: A database where researchers self-archive their works, either preprints or postprints. The Open Archives Initiative proposes standards to allow access to different repositories.

Self-Archiving: The deposit of a researcher's works in a repository. The researcher is generally responsible for the format of the deposit and particularly for its conformance to the archive standards.

Scientific Electronic Library Online: Particularly devoted to Latin America and Caribbean countries, SciELO promotes a model for cooperative electronic publishing of scientific journals (http://www.scielo.org).
Chapter XXVIII
The Digital Divide and Social Equity Alfred P. Rovai Regent University, USA Emery M. Petchauer Lincoln University, USA
Introduction

The Pew Internet & American Life Project (Pew/Internet; Lenhart, Horrigan, Rainie et al., 2003) reports that 42 percent of Americans say they do not use the Internet, with 24 percent being truly offline, having no direct or indirect experience with the Internet. These percentages are averages, however, and do not apply uniformly across all subpopulations. Pew/Internet (Fox, 2005) reports that Americans age 65 and older, African Americans, and those with less education lag behind others in Internet usage. This chapter examines the impact of these differences on social equity, in terms of receiving fair, just, and equitable treatment from the political system regarding public policies and services. Pew/Internet (Madden, 2006) reports that 53 percent of adults living in households with less than $30,000 in annual income use the Internet, versus 80 percent of those with incomes between $30,000 and $50,000, 86 percent of adults living in households with annual incomes between $50,000 and $75,000, and 91 percent of adults living in households earning more than $75,000. Regarding Internet usage by race and ethnicity, Fairlie (2004) reports that in the United States 70 percent of whites, 41 percent of blacks, and 39 percent of Latinos have access to home computers, and 50 percent of whites, 29 percent of blacks, and 24 percent of Latinos have access to the Internet. Finally, Pew/Internet (Madden, 2006) reports that level of education is also an important indicator of Internet use: "While 40 percent of adults who have less than a high school education use the Internet, 64 percent of adults with a high school degree go online. Among those who have some college education, 84 percent use the Internet, and 91 percent of adults with at least a college degree go online" (Madden, 2006, p. 4). These statistics suggest a digital divide between those who have reasonable access to information technology and those who do not. One reason this divide is an important issue is that access to
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
information technology has a large impact on the ability of individuals to acquire knowledge and to become active creators and distributors of information. Moreover, this divide negatively affects the ability of the poor and minorities to accumulate social capital (Putnam, 2000) and to participate fully in our technological society. As the Internet becomes increasingly central to life in today's society, it becomes important that certain groups are not systematically excluded. This chapter examines the digital divide with an emphasis on critical perspectives that recognize power, racism, and social stratification, and on the challenges public officials face in promoting information technology policies and programs that support social equity.
Background

The term digital divide can take on several meanings, but at the most basic level it refers to the division between those who have reasonable access to the Internet and those who do not. Any discussion of the digital divide, particularly when related to digital inequality, assumes the knowledge gap hypothesis (e.g., Bonfadelli, 2002), which posits a growing knowledge gap between individuals who have access to and are able to use information and those who do not. A digital divide exists whenever there is a gap in opportunities experienced by those with limited access to technology, especially the Internet. Warschauer (2003) argues that the digital divide, like literacy, is not a binary issue of haves and have-nots; there are different degrees of computer access just as there are different degrees of literacy. The term digital divide describes inequalities in access to computers and the Internet between various subpopulations. The racial digital divide, for example, describes the difference in rates of access to computers and the Internet between those racial groups with high rates of access, such as whites, and those with lower rates of access, such as African Americans.

The Center for the Digital Future (2005) reports that the top reasons cited by the 21.4 percent of Americans who do not use the Internet are having no computer at home, lack of interest or knowledge, and the expenses associated with computer ownership and Internet access. Reddick (2002) suggests non-users can be divided into three types:

1. Type 1: Non-users recognize the value of the Internet and believe it may be beneficial for meeting their needs. However, their main obstacles are technical skill development and the affordability of the technology.
2. Type 2: Non-users face the same problems as Type 1 non-users, but they also perceive no personal or social benefit from use of the Internet.
3. Type 3: Non-users are similar to Type 2 non-users but are unlikely to have the interest, resources, or social skills to benefit from Internet access.
On the other hand, Stewart (2000) identifies the following reasons most people get online:

1. Life events: Events that create new opportunities, relationships, time pressures, demands for work, and so forth.
2. Social push: The social pressure of friends, family, and colleagues to adopt and use the Internet.
3. Multimedia pull or instrumental need: Demands of work, or participation in other activities where information technology offers benefits of efficiency or economy, or is the only tool for the job.
4. Curiosity and interest in technology or content.
The term digital divide is also used to address disparities in computer ownership and in access to high-speed broadband digital services. Most of our knowledge about this divide is based on surveys suggesting that it is mostly related to ethnic and minority group affiliation, geographic location, household composition, age, education, and income level (Katz & Aspden, 1997). Those on the wrong side of the digital divide are denied the option to participate effectively in new high-tech jobs and in technology-enhanced education. From the educational perspective, the groups most disenfranchised by the digital divide are the same groups historically disenfranchised by curricular and pedagogical practices, evaluation and assessment, and other aspects of education, as well as by society at large. One reason for the digital divide along racial lines is that minorities are often at a socioeconomic disadvantage due to lower education levels and lower incomes. Seiden (2000) reports strong evidence that schools in poorer urban and rural districts have not only less computer hardware but also less capacity to train students. Cullen (2001) argues that:

being on the wrong side of the digital divide is only one symptom of being poor. Lower socioeconomic groups also have far lower household incomes, less access to educational opportunities, and have far more limited job opportunities. . . one of the keys to increasing the socio-economic status of this country's poorest citizens is to grant them fair and equal access to educational and economic opportunities, and the Internet presents us with an exceptional opportunity to do just that. (¶ 10)

However, the cause-and-effect relationship for those on the wrong side of the digital divide is not clear. Poverty contributes to the digital divide, which in turn further isolates the poor from better job and educational opportunities. Hoffman and Novak (1998) report that whites are more likely than African Americans to own a home computer at each education level. This disparity can be largely explained by sociological research showing that black persons with comparable education earn less than their white counterparts and have larger families, and thus lower per capita incomes. Nonetheless, the result is that more blacks than whites are on the wrong side of the digital divide, even after controlling for level of education.
The type of computer access students experience can affect their attitudes toward online learning as well as their academic achievement. Students who own a computer may find online learning activities more convenient and desirable than students who must locate an available computer in a public place in order to access the Internet and their virtual classroom. This lack of convenience can contribute to negative reactions to online learning. According to the National Telecommunications and Information Administration, lack of technology access is particularly acute among African American higher education students (Dervarics, 2003), placing them at a disadvantage in competing and qualifying for the best-paying jobs. The result for those on the wrong side of this divide, typically the poor and minorities, can be social estrangement and reduced social support or meaningful social connection with mainstream society. Such individuals can experience alienation from society, become detached from mainstream groups such as student groups, and feel a lack of connection to other members of society and to other students. Just as poverty is defined as the lack of sufficient funds or material possessions to provide the basic necessities (food, clothing, shelter) required to live a healthful life, Stewart (2000) defines infopoverty as the lack of access to basic information that hinders individuals and communities from improving their circumstances.

DiMaggio, Celeste, Hargittai et al. (2004) suggest that as more individuals use the Internet, it becomes less useful to discuss the digital divide in terms of access only. They argue that researchers also need to examine differences in people's digital competency, which they refer to as the second-level digital divide. "Addressing the digital divide is not simply a matter of running wire and providing public computers — it is also a matter of ensuring that people have the requisite skills to use the technology and that they see relevance of the technology to their lives" (Seiden, 2000, ¶ 3). Consequently, looking at the digital divide exclusively in terms of technical access is insufficient.
Such skills are directly related to how long it takes to complete online tasks. While economic, cultural, and social factors play the greater role in narrowing the first-level digital divide, educational factors are more important in closing the gap in online skills. When one examines the digital divide in terms of the ability of university students to use computer technology effectively, Farrell (2005) writes that the majority of college freshmen belonging to an ethnic minority at schools such as UCLA are unequipped to manage the digital workplace. Consequently, regardless of how one examines the digital divide, African Americans appear to be at a disadvantage, and this situation can adversely affect their academic achievement. Cunningham (2001) suggests that today's investment in computing for schools is unlikely to close the gap between those in our society who have access to, and know how to use, information technologies and those who do not:

Indeed, placing the computer at the center of school routines will only increase the educational advantage of students for whom computers are just a fact of daily life. Putting computers in the classrooms of such students may increase their opportunities to learn ... But for students without such a comfort level, the demands of the computer will be a distraction from reading and writing and figuring or will become just a very expensive version of a textbook or workbook. (p. 41)

Consequently, he calls for a massive effort at teacher training: not one-shot in-service workshops, but comprehensive professional development that prepares everyone, regardless of subpopulation, to use information technologies.
Future Trends

The major unanswered question for the future is whether digital inequalities are a temporary problem that will disappear over time as Internet access spreads and the population becomes more computer literate. Farrell (2005) reports that "the so called digital divide is widening between black freshmen and students of other ethnic groups… Only 76.5 percent of black students reported using a personal computer frequently in 2004, compared with 86.7 percent of white students" (p. 2). However, other research findings suggest minorities and the poor are making progress in closing this divide. Pew/Internet (Horrigan, 2006), for example, reports that home broadband adoption among African Americans increased by 121 percent between 2005 and 2006, while it increased by only 35 percent for whites.

Some authors (e.g., Mueller, 2001) suggest that the free market will eventually eliminate the digital divide over time. Others disagree. Warschauer (2002), for example, maintains that Internet access tends to disperse more slowly than other information technologies, such as computers, because it requires monthly fees, whereas devices require only a single purchase. Such continuing fees can keep lower socioeconomic groups from acquiring Internet access. Consequently, some researchers contend that closing the digital divide will require some intervention in the operation of the free market (De George, 2003). Alternatively, will social stratification and the growth of an underclass unskilled in the use of technology create an enduring digital divide? Public telecommunications policy may play an important role here if the Internet is treated as a public utility, so that access is made widely available through public libraries, community centers, and private homes, much as telephone services were regulated to produce low-cost service and universal access in rural areas (Putnam, 2000).

Conclusion

Fairlie (2004) raises the issue of whether the digital divide should be viewed as an issue of goods and services arising from income differences, as with other electronic goods such as digital cameras. Or do computers as goods create access to knowledge, education, and jobs, thus requiring redistributive policies? The focus of public administration has always been on the public good, and issues such as efficiency and economy are historic hallmarks of effective public administration. While these issues are still important, so are issues of fairness, benevolence, and social equity (Frederickson, 1996).
Frederickson (1996) suggests that the present emphasis on efficiency in public administration has taken a toll on social equity. He also maintains that public administration is not devoid of political meaning, or of partiality either for or against the disadvantaged. The values to be achieved in a democracy include an active and engaged citizenry. In public administration, the term social equity sounds simple enough: the concept that everyone deserves equitable treatment, regardless of race, religion, sex, or any other discriminating feature. The debate comes in defining what counts as equitable distribution of the service or product provided and how social equality is to be maintained. There are a variety of ways to define equitable distribution of a product or service (Stone, 2002). Horizontal equity implies distribution dependent upon abilities or incomes; that is, those with higher incomes receive more goods or services. Vertical equity entails distribution regardless of differences in income or abilities; that is, those with lower incomes receive goods or services via redistribution. The challenge for public administrators is how to close the digital gap and create opportunities for wider participation by all people in the information age, providing the disadvantaged with more social capital and thereby promoting social equity. To take equitable distribution of knowledge as a major goal of digital divide policy is to view access to information technology as a basic human right (Lievrouw & Farb, 2003). The digital divide can also be viewed as a gap in opportunities. It is mostly the young middle class who have cyberpower and access to information technology, whereas the economically disadvantaged, the poorly educated, and minorities are being left out of the information age, with negative repercussions for their employment opportunities and their ability to use the Internet for civic engagement.
Knowing how to access the Internet and how to use computer software and other communication devices are essential skills in today's society, where information is power and wealth is based on knowledge and the ability to communicate effectively.

An examination of the digital divide is necessary to fully inform the use of technology to promote digital inclusion. Digital inclusion is central to the success of the information society. It relates not only to the distribution of an adequate share of resources among the entire population, but also to both individual and collective quality of life (Stewart, 2000). Even if the poor do not have an equal share of economic resources, they can have better opportunities to participate in society than is presently the case. Collective action must be taken to increase digital inclusion. The proliferation of cyber cafés and public access points (Stewart, 2000) means that many members of society have physical access to the Internet at a relatively low price, making the discussion of Internet access one of skills and quality of access rather than of affordability and distribution of equipment. Considering the use of technology for social inclusion allows us to shift our focus from gaps to be overcome by distributing technology to social development through the effective use of technology, training, and public policies. Accordingly, while most governmental and non-profit efforts in the United States have put resources into technical solutions to bridging the digital divide, at least some of those resources should be shifted to alleviating the educational and economic discrepancies that exist in society.
Future Research Directions

The digital divide reflects the loss of individual opportunities to participate in e-democracy, e-commerce, and e-research, and to compete and prosper in today's knowledge-based economies. As information and communications technology (ICT) expands and becomes more crucial, the negative results for those excluded become more severe. Although some data support arguments that the gaps will resolve themselves, research data also show that some significant gaps are widening. Therefore, additional research is needed to identify more solid theoretical frameworks and socio-economic metrics to assess the effects of public policy interventions on the digital divide, and to conduct longitudinal studies that track trends in gap sizes over time and explain fluctuations. Future research efforts also need to yield greater understanding of the consequences of the gap for individuals, families, and society at large.

Most studies that examine the digital divide are descriptive in nature and address the presence or absence and the closure or widening of gaps in access and usage. However, access and usage are only two aspects of the digital divide. Additional research is also needed to examine ICT usage differences between subpopulations once equitable access has been achieved. One such study, conducted by Robinson, DiMaggio, and Hargittai (2003), found that college-educated participants possessed advantages over high school-educated participants in using the Internet to derive various benefits; the clearest advantages appeared in the types of sites visited, the uses made of them, and political discussion. This type of study needs to be replicated with other samples and settings and extended to examine potential differences in other demographic variables, such as ethnicity. Additional research is also needed to identify the cultural factors that influence the frequency and nature of ICT use, in order to identify factors that directly facilitate or inhibit Internet use beyond simple access. In particular, studies are required that respond to Selwyn's call for research answering the following questions:

• What types of formal or theoretical access to what technologies do people have at home, at work and in community settings?
• What types of effective or practical access to what technologies do people have at home, at work and in community settings?
• What is the nature and extent of the use of technologies facilitated by this access? Under what circumstances does meaningful use or engagement arise?
• What factors contribute to people continuing to be users of ICT and others to revert to becoming non-users?
• What types of social, economic, cultural and technological capital are people able to draw upon when using technology?
• What are the short-term outcomes of this engagement with technology for people and communities?
• What are the longer-term consequences of this engagement with technology in terms of individuals' participation in society?
• How are people's ICT access, engagement and outcomes patterned according to individual factors such as age, gender, class, geography, ethnicity and disability?
• What other mitigating factors and circumstances can be identified as having an impact on different social groups' propensity and motivations to engage with ICT? (Selwyn, 2004, p. 356)
References

Bonfadelli, H. (2002). The Internet and knowledge gaps: A theoretical and empirical investigation. European Journal of Communication, 17(1), 65-84.

Center for the Digital Future. (2005). Surveying the digital future. Los Angeles, CA: USC Annenberg School. Retrieved January 30, 2006, from http://www.digitalcenter.org/pages/current_report.asp?intGlobalId=19

Cullen, R. (2001). Addressing the digital divide. Online Information Review, 25(5), 311-321. Retrieved January 30, 2006, from http://wotan.liu.edu/dois/data/Papers/juljuljin5644.html

Cunningham, C. A. (2001). Improving our nation's schools through computers & connectivity. Brookings Review, 19(1), 38-43. Retrieved January 30, 2006, from http://www.brook.edu/press/REVIEW/winter2001/crandall.htm

De George, R. T. (2003). The ethics of information technology and business. Malden, MA: Blackwell Publishing.

Dervarics, C. (2003). House moves ahead on digital divide help. Black Issues in Higher Education, 20(13), 6.

DiMaggio, P., Celeste, C., Hargittai, E., & Shafer, S. (2004). Second-level digital divide: From
unequal access to differential use. In K. Neckerman (Ed.), Social inequality (pp. 355-400). New York: Russell Sage Foundation.
Putnam, R. D. (2000). Bowling alone: The collapse and revival of American community. New York: Simon and Schuster.
Fairlie, R. W. (2004). Race and the digital divide. Contributions to Economic Analysis & Policy, 3(1). Retrieved January 30, 2006, from http://cjtc.ucsc.edu/docs/r_digitaldivide9.pdf
Reddick, A. (2002). The dual digital divide: The information highway in Canada. Ottawa: The Public Interest Advocacy Centre.
Farrell, E. F. (2005, February 4). Among freshmen, a growing digital divide. The Chronicle of Higher Education, 51(22), A32. Fox, S. (2005). Digital divisions. Washington, DC: Pew Internet & American Life Project. Frederickson, H. G. (1996). The spirit of public administration. San Francisco: Jossey-Bass. Hoffman, D. L., & Novak, T. P. (1998). Bridging the racial divide on the Internet. Science, 280(April 17), 390-391. Horrigan, J. B. (2006). Home broadband adoption 2006. Washington, DC: Pew Internet & American Life Project. Katz, J., & Aspden, P. (1997). Motivations for and barriers to Internet usage: Results of a national public opinion survey. Internet Research: Electronic Networking Applications and Policy, 7(3), 170-188.
Robinson, J. P., DiMaggio, P., & Hargittai, E. (2003). New social survey perspectives on the digital divide. IT & Society, 1(5), 1-22. Seiden, P. A. (2000). Bridging the digital divide. Reference & User Services Quarterly, 39(4), 329. Selwyn, N. (2004). Reconsidering political and popular understandings of the digital divide. New Media & Society, 6(3), 341-362. Stewart, A. (2000). Social inclusion: An introduction. In A. Stewart, & P. Askonas (Eds.), Social inclusion: Possibilities and tensions. London: Macmillan. Stone, D. (2002). Policy paradox. New York: W. W. Norton & Company. Warschauer, M. (2002). Reconceptualizing the digital divide. First Monday, 7(7). Retrieved January 30, 2006, from http://firstmonday.org/issues/issue7_7/warschauer/index.html
Lenhart, A., Horrigan, J., Rainie, L., Allen, K., Boyce, A., Madden, M., & O’Grady, E. (2003). The ever-shifting Internet population: A new look at Internet access and the digital divide. Washington, DC: Pew Internet & American Life Project.
Warschauer, M. (2003). Demystifying the digital divide. Scientific American, 289(2). Retrieved January 30, 2006, from http://www.sciam.com/article.cfm?chanID=sa006&colID=1&articleID=000112F0-AB93-1F09-97AE80A84189EEDF
Lievrouw, L. A., & Farb, S. E. (2003). Information and equity. Annual Review of Information Science and Technology, 37, 499-540.
Further Reading
Madden, M. (2006). Data memo: Internet penetration and impact April 2006. Washington, DC: Pew Internet & American Life Project.

Mueller, M. L. (2001). Universal service policies as wealth redistribution. In B. M. Compaine (Ed.), The digital divide: Facing a crisis or creating a myth? (pp. 179-187). Cambridge, MA: MIT Press.
Antonelli, C. (2003). The digital divide: Understanding the economics of new information and communication technology in the global economy. Information Economics and Policy, 15(2), 173-199.

Azari, R., & Pick, J. B. (2005). Technology and society: Socioeconomic influences on technological sectors for United States counties. International Journal of Information Management, 25(1), 21-37.

Brewer, G. A., Neubauer, B. J., & Geiselhart, K. (2006). Designing and implementing e-government systems: Critical implications for public administration and democracy. Administration & Society, 38(4), 472-499.

Brown, D. (2005). Electronic government and public administration. International Review of Administrative Sciences, 71(2), 241-254.

Cava-Ferreruela, I., & Alabau-Muñoz, A. (2006). Broadband policy assessment: A cross-national empirical analysis. Telecommunications Policy, 30(8-9), 445-463.

Chang, B. L., Bakken, S., Brown, S. S., Houston, T. K., Kreps, G. L., Kukafka, R., et al. (2004). Bridging the digital divide: Reaching vulnerable populations. Journal of the American Medical Informatics Association, 11(6), 448-457.

Dugdale, A., Daly, A., Papandrea, F., & Maley, M. (2005). Accessing e-government: Challenges for citizens and organizations. International Review of Administrative Sciences, 71(1), 109-118.

Dumont, G., & Candler, G. (2005). Virtual jungles: Survival, accountability, and governance in online communities. The American Review of Public Administration, 35(3), 287-299.

Evans, D., & Yen, D. C. (2005). E-government: An analysis for implementation: Framework for understanding cultural and social impact. Government Information Quarterly, 22(3), 354-373.

Flew, T. (2005). New media: An introduction (2nd ed.). New York: Oxford University Press.

Frieden, R. (2005). Lessons from broadband development in Canada, Japan, Korea and the United States. Telecommunications Policy, 29(8), 595-613.

Grazian, D. (2005). A digital revolution? A reassessment of new media and cultural production in the digital age. The Annals of the American Academy of Political and Social Science, 597(1), 209-222.

Hsu, J., Huang, J., Kinsman, J., Fireman, B., Miller, R., Selby, J., & Ortiz, E. (2005). Use of e-health services between 1999 and 2002: A growing digital divide. Journal of the American Medical Informatics Association, 12(2), 164-171.

Jaeger, P. T., Bertot, J. C., McClure, C. M., & Langa, L. A. (2006). The policy implications of Internet connectivity in public libraries. Government Information Quarterly, 23(1), 123-141.

Kind, T., Huang, Z. J., Farr, D., & Pomerantz, K. L. (2005). Internet and computer access and use for health information in an underserved community. Ambulatory Pediatrics, 5(2), 117-121.

Lai, B., & Brewer, G. A. (2006). New York City's broadband problem and the role of municipal government in promoting a private-sector solution. Technology in Society, 28(1-2), 245-259.

Lenhart, A. (Ed.). (2003). The ever-shifting internet population: A new look at internet access and the digital divide. Washington, DC: The Pew Internet & American Life Project.

Milner, H. V. (2006). The digital divide: The role of political institutions in technology diffusion. Comparative Political Studies, 39(2), 176-199.

Mossberger, K., Tolbert, C. J., & Gilbert, M. (2006). Race, place, and information technology. Urban Affairs Review, 41(5), 583-620.

Norris, P. (2001). Digital divide: Civic engagement, information poverty, and the Internet worldwide. New York: Cambridge University Press.

Roe, K. (2006). The digital divide in the twenty-first century: An introduction. Poetics, 34(4-5), 219-220.

Selwyn, N. (2006). Digital division or digital decision? A study of non-users and low-users of computers. Poetics, 34(4-5), 273-292.

Simpson, L., Daws, L., & Pini, B. (2004). Public internet access revisited. Telecommunications Policy, 28(3-4), 323-337.

Tapia, A., Maitland, C., & Stone, M. (2006). Making IT work for municipalities: Building municipal wireless networks. Government Information Quarterly, 23(3-4), 359-380.

Thomas, J. C., & Streib, G. (2005). E-democracy, e-commerce, and e-research: Examining the electronic ties between citizens and governments. Administration & Society, 37(3), 259-280.

Tolbert, C. J., & McNeal, R. S. (2003). Unraveling the effects of the internet on political participation? Political Research Quarterly, 56(2), 175-185.

U.S. Government. (2000). Bridging the technological gap: Initiatives to combat the digital divide: Hearing before the Subcommittee on Empowerment of the Committee on Small Business, House of Representatives, One Hundred Sixth Congress, second session. Washington, DC: U.S. Government Printing Office.

Van Dijk, J. A. G. M. (2006). Digital divide research, achievements and shortcomings. Poetics, 34(4-5), 221-235.

Vogelwiesche, U., Grob, A., & Winkler, B. (2006). Improving computer skills of socially disadvantaged adolescents: Same-age versus cross-age tutoring. Learning and Instruction, 16(3), 241-255.

Warschauer, M. (2004). Technology and social inclusion: Rethinking the digital divide. Cambridge, MA: MIT Press.

Terms and Definitions

Cyberpower: The effects of online access on an individual's ability to do something or get something done.

Digital Competency: The ability to apply computer skills and knowledge, especially as these skills pertain to Internet usage.

Digital Divide: The division between those individuals who have reasonable access to the Internet and those who do not. The digital divide can also refer to differences in computer ownership and in skills in using information technology.

Digital Inclusion: The process of expanding access to computing technology to better serve individuals and communities on the wrong side of the digital divide.

Infopoverty: The lack of access to basic information that hinders individuals and communities from improving their circumstances (Stewart, 2000).

Social Capital: The degree to which individuals in a society collaborate and cooperate, through such mechanisms as networks, to achieve mutual benefits (Putnam, 2000).

Social Equity: The principle that each member of society has a right to fair, just, and equitable treatment by the political system in terms of public policies and services.
0
Chapter XXIX
Africa and the Challenges of Bridging the Digital Divide Esharenana E. Adomi Delta State University, Nigeria
Introduction

Much of the developed world has, over the past two decades, been transformed by what are now termed information and communication technologies (ICTs). These technologies exert great impact on most aspects of our lives: economic activity, education, entertainment, communication, travel, and so on. They have also become inextricably linked with economic prosperity and power (Davison, Vogel, Harris, et al., 2000). At present, Africa is at the bottom of the ICT ladder. This has serious implications both for the continent and for the entire world, because ICTs are enhancing the economies of ICT-rich countries faster than those of ICT-poor ones, thus further widening the development gap between Africa and the industrialized world (Ya'u, n.d.). The realization of the importance of ICTs in economic advancement led the United Nations Commission on Science and Technology for Development (UNCSTD) to devote the years 1995 through 1997 to the study of the linkages between ICTs and development. One of the important results of that effort was the placing of the digital divide on the global development agenda. Since then, there
has been an international consensus on the need to bridge the digital divide. As a result of this consensus, various bridging strategies, actions, and initiatives have evolved at the international, regional, continental, and country levels. Learning from these efforts, African countries have, under the leadership of the United Nations Economic Commission for Africa (ECA), been developing national, sub-regional, and continental initiatives to overcome the digital divide and to promote the greater inclusion of African communities in cyberspace (Ya'u, n.d.). This chapter defines the digital divide, examines Africa's position on the global digital map, enumerates the causes of the low level of ICT diffusion in Africa, describes efforts at bridging the divide, discusses future trends, and concludes with steps that can address the divide.
Background

The digital divide is regarded as the gap existing between those with regular, effective access to digital technologies and those without it (Wikipedia, 2006). The concept of the digital divide is used to express the gap in access to information
resources between countries with limited facilities and those with state-of-the-art networks: telephone, radio, television, the Internet, satellite; in short, anything that can be classified as ICT. Thus, the digital divide expresses differences in people's ability to communicate, relative to their geographic location, living standard, and educational level. It is ultimately an indication of a country's economic and social situation (Marine & Blanchard, 2004). The digital divide is related to social inclusion and equality of opportunity, is seen as a social and political problem, and has become increasingly relevant as the industrialized nations have grown more dependent on digital technologies in their democratic and economic processes (Wikipedia, 2006). The wide acceptance of computers and the phenomenal growth of the Internet in the 1990s have also drawn attention to their grave implications for existing socioeconomic gaps within and between countries. The concern, essentially, is that the telecommunication revolution may widen existing social gaps, creating two clear-cut classes: the information haves (information rich) and the information have-nots (information poor) (Sonaike, 2004). The divide is deepening, and the differences in the usage of communication resources and services between countries and regions are intensifying. The digital divide, however, exists not only between countries but also within them, between "rich" urban regions and "poor" rural regions. This regional divide is most evident within developing countries (particularly African countries), even though rural areas have benefited to some extent from the boom in access to communication services (Marine & Blanchard, 2004). The provision of communication services in developing regions like Africa is an essential aspect of enhancing and facilitating economic and social development (Yavwa & Kritzinger, 2001). There is thus a need for African countries to make concerted efforts to ensure that ICTs are provided adequately and consistently, to close the divide and reap the benefits of economic and social development.
Africa and the Digital Divide

The digital divide is strongly related to globalization, and digital divides exist in industrialized as well as developing countries. The most pressing problem, however, is that the gap between the industrialized world and developing countries is widening. The situation is worst on the African continent, the most underdeveloped region in the use of ICT (Hietanen, n.d.). Developing countries, especially
Table 1. World Internet usage and population statistics (Source: Internet World Stats, 2006)

World Region            | Population (2006 est.) | % of World Population | Internet Users (latest data) | Penetration (% of population) | % of World Users | User Growth 2000-2005 (%)
Africa                  | 915,210,928            | 14.1                  | 23,649,000                   | 2.6                           | 2.3              | 423.9
Asia                    | 3,667,774,066          | 56.4                  | 364,270,713                  | 9.9                           | 35.6             | 218.7
Europe                  | 807,289,020            | 12.4                  | 291,600,898                  | 36.1                          | 28.5             | 177.5
Middle East             | 190,084,161            | 2.9                   | 18,203,500                   | 9.6                           | 1.8              | 454.2
North America           | 331,473,276            | 5.1                   | 227,303,680                  | 68.6                          | 22.2             | 110.3
Latin America/Caribbean | 553,908,632            | 8.5                   | 79,962,809                   | 14.4                          | 7.8              | 342.5
Oceania/Australia       | 33,956,977             | 0.5                   | 17,872,707                   | 52.6                          | 1.7              | 134.6
WORLD TOTAL             | 6,499,697,060          | 100.0                 | 1,022,863,307                | 15.7                          | 100.0            | 183.4
Africa, lag far behind industrialized countries in their take-up of new digital technologies, especially the Internet (Cawson, n.d.). Though Africa's Internet user growth ranks second in the world (423.9%), its population penetration is the lowest, at 2.6%, as can be seen in Table 1. Figures from the International Telecommunication Union (ITU, 2004) also show Africa's low standing. While there are only 2.60 Internet users per 100 people in Africa, there are 31.13 per 100 in Europe; in the United Kingdom alone, there are 62.88 Internet users per 100 inhabitants. Africa's numbers of Internet hosts per 10,000 inhabitants and of PCs per 100 people are likewise the lowest of all world regions (Table 2). Jensen (2002) has observed that each computer with an Internet or e-mail connection in Africa supports three to five users. Table 3 reveals that Africa has the lowest total gross domestic product (GDP), at 620.5 billion U.S. dollars (USD), and the lowest per capita income, at $746. Most developing nations, of which African countries are a part, have a weak capital infrastructure. They suffer from financial poverty relative to industrialized countries, in terms of assets and income if not of natural resources. After several decades of development activities, many have a level of external debt that is sizeable relative to their national income. In addition, the performance of their economies often depends largely upon the performance of the economies of the developed countries, over which they have little control. This situation weakens planning for investment in any infrastructure component (Sadowsky, n.d.). Telephone subscribers per 100 inhabitants are also the lowest in Africa. Over the last decade, ICTs have been growing at great speed, consistently exceeding global economic growth and changing the way people work, entertain themselves, shop, communicate, and organize their lives. This growth has been driven both by demand-side factors, such as the increasing popularity of mobile phones and the Net, and by supply-side factors such as regulatory reforms, falling
Table 2. Internet and PC diffusion in 2004 (Source: ITU, 2004)

Region   | Number of Internet Hosts | Hosts per 10,000 People | Internet Users per 100 People | PCs per 100 People
Africa   | 424,968                  | 4.93                    | 2.60                          | 1.74
Americas | 205,502,481              | 2,347.63                | 30.89                         | 34.49
Asia     | 27,986,795               | 74.32                   | 8.10                          | 6.35
Europe   | 29,055,601               | 362.60                  | 31.13                         | 28.48
Oceania  | 4,571,332                | 1,408.31                | 47.95                         | 50.72
Table 3. GDP and telephone subscribers per 100 people, 2004 (Source: ITU, 2005a)

Region   | Total GDP (Billion USD) | GDP per Capita (USD) | Telephone Subscribers per 100 People
Africa   | 620.5                   | 746                  | 12.25
Americas | 13,225.9                | 15,292               | 80.06
Asia     | 9,340.3                 | 2,487                | 37.39
Europe   | 13,586.9                | 16,929               | 124.03
Oceania  | 600.0                   | 18,561               | 110.30
Table 4. Cellular subscribers (Source: ITU, 2005b)

Region   | CAGR 2000-2005 (%) | Per 100 People (2005) | % of Total Telephone Subscribers (2005)
Africa   | 52.0               | 11.29                 | 82.2
Americas | 20.3               | 51.51                 | 60.7
Asia     | 28.7               | 22.24                 | 58.8
Europe   | 18.3               | 84.42                 | 67.3
Oceania  | 17.0               | 68.51                 | 62.5
costs, and technological innovation. The story of African telecommunications, without doubt, is mobile (Gray, 2006). Mobile subscribers make up 82.2 percent of Africa's total telephone subscribers, and the compound annual growth rate (CAGR) of cellular subscribers over 2000-2005 was highest in Africa of all world regions. A number of reasons can be adduced for the success story of mobile telephony. A critical one is the ability of mobile network operators to provide coverage rapidly: for investors, mobile networks are often much easier to deploy, operate, and manage than fixed lines, making wireless a logical business decision (Gray, 2006). In spite of this success story, African cellular phone penetration is still the lowest in the mobile world, as can be seen in Table 4. The provision of communication technologies in developing regions like Africa is an important aspect of enhancing and facilitating economic and social development (Yavwa & Kritzinger, 2001). The low level of ICT diffusion in Africa is thus partly responsible for the slow pace of economic and social development on the continent.
Causes of the Low Level of ICT Status in Africa

The low level of ICT diffusion in Africa can be attributed to several factors, enumerated as follows:

• Poor economic base: As Table 3 shows, Africa's total GDP of $620.5 billion and per capita income of $746 in 2004 were the lowest of all world regions. The lack of economic resources to acquire and utilize ICT hampers the participation of many developing countries, including African countries, in the global information society (Davison et al., 2000). The digital divide is closely tied to the contextual economic environment of the continent (Mutula, 2005), and the poor economic situation of the continent weakens planning for any infrastructural development (Sadowsky, n.d.).

• Limited/poor telecommunications infrastructure: ICT development and application are not yet well entrenched on the African continent because of poor telecommunications infrastructure (Adomi, 2005a, 2005b, 2006a, 2006b; Mutula, 2003; Sonaike, 2004). Insufficient telecom infrastructure manifests in problems such as poor interconnectivity between networks (Adomi, 2005c, 2006a), frequent network or call failures, and limited network coverage areas (Adomi, 2006a). Southwood (2004) reported that over 40 percent of the population of Africa live in areas that are not covered by mobile (telecom) services.

• Nonchalant attitudes of governments toward ICT development and adoption: The policies of most African governments do not encourage the uptake of ICTs. Most governments are currently reluctant to completely free their telecommunication services, and some, such as Morocco and Tunisia, regulate access to the Internet. In Kenya, the government has been reluctant to issue licenses for very small aperture terminal (VSAT, used for satellite communications) connections that would facilitate the roll-out of telecommunication services to rural areas, even when UUNET (an Internet service provider) had offered to connect schools at no charge to the government. In other sub-Saharan African countries, ICT has likewise not been effectively integrated into the development agendas of government plans (Mutula, 2003). Development initiatives in Africa increasingly incorporate an "ICT component," but mostly as mere projects, without a focus on the policy variables that could affect impact and sustainability (Nzepa, n.d.).

• Lack of cooperation between countries: The overall political situation in and between countries is a crucial factor in activities toward digital inclusion (Gourova, Herman, Leijten, et al., 2001). However, Nzepa (n.d.) has noted that there is a lingering lack of coordination and cooperation among African countries, regional institutions, and their projects and programs.

• High cost of ICT facilities and services: Cost has been reported as one of the factors that influence the provision and use of ICT services (Adomi, 2005d). Gitta and Ikoja-Odongo (2003) found that use of the Internet in Uganda's cybercafés is still hampered by high charges. Jagboro (2003) revealed low use of Internet services by postgraduate students at Obafemi Awolowo University, Ile-Ife, Nigeria, while Adomi (2006a) found that the high cost of recharge cards (airtime) militated against students' use of mobile phones in a Nigerian state university. Ya'u (n.d.) has identified the causes of the high cost as including the limited size of the ICT network, the high cost of basic equipment such as computers, and the lack of local manufacturing capacity.

• Inadequate ICT manpower: Basic computer skills, such as being able to use the computer for applications like word processing, are lacking in most African countries (Adomi & Anie, 2006; Ya'u, n.d.). The shortage of ICT manpower is partly responsible for the low representation of Africa on the World Wide Web. The latter problem has two other dimensions. First, the language of the Net, which is still predominantly European, excludes most Africans who cannot read and write in these languages; at present, very few African languages have registered a presence on the Internet. Second, given that the majority of citizens have no access to the Internet, their output cannot reach the Web (Ya'u, n.d.). The problem of inadequate ICT manpower is due largely to the high rate of illiteracy: over 60 percent of Africans are illiterate.

• Unreliable electricity supply: Deployment of telecommunications and other ICT infrastructure requires a reliable energy supply. However, some capital cities in Africa have power sharing, and most rural areas may have power only for limited periods (Adomi, 2005b; Adomi, Okiy, & Ruteyan, 2003; Rosenberg, 2005; Gourova et al., 2001).
african’s digital divide bridging Efforts There have been a number of efforts aimed at addressing the problems of digital divide in Africa. Over the last decade, African leaders have been adopting declarations and resolutions to facilitate the development of ICT on the continent. In 1996, the Organization of Africa Unity (OAU) adopted the African Information Society Initiative (AISI) as the guiding framework for ICT efforts in Africa (Mutume, n.d.). AISI was adopted in May 1996 at the United Nations Economic Commission for Africa (ECA) Conference of Ministers. Since then, it has provided the framework for ECA’s programmes on harnessing information for development (Bounemra & Soltane, 2001). AISI was to provide the framework for the African information technology renaissance and to
0
Africa and the Challenges of Bridging the Digital Divide
be African information society by the year 2010 (Ya’u, n.d.). In October 1999, the ECA with the active support of its partners involved in the promotion of ICT, organized the first African Development Forum (ADF’ 99) on “The challenges to Africa of Globalisation and Information Age.” The forum examined prospects of ICT in the African continent and the progress made since the launch of the AISI in 1996. There was a call for commitment from the highest levels of leadership to applying ICTs to Africa’s pressing social and economic problems. The ADF ’99 was to be followed up with a post-forum summit which aims at bringing together Heads of States and Governments from selected African countries, and a similar number of leaders of the private sector and development agencies to secure commitment on the actual implementation of the key outcomes (Bounemra & Soltane 2001). In Africa, much of the technology deployment and infrastructure building are undertaken with assistance of such international organizations as the International Development Research Centre (IDRC), UNDP, UNESCO and International Telecommunications Union (ITU). The IDRC projects are centered mostly on the use of telecenters to provide multi-user access points (Ya’u, n.d.). Due to the poor state of infrastructure and the cost of connection on the continent, teleceners have been favored gifts from any development organizations, but, as business models, they proved to be a failure (Benjamin, 2000; cited by Ya’u, n.d.) The African continent’s new development framework, the New Partnership for Africa’s Development (NEPAD) places ICTs among eight priority sectors. Under NEPAD, African governments have pledged to double the number of telephone lines in Africa by 2005, lower as well as improve reliability of communications services. In order to implement the ICT goals of NEPAD, an e-African Commission has been set up by continental leaders. 
The commission is proposing a "debt for connectivity" program whereby rich countries agree to write off at least one percent of the total debt of poor African countries each year and place it into a common ICT fund (Mutume, n.d.). In order to improve the provision of network access, both telecommunication and Internet, African governments have embarked upon the deregulation and liberalization of their telecom sectors. This has led to the participation of the private sector in the provision of networked services. A high degree of liberalization and competition in the mobile sector has contributed immensely to the expansion of mobile services by bringing down prices and making operators more service-oriented. Indeed, by the end of 2004, most African countries had moved from monopoly-run operators to competitive markets. Over the same period, a considerable reduction in mobile tariffs has made services more affordable and within the reach of even low-income earners. Current moves by leading mobile handset manufacturers to further reduce the price of handsets and to produce functional so-called "ultra-low-cost handsets," specially designed for low-income markets, will further drive growth rates (Gray, 2006). Apart from the liberalization of the telecom sector, African governments have made other strides in ICT development. For instance, the Nigerian government has increased efforts to provide public sector information through e-government, which is aimed at improving the flow of information from government to citizens, from citizens to government, and within government by setting up appropriate Internet and intranet facilities for federal, state, and local governments (Adomi, 2005b; Daniel, 2004). Private telephone and mobile network operators now provide Internet services as part of their network operations in Nigeria, increasingly making it possible for private individuals to have personal Internet connectivity (Internet access at home). In Zimbabwe, the Kubatana project, a Web site that links 230 civil and community-based groups, provides information on new legislation, the electoral system, and voter registration procedures, as well as on major social issues confronting the country, such as HIV/AIDS (Mutume, n.d.).
The number of Internet users on the continent is increasing, owing to the proliferation of cybercafés, which provide public Internet access points to users, most of whom have no home or office access (Adomi, Okiy, & Ruteyan, 2003; Mutula, 2003). Though a number of efforts are being made to enhance ICT diffusion on the African continent, the digital gap with other world regions is still wide.
Future Trends

Despite the current wide gap in digital diffusion between Africa and the industrialized world, there are various ICT developments that may enhance the level of technology availability and application on the continent. For instance, the liberalization policies of various African governments, if sustained, will continue to increase the number of private sector operators in the telecom and other ICT sub-sectors. This in turn will increase the number of inhabitants with access to digital facilities and content. In South Africa, the government has made good progress in establishing multipurpose community centers across the country as part of its e-government strategy (Mutula, 2003). The current moves by Nigerian telecom operators to provide Internet services to subscribers will go a long way toward facilitating access to the Net by more people. The ICT initiatives of international and other organizations are likewise geared toward improving the digital landscape of Africa.
Conclusion

The digital gap between Africa and other world regions is still very wide. A number of factors, such as a weak economic base, limited telecommunications infrastructure, and government indifference, among others, are responsible for the low status of ICTs on the continent. Though international, regional, national, and private organizations have been making efforts to implement ICT projects and programs, these initiatives cannot yet be said to have made a significant impact. In order to raise the continent's ICT status and thus narrow the wide digital gap, African governments need to introduce policies and programs that raise the economic status of their nations and citizens. The liberalization and privatization policies and programs of the governments should be sustained, and there is an urgent need for cooperation among the different countries and institutions in ICT development matters. African governments should, as a matter of urgency, put strategies in place to produce and manufacture ICTs and work toward the stabilization of energy supply.
Future Research Directions

Though this chapter has given an overview of the digital divide challenges in Africa, it cannot claim to have exhausted the subject of Africa and the digital divide. There are therefore areas that scholars and researchers can still explore. For instance, the African rural digital divide can be compared with the rural digital divide in developed nations. Efforts can also be made to investigate the extent and dimensions of the digital divide by gender. One of the developments that has contributed to narrowing the digital divide on the continent is the introduction of the Global System for Mobile Communications (GSM), that is, mobile telephony. Researchers and scholars can examine how mobile telephony has helped to narrow the divide across different African countries, and the role of GSM in bridging the urban-rural divide in Africa can also be explored. The efforts of international organizations in closing the divide on the African continent were highlighted in this chapter; however, the effects of these efforts have not yet been assessed. It would therefore be desirable for scholars and researchers to investigate the effects of international organizations' efforts in closing the divide in Africa. One of the problems militating against ICT diffusion in Africa is the nonchalant attitude of governments toward ICT development and adoption.
Scholars and researchers can explore advanced nations' governmental efforts in enhancing the development and adoption of ICTs. These can be juxtaposed against African governments' efforts, which may spur African governments to take effective steps geared toward growing the ICT sector. Another area that should be investigated is the role of cybercafés and telecenters in bridging the digital divide in Africa, particularly as most Internet users access the Net in cybercafés and telecenters. Scholars and researchers can also investigate how African countries can become manufacturers, rather than just consumers, of ICT products.
References

Adomi, E.E. (2005a). Interview with Ms. Victoria Okojie, President of the Nigerian Library Association. Library Hi Tech News, 22(9), 24-27.

Adomi, E.E. (2005b). Internet development and connectivity in Nigeria. Program, 39(3), 257-268.

Adomi, E.E. (2005c). Mobile telephony in Nigeria. Library Hi Tech News, 22(4), 18-21.

Adomi, E.E. (2005d). The effects of a price increase on cybercafe services in Abraka, Nigeria. The Bottom Line: Managing Library Finances, 18(2), 78-86.

Adomi, E.E. (2006a). Mobile phone usage patterns of library and information science students at Delta State University, Abraka, Nigeria. Electronic Journal of Academic and Special Librarianship, 7(1). Retrieved May 18, 2005, from http://southernlibrarianship.icaap.org/v7n01/adomi-e01.htm

Adomi, E.E. (2006b). African Web portals. In A. Tatnall (Ed.), Encyclopaedia of portal technologies and applications. Hershey, PA: IGI Global. (in press)

Adomi, E.E., & Anie, S.O. (2006). An assessment of computer literacy skills of professionals in Nigerian university libraries. Library Hi Tech News, 23(2), 10-14.

Adomi, E.E., Okiy, R.B., & Ruteyan, J.O. (2003). A survey of cybercafes in Delta State, Nigeria. The Electronic Library, 21(5), 487-495.

Benjamin, P. (2000). Telecentres: The key to wider Internet access. News Update: Balancing Act 36.

Bounemra, K., & Soltane, B. (2001). Africa and the digital inclusion: Personal reflections. Retrieved June 12, 2006, from http://www.inwent.org/ef-texte/digital/solta2-e.htm

Cawson, A. (n.d.). Beyond the digital divide: Harnessing the Internet for cross-cultural dialogue. Retrieved December 31, 2005, from http://www.fiankoma.org/pdf/Beyound_the Digital_Divide.pdf

Daniel, A. (2004). Government via on-line takes off soon. The Guardian (Nigeria), 41, 45.

Davison, R., Vogel, D., Harris, R., et al. (2000). Technology leapfrogging in developing countries: An inevitable luxury? The Electronic Journal on Information Systems in Developing Countries, 1(5), 1-10. Retrieved June 9, 2006, from http://www.ejisdc.org

Gitta, S., & Ikoja-Odongo, J.R. (2003). The impact of cybercafes on information services in Uganda. First Monday, 8(4). Retrieved June 24, 2006, from http://www.firstmonday.org/issues/issue8_4/gitta/index.html

Gourova, E., Herman, G., Leijten, J., et al. (2001). The digital divide: A research perspective. A report to the G8 Opportunities Task Force. Retrieved June 10, 2006, from http://www.lemon-digital.com/downloads/eur199/en.pdf

Gray, V. (2006). The un-wired continent: Africa's mobile success story. Retrieved June 7, 2005, from http://www.itu.int/ITU-D/ict/statistics/at_glance/Africa_EE2006_e.pdf

Hietanen, O. (n.d.). The digital balance between industrialised and developing countries. A case study: The development of an information society on the African continent. Retrieved December 31, 2005, from http://www.globaldevelopment.org/Deve_more_printable.htm

Internet World Stats. (2006). Internet usage statistics. The big picture: World Internet users and population stats. Retrieved June 8, 2006, from http://www.internetworldstats.com/stats.htm

ITU. (2004). Information technology. Retrieved June 7, 2006, from http://www.itu.int/ITU-D/ict/statistics/at_glance/intenet04.pdf

ITU. (2005a). Basic indicators. Retrieved June 7, 2006, from http://www.itu.int/ITU-D/statistics/at_glance/basic05.pdf

ITU. (2005b). Cellular subscribers. Retrieved June 7, 2006, from http://www.itu.int/ITU-D/ict/statistics/at_glance/main05.pdf

Jagboro, K.O. (2003). A study of Internet usage in Nigerian universities: A case study of Obafemi Awolowo University, Ile-Ife, Nigeria. First Monday, 8(2). Retrieved June 24, 2006, from http://www.firstmonday.org/issues/issue8_8/jagboro/index.html

Jensen, M. (2002). The African Internet: A status report. Retrieved from http://www3.sn.apc.org/africa/afstat.html

Marine, S., & Blanchard, J.M. (2004). Bridging the digital divide: An opportunity for growth in the 21st century. Alcatel Telecommunications Review, 3. Retrieved June 8, 2006, from http://www.alcatel.com/doctypes/articlepaperlibrary/pdf2004Q2/SO408-Bridging_opportunity-EN.pdf

Mutula, S.M. (2003). Cybercafe industry in Africa. Journal of Information Science, 29(6), 489-497.

Mutula, S.M. (2005). Bridging the digital divide through e-governance: A proposal for Africa's libraries and information centres. The Electronic Library, 23(5), 591-602.

Mutume, G. (n.d.). Africa takes on the digital divide: New information technologies change the lives of those in reach. Retrieved December 31, 2005, from http://www.un.org/ecosocdev/geninfo/afrec/vol17no3/173tech.htm

Nzepa, O.N. (n.d.). Challenges for Africa. Retrieved June 12, 2006, from http://www.wngig.org/docs/book/olievier_nana_Nzepa.html

Rosenberg, D. (2005). Towards the digital library: Findings of an investigation to establish the current status of university libraries in Africa. Retrieved June 6, 2005, from http://www.iinasp.info/pubs/INASP/digilalhb.pdf

Sadowsky, G. (n.d.). The importance to developing countries of access to distributed knowledge. Retrieved May 27, 2005, from http://pws.preserv.net/sadowsky/papers/cacm93.pdf

Sonaike, S.A. (2004). The Internet and the dilemma of Africa's development. Gazette: The International Journal for Communication Studies, 66(1), 41-61.

Southwood, R. (2004). African telecom indicators: What do they use and why? Retrieved May 4, 2004, from http://www.balancingact_africa.com/news/back/balancingact_147.html

Wikipedia. (2006). Digital divide. Retrieved June 8, 2006, from http://en.wikipedia.org/wiki/Digital divide

Ya'u, Y.Z. (n.d.). Confronting the digital divide: An interrogation of the African initiatives at bridging the gap. Retrieved December 31, 2005, from http://www.codesria.org/Links/conferences/Nepad/yau.pdf

Yavwa, Y., & Kritzinger, P.S. (2001). Enabling communication in developing regions. The Electronic Journal on Information Systems in Developing Countries, 6(1), 1-15. Retrieved June 8, 2006, from http://www.ejisdc.org
Further Reading

Adam, L., & Wood, F. (1999). An investigation of the impact of information and communication technologies in sub-Saharan Africa. Journal of Information Science, 25(4), 307-318.

Alemna, A. A. (1998). Information in African society. Information Development, 14(2), 69-72.

Alemna, A. A. (2006). Critical issues in information and communication technologies for rural development. Information Development, 22(4), 236-241.

Allafrica, Inc. (2004). Bridging the rural digital divide in Africa. Retrieved December 31, 2005, from http://ww.businessdevelopment.nl/article1012.1606.html

Antonelli, C. (1991). The diffusion of advanced telecommunications in developing countries. Paris: OECD.

Arunachalam, S. (2003). Information for researchers in developing countries: Information technology, a friend or a foe? International Information and Library Review, 35(2-4), 133-147.

Awe, J. (n.d.). Nigeria: Bridging the infrastructure divide. Retrieved November 4, 2006, from http://www.jidaw.com/telecom/gsm.html

Chan, L., Kirsop, B., Costa, S., & Arunachalam, S. (2005). Improving access to research literature in developing countries: Challenges and opportunities provided by open access. Retrieved July 24, 2005, from http://www.ifla.org/IV/ifla71/papers/150e.chan.pdf

Cross the digital divide: Strategies and implications. (n.d.). Retrieved June 12, 2005, from http://eprints.rclis.rog/archive/00005046/01/pletner_crossing.pdf

Economic Commission for Africa. (1999). Strengthening Africa's information infrastructure. Retrieved November 6, 2006, from http://www.uneca.org/ad99/infrasture.html

Falch, M., & Anyimadu, A. (2003). Tele-centres as a way of achieving universal access: The case of Ghana. Telecommunications Policy, 27(1/2), 21-39.

Ford, H. (2001). African content in the multimedia age. Retrieved October 27, 2003, from http://www.eisa.org.za/category/technology/content1.htm

Hall, P. A. V. (2002). Bridging the digital divide, the future of localization. The Electronic Journal on Information Systems in Developing Countries, 8(1), 1-9. Retrieved June 12, 2005, from http://www.ejisdc.org

ICT and Internet in partnership countries. (2006). Retrieved March 27, 2006, from http://www.foundation_partnership.org/pubs/bandwith/inde.php?chap2&sub=c2a

Khalil, M., & Kenny, C. (n.d.). The next decade of ICT development: Access, application and the forces of convergence. Retrieved November 2006, from http://iris37.worldbank.org/domdoc/PRD/others/PRDD.container.nsf/WB_view_Attachments?Readform&ID=85256D2400766CC78527/D3005D14C35&

Mazrui, A. A. (n.d.). Nigeria between Lord Lugard and the digital divide: Political culture and the skill revolution. Retrieved November 4, 2006, from http://www.nigerdeltacongress.com/narticles/nigeria_between_lord_lugard_and_htm

Missen, C. (n.d.). USIA/GTC assessment of Internet connectivity needs of Nigerian universities. Retrieved March 27, 2006, from http://www.widernet.org/nigerianconcsult/execsum.htm

Nwagwu, E. E. (2006). Integrating ICTs into the globalisation of the poor developing countries. Information Development, 22(3), 167-179.

Ojo, T. (2006). Communication networking: ICTs and health information in Africa. Information Development, 22(2), 94-101.

Rhine, L. (2006). The impact of information technology on health information access in sub-Saharan Africa: The divide within the divides. Information Development, 22(4), 242-251.

Santoyo, A. S. (2003). Estimation and characterization of the digital divide. Retrieved December 31, 2005, from http://www.ejd.rg/meeting2003/ictp/paper/serrano.pdf

Solatane, K. B. B. (2001). Africa and the digital inclusion: Personal reflections. Retrieved June 12, 2006, from http://www.inwent.org/ef-texte/digi-
Africa and the Challenges of Bridging the Digital Divide
tal/solta2-e.html Utsumami, Y. (2001). African and the digital divide. Retrieved December 30, 2005, from http://. www.itu.int/osg/sg/sgs/speeches/2001/12aaccra. pdf#search:africa%20and%200thge%20digitral %20divide World Information Access Project. (2006). Patterns of international inequality in technology access, 1995-2005. Retrieved October 6, 2006, from http://www.wiareport.rog/index.php/11/patterns-of-inequality-in-technolgy-_access_1995 Zachary, G. P. (2004). Blackstar: Ghana information and development in Africa. First Monday, 9(3). Retrieved December 31, 2005, from http://www. firstmonday.org
Key Terms

Cybercafés: Places where entrepreneurs provide public Internet access services for a fee.

Digital Age: The current development era in which social, economic and political activities and processes are driven by the application of ICTs/digital technologies.

E-Government: The use or application of information technologies (such as Internet and intranet systems) in government activities and processes in order to facilitate the flow of information from government to its citizens, from citizens to government, and within government.

Information and Communications Technology (ICT): Also known as information technology (IT); refers to technology that combines the use of computers and telecommunications for information storage, conversion, processing, protection, transmission and retrieval.

Internet Hosts: Computers that are connected to the Internet.

Networks: A number of computers and other devices linked together in order to facilitate the sharing of data.

Telecenter: A place where the inhabitants of a community can use ICT components such as the Internet, computers, telephones, etc. People can also acquire ICT skills in a telecenter.
Chapter XXX
Research Ethics in E-Public Administration

Carlos Nunes Silva
University of Lisbon, Portugal
Introduction

Public administration includes many professions and occupations, each with its own set of ethical guidelines. In addition, these professionals have to follow the code of professional ethics for public sector workers. These professionals, including those whose main occupation is scientific research, face complex ethical problems. Such problems relate, for example, to social responsibility and the public interest; public participation; professional competence and honesty; conflicts of interest; respect for citizens' rights, dignity and diversity; information security; the long-range consequences of public administration decisions; respect for other species and the natural environment; informed consent; human dignity; and other issues. Empirical studies on research ethics in several countries (Khakee & Dahlgren, 1990) suggest that researchers have to make choices among conflicting values and therefore tend to act differently when facing specific problems, according to their own values and ethical principles. Moreover, ethical dilemmas emerge at all stages of a research process: for example, during the formulation of the research study (e.g., whether to select certain themes and ignore others, or to adopt qualitative or quantitative methodologies) or in the process of data collection (e.g., in interviews, especially if audio or video recorded, questionnaires, focus groups, access to a database on identifiable persons, fieldwork). Dilemmas also arise in the subsequent stages, such as the storage of primary data (e.g., the control and use of stored data), analysis and interpretation (e.g., the type of analysis and categories used), the publication of findings and matters of intellectual property, implementation, and the critical issue of research funding. The purpose of this chapter is to discuss professional ethical issues in research activities conducted in e-public administration1, most of which are common to the private and non-profit sectors. It offers an overview of key ethical issues in this field and identifies ethical challenges raised by the application of information and communications technologies (ICT)2 in public administration research activities. The evidence available shows that ICT poses new ethical challenges but does not radically change the nature of the ethical problems characteristic of paper-based and face-to-face public administration.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Background

In theory, professional ethics can be approached in different ways. For example, if the researcher holds a deontological perspective of ethics (Darwall, 2003), respecting key ethical principles in the research process is more important than seeking the greatest good for the greatest number. If, on the contrary, he or she holds a consequentialist perspective, what matters most, in every choice that might need to be made at any stage of the research process, is to seek the greatest good for the greatest number (Darwall, 2003a). In practice, however, professionals tend to hold a mixed perspective of ethics. The importance given to professional ethics issues, and to research ethics in particular, has increased since the 1980s (Silva, 2005), with virtual research ethics issues becoming more visible since the late 1990s (Hinman, 2002). The ethical dimension of research work was further emphasised by post-positivist approaches and the related calls for new visions of social justice or for an ethic of care (Smith, 2005), as well as by the rise of environmental ethics. Post-modern thinking, for example, emphasizes difference and particularity and challenges universal conceptions of ethics, while post-structuralism (Popke, 2003) and feminism (Hendler, 1994), with their complex views of the social world, raise a wider range of ethical concerns, namely the issues of positionality, reflexivity and situated knowledge (Butler, 2001). These are among the factors that explain why professional organizations, governments, and international public organizations have developed a strong commitment in recent years towards professional ethical issues, including ethical problems related to research activities. Most professional organizations have encouraged ethical discussion among their members, adopting codes of ethical conduct3 and other measures (Anderson, Johnson, & Gotterbarn, 1993; Gotterbarn, 1996; Silva, 2005). 
Nonetheless, most of these codes of professional conduct have important shortcomings, as Lucy (1997) and Gotterbarn (1996) point out, not to mention the fact that most of them are silent about ethical challenges raised by e-government.
Ethical Issues in E-Public Administration

Overview of Professional Ethics Issues

As mentioned before, professionals in e-public administration face complex ethical problems, addressed by professional organizations and governments in the form of codes of ethical conduct, and follow certain commonly agreed ethical principles of professional conduct4. They share ethical principles with professionals in the private and nonprofit sectors, but they also face ethical problems specific to public sector concerns. In fact, governments, international institutions and professional organizations all seem to recognize that public administration has an overall duty to promote the public interest, to balance the needs of individuals and communities, and to seek social justice by expanding choice and opportunity for all persons. Its professionals have, like any others, an ethical obligation of professional competence, honesty, integrity and responsibility, as stated in all codes of professional conduct5. First, regarding professional competence, those working in public administration shall use the appropriate scientific and technical means available, including information and communication technologies, in order to achieve excellence. They shall not harm other persons (e.g., through the intentional destruction or modification of files and software programs), shall consider the interrelations of decisions, and must avoid tasks for which they are not qualified. Second, with respect to professional integrity, they shall be fair and respectful of colleagues and other citizens, give proper credit for intellectual property (e.g., copyright and patents of software), and honor property rights. They shall avoid conclusions and decisions based on untrue evidence (e.g., false claims about a system design). 
Third, as regards professional responsibility, they must accept full responsibility for their work and must behave in a way that does not compromise public trust in e-public administration or in their professional group. Finally, they shall avoid conflicts of interest or the appearance of a conflict (e.g., situations in which personal or financial interests may lead them to perform professional work in an unfair manner). Professionals working in e-public administration shall respect the rights, dignity and diversity of citizens. They shall respect people and avoid discrimination based on age, gender, race, religion, nationality, sexual orientation, health, disability, social status, values, culture, and other similar factors. Researchers in e-public administration must guarantee information security (confidentiality, integrity, availability and non-repudiation), especially in relation to sensitive information, which requires awareness of the key security issues in information systems (Irvine, 2000; Joshi et al., 2002). This means that they need to guarantee that information collected and researched by the administration is accessible only to those authorized (the principle of confidentiality), and then only the information that is necessary (the principle of least privilege) or that is necessary for the accomplishment of a specific duty (the principle of 'need-to-know'). It also implies that the information collected and published is accurate, complete and reliable (the principle of integrity) and that it will be available to those who need it when it is needed (the principle of availability). Finally, it means that it must be possible to prove that dispatcher and recipient are indeed the parties that sent and received the information (the principle of non-repudiation). In sum, researchers have an ethical, and also a legal, obligation to balance the right of access to government information against privacy interests (Hammitt, 2000). They also have ethical obligations in relation to the long-range consequences of present administrative decisions and actions, and shall consider the needs of future generations and of other species and avoid the use of irreplaceable natural resources. 
They shall also consider and respect the impact of administrative decisions beyond their administrative boundaries. Besides these broad ethical issues and principles of professional conduct, professionals doing research work in e-public administration must also consider specific ethical principles at the different stages of the research process.
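The security principles mentioned above (confidentiality, least privilege, need-to-know) can be illustrated with a small sketch. The roles, field names and record below are hypothetical, invented purely for illustration; a real e-government system would enforce such rules at the database and application layers.

```python
# Hypothetical sketch of least-privilege access: each role is granted an
# explicit allow-list of fields (need-to-know); everything else is withheld.
# Role and field names are invented for illustration only.
ROLE_FIELDS = {
    "statistician": {"age_band", "region"},          # aggregate analysis only
    "case_officer": {"name", "age_band", "region"},  # handles individual cases
}

def redact(record: dict, role: str) -> dict:
    """Return only the fields the given role is authorised to see."""
    allowed = ROLE_FIELDS.get(role, set())   # unknown roles receive nothing
    return {k: v for k, v in record.items() if k in allowed}

record = {"name": "A. Citizen", "age_band": "30-39", "region": "Lisbon"}
print(redact(record, "statistician"))  # the direct identifier is withheld
```

Defining access as an allow-list rather than a deny-list means unknown roles are denied by default, which is the safer reading of the least-privilege principle.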
Research Ethics Issues in E-Public Administration

Data Collection

One of the key ethical principles in online research is informed consent6 (Frankfort-Nachmias & Nachmias, 2000). It means that the researcher must inform in advance all human participants7 about the nature and content of the research, whether it is based on passive analysis of Internet communities' postings or on a more active form of analysis, such as online surveys, online semi-structured interviews, online focus groups, Delphi questionnaires, or online brainstorming. The researcher must get permission by e-mail to use research participants' postings in mailing lists, chat rooms and newsgroups, whether prospectively or retrospectively. This implies stating the objectives, methods and procedures to be followed, who is doing the research, who is funding it, what type of results will be produced, and how they will be published and disseminated. Participants should also be made aware of all other implications (potential benefits, risks and dangers) of their participation. In practice, the social and cultural context of the participant, and age in the case of children (Morrow & Richards, 1996), can affect and distort this objective. If there is any risk for participants in the research, the institution should provide adequate insurance cover and inform participants about it in advance. In addition to informed consent forms, researchers must provide a bill of rights to their research participants. The Web site must provide all this information to participants, although it may be difficult to know the extent to which participants are fully aware of the details of the research to which they are giving their informed consent. Participants should also be informed how the researcher will guarantee information security (confidentiality, anonymity and privacy), without unrealistic promises in this respect. 
The researcher must emphasize, for example, that the promise of confidentiality and anonymity cannot be honored in all situations (e.g., if the researcher has a legal duty to make data accessible to legal authorities), as well as the possible risks for the privacy of participants. The use of Internet communities for research may increase this risk, as it is not always easy to differentiate postings regarded as public from those that are private. The Web site must also state how the researcher will protect data during transmission and storage. The opposite of informed consent is the use of deception in research involving human participants. It means the omission, in part or in whole, of the aims and purposes of the research due to methodological conditions or moral prejudices, a practice to be avoided (Neuman, 2000). When doing research on 'vulnerable' populations, the researcher has a moral obligation to respect the self-determination of every human participant, especially in situations in which other people can make decisions on a participant's behalf, in the sense that the participant cannot decide freely. The researcher must be aware of the need to recognize the child's autonomy and the importance of children's views, as advocated by the United Nations Convention on the Rights of the Child, avoiding paternalism. Issues of confidentiality, anonymity and privacy can be more difficult to tackle in the case of children, especially on the Internet. When minors, whether children or young people, are involved as participants in online research, and increasingly they are in public participation processes8, complex ethical situations arise, but this does not necessarily mean treating children differently from adults (Christensen, 2004). The researcher must adopt procedures to obtain their informed consent and afterwards to guarantee confidentiality and respect for their privacy, similar to those applied in the case of adults (Morrow & Richards, 1996). For example, consent must be obtained from the parent or legal guardian, on a paper form sent by post if the research is low risk, or face to face if it is not.
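The consent requirements outlined above (what must be disclosed, guardian consent for minors, a dated record of the agreement) can be sketched as a simple data structure. The field names, identifier and study title below are hypothetical, not taken from any real system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Minimal, illustrative record of what a participant agreed to."""
    participant_id: str
    study: str
    disclosed: list            # items the participant was informed about
    guardian_consent: bool     # required when the participant is a minor
    obtained_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

consent = ConsentRecord(
    participant_id="p-0042",                 # hypothetical identifier
    study="e-services satisfaction survey",  # hypothetical study title
    disclosed=["objectives", "methods", "funding source",
               "publication plans", "limits of confidentiality"],
    guardian_consent=False,
)
```

Keeping a dated record of exactly what was disclosed helps the researcher show, later on, that consent was informed rather than merely clicked through.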
Data Storage

Ethical obligations do not end with data collection. The growing use of digital databases has increased the potential to inflict harm on the researched, considering the long-term use of such data in e-public administration archives. In fact, the possibilities
offered by search engines to retrieve information on individual persons, based on queries of apparently inoffensive data, or the misuse of computers, may lead to the disclosure of personal information or to the unauthorized alteration of data (Hammitt, 2000). There is a moral obligation to obtain permission to store research material containing information on persons and to allow participants access to the final product of the research. If other persons gain access to the stored data and employ it for a different purpose, or if it is shared with other researchers, there is a potential to harm human participants. After the death of a person, the researcher must seek the approval of descendants or relatives before using data about that person for a different purpose. Even when research is mainly or entirely library or archival based, there will be ethical dilemmas related to respect for human dignity. That is the case of research relying on administrative documentation and other data collected by official institutions and publicly available on digital platforms (e.g., censuses, other statistics, official reports) rather than on the direct collection of information from individual persons (e.g., through questionnaires, interviews, focus groups, panels, brainstorming sessions, taped interviews or video records).
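One common safeguard for stored research data is to replace direct identifiers with keyed pseudonyms before archiving or sharing. A minimal sketch follows; the e-mail address is a made-up example, and the sixteen-character token length is an arbitrary illustrative choice:

```python
import hashlib
import hmac
import secrets

# The key must be stored separately from the data set: whoever holds both
# the key and the data can re-link pseudonyms to participants.
KEY = secrets.token_bytes(32)

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, one-way token (HMAC-SHA256)."""
    return hmac.new(KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

# The same participant always maps to the same token, so responses can still
# be linked across survey waves without storing the identifier itself.
row = {"respondent": pseudonymise("jane.doe@example.org"), "answer": 4}
```

Note that pseudonymisation is weaker than anonymisation: anyone holding the key can re-identify participants, which is one reason absolute anonymity should never be promised.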
Data Publication

The simple publication of information, in analogue or digital format, does not necessarily place it in the public domain. Before the publication of results in written, visual or oral form, it is more correct to eliminate details that could identify, in one way or another, a specific person who has not given his or her informed consent, in spite of the low risk to the privacy of persons mentioned in some of these official documents.
Future Trends

Researchers will continue to encounter ethical problems such as those identified and discussed in the previous section, but they will also face new situations and challenges due to ICT, globalization and the new economy. For example, the use of ICT to store personal information collected from online survey participants (e.g., in data archives on servers connected to the Internet) increases the level of risk, in terms of information security, compared with traditional paper archives in the event of malicious practices. In addition, the growing importance of post-positivist ethical approaches (e.g., feminist ethics) will certainly lead researchers working in e-public administration to address ethical issues in innovative ways. While virtual research may not need new ethical rules radically different from those of paper-based and face-to-face research, it seems nevertheless necessary to adapt these principles to the new technical conditions of online research (e.g., encrypting data in order to protect the privacy of online survey participants and the confidentiality of data collected and stored in digital databases). Some ethical guidelines that are common in face-to-face research may prove difficult to apply directly to research in cyberspace. That is the case, for example, of informed consent (e.g., a signature on a paper form is not exactly the same as a simple click on a computer when the age and level of understanding of the participant are not known), which requires the adoption of new technical and administrative procedures.
Conclusion

In short, ethical conduct in research carried out in the context of e-public administration means, as before, valuing human dignity and protecting the rights and welfare of citizens in their relations with public administration. This implies respecting the privacy, self-determination and human dignity of citizens at all stages of the research process, so that they never lose their autonomy. It indicates a prevalence of the human being over the interest of society, which underlines the fact that researchers' primary responsibility is to those they study and only afterwards to the advancement of knowledge. It also means that, in our increasingly multicultural and multi-ethnic society, the researcher must treat every citizen with the highest standards of respect, regardless of ethnic, religious or social background, as well as age, gender, sexual orientation or disability, in order to preserve his or her dignity.

Considering the heterogeneity of professions and occupations in the public sector, and the experience of several professional groups (Silva, 2005), it seems more appropriate to frame a code of research ethics for e-public administration in broad terms, in order to cover different research contexts. Such a code should be a set of guidelines for ethical choices and an educational tool for both new and older members of the profession, rather than a prescriptive and exhaustive code of conduct in research or an instrument of disciplinary control. Therefore, a code of research ethics in e-public administration shall primarily aim to sensitize researchers in this sector to the ethical issues associated with the move from paper-based to digital forms of public administration and from face-to-face to virtual research. It shall promote standards of ethical behavior, safeguard the public interest in research processes, enhance professional integrity, and develop a research culture in e-public administration respectful of human rights and the dignity of citizens.

Future Research Directions

These future trends that researchers in e-public administration are likely to face suggest a range of research directions. For example, in our postmodern societies, the research of ethical issues in e-government has to be discussed from contrasting ethical schools of thought, in the same way that the impact of new public management and other approaches needs to be examined from different ethical perspectives. Comparative research on the ethical dimensions of the various sub-sectors of e-public administration needs to be developed, exploring similarities and differences, both in theory and in practice, in different regions of the world (North America, Europe, Asia, and elsewhere). The conditions for successful research ethics training must be investigated, as must the ethical attitudes and perceptions of ethical behavior among public employees. The management of research ethics in e-public administration is of vital importance for the good governance of the overall administrative system; therefore, the legislative and administrative measures adopted must be examined, and what really works in research ethics management in different countries must be compared and, when possible and justifiable, benchmarked. The consequences of ICT use, especially the Internet, and the design of proper technical and administrative procedures at each stage of the research process (e.g., data collection, data analysis, data storage, publication), are another area for future research, if respect for human dignity, the privacy of citizens and the security of information are to be maintained at the level of the best practices of traditional paper-based public administration.
References

Anderson, R., Johnson, D., Gotterbarn, D., & Perrole, J. (1993). Using the new ACM code of ethics in decision-making. Communications of the ACM, 36(2), 98-106.

Butler, R. (2001). From where I write: The place of positionality in qualitative writing. In M. Limb & C. Dwyer (Eds.), Qualitative methodologies for geographers: Issues and debates (pp. 264-276). London: Arnold.

CEC. (2004). Online availability of public services: How does Europe progress? Brussels: DG Information Society, Commission of the European Communities.

Christensen, P. H. (2004). Children's participation in ethnographic research: Issues of power and representation. Children & Society, 18, 165-176.

Darwall, S. (Ed.). (2003). Deontology. Oxford: Blackwell.

Darwall, S. (Ed.). (2003a). Consequentialism. Oxford: Blackwell.

Frankfort-Nachmias, C., & Nachmias, D. (2000). Research methods in the social sciences. New York: Worth Publishers.

Gotterbarn, D. (1996). An evolution of computing's codes of ethics and professional conduct. Retrieved June 26, 2006, from http://csciwww.etsu.edu/gotterbarn/

Hammitt, H. A. (2000). The legislative foundation of information access policy: Balancing access against privacy and confidentiality. In G. D. Garson (Ed.), Handbook of public information systems (pp. 27-39). New York: Marcel Dekker.

Hendler, S. (1994). Feminist planning ethics. Journal of Planning Literature, 9(2), 115-127.

Hinman, L. M. (2002). The impact of the Internet on our moral lives in academia. Ethics and Information Technology, 4, 31-35.

Irvine, C. E. (2000). Security issues for automated information systems. In G. D. Garson (Ed.), Handbook of public information systems (pp. 231-245). New York: Marcel Dekker.

Joshi, J., Ghafoor, A., Aref, W., & Spafford, E. (2002). Security and privacy challenges of a digital government. In W. McIver & A. Elmagarmid (Eds.), Advances in digital government: Technology, human factors, and policy (pp. 121-136). Dordrecht: Kluwer.

Khakee, A., & Dahlgren, L. (1990). Ethics and values of Swedish planners: A replication and comparison with an American study. Scandinavian Housing and Planning Research, 7, 65-81.

Lucy, W. (1997). APA's ethical principles include simplistic planning theories. In S. Campbell & S. Fainstein (Eds.), Readings in planning theory. Oxford: Blackwell.

Morrow, V., & Richards, M. (1996). The ethics of social research with children: An overview. Children and Society, 10, 90-105.

Neuman, W. L. (2000). Social research methods: Qualitative and quantitative approaches. Boston: Allyn and Bacon.

Popke, E. J. (2003). Poststructuralist ethics: Subjectivity, responsibility and the space of community. Progress in Human Geography, 27(3), 298-316.

Silva, C. N. (2005). Urban planning and ethics. In J. Rabin (Ed.), Encyclopedia of public administration and public policy (pp. 311-316). New York: Marcel Dekker/Taylor & Francis.

Smith, S. J. (2005). States, markets and an ethic of care. Political Geography, 24(1), 1-20.
Further Reading

Bauman, Z. (1994). Postmodern ethics. Oxford: Blackwell.

Belmont Report. (1979). Ethical principles and guidelines for the protection of human subjects of research. Washington: The National Commission for the Protection of Human Subjects of Biomedical and Behavioural Research.

Bostock, L. (2002). 'God, she's gonna report me': The ethics of child protection in poverty research. Children & Society, 16, 273-283.

Butz, D., & Besio, K. (2004). The value of autoethnography for field research in transcultural settings. The Professional Geographer, 56(3), 350-360.

Cassell, C., & Symon, G. (1994). Qualitative methods in organizational research: A practical guide. London: Sage.

Chouinard, V. (2000). Getting ethical: For inclusive and engaged geographies of disability. Ethics, Place and Environment, 3(1), 70-80.

Cloke, P., Cooke, P., Cursons, J., Milbourne, P., & Widdowfield, R. (2000). Ethics, reflexivity and research: Encounters with homeless people. Ethics, Place and Environment, 3(2), 133-154.

Cook, I. (1997). Participant observation. In R. Flowerdew & D. Martin (Eds.), Methods in human geography (pp. 127-149). Harlow: Longman.

Curry, M. R. (1991). On the possibility of ethics in geography: Writing, citing and the construction of intellectual property. Progress in Human Geography, 15(2), 125-147.

Cutchin, M. (2002). Ethics and geography: Continuity and emerging syntheses. Progress in Human Geography, 26(5), 656-664.

DeVerteuil, G. (2004). Systematic inquiry into barriers to researcher access: Evidence from a homeless shelter. The Professional Geographer, 56(3), 372-380.

England, K. (1994). Getting personal: Reflexivity, positionality and feminist research. In T. Barnes & D. Gregory (Eds.) (1997), Reading human geography: The poetics and politics of inquiry (pp. 69-82). London: Arnold.

Forster, N. (1994). The analysis of company documentation. In C. Cassell & G. Symon (Eds.), Qualitative methods in organizational research: A practical guide (pp. 147-166). London: Sage.

Gleeson, B. (2000). Enabling geography: Exploring a new political-ethical ideal. Ethics, Place and Environment, 3(1), 65-70.

Hendler, S. (1991). Ethics in planning: The views of students and practitioners. Journal of Planning Education and Research, 10(2), 99-105.

Hinton, P. (2002). The 'Thailand controversy' revisited. The Australian Journal of Anthropology, 13(2), 155-177.

Howe, E., & Kaufman, J. (1979). The ethics of contemporary American planners. Journal of the American Planning Association, 45(3), 243-255.

Howe, E., & Kaufman, J. (1981). The values of contemporary American planners. Journal of the American Planning Association, 47(3), 266-278.

Jarvie, I. C. (1983). The problem of ethical integrity in participant observation. In R. Burgess (Ed.), Field research: A sourcebook and field manual (pp. 68-72). London: George Allen & Unwin.

Kitchin, R., & Wilton, R. (2000). Disability, geography and ethics. Ethics, Place and Environment, 3(1), 61-65.
Lee, R., & Renzetti, C. (1993). The problems of researching sensitive topics: an overview and introduction. In C. Renzetti & R. Lee (Eds.), Researching sensitive topics (pp. 3-13). London: Sage. Ley, D., & Mountz, A. (2001). Interpretation, representation, positionality: Issues in field research in human geography. In M. Limb & C. Dwyer (Eds.), Qualitative methodologies for geographers. Issues and debates. London: Arnold. McDowell, L. (2004). Work, workfare, worklife balance and an ethic of care. Progress in Human Geography, 28(2), 145-164. National Academy of Sciences (1995). On being a scientist: responsible conduct in research. Washington: National Academy Press. NESH (2001). Guidelines for research ethics in the social sciences, law and the humanities. Oslo: National Committee for Research Ethics in the Social Sciences and the Humanities. Price, D. (1998). Cold war anthropology: Collaborators and victims of the national security state. Identities, 14(3-4), 389-430. Proctor, J., & Smith, D. (Eds.) (1999). Geography and ethics: Journeys in a moral terrain. London: Routledge. Rose, G. (1997). Situating knowledges: Positionality, reflexivities and other tactics. Progress in Human Geography, 21(3), 305-320. Sieber, J. E. (1993). The ethics and politics of sensitive research. In C. M. Renzetti & R. M. Lee (Eds.), Researching sensitive topics (pp. 14-26). London: Sage. Singer, P. (1993). A companion to ethics. Oxford: Blackwell. Smith, D. M. (1997). Geography and ethics: A moral turn? Progress in Human Geography, 21(4), 583-590. Smith, D. M. (1999). Geography and ethics: How far should we go? Progress in Human Geography, 23(1), 119-125.
Smith, D. M. (2001). Geography and ethics: Progress, or more of the same? Progress in Human Geography, 25, 261-268. Taylor, P. (1989). Respect for nature: A theory of environmental ethics. Princeton: Princeton University Press. Valentine, G. (2003). Geography and ethics: In pursuit of social justice — Ethics and emotions in geographies of health and disability research. Progress in Human Geography, 27(3), 375-380. Valentine, G. (2004). Geography and ethics: Questions of considerability and activism in environmental ethics. Progress in Human Geography, 28(2), 258-263. Valentine, G., Butler, R., & Skelton, T. (2001). The ethical and methodological complexities of doing research with ‘vulnerable’ young people. Ethics, Place and Environment, 4(2), 119-125. Wachs, M. (1989). When planners lie with numbers. Journal of the American Planning Association, 55, 476-479. Wachs, M. (1990). Ethics and advocacy in forecasting for public policy. Business and Professional Ethics Journal, 9(1-2), 141-157. Wilton, R. (2000). ‘Sometimes it’s OK to be a spy’: Ethics and politics in geographies of disability. Ethics, Place and Environment, 3(1), 91-97. Young, L., & Barrett, H. (2001). Ethics and participation: Reflections on research with street children. Ethics, Place and Environment, 4(2), 130-134.
Key Terms

Code of Professional Ethics: A set of ethical guidelines intended to guide the way a person behaves in the profession.

Consequentialist or Utilitarian Perspective: The view that what matters most is to seek the greatest good for the greatest number in every choice the researcher makes at all stages of the research process.
Deception: Omission, in part or in total, of the aims and purposes of the research concerned.

Deontological Perspective: The view that respecting key ethical principles in the research process is more important than seeking the greatest good for the greatest number.

Information and Communication Technologies: Include local computer networks, the Internet, electronic mail, digital television, mobile communications, etc.

Informed Consent: Any person involved in a research process has to be informed in advance about the nature and content of the research, its objectives, the methods and procedures to be followed, who is doing it, who is paying for it, what type of results will be produced, and how the results will be published and disseminated, and should also be aware of all other relevant impacts.

Professional Conduct: The way, from an ethical point of view, in which a person behaves professionally towards clients, the employer, other colleagues, citizens in general, the community, the professional group, the environment, other species, and future generations.
Endnotes

1. E-public administration is the widespread use of information and communications technologies (ICT) in public administration.
2. Considering the multiple dimensions of public administration, the following analysis takes research work in social and economic policies as its main reference framework.
3. These ethical challenges will be more important in the fourth level of development of e-government (or e-public administration), in which there will be a full electronic handling of public services (CEC, 2004).
4. For example: ASPA (1994). Code of ethics. Washington: American Society for Public Administration; ACM (1992). Code of ethics and professional conduct. New York: Association for Computing Machinery; ACS (2005). Code of ethics. Sydney: Australian Computer Society; CIPS (2005). Code of ethics and standards of conduct. Mississauga, Ontario: Canadian Information Processing Society; NZCS (2004). Code of ethics and professional conduct. Wellington: New Zealand Computer Society; IEEE (1999). Software engineer’s code of ethics and professional practice. Washington: Institute for Electrical and Electronic Engineers; APSA (1998). A guide to professional ethics in political science. Washington: American Political Science Association (2nd ed., 1998; reprinted 2004); AAG (1998). Statement of professional ethics. Washington: Association of American Geographers; AICP (2005). Code of ethics and professional conduct. Chicago: American Institute of Certified Planners.
5. Most of which have been incorporated in national legislation or in technical regulations. For a list of relevant examples, see note 3.
6. A principle first drafted during the Nuremberg War Crime Trials, when medical experiments conducted in concentration camps were revealed (Neuman, 2000). The Nuremberg Code (adopted in 1947), a set of 10 principles about human experimentation, is the first internationally recognized code of research ethics. The other two landmarks in this process are the Helsinki Declaration (adopted in 1964) and the Belmont Report (adopted in 1979).
7. Ethical issues in Internet research have been addressed by scientific and professional organizations. See, for example: AAAS (1999). Ethical and legal aspects of human subjects research on the Internet. Washington: American Association for the Advancement of Science.
8. Partially influenced by the United Nations Convention on the Rights of the Child (1989) and UNICEF’s Child Friendly Cities Initiative.
Chapter XXXI
Medical Ethical and Policy Issues Arising from RIA Jimmie L. Joseph University of Texas at El Paso, USA David P. Cook Old Dominion University, USA
Introduction

New technologies can lead to social upheaval and ethical dilemmas which are unrecognized at the time of their introduction. Medical care technology has advanced rapidly over the past two decades and has frequently been accompanied by unforeseen consequences for individuals, the medical profession, and government budgets, with concomitant implications for society and public policy (Magner, 1992; Marti-Ibanez, 1962). Advances in information technology (IT) during the last decade and a half are now impacting the medical profession, and the delivery of medical advances, in ways that will shape public policy debates for the foreseeable future. The World Wide Web (Web) makes information that was once the exclusive domain of medical professionals available to average citizens, who are increasingly demanding medical treatments from the leading edge of medical technology. For example, CenterWatch (www.centerwatch.com) provides a wealth of information concerning clinical trials and offers a conduit by which patients can become involved in such studies. The availability of such information has also led patients suffering from life-threatening diseases who are not part of such clinical trials to request special access to potentially life-saving therapies. As a result, the Web is increasing the complexity of answering public policy questions surrounding what medical technologies to make available to the public, who will be eligible to receive new medical treatments, and at what cost.
Background

In medicine, it has traditionally been the medical practitioner who has possessed the greater breadth and depth of information and knowledge in the provider-patient relationship (Magner, 1992; Porter, 1992; Robinson, 1931). This condition, where one party to a transaction has more information than another party or parties (relative to the transaction), is known as information asymmetry. The
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
existence of information asymmetry, in many transactions, has created the need to develop public policies to protect the interests of parties with less information (Akerlof, 1970; Hellwig, 2001; Nayyar, 1990). The risk of false advertising, the presence of disclaimers for contests, the existence of lemon laws for cars, and insider trading laws all attest to the need for societies to establish guidelines to handle the ethical issues which can arise from information asymmetry. Public policy has been designed to protect medical services consumers from the potentially deleterious effects of information asymmetry (Akerlof, 1970; Hellwig, 2001; Milgrom & Roberts, 2001). Specifically, it has dictated that doctors and other medical professionals be licensed in the nation, state or province in which they practice (Davies & Beach, 2000; Digby, 1997; Fournier & McInnes, 1997). Laws governing the practice of medicine, and what is acceptable and legal in the course of treatment and medical experiments, have been developed. Oversight boards have also been established to ensure healthcare providers offer care in accordance with standards of care (Fournier & McInnes, 1997). Such precautions are intended to increase patients’ confidence in the care being provided, even under the yoke of information asymmetry. The precautions do not, however, reduce or eliminate the presence of information asymmetry. A commonly held belief is that information parity, and information access that moves parties in transactions toward parity, is the cornerstone for the elimination of the ethical quandaries introduced by information asymmetry (Akerlof, 1970; Diamond, 1984; Hellwig, 2001). Until recently, medical information has presented two problems for the lay person: accessibility and comprehensibility (Ghalioungui, 1963; Marti-Ibanez, 1962; Magner, 1992; Porter, 1992; Robinson, 1931). Accessibility refers to the opportunity to find, as well as the ease and convenience associated with locating, information.
Medical journals are expensive, and prior to online libraries, many university libraries prided themselves on the expansiveness of their collections of journals. For a patient with a medical problem, or a family
problem, a primary constraint was the ability to locate and access the salient medical information dealing with the problem. If the patient did not live in a town with a medical school, he or she would have to depend on the local library’s often sparse repository of medically-related offerings to search for relevant information. Generally, assuming the patient could identify an appropriate source of information, it was not locally available (Digby, 1997; Matthews, 2000). This would necessitate relying upon the availability of inter-library loan or a trip to the library that held the sought-after text or journal. More likely than not, however, the affected patient would not have the ability to reference such medical resources. Most commonly, the patient’s knowledge of medical treatment options was limited to that offered by a medical care provider, word-of-mouth, or from popular press publications. Historically, lack of information accessibility has been a powerful protector of information asymmetry. Even if patients could locate and access relevant medical information they were still faced with the problem of comprehensibility. Medical journals are written for an audience with a minimum understanding of biology, chemistry, physiology, anatomy, biochemistry, and pharmacology. There is a requisite base of knowledge for understanding medical texts and journals, and much medical literature is written for an audience with specialized knowledge of a particular field of medicine. For the lay audience, such literature is virtually indecipherable. With the practically insurmountable twin problems of accessibility and comprehensibility, patients received most of their medical information through the filter of the physician. Few physicians could be considered members of the avant garde. Consequently, patients were more than likely to learn only of medical therapies with which their physicians were familiar (Marti-Ibanez, 1962; Porter, 1992; Robinson, 1931). 
Thus, information on progressive treatments not approved for use, or not covered by medical insurance, would not be widely disseminated to the general public through physicians. Further, licensed physicians familiar with such treatments would face numerous legal
hurdles to providing information regarding experimental or unapproved medical therapies.
Developments That Have Reduced Information Asymmetry

In the twentieth century, information on many medical and scientific advances was made available to a large cross-section of the population via print and electronic mass media. However, the availability of such media outlets did not significantly alter the level of information asymmetry between the patient and the medical practitioner. First, national oversight of media outlets limited their ability to report on medical treatments and breakthroughs not licensed in that particular country (Barry & Raworth, 2002). Perhaps just as importantly, mass media tend to report on items of broad interest to their constituents. Esoteric medical discoveries of interest to a minute segment of the population were less likely to garner much media attention. This dynamic has shifted radically since Tim Berners-Lee’s creation of the World Wide Web in the 1990s. With this innovation, the barriers to information symmetry in medical information have eroded, albeit imperceptibly at first. The Web facilitates global access to medical information without regard to national laws or availability of treatment. The filters on medical information (i.e., physicians, national governments, regional health boards and media censors) were removed, and patients could access information at a rate nearly commensurate with their physicians’. Patients are made aware of new and developing medical treatments rapidly via such online outlets as CenterWatch and ClinicalTrials.gov (www.clinicaltrials.gov). ClinicalTrials.gov is a service of the U.S. National Institutes of Health. Numerous Web sites are also dedicated to alternative medical therapies for a variety of illnesses. Yahoo! Health provides a starting point for identifying potential alternative medicine resources (dir.yahoo.com/Health/alternative_medicine). Individuals are thus free to decide which
medical innovations interest them, and then seek information on them. The information availability barrier to information symmetry has been significantly reduced, leaving the difficulty with comprehensibility as a challenge to information parity. The Internet and Web, however, allow motivated individuals to locate multiple presentations of information on the same medical innovation in an effort to identify an intelligible (to them) presentation of the information. E-mail, online chats, text messaging, personal computer (PC) based video conferencing and inexpensive long distance through voice over IP (VoIP) facilitate the ability of communities of consumers sharing similar medical conditions to pool their intellectual capital in an effort to interpret the extant information on a medical innovation and to reduce that information to the lowest common denominator of intelligibility. Wikis constitute a powerful tool by which information of all kinds, including healthcare information, is being disseminated. Entries on a plethora of healthcare topics, ranging from the flu to HIV, are available. One of the advantages of wikis is the fact that they are developed and edited by online communities of users. Consequently, even conceptually difficult and technically rigorous knowledge bases can be interpreted and presented in a non-technical manner understandable to a broad readership. As a result, the second barrier to information symmetry continues to be reduced. Technology has created a reduction in asymmetry (RIA) between the medical professional and the patient. Where the patient in the office of a medical professional once considered the practitioner before him to be the expert on the treatment of a condition, it is now feasible for the patient to have more information on the available treatments than the attending physician.
The rapid advance of medical innovations and the volumes of information associated with each make it virtually impossible for a physician to be an expert on the entire spectrum of possible treatment options. Further, many medical therapies are experimental and will not find their way into mainstream medical care or even be approved for use for years. While a physician may have a passing familiarity with
such therapies, it is unlikely he or she will have a strong motivation to become expert on them. A patient with a particular malady, however, not only has great incentive to research that condition and all treatments, including those that may currently be unapproved or categorized as experimental, but the Web also provides a means by which he or she can do so. Online communities for particular maladies offer their sufferers a means of support in coping with their illnesses. Arguably, such communities provide a variety of informational and emotional support. ProHealth’s ArthritisSupport.com (www.arthritisSupport.com) provides an example of such an online community. It provides a full range of information and support for sufferers of both osteoarthritis and rheumatoid arthritis in the form of online chats, message boards, coping tips, clinical trial and drug information, physician selection, as well as information related to disability claims. The existence of such communities demonstrates the power of the Web to bring patients with similar interests together for their mutual benefit in terms of support and a reduction in information asymmetry.
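The market logic invoked throughout this chapter, Akerlof's (1970) "lemons" mechanism, can be sketched in a short simulation. This is an illustrative toy model, not anything taken from the chapter: the quality distribution, the 1.5 buyer premium, and the price-updating rule are all invented assumptions chosen to make the adverse-selection unravelling visible.

```python
import random

random.seed(42)

N = 10_000
BUYER_PREMIUM = 1.5  # assumption: buyers value a good of quality q at 1.5 * q; sellers at q
qualities = [random.random() for _ in range(N)]  # each seller privately knows its q in (0, 1)

# Full information: every good can trade, since any price between q and 1.5*q
# leaves both buyer and seller better off.
trades_full_info = N

# Asymmetric information: buyers cannot observe q, so they pay at most
# BUYER_PREMIUM times the *average* quality of the goods actually on offer,
# while sellers only offer goods whose private value q lies below the price.
price = BUYER_PREMIUM * 0.5  # start from the unconditional mean quality
for _ in range(60):
    offered = [q for q in qualities if q <= price]
    if not offered:  # nobody is willing to sell at this price: the market collapses
        price = 0.0
        break
    # Adverse selection: only low-quality goods are offered, so the average
    # quality (and hence the price buyers will pay) keeps falling.
    price = BUYER_PREMIUM * sum(offered) / len(offered)

trades_asymmetric = sum(1 for q in qualities if q <= price)
print(f"full-information trades: {trades_full_info}, "
      f"asymmetric-information trades: {trades_asymmetric} at price {price:.4f}")
```

The price ratchets downward until essentially no goods trade, which is why the chapter treats any technology that moves the patient-provider dyad toward information parity as welfare-relevant.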
New Research Issues

The supposition of information asymmetry research (e.g., in economics and finance) is that the efficiency of markets can be improved by diminishing the level of asymmetry in transactions (Akerlof, 1970; Milgrom & Roberts, 2001; Payne, 2003). Further, research and public policy are predicated on the commonly held belief that information parity, and information access that moves the parties towards parity, is the cornerstone for the elimination of the ethical quandaries introduced by information asymmetry (Akerlof, 1970; Cowles, 1976; Diamond, 1984; Hellwig, 2001). However, movement toward information symmetry in the patient-provider dyad creates a number of issues that have implications for public policy makers, a sample of which is discussed here. A significant issue with patients having unfettered access to medical information is that
the information available online is not limited by geopolitical boundaries. The Web provides information on pharmaceuticals, treatments, innovations, technologies and facilities that may not be approved for use in the patient’s home country or region. Large numbers of patients who desire and demand locally unavailable treatments place pressure on policy makers to examine the possibility of making such treatments available, perhaps prematurely. In some cases, pressure may mount to make a treatment available even if the long-term effects and efficacy of the treatment are uncertain. Policy makers are placed in the position of determining whether to relax standards by which treatment options are evaluated prior to their release for public consumption. Already, we have seen mechanisms put into place to make investigational new drugs (IND) available to certain patients on an emergency, individual, or compassionate basis. The Regulatory Affairs Professionals Society (www.raps.org) is an organization dedicated to the health product regulatory profession with members from 50 countries involving industry, government, research, clinical, and academic organizations. Such organizations are intended to serve as a resource for information and education for those involved in the regulation of the healthcare sector. Budgetary constraints frequently dictate the availability of medical therapies, particularly those that are new and expensive (Barry & Raworth, 2002; Schwartz et al., 1999). In nations with privatized healthcare, insurance companies may refuse to cover such treatments in favor of less expensive alternatives, or require the insured to pay a higher co-payment. In societies where healthcare has been nationalized, budgetary constraints are particularly troublesome since decision makers must allocate scarce medical resources either regionally or nationally. 
Consequently, even approved medical treatments may not be available in a particular area, or may be rationed (Barry & Raworth, 2002; Schwartz, Morrison et al., 1999). Thus, if exogenous information enters the system and creates demand for a particular form of care, patients may seek a mix of healthcare
alternatives unanticipated by medical boards. Conversely, citizens may not seek care in the volume expected by medical boards at particular locations. Consequently, resources may be over-allocated in one area and under-allocated in others. In situations where RIA leads patients to treatments which may be available in another country, the unanticipated current-year expenditure to cover the extra-national treatment must come from existing budgets. Thus, funds budgeted for the current year for intra-national treatments are unavailable, and some must do without or a deficit will result. Additionally, allowing those on a long waiting list (or those excluded from the waiting list altogether) to jump ahead in the queue for treatment elsewhere encourages those with low priority in the queue to find alternative treatment avenues. Another serious issue may develop with patients’ easy access to medical professionals in other countries. Scenarios exist in which a treatment unavailable locally is available elsewhere and is discussed with a patient. Patients may become well informed of the side effects, efficacy, and benefits of a wide range of treatments not available to them; nonetheless, they will seek these treatments. When they learn of the treatments’ unavailability, they will demand the treatment be made available to them, or they may seek illegal means of acquiring the therapy. Online sources of certain medical services and therapies are a likely means by which to circuitously avail oneself of desired medical treatment or pharmaceuticals. E-pharmacies, for example, represent a possible conduit for the improper dispensing or illegal sale of drugs in a region or country. The challenge here resides in how to allow the online healthcare provider to continue to offer valuable services while ensuring the source is not abused.
The rapid rise of online communities, healthcare wikis, and other Web-based medical information sources leads to a number of important issues that need to be addressed. One such issue revolves around the accuracy of the information provided to patients via such conduits. How can we assure the veracity of the information? This is a particularly relevant question when considering user-developed informational sources such as wikis. In a wiki, the information’s accuracy is assessed by the users themselves. Consequently, it is possible that conceptually difficult and technically rigorous information may be incorrectly interpreted and consolidated by users. Thus, the simplification of information may pose certain dangers to its potential consumers. Further, concerns exist with respect to online articles targeted toward specific drugs or medical therapies. How can a user of such information be assured of the information’s quality? In traditional print media, we can rely, to some extent, upon an external review process. This quality check, however, is less certain in online arenas. Another potential danger of the ubiquitous availability of medical information is information overload. What are its potential deleterious effects? Information overload may contribute to patients’ increased confusion regarding medical conditions and available treatment modalities rather than alleviating such confusion. Many patients arguably make decisions concerning their healthcare, at least in part, based on the information they find online. Consequently, what is the legal culpability of the purveyors of such information when patients make decisions that are detrimental rather than beneficial to their medical care on the basis of said information? Another relevant issue that must be addressed is the degree to which governments are responsible for controlling and disseminating available information to their citizenries. In what types of information campaigns should governments engage? The answer to this question may depend to some degree upon national cultures as well as the nature of the government in question.
With the wide availability of information sources that link online users to mainstream medical modalities as well as those considered to be alternative medical modalities, some governments provide advice with respect to using such sources of information. For example, the National Center for Complementary and Alternative Medicine (www.nccam.nih.gov) in the United States provides links to its Web site visitors that explain complementary
and alternative medical therapies in an effort to better educate medical consumers. The Australian government also maintains an Internet presence to better educate its populace and to provide links to medical information through its Department of Health and Ageing (www.health.gov.au), as does the UK through the Health and Well-Being section of its DirectGov Web site (www.direct.gov.uk/HealthAndWellBeing/fs/en). Since many governments attempt to provide their citizens access to relevant health-related information, two important issues arise: (1) at what point does government responsibility end, and (2) are less educated or less technically aware citizens exposed to increased information asymmetry? The first issue does not arise strictly from RIA, but does pose an interesting policy issue. The response to this issue will clearly vary depending upon the nature of the country’s society (e.g., socialist versus capitalist). The second issue also poses interesting policy implications that are directly related to information asymmetry. If we assume that lower income and/or educational levels contribute to less access to Internet-based information technologies for individuals, then is it reasonable to assume that these individuals are subject to traditional levels of information asymmetry? Consequently, we can expect RIA gaps to vary widely across the citizens within and between countries. As a policy issue within a country, then, the problem that must be addressed is how to halt the growth of, and reduce, RIA gaps. How can a society reduce the educational disparity that results from differing socio-economic statuses? And, assuming such a disparity can be appreciably reduced, how can that society grant equal access to the Internet-based information technologies that have contributed so much to RIA?
Future Trends

Web technologies not only provide a means by which to increase information parity, but also represent a vehicle by which the scientific process may be compromised. Individuals participating
in clinical trials can easily set up chat areas, newsgroups and e-mail mailing lists to contact others in the same or similar studies. This allows participants to compare symptoms and progress, and may skew or completely thwart a medical study. Individuals in severe or chronic pain and those suffering from life-threatening illness have great motivation to ensure they are getting the most efficacious treatment available. From the standpoint of public policy, however, this is problematic. While no one would argue that an individual does not have the right to seek the best treatment available for his or her disorder, altering the results of a clinical trial reduces the data available to determine what is best for an entire class of patients (Cowles, 1976; Digby, 1997; Matthews, 2000; Spilker & Cramer, 1992). As Web access continues on its march toward ubiquity, the world’s better-informed citizenry will, through their behaviors and desire for increasingly sophisticated medical therapies, place increased burdens on the health systems of their respective countries. Policy makers will continue to see citizens insinuating themselves into the process by which societies allocate their scarce medical resources. Debates about what medical technologies to make available, to whom, and at what cost, which were once held in the relatively isolated environs of governmental and insurance provider offices, are increasingly being influenced by the special interests of a diverse group of Web-savvy citizens. We can expect to see continued growth in the number and sophistication of the forums by which such special interests are expressed. Consequently, not only will policy makers have to continue to make difficult decisions about healthcare delivery, but they will also have to do so under the unforgiving lamp of public scrutiny. This will place a burden on policy makers to provide mechanisms by which public concerns can be effectively factored into the decision-making process.
We are already seeing pressure mount to make medical therapies, such as complementary and alternative medical treatments and promising treatments not yet authorized by the FDA, available to citizens of the United States through
authorized health care practitioners. This pressure has resulted in the introduction of such proposed legislation as the Access to Medical Treatment Act to the U.S. House of Representatives. Such proposed legislation is strongly supported by special interest groups such as the American Association for Health Freedom (www.healthfreedom.net). Another example of the interest in allowing better access to experimental drugs is demonstrated by the Abigail Alliance (www.abigail-alliance.org). This Web site is dedicated to providing patients with life-threatening illnesses access to potentially life-saving treatments. Further, foundations such as the LiveStrong Lance Armstrong Foundation (www.livestrong.org) are highly visible sources of support and information for sufferers of life-threatening diseases. The call to action is clear. Public policy decision makers must be prepared to deal with the cries of their citizenry. Dynamically assessing an increasingly complex information environment with ever-better-informed medical consumers is necessary. It is not sufficient to react to demands for better access to medical care; policy makers must proactively develop coherent policies and information campaigns to facilitate the flow of relevant medical information and care.
Conclusion

This, then, is a microcosm of the problem facing public policy decision makers and society: Which is more important to society—individual medical treatment or the societal distribution of scarce medical resources? Is it in the public interest to allocate limited medical resources to those with the best information, or should medical professionals be trusted as better qualified to determine appropriate treatments and to prioritize patient access to medical technology? RIA between patients and medical practitioners raises public policy issues which will affect the delivery of medical care for decades to come. It is unrealistic to think that persons or families facing grave illnesses will not seek all available information that may improve their lives or the lives of their loved ones. Whether it is paid for by the
patient, private health insurance, or public funds from national insurance or nationalized healthcare, patients want the best and most efficacious care available. For most of these areas, RIA and increasing information parity reduce ethical, social, and public policy issues. For the practice of medicine, however, RIA places physicians and governments in uncharted ethical and policy territory. An awareness of the issues that can be raised by RIA in this unusual circumstance can help the authors of public policy to prepare for social changes, and not be caught unawares.
Future Research Directions

IT facilitates communication and can de-layer the communications path between medical researchers and patients affected by a specific condition. Future research could examine the use of medical information bulletin boards, chat rooms and virtual communities by patients to exchange treatment results. As a means of determining patients’ motivation levels, the research could survey patients to see what research they have done on their disease or condition. Future surveys could explore medical researchers’ experiences with control groups contaminated by the sharing of treatment results. Further, physicians who recommended that patients participate in medical research could be examined; the physicians could track the frequency with which patients ask questions indicating advanced knowledge of the study. The information could be combined with study information on the types of diseases being treated and the ages of the patients in the medical study. Future research could investigate the degree to which non-hospital treatment centers assist in the dissemination of medical information. Many of these centers are staffed by nurses, not physicians, and may act as an additional layer of filtering for information to the patient. A study of the correlation between Internet access speed and the questions posed to physicians could help in understanding the impact of
Medical Ethical and Policy Issues Arising from RIA
broadband access on RIA. Further, a correlation with income could disclose whether a digital divide is forming in medical care. If information concerning medical studies and treatment options is readily available to those with computers and Internet access, but less readily available to non-computerized patients, medical studies may be weighted towards the wealthy and technically savvy. Future research could explore the creation of a clearinghouse for researchers to report tainted studies. In this way, the medical and IT communities could determine the degree of medical research contamination. Ethics researchers can explore and debate the future of the physician-patient interaction, and how this will change as physicians lose their monopoly on diagnosis and treatment options. Public policy researchers can explore the legal options which may be necessary to ensure that medical treatments are equitably available. Public policy researchers can also examine the necessity of maintaining laws which limit the discussions physicians can have with their patients, in the face of the internationalization of medical technology information. Future research could also explore other areas in which RIA produces greater ethical issues for one of the parties. As information technology becomes pervasive, it is likely that other fields will discover that RIA does not always lead to more efficient markets. This understanding will allow researchers to identify situations in which RIA leads to greater moral dilemmas, not a reduction in ethical considerations.
REFERENCES

Akerlof, G. A. (1970). The market for 'lemons': Quality uncertainty and the market mechanism. Quarterly Journal of Economics, 84(3), 488-500.

Barry, C., & Raworth, K. (2002). Access to medicines and the rhetoric of responsibility. Ethics & International Affairs, 16(2), 57(15).

Cowles, J. (1976). Informed consent. New York: Coward, McCann & Geoghegan.
Davies, C., & Beach, A. (2000). Interpreting professional self-regulation. London: Routledge.

Diamond, D. W. (1984). Financial intermediation and delegated monitoring. The Review of Economic Studies, 51(3), 393-414.

Digby, A. (1997). The patient's view. In I. Loudon (Ed.), Western medicine. Oxford: Oxford University Press.

Fournier, G. M., & McInnes, M. M. (1997). Medical board regulation of physician licensure: Is excessive malpractice sanctioned? Journal of Regulatory Economics, 12(2), 113-126.

Ghalioungui, P. (1963). Magic and medical science in ancient Egypt. New York: Barnes & Noble.

Hellwig, M. F. (2001). Risk aversion and incentive compatibility with ex post information asymmetry. Economic Theory, 18, 415-438.

Magner, L. N. (1992). A history of medicine. New York: Marcel Dekker.

Marti-Ibanez, F. (1962). The epic of medicine. New York: Clarkson N. Potter.

Matthews, J. N. S. (2000). An introduction to randomized controlled clinical trials. London: Arnold.

Milgrom, P., & Roberts, J. (2001). Informational asymmetries, strategic behavior, and industrial organization. Game Theory and Industrial Organization, 77(2), 184-193.

Nayyar, P. R. (1990). Information asymmetries: A source of competitive advantage for diversified service firms. Strategic Management Journal, 11, 513-519.

Payne, R. (2003). Informed trade in spot foreign exchange markets: An empirical investigation. Journal of International Economics, 61, 307-329.

Porter, R. (1992). The patient in England, c. 1660-c. 1800. In Medicine in society. Cambridge: Cambridge University Press.
Robinson, V. (1931). The story of medicine. New York: Tudor Publishing Co.

Schwartz, L., Morrison, J., et al. (1999). Rationing decisions: From diversity to consensus. Healthcare Analysis, 7(2), 195-205.

Spilker, B., & Cramer, H. A. (1992). Patient recruitment in clinical trials. New York: Raven Press.

FURTHER READING

Aguolu, I. E. (1997). Accessibility of information: A myth for developing countries? New Library World, 98(1132), 25.

Anonymous. (2003a). Approvals of FDA-regulated products. Retrieved March 30, 2004, from http://www.fda.gov/opacom/7approvl.html

Anonymous. (1998). What is insider trading? Retrieved October 13, 2003, from http://www.sec.gov/divisions/enforce/insider.htm

Anonymous. (2003). Complaints about broadcast advertising. Retrieved October 13, 2003, from http://ftp.fcc.gov/cgb/consumerfacts/advertising.html

Anonymous. (2003). For business: Advertising guidance. Retrieved October 13, 2003, from http://www.ftc.gov/bcp/guides/guides.htm

Anonymous. (2003b). The laws that govern the securities industry. Retrieved October 13, 2003, from http://www.sec.gov/about/laws.shtml

Barber, B. (1980). Informed consent in medical therapy and research. New Brunswick, NJ: Rutgers University Press.

Borgman, C. L. (2000). From Gutenberg to the global information infrastructure: Access to information in the networked world. Cambridge, MA: MIT Press.

Felch, W. C., & Scanlon, D. M. (1997). Bridging the gap between research and practice: The role of continuing medical education. JAMA, The Journal of the American Medical Association, 277(2), 155(2).

Friedman, L. M., Furberg, C. D., et al. (1998). Fundamentals of clinical trials. New York: Springer-Verlag.

Gert, H. J. (2002). Avoiding surprises: A model for informing patients. The Hastings Center Report, 32(5), 23(11).

Harris, M., & Raviv, A. (1979). Optimal incentive contracts with imperfect information. Journal of Economic Theory, 20, 231-259.

Hassinger, E. W. (1982). Rural health organization. Ames: Iowa State University Press.

Holmstrom, B. R. (1985). The provision of services in a market economy. In Managing the service economy: Prospects and problems. Cambridge, UK: Cambridge University Press.

Joseph, J. L., & Cook, D. P. (2005). Information imbalance in medical decision making: Upsetting the balance. In L. A. Freeman & A. G. Peace (Eds.), Information ethics: Privacy and intellectual property. Hershey, PA: Information Science Publishing.

Majno, G. (1975). The healing hand: Man and wound in the ancient world. Cambridge, MA: Harvard University Press.

Marks, F. R., & Cathcart, D. (1974). Discipline within the legal profession: Is it self-regulation? University of Illinois Law Review, 193(Spring), 236.

Marti-Ibanez, F. (1958). Books in the physician's life. In F. Marti-Ibanez (Ed.), Centaur: Essays on the history of medical ideas (pp. 3-28). New York: MD Publications.

Marti-Ibanez, F. (1958). Centaur: Essays on the history of medical ideas. New York: MD Publications.

Marti-Ibanez, F. (1961). A prelude to medical history. New York: MD Publications.

Morgan, J. M. (2003). New device indications: Practice and cost implications in Europe. Cardiac Electrophysiology Review, 7(1), 49-53.

Osler, S. W. (1921). The evolution of modern medicine. New Haven, CT: Yale University Press.

Pennachio, D. L. (2003). CME: How to get yours? Requirements are multiplying, but so are the opportunities to meet them. Medical Economics, 80(16), 21(3).

Shaneyfelt, T. M. (2001). Building bridges to quality. JAMA, The Journal of the American Medical Association, 286(20), 2600(2).

Shavell, S. (1979). Risk sharing and incentives in the principal and agent relationship. Bell Journal of Economics, 10, 55-73.

Summers, A. (1997). Nurses and ancillaries in the Christian era. In I. Loudon (Ed.), Western medicine (pp. 192-205). Oxford: Oxford University Press.
KEY TERMS

E-Pharmacy: A term that refers to an online pharmacy that offers medical care providers the ability to prescribe medications for patients online. These medications can then be delivered to patients without ever requiring them to leave the confines of their homes. In some cases, prescriptions can be filled without a physician ever having physical contact with the patient.

Ethical Issue: A situation where the decision maker is required to weigh values and employ judgment in reaching a decision. Frequently, this is a situation which is outside of, or different from, those with which the decision maker has previous experience.
Information Asymmetry: A situation in which one party to a transaction has more information than the other party or parties, relative to the transaction.

Information Parity: A situation in which both parties in a transaction have equal information related to the transaction.

Information Accessibility: The opportunity to find, as well as the ease and convenience associated with locating, information. Often, this is related to the physical location of the individual seeking the information and the physical location of the information in a book or journal.

Information Comprehensibility: The ability to comprehend and utilize information once located. Provided the information is syntactically and grammatically correct, comprehensibility is often dependent on the preparation and training of the individual accessing the material.

Locally Unavailable Treatment: Medical treatment which may not be available in a nation or region for economic, regulatory or technical reasons. The treatment may also be limited by law or administrative actions.

Reduction in Information Asymmetry (RIA): Reducing the disparity in information between the parties in a transaction, relative to the transaction.
Chapter XXXII
Social Capital and the Gendering of Differential IT Use Lia Bryant University of South Australia, Australia Iolanda Principe University of South Australia, Australia
INTRODUCTION

Public information technology, as a term, implicitly suggests universal access by citizens to information through the use of technology. The concepts of social capital and the digital divide intersect in access to public information technology. Social inclusion or exclusion occurs as a consequence of the ways in which societies are stratified according to race, gender, (dis)ability, ethnicity and class. This chapter focuses on one aspect of stratification, gender, and theorizes the gendering of differential access to and use of information technologies. An understanding of gendered participation relevant to access to public information technology within the policy contexts for electronic government and social inclusion is important to inform public information technology policy, and service planning and delivery that are premised on the notion of universal access.
BACKGROUND

Social capital is a multi-dimensional concept being applied in public policy for social services, health, crime prevention, and education for improved
health and well-being resulting from social connectedness and local/state government participation (Baum, 2000; Wilkinson, 1996). There is no absolute definition but, broadly defined, and based on Bourdieu's perspective (1986), it refers to the resources that accrue to individuals from their participation in social networks—social capital being a symbolic or intangible resource present in social life along with material resources. Putnam's (1993) perspective extends social capital to a society-level resource built from neighborhood community and civic participation by citizens. It is argued that information and communications technology (ICT) that enables social networking and civic participation, including access to public information services, facilitates the development of social capital for individuals and communities (Phipps, 2000). The opposite of social connectedness is the social isolation or marginalization of individuals and groups. While there are benefits resulting from the use of ICT in the delivery of government services, including ready access to public health information, there is also an amplification of inequality—producing a new societal divide—the 'digital divide' (Loader, 1998; Norris, 2000, 2001; NTIA, 1995, 1998). The digital divide can
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
be considered as a metaphor for social exclusion relevant to public information access and use or, specifically, exclusion from infrastructure and services to which people have rights as citizens in a democratic, information-enabled society. Social inclusion or exclusion is a contemporary social policy theme specifically evident in the policies of British and Australian labor governments. For example, the labor government policies of the South Australian Government (2002) and the United Kingdom (Social Exclusion Unit, 1998, 2005) for social inclusion/exclusion aim to improve social connectedness, civic participation and community engagement in public service planning and decision-making, and to improve government service delivery by 'joined-up' approaches. Gender is an important social category to consider when exploring the barriers and opportunities for access to universal IT services. Social science scholars have demonstrated that gender and technology are intricately linked, given that the design and use of IT is gendered (Bryant, 2003; Cockburn & First-Dillic, 1994; Webster, 1995). These perspectives involve understanding "technological innovation as being complete only once the technology is in use" (Cockburn & First-Dillic, 1994, p. 514). In other words, the design and application of IT occur in the context of existing gender relations, and therefore access to and use of public information technology will be gendered.
THEORISING THE GENDERING OF DIFFERENTIAL ACCESS AND USE OF PUBLIC INFORMATION TECHNOLOGY

The development of knowledge about gender and technology design and use shows two opposing arguments recurring among feminists (Bryant, 2003). The first, stemming from second-wave feminism, is based on a strand of eco-feminism that claims a special bond between nature and women. This argument emphasizes that both nature and women are dominated by technology, capitalism and industrial development (e.g., Merchant, 1980).
However, this perspective, which assumes a special bond between nature and women (e.g., Mies & Shiva, 1997), has been criticized for its essentialist view of women (e.g., Kemp & Squire, 1997). Similarly, technological deterministic arguments have been used to suggest that masculine science and technology dominate women's use of and access to IT (e.g., Webster, 1995a, 1995b). Further explanations of whether men or women use IT in the workplace and access public information have tended to reproduce essentialist and binary arguments about men's dominance of technology and women's connection to nature (Bryant, 2003). Indeed, in the context of empirical data these arguments are problematic. For example, Sadie Plant (1997, p. 37) has demonstrated that women historically have worked closely with computers. She explains: ...when computers were real machines women wrote the software on which they ran...Hardware, software...before their beginnings and beyond their ends, women have been the simulators, assemblers, and programmers of the digital machine. Nevertheless, Plant shows that while women have not been absent from technological work, they remain underrepresented in decision-making and design positions for new technologies. Consequently, it could be suggested that Plant's argument demonstrates that gender and technology are not explicable with regard to women's lack of interest in technology or male technological prowess, but are implicated in gendered dynamics associated with global economies and therefore power and change.
TRENDS AND FUTURE IMPLICATIONS FOR A CONTINUING IT GENDER GAP

Thus, following Cockburn and First-Dillic, the theoretical position taken in this chapter is that the design and use of technology occur within existing gender relations, and it is these which need to be explored in specific contexts to understand
gendered differential access to and use of IT within the arena of public information. Indeed, in some nations it has been argued that gender differences are decreasing, if not disappearing, in the possession, access and use of IT. For example, in the United States Van Dijk and Hacker (2003, p. 319) suggest that: In the year 2002 the gender difference in the possession of computers and the Internet and the time spent using them has been equalised in the United States…In the European Union (EU) women are still catching up… Further, the Information Economy Index Report argues that "in September 2005 there was absolute equity of [Internet] access between gender in the United States, Australia, Canada and Sweden" (Department of Communications, Information Technology and the Arts, 2006, p. 14). However, closer inspection of the literature shows a gender gap in differential skills and kinds of usage (Van Dijk & Hacker, 2003). For example, empirical studies of gender and computing show a lack of gender parity in completion of higher degrees in IT. However, women's participation in computing courses increases when computing is combined with other subjects like language or business studies (Clegg & Trayhurn, 1999). Indeed, the data underscore the different social capitals being acquired by women and men in higher education and consequently in the labor market (Ahuja, 2002). Ahuja shows that while there is a current labor shortage in IT in countries like the USA, Canada, the UK and Australia, women remain underrepresented, particularly in analytical and managerial activities in the IT workforce. For example, women account for 25% of computer programmers and systems analysts in Europe and 20% of this workforce in the United States (Ahuja, 2002; Maitland, 2001).
Further, despite predictions that the gender gap will close in the IT workforce (Reese, 1990) current data in Europe, the United Kingdom, the USA, and Australia suggest that percentages of women employed in higher skilled IT positions have remained constant (Ahuja, 2002).
It is noted that this lack of gender parity is occurring even though in some nations there is improved connectivity for most citizens and gender parity in the IT education of children in schools. This underscores the importance of locating IT, and the promise of universal access to public information technologies, within a contextual knowledge of gender relations in the societies explored. With the increase of public services online, such as health information, the particular social capitals that members of the community own and have access to will shape equity among citizens in relation to public services. Whether the digital divide remains, widens or narrows, and for whom, is contingent upon the gender relations which underpin social systems and upon recognition by governments of such gendered relations. The Australian example alone suggests that the gendered digital divide will, if not widen, at least remain constant (Department of Communications, Information Technology and the Arts, 2006).
CONCLUSION

In short, the evidence suggests a decreasing gap in IT possession and access among men and women, but a continuing divide in the way IT is used and continues to be designed. Equity of access to public information technologies requires exploration in the context of understanding the complexity of the gender relations in the societies in question. Differential social capital ultimately corresponds to differential, and therefore inequitable, access. For public policy to live up to its aim of inclusivity, or universal access, the gendering of information technologies will require analysis that goes beyond counting the numbers of men and women who own computers, to understanding the inequity of social capital accumulated by men and by women with regard to IT skill and types of usage.
FUTURE RESEARCH DIRECTIONS

There are many aspects of gendered IT use and design that require further exploration. In this section, we suggest some important areas for further research. Firstly, we have argued that
to decrease differential social capitals between women and men, equity in access to, and retention in, tertiary IT degrees is required. Thus, we argue for research which brings a gendered lens to identify how the design and teaching of IT degrees impact on the retention of women studying IT in tertiary education. Secondly, the lack of parity between men and women in managerial capacities in IT requires qualitative case study analysis, across a series of companies, of the conditions in which men are appointed to managerial positions, to identify the processes and cultures around managerial appointments and working conditions. There is also a need for qualitative research which provides rich narratives of women who have obtained managerial positions, to identify workplace practices and cultures in such organizations. Future research and theorizing require foci which take into account the cultural aspects of learning and work: not just work cultures, but also theories of self and identity. Thus, future research is required which gives emphasis to gendered and occupational identities and how these impact on women's and men's experiences and understandings of information technologies and their uses. Cultural analyses also open up the potential for research that explores discursive constructions of IT in company reports, in learning environments and in organizations, allowing for a structural analysis of the gendering of IT design and use in particular environments, and for opportunities to uncover the interconnections with, or restraints on, such discourses by women and men learning and working in IT.
REFERENCES

Ahuja, M.K. (2002). Women in the information technology profession: A literature review, synthesis and research agenda. European Journal of Information Systems, 11, 20-34.

Baum, F. (2000). Social capital, economic capital and power: Further issues for a public health agenda. Journal of Epidemiology and Community Health, 54, 409-410.
Bourdieu, P. (1986). The forms of capital. In J.G. Richardson (Ed.), The handbook of theory and research for the sociology of education (pp. 241-258). New York: Greenwood Press.

Bryant, L. (2003). Gendered bodies: Gendered knowledge. Social Science Computer Review, 21(4), 464-474.

Clegg, S., & Trayhurn, D. (1999). Gender and computing: Not the same old problem. The British Educational Research Journal, 26(1), 75-89.

Cockburn, C., & First-Dillic, D. (1994). Bringing technology home: Gender and technology in a changing Europe. Buckingham, UK: Open University Press.

Department of Communications, Information Technology and the Arts. (2006). Information economy index 2006. Canberra: Australian Government.

Kemp, S., & Squire, J. (1997). Feminisms. Oxford: Oxford University Press.

Loader, B. (Ed.) (1998). Cyberspace divide: Equality, agency and policy in the information society. London: Routledge.

Maitland, A. (2001). A long-term solution to the IT skills shortage. Financial Times, February 22, 9.

Merchant, C. (1980). Death of nature: Women, ecology and the scientific revolution. New York: Harper Collins.

Mies, M., & Shiva, V. (1997). Ecofeminism. In S. Kemp & J. Squire (Eds.), Feminisms (pp. 494-497). Oxford: Oxford University Press.

Norris, P. (2000). The worldwide digital divide: Information poverty, the Internet and development. Cambridge, MA: John F. Kennedy School of Government, Harvard University.

Norris, P. (2001). Digital divide: Civic engagement, information poverty and the Internet in democratic societies. Cambridge: Cambridge University Press.
NTIA. (1995). Falling through the net: A survey of the 'have nots' in rural and urban America. National Telecommunications and Information Administration, Department of Commerce. Retrieved May 2, 2006, from http://www.ntia.doc.gov/ntiahome/fallingthru.html

NTIA. (1998). Falling through the net II: New data on the digital divide. Washington, DC: National Telecommunications and Information Administration, Department of Commerce.

Office of the Deputy Prime Minister. (2005). Inclusion through innovation: Tackling social exclusion through new technologies. A Social Exclusion Unit final report. London: Office of the Deputy Prime Minister.

Phipps, L. (1998). New communications technologies: A conduit for social inclusion, information, communication and society. London: Routledge.
Wilkinson, R. (1996). Unhealthy societies: The afflictions of inequality. London: Routledge.
FURTHER READING

Acker, J. (1990). Hierarchies, jobs, bodies: A theory of gendered organizations. Gender and Society, 4(2), 139-158.

Adam, A. (1998). Artificial knowing: Gender and the thinking machine. London: Routledge.

Adkins, L., & Lury, C. (1996). The cultural, the sexual and the gendering of the labour market. In L. Adkins & V. Merchant (Eds.), Sexualizing the social, power and the organization of sexuality (pp. 204-223). London: MacMillan.

Alaimo, S. (1994). Cyborg and ecofeminist interventions: Challenges for an environmental feminism. Feminist Studies, 20(1), 133-153.
Plant, S. (1997). Zeros + ones: Digital women + the new technoculture. London: Fourth Estate.
Alsop, R., Fitzsimons, A., & Lennon, K. (2002). Theorizing gender. Cambridge: Polity.
Putnam, R.D. (1993). Making democracy work: Civic traditions in modern Italy. Princeton: Princeton University Press.
Bryant, L. (2001). The text-book farmers: Young women in agriculture and the question of changing gender relations. In S. Lockie (Ed.), Consuming foods, sustaining environments (pp. 205-218). Brisbane: Australian Academic Press.
Reese, S. (1990). Information work and workers: Technology attitudes, adoption and media use in Texas. Information Age, 12, 159-164.

Social Exclusion Unit. (1998). Prime Minister's speech on the establishment of the Social Exclusion Unit. Retrieved March 12 from http://www.open.gov.uk/co/seu/more.html

Social Inclusion Unit. (2002). Social inclusion initiative. Office of the Premier, South Australian Government, SA.
Bryant, L. (2001). A job of one’s own: women constructing occupations in agriculture. In S. Lockie & L. Bourke (Eds.), Rurality bites. Annandale: Pluto Press. Bryant, L. (2006). Marking the occupational body: Young women and men seeking careers in agriculture. Rural Society, 16(1), 62-79.
Van Dijk, J., & Hacker, K. (2003). The digital divide as a complex and dynamic phenomenon. The Information Society, 19, 315-326.
Cockburn, C. (1992). The circuit of technology: Gender, identity and power. In R. Silverstone & E. Hirsch (Eds.), Consuming technology: Media and information in domestic spaces (pp. 32-47). London: Routledge.
Webster, J. (1995a). The difficult relationship between technology and society. Work, Employment and Society, 9, 797-810.
Cockburn, C. (1997). Domestic technologies: Cinderella and the engineers. Women’s Studies International Forum, 20(3), 361-371.
Webster, J. (1995b). What do we know about gender and information technology at work? European Journal of Women’s Studies, 2, 315-334.
Faulkner, W. (2001). The technology question in feminism: A view from feminist technology studies. Women’s Studies International Forum, 24(1), 79-95. Fox, M. F. (1995). Women and scientific careers. In S. Jasanoff, G. E. Markle, J. C. Petersen, & T. Pinch (Eds.), Handbook of science and technology (pp. 203-24). Thousand Oaks, CA: Sage. Gherardi, S. (1996). Gendered organizational cultures: Narratives of women travellers in a male world. Gender, Work and Organization, 3(4), 187-201. Gibson-Graham, J. K. (1996). The end of capitalism (as we knew it): A feminist critique of political economy. Cambridge MA, Oxford: Blackwell.
Van Zoonen, L. (1992). Feminist theory and information technology. Media, Culture and Society, 14(1), 9-29.

Van Zoonen, L. (2001). Feminist Internet studies. Feminist Media Studies, 1(1), 67-72.

Wajcman, J. (2000). Reflections on gender and technology studies: In what state is the art? Social Studies of Science, 30(3), 447-464.

Wajcman, J. (2004). Technofeminism. Cambridge: Polity.

Walby, S. (1986). Patriarchy at work. Cambridge: Polity.
TERMS AND DEFINITIONS
Haraway, D. (1991). A cyborg manifesto: Science, technology and socialist-feminism in the late twentieth century. In D. Haraway, Simians, cyborgs and women: The reinvention of nature (pp. 149-181). New York: Routledge.
Differential Access: Access to information technologies occurs differently according to categories of stratification such as gender, age, ethnicity, and race.
Haraway, D. (1997). A manifesto for cyborgs. Science technology and socialist feminism in the eighties. In S. Kemp & J. Squires (Eds.), Feminisms. Oxford, UK: Oxford University Press.
Digital Divide: A metaphor for social exclusion relevant to ICT access and use or, specifically, exclusion from infrastructure and services to which people have rights as citizens in a democratic, information-enabled society.
Harding, S. (1991). Whose science? Whose knowledge? Thinking from women's lives. Milton Keynes: Open University Press.

Hewitson, G.J. (1999). Feminist economics: Interrogating the masculinity of rational economic man. Cheltenham, UK: Edward Elgar.

Hooper, C. (2001). Manly states: Masculinities, international relations and gender politics. New York: Columbia University Press.

Jackson, S. (1998). Theorising gender and sexuality. In S. Jackson & J. Jones (Eds.), Contemporary feminist theories (pp. 131-146). New York: New York University Press.

Martin, P.Y. (2003). "Said and done" versus "saying and doing": Gendering practices, practicing gender at work. Gender and Society, 17(3), 342-366.

Meyer, M. K., & Prugl, E. (Eds.). (1999). Gender politics in global governance. MD: Rowman and Littlefield.
Eco-Feminism: Feminist theories that explore the interrelationships between women and nature, the interrelationships between men and nature, and the gendered dynamics between women and men in relation to nature and technology.

Gender Parity: Equality among women and men.

Public Information Technology: Information available online to the public; a term used by governments to refer to online and universally accessible information.

Social Capital: Intangible value aggregated from social and organizational networks which can result in social and economic benefits to individuals and communities.

Universal Access: The principle that all citizens in information-enabled societies will have access to information and communication systems.
Chapter XXXIII
Technology Diffusion in Public Administration Eugene J. Akers Center for Advanced Technologies-Auburn Montgomery, USA
INTRODUCTION

The ability to understand the salient aspects of innovations, as perceived by the members of a social system, is essential to the success of planned change. The diffusion of information technology in the public sector provides the opportunity to assess the appropriateness of diffusion theory in a combined context of information technology and public policy innovation. Past studies support the salience of diffusion theory for the adoption of information technology (Attewell, 1992; Brancheau & Wetherbe, 1990; Chau & Tam, 1997; Cooper & Zmud, 1990; Damanpour, 1991; Fichman, 1992; Swanson, 1994; Tornatzky & Fleischer, 1990). Other studies suggest that existing theory in public policy adoption adequately provides a framework to guide research in technology adoption in the public sector (Akers, 2006; Berman & Martin, 1992; Berry, 1994; Berry & Berry, 1990; Glick & Hays, 1991; Gray, 1973; Hays, 1996; Hwang & Gray, 1991; Mintrom, 1997; Rogers, 1962; True & Mintrom, 2001; Walker, 1969; Welch & Thompson, 1980). However, there is little research that combines both frameworks for understanding the adoption of information technology in public organizations or within political subdivisions. Using classical diffusion theory, information technology adoption, and public policy adoption
theory, there is sufficient contextual relevance of these theories to guide research in the adoption of public information technology in public organizations and political subdivisions.
Background

Everett M. Rogers (1962) was the first to outline the terminology and concepts of diffusion theory, drawing on work from many different disciplines. Rogers (1995) defines diffusion as "the process by which an innovation is communicated through certain channels over time among the members of a social system" (p. 5). A review of diffusion theory finds three common empirical regularities associated with the diffusion of innovations that frame a visual understanding of the theory. First, studies of the diffusion of innovations show that the cumulative adoption time path, or temporal pattern of the diffusion process, when plotted takes the general shape of an S-shaped curve (Brown & Cox, 1971, p. 551; Rogers, 1995; Tarde, 1962). Another familiar graphical representation of the diffusion process is a spatial sequence. Spatial representation recognizes that new adoption is highest in the vicinity of an earlier adoption and decreases with
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
distance. This is often referred to as the "neighboring effect" (Brown & Cox, 1971; Hägerstrand, 1967; Klingman, 1980). Finally, there may be a tendency for more important places to adopt earlier than less important places, creating a hierarchy effect (Brown & Cox, 1971; Leichter, 1983; Rogers, 1962; Walker, 1969). Rogers (1995) identified four critical elements in the analysis of innovation diffusion: the innovation, its communication from one individual to another, the social system, and time (p. 11). Several studies applied diffusion theory specifically to organizations as a social system (Becker & Whisler, 1967; Downs & Mohr, 1976; March & Simon, 1993; Menzel & Feller, 1977; Zaltman, Duncan, & Holbek, 1973). These four elements provide the basic components for most diffusion studies. Rogers (1995) defined innovation as "an idea, practice, or object that is perceived as new by an individual or other unit of adoption" (p. 11). The communication system carries communication from one individual to another, or from one social system to another. The purpose of this communication is to share ideas and reach some form of convergence in order to effect a specific change, and it may be viewed as bidirectional. The domain of the diffusion process is bounded within some social system. A social system is defined as "a population of individuals who are functionally differentiated and engaged in collective problem-solving behavior" (Rogers, 1962, p. 14). The characteristics of a social system and an organization are generally interchangeable, depending on the unit of analysis. The final critical element of the diffusion process is time. The length of the diffusion process is measured from the date the first individual becomes aware of the innovation until adoption reaches a saturation point in a given social system.
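The S-shaped cumulative adoption curve described above can be illustrated with a short simulation. The sketch below uses a discrete Bass-style diffusion model, a standard formalization of this empirical regularity, though not one the chapter itself cites; the parameter values (p, q, population size, number of periods) are illustrative assumptions rather than estimates from the diffusion literature.

```python
# Illustrative sketch: the S-shaped cumulative adoption path noted in
# classical diffusion theory, simulated with a discrete Bass-style model.
# p (external influence) and q (imitation) are illustrative assumptions.

def simulate_adoption(p=0.03, q=0.38, population=1000, periods=30):
    """Return the cumulative number of adopters at the end of each period."""
    cumulative = 0.0
    path = []
    for _ in range(periods):
        remaining = population - cumulative
        # The hazard of adopting rises with the installed base
        # (the imitation, or "neighboring," effect).
        new_adopters = (p + q * cumulative / population) * remaining
        cumulative += new_adopters
        path.append(cumulative)
    return path

path = simulate_adoption()
# Early growth is slow, the middle periods are steep, and the curve
# flattens as adoption nears saturation -- the classic S shape.
print([round(x) for x in path])
```

Plotting the returned path against time reproduces the temporal pattern Rogers describes: per-period adoptions peak in the middle of the process, so the cumulative curve is S-shaped.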
Public Sector Adoption of Information Technology

The salience of classical diffusion theory as a framework to study public policy adoption emerged in the field of public administration with
the publication of Jack Walker's (1969) research, "The Diffusion of Innovations among the American States." Walker believed there were other important factors, besides the generally accepted expenditure model, that determined policy outcomes. His research provided the framework for public policy adoption studies over the next several decades and offered the initial definition of a public policy innovation "as a program or policy which is new to the states adopting it, no matter how old the program may be or how many other states may have adopted it" (Walker, 1969, p. 881). The focus of Walker's analysis was the adoption process for new ideas and new services within a political subdivision. Subsequent research identified three prominent models of public policy adoption (Berry & Berry, 1990; Collier & Messick, 1975; Daniels & Darcy, 1985; Eyestone, 1977; Foster, 1978; Mooney, 2001; Walker, 1969). The determinants model examined the demographic, economic, and political factors of the governmental subdivision or organization. The regionalism model focused attention on the "inter-governmental context," or the horizontal relationships among the states, as the principal influence regulating the speed and patterns of adoption. The federalism model noted the effect of federal stimulation on the rate of public policy adoption. The determinants of public policy adoption are generally divided into two broad categories: socio-economic and political. Past studies show that socio-economic variables (i.e., wealth, education, urbanization, minority diversity, and governmental slack resources) and political determinants (i.e., legislative professionalism, executive leadership, government ideology, unified party control, policy entrepreneurs, policy networks, and administrative professionalism) have a significant impact on public policy adoption.
Walker was the first to show that a state's general tendency toward public policy adoption can itself be an important determinant. Subsequent research supports this finding, suggesting that a state's tendency toward public policy innovation functions as an intervening variable that reflects broad socio-economic and political determinants (Akers, 2006; Berman & Martin, 1992).
The rapid emergence of management information systems and computer technology in the late 1970s and early 1980s brought a significant interest in understanding how new technology is adopted throughout an organization. The term information technology is broadly defined as "any artifact whose underlying technological base is comprised of computer or communications hardware and software" (Cooper & Zmud, 1990, p. 123; Rogers, 1995, pp. 12-14). Cooper and Zmud (1990) further define technological diffusion of information technology as "an organizational effort directed toward diffusing appropriate information technology within a user community" (p. 124). Information technology diffusion research focuses on contextual factors similar to those of existing diffusion theory: characteristics of the adopter community (external context), characteristics of the technology (technological context), and characteristics of the organization (internal context). Additionally, the characteristics of the tasks to which the technology is being applied affect the rate of adoption (Chau & Tam, 1997; Cooper & Zmud, 1990; Stoneman & Diederen, 1994; Tornatzky & Fleischer, 1990). Critical Issues in Information Systems Research was one of the earlier efforts to understand the adoption of information technology in an organizational and social context (Boland & Hirschheim, 1987). Part II of the book focuses on the failure of organizational theory to explain the adoption of information systems and their effect on the organization. An early typology of information technology diffusion defines three different research units of analysis (the individual, the organization, and the market) and their associated determinants (Swanson, 1987, p. 183). The study of information technology adoption must recognize the characteristics of the technology.
Swanson (1994) posited three different types of IT innovations: Type I innovations confined to the IS function, Type II innovations supporting organizational administrative tasks, and Type III innovations embedded in the core technologies of the organization. Table 1 presents a partial summary of the recognized determinants associated with the rate
of adoption from the classical diffusion perspective (Brown & Cox, 1971; Hägerstrand, 1967; Rogers, 1962; Zaltman et al., 1973), the public policy adoption perspective (Akers, 2006; Berman & Martin, 1992; Berry & Berry, 1990; Cannon & Baum, 1981; Collier & Messick, 1975; Daniels & Darcy, 1985; Feiock & West, 1993; Glick & Hays, 1991; Gray, 1973; Hays & Glick, 1997; McNeal, Tolbert, Mossberger, & Dotterweich, 2003; Menzel & Feller, 1977; Mintrom, 1997; Ringquist & Garand, 1999; True & Mintrom, 2001; Walker, 1969; Welch & Thompson, 1980; Winder & LaPlant, 2000), and the information technology adoption perspective (Attewell, 1992; Damanpour, 1991; Fichman, 1999; Kwon & Zmud, 1987; Larsen & McGuire, 1998; Swanson, 1994; Tornatzky & Fleischer, 1990), highlighting their common determinants. The technology perspective reflects characteristics of the organization drawn from more recent organizational theory; it also reflects characteristics of the technology found in the classical perspective but lacking in the public policy innovation perspective. Table 1 further lists five attributes of the innovation identified by Rogers (1995) that are not included in public policy diffusion and technology diffusion studies.
Future Trends

As information technologies become more pervasive within public organizations, the need for more accountability will continue to evolve. Public managers will be asked to design, evaluate, and implement the comprehensive management information systems and technologies necessary for the success of the agency's mission. As new models of government evolve, information technology will facilitate the level of agency collaboration and sharing of services necessary for public organizations to address today's more complex issues (Fountain, 2001; Goldsmith & Eggers, 2004). Public managers will need to understand not only the technologies, but also their impact on the social structure of the organization. A review of existing theories in the adoption and implementation of technology in the public sector has generally
been limited to technology within a single political entity (state or local government) or public organization. Extant theory in technological determinism, rational actor models, incrementalism, technology enactment, sociotechnical systems theory, and systems analysis has provided the basic framework of research (Fountain, 2001; Grafton, 2003; Norris, 2003). Two deficiencies exist in these frameworks for future research. First, all of these frameworks primarily represent a view from the social sciences perspective without significant consideration of the characteristics of the technology or the technology architecture of the organization. More importantly, the future trend of collaborative systems will require a focus on the diffusion of technologies among a cluster of different organizations with different missions and cultures.

Table 1. Summary of diffusion determinants from three perspectives

Classical Innovation Determinants:
I. Perceived Attributes of Innovation: Relative Advantage; Compatibility; Complexity; Trialability; Observability
II. Type of Innovation-Process: Voluntary; Collective; Mandated
III. Communication Channels: Mass Media; Interpersonal
IV. Nature of the Social System: Norms; Degree of Network Communications
V. Change Agents' Efforts

Public Policy Innovation Determinants:
I. Characteristics of the Social System: Socio-economic; Political
II. Federalism
III. Regionalism (Spatial)
IV. Innovative Tendency
V. Type of Policy Innovation: Moral; Administrative; Economic

Technology Determinants:
I. Contextual Factors: User Community Characteristics; Organizational Characteristics; Technology Characteristics; Task Characteristics; Environmental Factors
II. Dimensions within Organization: Spatial; Occupational; Hierarchical; Functional
III. Type of Innovation: Administrative; Technical; Product; Process
IV. Type of IT Innovation: Type I (confined to IS function); Type II (supports organizational administrative tasks); Type III (embedded in core technologies of the organization)

The cross-organizational diffusion
of technology will require a new framework in organizational theory suited to a more complex environment. New integrated services that require the sharing of information and the coordination of efforts will need a new research framework that combines all three perspectives and takes into consideration enterprise architecture standards and data standards.
Conclusion

Research shows a general lack of journal articles and textbooks that focus on information technology in public administration (Northrop, 2003). While several extant theoretical frameworks examine the field of information technology and government, these frameworks
are grounded in the traditional hierarchical government bureaucracy, which is ill suited to address problems that transcend organizational boundaries. Technology will become the driving force behind the movement to create public value through the network of multi-organizational, multi-governmental, and multi-sectoral relationships characterizing modern government. These changes will be so different and so disruptive as to create a level of "organizational angst." The challenge for public managers will be the implementation of information technology within this environment. New analytical frameworks will be needed that take into consideration the interaction of multiple missions, cultures, and technological environments. Further research is needed to define a more robust framework that will be salient to this new environment.
Future Research Directions

Public policy adoption theory rests on the proposition that if causal determinants can be identified, then outcomes can be satisfactorily explained. Research has shown the validity of non-monetary determinants in understanding the adoption of public policy (Akers, 2006; Berman & Martin, 1992; Savage, 1978; Walker, 1969). While the use of non-monetary determinants in policy adoption research has been limited, there is sufficient evidence to warrant further study of similar determinants. Since Savage's (1978) review and update of Walker's innovativeness index, there has been no further research to update or test the relevance of a state's general innovative tendency as it relates to the adoption of public policy. Future research should focus on updating Walker's policy innovativeness index and subsequently testing its relevance in causal analyses of policy adoption. A similar non-monetary index might also be of value in understanding a public organization's tendency toward technology adoption. For example, the construct of a state technology innovativeness index would allow
further testing of causal models of government adoption of technology. As a broad composite of many different socio-economic and political determinants in the state, a technology innovativeness index could be used to further understand a state's tendency toward the adoption of technology. Further research on the adoption of technology in the public sector should also focus on understanding the dynamics of social change within an organization and across multiple concurrent organizations. Research on the adoption of technology by collaborating agencies is unique to the public sector: private organizations function in a hierarchical environment where technology is generally driven top-down through the supply chain, whereas public organizations are generally more equal and must function in a cooperative manner to effectively adopt new technology. Further research may provide an opportunity to understand the adoption of technology within government organizations from a post-positivist perspective rather than through causal models alone.
References

Akers, E. J. (2006). A study of the adoption of digital government technology as public policy in the American states (Doctoral dissertation, Auburn University Montgomery, 2006). Dissertation Abstracts International, 67, 06A.
Attewell, P. (1992). Technology diffusion and organizational learning: The case of business computing. Organization Science, 3(1), 1-19.
Becker, S., & Whisler, T. L. (1967). The innovative organization: A selective view of current theory and research. Journal of Business, 40(4), 462-469.
Berman, D. R., & Martin, L. L. (1992). The new approach to economic development: An analysis of innovativeness in the state. Policy Studies Journal, 20(1), 10-21.
Berry, F. S. (1994). Innovation in public management. Public Administration Review, 54(4), 322-330.
Berry, F. S., & Berry, W. (1990). State lottery adoptions as policy innovations: An event history analysis. American Political Science Review, 84(June), 395-415.
Boland, R. J., & Hirschheim, R. A. (Eds.). (1987). Critical issues in information systems research. New York: John Wiley & Sons.
Brancheau, J. C., & Wetherbe, J. C. (1990). The adoption of spreadsheet software: Testing innovation diffusion theory in the context of end-user computing. Information Systems Research, 1(2), 115-143.
Brown, L. A., & Cox, K. (1971). Empirical regularities in the diffusion of innovation. Annals of the Association of American Geographers, 61(3), 551-559.
Cannon, B. C., & Baum, L. (1981). Patterns of adoption of tort law innovations: An application of diffusion theory to judicial doctrine. American Political Science Review, 75, 975-987.
Eyestone, R. (1977). Confusion, diffusion, and innovation. American Political Science Review, 71, 441-447.
Feiock, R. C., & West, J. P. (1993). Testing competing explanations for policy adoption: Municipal solid waste recycling program. Political Research Quarterly, 46(2), 399-419.
Fichman, R. G. (1992). Information technology diffusion: A review of empirical research. Paper presented at the International Conference on Information Systems (ICIS).
Fichman, R. G. (1999). The diffusion and assimilation of information technology innovation. In R. W. Zmud (Ed.), Framing the domains of IT management: Projecting the future through the past. Cincinnati, OH: Pinnaflex Educational Resources, Inc.
Foster, J. L. (1978). Regionalism and innovation in the American states. Journal of Politics, 41(1), 179-187.
Chau, P. Y. K., & Tam, K. Y. (1997). Factors affecting the adoption of open systems: An exploratory study. MIS Quarterly, 21(1), 1-24.
Fountain, J. E. (2001). Building the virtual state: Information technology and institutional change. Washington, DC: Brookings Institution.
Collier, D., & Messick, R. E. (1975). Prerequisites versus diffusion: Testing alternative explanations of social security adoption. American Political Science Review, 69(4), 1299-1315.
Glick, H. R., & Hays, S. P. (1991). Innovation and reinvention in state policymaking: Theory and the evolution of living will laws. Journal of Politics, 53(3), 835-851.
Cooper, R. B., & Zmud, R. W. (1990). Information technology implementation research: A technological diffusion approach. Management Science, 36(2), 123-139.
Goldsmith, S., & Eggers, W. D. (2004). Governing by network - The new shape of the public sector. Washington, DC: Brookings Institution Press.
Damanpour, F. (1991). Organizational innovation: A meta-analysis of effects of determinants and moderators. The Academy of Management Journal, 34(3), 555-590.
Daniels, M. R., & Darcy, R. E. (1985). As time goes by: The arrested diffusion of the equal rights amendment. Publius, 15, 51-60.
Downs, G. W., & Mohr, L. B. (1976). Conceptual issues in the study of innovation. Administrative Science Quarterly, 21(4), 700-714.
Grafton, C. (2003). "Shadow theories" in Fountain's theory of technology enactment. Social Science Computer Review, 21(4), 411-416.
Gray, V. (1973). Innovation in the states: A diffusion study. American Political Science Review, 67, 1174-1185.
Hägerstrand, T. (1967). Innovation diffusion as a spatial process. Chicago: University of Chicago Press.
Hays, S. P. (1996). Policy reinvention and the diffusion of public campaign funding laws. Spectrum: Journal of State Government, 96(2), 23-32.
Norris, D. F. (2003). Building the virtual state ... or not? Social Science Computer Review, 21(4), 417-424.
Hays, S. P., & Glick, H. R. (1997). The role of agenda setting in policy innovation. American Politics Quarterly, 25(4), 497-517.
Northrop, A. (2003). Information technology and public administration: The view from the profession. In G. D. Garson (Ed.), Public information technology: Policy and management issues (pp. 1-19). Hershey, PA: Idea Group Publishing.
Hwang, S.-D., & Gray, V. (1991). External limits and internal determinants of state public policy. Western Political Quarterly, 44(2), 277-298.
Klingman, D. (1980). Temporal and spatial diffusion in the comparative analysis of social change. American Political Science Review, 74(March), 123-137.
Kwon, T. H., & Zmud, R. W. (1987). Unifying the fragmented models of information systems implementation. In R. Boland & R. A. Hirschheim (Eds.), Critical issues in information systems research. New York: Wiley.
Larsen, T. J., & McGuire, E. (1998). Information systems innovation and diffusion: Issues and directions. Hershey, PA: Idea Group.
Leichter, H. M. (1983). The patterns and origins of policy diffusion: The case of the commonwealth. Comparative Politics, 15(2), 223-233.
March, J. G., & Simon, H. (1993). Organizations (2nd ed.). Cambridge, MA: Blackwell Business.
McNeal, R. S., Tolbert, C. J., Mossberger, K., & Dotterweich, L. J. (2003). Innovating in digital government in the American states. Social Science Quarterly, 84(1), 52-70.
Menzel, D. C., & Feller, I. (1977). Leadership and interaction patterns in the diffusion of innovations among the American states. Western Political Quarterly, 30(4), 528-536.
Ringquist, E. J., & Garand, J. C. (1999). Policy change in American states. In R. E. Weber & P. Brace (Eds.), American state and local politics: Directions for the 21st century (pp. 268-299). New York: Chatham House Publishers.
Rogers, E. M. (1962). Diffusion of innovations (1st ed.). New York: Free Press.
Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: Free Press.
Savage, R. L. (1978). Policy innovativeness as a trait of American states. Journal of Politics, 40(1), 212-224.
Stoneman, P., & Diederen, P. (1994). Technology diffusion and public policy. The Economic Journal, 104(425), 918-930.
Swanson, E. B. (1987). Information systems in organization theory: A review. In R. Boland & R. A. Hirschheim (Eds.), Critical issues in information systems research (pp. 181-204). New York: John Wiley & Sons.
Swanson, E. B. (1994). Information systems innovation among organizations. Management Science, 40(9), 1069-1092.
Tarde, G. d. (1962). The laws of imitation (E. C. Parsons, Trans.). New York: H. Holt and Company.
Mintrom, M. (1997). Policy entrepreneurs and the diffusion of innovation. American Journal of Political Science, 41(3), 738-770.
Tornatzky, L. G., & Fleischer, M. (1990). The process of technological innovation. Lexington, MA: Lexington Books.
Mooney, C. Z. (2001). Modeling regional effects on state policy diffusion. Political Research Quarterly, 54(1), 103-124.
True, J., & Mintrom, M. (2001). Transnational networks and policy diffusion: The case of gender mainstreaming. International Studies Quarterly, 45, 27-57.
Walker, J. L. (1969). Diffusion of public policy innovation among the American states (ICPSR ed.). Ann Arbor, MI: Inter-University Consortium for Political and Social Research.
Welch, S., & Thompson, K. (1980). The impact of federal incentives on state policy innovation. American Journal of Political Science, 24(4), 715-729.
Winder, D. W., & LaPlant, J. T. (2000). State lawsuits against "Big Tobacco": A test of diffusion theory. State and Local Government Review, 32(2), 132-141.
Zaltman, G., Duncan, R., & Holbek, J. (1973). Innovations and organizations. New York: Wiley-Interscience.
Further Reading

Balla, S. J. (2001). Interstate professional association and the diffusion of policy innovations. American Politics Research, 29(3), 221-245.
Berry, F. S., & Berry, W. (1992). Tax innovation in the states: Capitalizing on political opportunity. American Journal of Political Science, 36(3), 715-742.
Berry, F. S., & Berry, W. (1994). The politics of tax increases in the states. American Journal of Political Science, 38(3), 855-859.
Brown, L. A. (1981). Innovation diffusion: A new perspective. New York: Methuen.
Daley, D. M., & Garand, J. C. (2002, April 22-25). Determinants of state hazardous waste programs: A pooled cross-sectional time series analysis. Paper presented at the Midwest Political Science Association, Chicago.
Dolowitz, D., & Marsh, D. (1996). Who learns what from whom: A review of the policy transfer literature. Political Studies, XLIV, 343-357.
Downs, G. W., Jr. (1976). Bureaucracy, innovation, and public policy. Lexington, MA: Lexington Books.
Downs, G. W., & Mohr, L. B. (1979). Toward a theory of innovation. Administration and Society, 10, 379-408.
Dye, T. R., & Robey, J. S. (1980). "Politics versus economics": Development of the literature on policy determination. In T. Dye & V. Gray (Eds.), The determinants of public policy (p. 227). Lexington, MA: Lexington Books.
Fliegel, F., & Kivlin, J. E. (1966). Attributes of innovations as factors in diffusion. American Journal of Sociology, 72(3), 235-248.
Jensen, J. L. (2004). A multipopulation comparison of the diffusion of public organizations and policies across space and time. Policy Studies Journal, 32(1), 109-127.
Katz, E., Levin, M. L., & Hamilton, H. (1963). Traditions of research in the diffusion of innovations. American Sociological Review, 28(2), 237-252.
Burke, B. F., & Wright, D. S. (2002). Reassessing and reconciling reinvention in the American states: Exploring state administrative performance. State and Local Government Review, 34(1), 7-19.
Kellough, J. E., & Selden, S. C. (2003). The reinvention of public personnel administration: An analysis of the diffusion of personnel management reforms in the states. Public Administration Review, 63(2), 165-176.
Clark, J. (1985). Policy diffusion and program scope: Research directions. Publius, 15, 61-70.
Keynes, J. M. (1972). Essays in persuasion. London: Macmillan Press.
Cnudde, C. E., & McCrone, D. J. (1969). Party competition and welfare policies in the American states. American Political Science Review, 63(3), 858-866.
Klass, G. M. (1980). The determination of policy and politics in the 1948-1974. In V. Gray & T. R. Dye (Eds.), The determinants of public policy (p. 227). Lexington, MA: Lexington Books.
Light, P. C. (1998). Sustaining innovation: Creating nonprofit and government organizations that innovate naturally. San Francisco: Jossey-Bass.
Mahajan, V., & Peterson, R. A. (1985). Models for innovation diffusion. Newbury Park, CA: Sage Publications.
Mintrom, M. (1997b). The state-local nexus in policy innovation diffusion: The case of school choice. Publius, 27, 41-59.
Mintrom, M. (2000). Policy entrepreneurs and school choice. Washington, DC: Georgetown University Press.
Mintrom, M., & Vergari, S. (1998). Policy networks and innovation diffusion: The case of state education reform. Journal of Politics, 60(1), 126-148.
Mohr, L. B. (1969). Determinants of innovation in organizations. American Political Science Review, 63(March), 111-126.
Mossberger, K., & Hale, K. (2002). "Polydiffusion" in intergovernmental programs: Information diffusion in the school-to-work network. American Review of Public Administration, 32(4), 398-422.
Purcell, C. (2004). Carolyn Purcell on incremental transformation. In J. D. Williams, R. Stapilus, & L. Watkins (Eds.), 21st century government: Digital promise, digital reality (pp. 70-81). Boise, ID: Ridenbaugh Press.
Rogers, E. M., & Shoemaker, F. F. (1971). Communication of innovations: A cross-cultural approach (2nd ed.). New York: The Free Press.
Rose, R. (1993). Lesson-drawing in public policy. Chatham, NJ: Chatham House Publishers, Inc.
Savage, R. L. (1985). Diffusion research traditions and the spread of policy innovations in a federal system. Publius, 15, 27.
Schneider, M., Teske, P., & Mintrom, M. (1995). Public entrepreneurs: Agents for change in American government. Princeton, NJ: Princeton University Press.
Seneviratne, S. J. (1999). Information technology and organizational change in the public sector. In G. D. Garson (Ed.), Information technology and computer applications in public administration: Issues and trends (p. 304). Hershey, PA: Idea Group Publishing.
Sigelman, L., Roeder, P. W., & Sigelman, C. (1981). Social service innovation in the American states. Social Science Quarterly, 62, 503-515.
Walker, J. L. (1973). Problems in research on the diffusion of policy innovations. American Political Science Review, 67(4), 1186-1191.
Zaltman, G., & Duncan, R. (1977). Strategies for planned change. New York: John Wiley & Sons.
Key Terms

Determinants Model of Policy Adoption: Recognizes that the demographic, economic, and political factors of a political jurisdiction affect its rate of public policy adoption.

Diffusion Process: The adoption over time of an innovation by individuals linked by specific channels of communication to a social structure with a given system of values or culture.

Federalism Model of Policy Adoption: Recognizes that the adoption of public policy in a federal system proceeds both horizontally (that is, state to state) and vertically (that is, federal to state or state to local), and that federal mandates and funding have a positive impact on the rate of public policy adoption.

Innovation: An idea, practice, or object that is perceived as new by an individual or other unit of adoption.

Innovation-Decision Process: The mental process through which an individual passes from first knowledge of an innovation, to forming an attitude toward the innovation, to a final decision to adopt or reject it, to implementation and use of the new idea, and to confirmation of this decision.

Innovativeness: The degree to which an individual or other unit of adoption is relatively earlier in adopting new ideas than the other members of a social system.
Regionalism Model of Policy Adoption: Recognizes that processes of competition and emulation affect the pace and direction of the adoption of social and political change in the American states.

Technological Determinism: Emphasizes the belief that technology causes changes in society.
Chapter XXXIV
Institutional Theory and E-Government Research

Shahidul Hassan
University at Albany, SUNY, USA

J. Ramon Gil-Garcia
Centro de Investigación y Docencia Económicas, Mexico
Introduction

Recent developments in institutional theory are highly promising for the study of e-government. Scholars in various disciplines, such as economics (North, 1999; Rutherford, 1999), sociology (Brinton & Nee, 1998), and political science (March & Olsen, 1989; Peters, 2001), have used institutional approaches to understand diverse social and organizational phenomena. Insights gained from these studies can be valuable for guiding research in e-government. In fact, there are some initial efforts in information systems and e-government research that have applied institutional theory and proved useful in generating new insights about how information technologies are adopted (Teo, Wei, & Benbasat, 2003; Tingling & Parent, 2002), designed and developed (Butler, 2003; Klein, 2000; Laudon, 1985), implemented (Robey & Holmstrom, 2001), and used (Fountain, 2001) in organizations. In this chapter, we provide a brief overview of some of these initial studies to highlight the usefulness of institutional theory in e-government research. We also suggest some
opportunities for future research in e-government using institutional theory. This chapter does not capture all the essential theoretical and empirical issues related to using institutional theory in information systems and e-government research. Instead, it is a brief review and a good starting point to explore the potential of institutional theory. We hope that e-government scholars find it interesting and useful. The chapter is organized in five sections, including this introduction. The second section provides a brief overview of institutional theory in various disciplinary traditions, with an emphasis on institutional theory in sociology. Then the chapter identifies various patterns of the use of institutional theory in information systems and e-government research. Based on our analysis of the current state of the art, the fourth section suggests some opportunities for future research. Finally, the fifth section provides some final comments.
Institutional Theory and E-Government Research
Background: An Overview of Institutional Theory

Defining Institutions

Institutions can be conceptualized as guidelines for human action, or the "logic of appropriateness" that governs behavior in society (March & Olsen, 1984). Jepperson (1991) defines institutions as "those social patterns that, when chronically reproduced, owe their survival to relatively self-activating social processes." People tend to believe that there is a fundamental rationale for these patterns' existence. Because institutions are historically produced, their legitimacy is often not questioned. Accordingly, they are taken-for-granted, standardized sequences of activities, and institutionalization is the process through which these activities come to be seen as objective or taken for granted (Zucker, 1977). Institutions consist of cultural-cognitive, regulative, and normative elements (Scott, 2001). Together with associated resources, they provide stability and meaning to social life. They are transmitted over time and across space through various symbolic and relational systems, routines, procedures, and artifacts (Scott). They operate at multiple levels, from global to national and local contexts. Even though institutions suggest social stability and persistence, they are subject to both incremental and radical change and discontinuity (Scott).
Institutional Theory in Various Disciplines

Institutional theory has developed over the years in multiple disciplines, including economics, history, political science, and sociology, and within each of these disciplines it has its own distinct flavors. However, institutional scholars in all disciplines share a common skepticism toward the rational-actor model that has dominated the study of human behavior in the social sciences over the last 50 years. In addition, they all agree that institutions and the social processes surrounding their evolution, maintenance, and transformation
matter in the fashioning of social life (Scott, 2001).
Institutionalism in Economics

Institutional theory in economics builds on microeconomic theory: individuals maximize their utilities over stable and consistent preferences, but do so under cognitive limits (Simon, 1945), incomplete information, and difficulties in monitoring and enforcing contractual agreements (Williamson, 1975, 1985). Institutions, in this perspective, come into existence because they offer benefits greater than the transaction costs (Coase, 1937) incurred in creating and sustaining them. More specifically, institutions help reduce the uncertainty associated with asymmetric information, bounded rationality, and the opportunistic behavior of others by providing a set of stable and consistent legal frameworks for economic exchange. Therefore, institutions can be seen as governance structures and rules (North, 1999) that facilitate the smooth functioning of markets by minimizing transaction costs.
Institutionalism in Political Science

Institutional theory in political science is mainly concerned with political decision-making processes, especially the ways in which political institutions shape political outcomes (DiMaggio & Powell, 1991). Rational-choice institutionalists consider political institutions critical for accounting for the unpredictable and paradoxical decisions that can result from majority voting systems (DiMaggio & Powell). Complementing institutional theory in economics, these scholars argue that political institutions reduce the uncertainty associated with political exchange and opportunistic behavior among politicians, and enhance the prospects of political cooperation by creating institutional checks and balances (Moe, 1984). Both formal and informal political institutions and mechanisms, in this perspective, are critical for the stability and continuation of political processes. In contrast, historical and normative institutionalism in political science
focuses on the evolution of institutional forms and emphasizes the norms of institutions, or "the logic of appropriateness" (March & Olsen, 1984, 1989), as a means of understanding human behavior. Unlike rational-choice theorists, who see individual preferences as stable properties of actors, these scholars consider preferences more problematic: emergent from situations and context specific (Scott, 2001).
Institutionalism in Sociology

From the 1950s to the 1960s

Institutional theory in sociology is mainly concerned with how social structures come to be seen as institutions in the private and public spheres of social life. Early writers such as Philip Selznick (1949, 1957) focused on how formal organizations become institutionalized when they are "infused with values beyond the technical requirements of the task at hand" (Selznick, 1957). He viewed organizations as rational but expendable tools. However, as organizations are influenced over time by the norms and values of their members as well as by the constraints of their external environments, they develop characters of their own that may not be congruent with their formal goals. Selznick thought that organizations with precisely defined goals and strong technological bases are less likely to be institutionalized than those with ambiguous goals and weak technological bases.
From the 1970s to the 1980s

The new institutional theory in sociology draws heavily on ethnomethodology (Garfinkel, 1967) and phenomenology (Berger & Luckmann, 1967) for its conceptualization of institutions. Berger and Luckmann define institutions as symbolic systems that are "experienced as possessing a reality of their own, a reality that confronts the individual as an external and coercive fact" (p. 58). Meyer and Rowan (1977), in their seminal article on organizations, argued that formal organizational structures and operating procedures have both symbolic and action-generating properties.
Organizations are shaped by a set of "rationalized myths" or "shared belief systems" that normatively specify how organizational goals can be attained. Organizations tend to adopt these procedures not because they improve performance, but because they provide legitimacy (Meyer & Rowan). Later, DiMaggio and Powell (1983) and Meyer and Scott (1992) expanded these initial ideas and developed the macrofoundations of institutional theory. DiMaggio and Powell identified three mechanisms—coercive, mimetic, and normative—through which institutional effects are diffused in various organizational fields. They argued that early adopters of organizational innovations are driven by organizational performance; however, as innovations spread, they are no longer adopted because of technical merits but because they provide legitimacy. This means that in the short run, organizations may try to differentiate themselves from others, but in the long run this effect is lessened by their aggregate effort to minimize uncertainty in their organizational field (DiMaggio & Powell). DiMaggio and Powell used the term isomorphism to capture this process of homogenization. In addition, Meyer and Scott suggested that organizations face both technical and institutional pressures, but are more strongly influenced by one or the other. Thus, the institutionalization of certain organizational practices is shaped by both competitive and institutional pressures (Scott, 2001). Around the same time, Zucker (1983, 1991) advanced the microfoundations of institutional theory. Drawing directly on the work of Berger and Luckmann, she presented three sequential processes of institutionalization: habitualization, objectification, and sedimentation.
Habitualization refers to developing patterned problem-solving behaviors in response to particular problem situations; objectification means developing a shared understanding about those patterned behaviors; and sedimentation is the extent to which these patterned typifications possess a reality of their own, that is, have the taken-for-granted property. This set of three sequential processes suggests variability in levels of institutionalization, meaning some
patterns of social behavior are subject to more critical evaluation and modification than others.
From the 1990s to the Present

Contemporary works in institutional theory have sought a more comprehensive framework for studying the effects of institutions on organizations. For example, Scott (2001) provides an analytic framework comprising cultural-cognitive, normative, and regulative elements of institutions. Drawing on Giddens's (1979, 1984) structuration theory, he notes that institutions, together with associated resources, provide stability and meaning to social life. Resources are critical for the conceptualization of social structures because they are inherently related to asymmetries of power: "Rules and norms, if they are to be effective, must be backed with sanctioning of power" (Scott). The levels of analysis have also expanded in more recent works of institutional theory. It is acknowledged that organizations operate in multiple institutional environments (Ghoshal, 1993) and thus may face institutional pressures that are contradictory rather than confluent. Additionally, recent works pay more attention to the incompatibilities between cultural-cognitive aspects and power relations in organizations. The relationship between formal organizational structures and the cultural systems they involve may change over time. Power relations and dominant symbolic and cognitive systems may be changed by the reflexive behavior of individuals (Barley, 1986; Giddens, 1984; Orlikowski, 1996, 2000). Through sense making and interpretation, actors within an organization may construct new definitions of the organization's situation and continuously shape and reshape their meaning and value systems (Weick, 1995). Thus, institutional elements in organizations are fluid rather than monolithic and stable (Avgerou, 2002). Institutional transformation, or even discontinuation, can take place when the embedded institutional structures within an organization are discredited, either as a result of competing ones in the environment or because they are seen as failing to
contribute to organizational outcomes (Avgerou; Scott, 2001). Similarly, incongruities among the social, political, and economic conditions in the environment can destabilize an organization's existing institutional arrangements and give rise to alternative ones (Fountain, 2001).
Institutional Theory in IS and E-Government Research

IT initiatives in public-sector organizations involve a complex set of decisions and interactions. These decisions and interactions are influenced as much by existing rules, norms, values, and meaning systems as by the material properties of technology and business imperatives (Swanson & Ramiller, 1997, 2004). Institutional arrangements are therefore important elements for understanding how information technologies are selected, designed, implemented, and used in public organizations (Gil-García, 2005). IT is often touted as a key enabler of administrative reform in public organizations. However, such a technologically deterministic view of IT does not capture the organizational complexity involved in IT-based reform initiatives. In fact, widespread IT project failures in various types of public organizations (Heeks, 1999, 2003) point to the need for a more critical approach to guide e-government studies. An institutional perspective offers e-government researchers a vantage point for conceptualizing IT initiatives in public organizations as complex and emergent phenomena (Orlikowski & Barley, 2001) that are shaped by both technical-rational and institutional issues. A number of information systems and e-government researchers have already used institutional theory to guide their research. Their work covers six interrelated topics: IT adoption and innovation, information systems design and development, institutionalization or assimilation of IT, IT and organizational change, the use and enactment of IT in organizations, and other related issues. We briefly review some of these works below; Table 2 provides a summary.
Table 1. A summary of key points of institutional theory in sociology

OLD INSTITUTIONALISM (1950-1960)
• Institutionalization is a process instilling values in organizations.
• It focuses on how norms and values as well as environmental pressures influence organizational structures.
• Organizations are adaptive social structures.
• Institutions reflect the persistence and stability of, and provide legitimacy to, the organization.
• The unit of analysis is organizational history over time.

NEW INSTITUTIONALISM (1970-1980)
• Institutionalization is a process of constructing reality.
• It focuses on symbolic and taken-for-granted aspects of institutions in organizations.
• Innovations are spread not because they improve efficiency, but because they provide legitimacy.
• Organizations are shaped by both technical and institutional factors.
• The unit of analysis is the organizational field.

CURRENT TRENDS (1990-present)
• Institutions consist of cultural-cognitive, normative, and regulative elements.
• Institutions are both enablers of and constraints on action.
• Organizational actors are knowledgeable and reflexive.
• Institutions need to be conceived together with resources.
• Although institutions suggest stability, they are subject to change and discontinuity.
• Levels of units of analysis cover local to national and global contexts.
IT Adoption and Innovation

IT adoption and innovation are two interrelated areas of research. The former is concerned with identifying the factors that lead organizations to adopt certain technologies, whereas the latter concentrates on understanding the processes through which IT innovations spread in organizations. The adoption of an IT artifact essentially involves making a choice among multiple, competing options. However, the best fit with organizational needs is not always the key criterion in the selection process; institutional factors also play an important role. For example, Teo et al. (2003) sought to identify the factors that prompt organizations to adopt interorganizational financial electronic data interchange (FEDI) systems. They surveyed 222 companies in Singapore and found
that institutional isomorphic pressures—mimetic, normative, and coercive—had a significant influence on organizational decision makers' inclination to adopt FEDI. Similarly, applying institutional theory to decision and cognitive models, Tingling and Parent (2002) conducted a quasi-experiment with 348 senior IT managers to determine the degree to which managers conform to mimetic institutional pressures when selecting between competing technologies. They found that in highly ambiguous and uncertain situations, IT managers selected inferior technologies if they were informed that their competitors had selected those technologies. Swanson and Ramiller (1997, 2004) offer a process-oriented model of IT innovation that is both cognitive and institutional. They suggest that a network of diverse interorganizational communities consisting of technology vendors,
consultants, industry experts, government, prospective adopters, business and trade journalists, and academics takes part in creating and diffusing an "organizing vision" surrounding a particular IT innovation. This organizing vision serves as a cognitive template for organizations to interpret, make sense of, and legitimize the innovation, as well as to mobilize the economic and organizational resources necessary for its deployment. It is conditioned by various institutional forces and continuously shaped and reshaped by the community's discourse over the innovation in meetings, conferences, workshops, popular and trade journals, and the mass media, as well as by IT practitioners' worldviews, the motivating business problematic, and the material and technical properties of the innovation. Recently, Currie (2004) applied Swanson and Ramiller's organizing-vision model to study the emergence and decline of application service provisioning (ASP), an IS innovation through which firms could access their software applications remotely over the Internet. Currie's 4-year longitudinal study shows how powerful institutional forces within the technology sector first facilitated and then constrained the ASP organizing vision.
IT and Organizational Change

The relationship between IT and organizational change is often described in the academic and practitioner literature as technically deterministic. Recently, a number of researchers have used institutional theory to provide alternative explanations. For example, Robey and Holmstrom (2001) examine the social processes surrounding the adoption and implementation of an information system (OLAP) in a municipal government in Umeå, Sweden. Their longitudinal case study illustrates the unpredictable nature of IT-based organizational change in a municipal government. They use organizational politics theory to analyze how IT is promoted by various actors to serve their interests and how these actors try to overcome forces opposing the OLAP system and its social implications. Using an institutional lens, they show how the interaction of multiple
institutional forces operating in the municipal government's local and global contexts leads to contradictory organizational outcomes. Traditionally, institutional theory has been used to illustrate the persistence of social order and stability. However, Robey and Holmstrom argue that forces operating in an organization's institutional context may not always have a harmonizing effect; local forces may oppose global ones. In addition, Avgerou (2000) studied the role of IT in the organizational evolution of Pemex, a Mexican oil company, from a state-owned bureaucracy to a market-oriented corporation. She notes that IT did have an impact on the organizational transformation, but that the gradual institutionalization of IT in Pemex and the deinstitutionalization of its hierarchical organizational structures were largely loosely coupled processes.
Use and Enactment of IT

Researchers focusing on the enactment of IT from the institutional tradition are primarily concerned with how institutional factors shape the way information technologies are used in organizations (Fountain, 2001; Gil-García, 2005; Lamb & Kling, 2003; Puron Cid & Gil-García, 2004). Fountain (1995, 2001) attempted to provide a comprehensive theoretical framework to explain the effects of organizational forms and institutional arrangements on the information technology used by government agencies. She pays attention to the complex relations among IT, organizational structure, and institutions, and notes that a comprehensive understanding of human agency (Giddens, 1984) is required to comprehend how IT is enacted in organizations. To illustrate the role of institutions in the use of IT, she presents three detailed cases: an interorganizational information system (the International Trade Data System), an interagency Web site project (the U.S. Business Advisor), and the maneuver control system of the 9th Infantry Division of the U.S. Army. Furthermore, using concepts from both institutional and structuration theories, Lamb and Kling (2003) present a social-actor model with detailed analysis and empirical evidence of how users' IT-related practices are shaped by interactions with networks
Table 2. Uses of institutional theory in IS and e-government research

IT Adoption & Innovation
• Old institutionalism: King, Gurbaxani, Kraemer, McFarlan, Raman, and Yap (1994)
• New institutionalism: King et al. (1994); Swanson and Ramiller (1997, 2004); Tingling and Parent (2002); Teo et al. (2003); Fountain (2001)
• Current trends: Swanson and Ramiller (1997, 2004); Damsgaard and Scheepers (1999); Currie (2004); Fountain (2001)

IS Development
• Old institutionalism: Laudon (1985); Butler (2003)
• New institutionalism: Necolaou (1999); Butler (2003)
• Current trends: Klein (2000); Butler (2003)

Institutionalization of IS
• Old institutionalism: Avgerou (2000)
• New institutionalism: Chatterjee, Grewal, and Sambamurthy (2002); Alvarez (2001)
• Current trends: Silva and Backhouse (2003)

IT & Organizational Change
• Old institutionalism: Avgerou (2000, 2002)
• New institutionalism: Avgerou (2002); Robey and Holmstrom (2001); Fountain (2001)
• Current trends: Fountain (2001)

IT Use & Enactment
• New institutionalism: Fountain (2001); Lamb and Kling (2003)
• Current trends: Fountain (2001); Lamb and Kling (2003); Lamb, King, and Kling (2003)

Other
• New institutionalism: Wareham (2002)
of professional and organizational affiliations; social, political, and economic institutions; and individual and collective identities.
Future Trends and Research Opportunities

Our brief review of the progression of institutional theory and its application to IS and e-government research indicates an array of research issues that can be studied using institutional approaches. It is also encouraging that IS and e-government researchers are using a variety of research methods, both positivist and interpretive, when applying institutional theory to study how IT is adopted, developed, and used in organizations.
However, very little work has tried to capture institutional impacts on the adoption, development, and use of IT in public organizations at both the micro and macro levels. Future studies should therefore increasingly use institutional theory to understand both micro and macro aspects of the use of IT in government. Institutional approaches are capable of capturing cultural-cognitive aspects as well as the effects of laws and regulations on the relationship between information technologies and organizations, but this explanatory power has not been sufficiently exploited. Some scholars consider that applications of institutional theory to e-government research do not pay adequate attention to the material properties of IT and deal with technology at a very abstract level, focusing on organizational or institutional issues while overlooking the powerful influence of IT on organizations (Garson, 2003). Others have argued that e-government researchers using institutional theory have not fully elaborated or disentangled the role of human agency in the process of institutional change (Yang, 2003). Indeed, current trends in institutional theory assume a reciprocal relation between agency and social structure (Scott, 2001). Institutions have an impact on the selection, adoption, implementation, and use of IT in government, but at the same time IT can affect organizational structures and institutional arrangements. Systematic empirical research trying to capture this reciprocal relation in government settings, however, remains limited. Future e-government research should therefore attempt to understand this complex interplay and disentangle the relationships between these important constructs. Also, most e-government research consists of case studies, with the exception of a few quantitative papers (see, for example, Gil-García, 2005). Institutional studies using multimethod approaches will surely advance our understanding of the e-government phenomenon (Gil-García & Pardo, 2006). In addition, e-government researchers have largely ignored institutional theory from economics and political science. Transaction-cost economics, rational choice, and historical institutionalism in political science are all relevant to the study of e-government. Even though we did not cover it in our review, there is a rich body of IS literature that uses transaction-cost economics, which can also help to advance e-government research.
Conclusion

Institutional theory offers a powerful set of principles and concepts that can be useful for e-government researchers. This chapter shows how institutional theory has evolved in the last few decades and identifies the main characteristics of recent theoretical developments. It also describes
how institutional theory has been used by IS and e-government researchers, and suggests that institutional approaches could be extended to a great variety of topics and can profit from recent developments in diverse disciplinary traditions. For instance, institutional theory has the potential to allow a more comprehensive understanding of the multiple and complex relationships among information technologies, organizational structures and processes, and institutional arrangements. Institutional approaches are also able to take recursive causalities into consideration and to illuminate how micro and macro forces interact. It is important to clarify that institutional theory is not the solution to every e-government research problem. However, we believe that e-government researchers can benefit from knowing more about the potential and limitations of institutional approaches.
Acknowledgment

This work was partially supported by the National Science Foundation under Grant No. 0131923. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
References

Alvarez, R. (2001). "It was a great system": Facework and the discursive construction of technology during information systems development. Information Technology & People, 14(4).

Avgerou, C. (2000). IT and organizational change: An institutional perspective. Information Technology & People, 13(4), 234-262.

Avgerou, C. (2002). Information systems and global diversity. Oxford: Oxford University Press.

Barley, S. R. (1986). Technology as an occasion for structuring: Evidence from observations of CT scanners and the social order of radiology
departments. Administrative Science Quarterly, 31(1), 78-108.

Berger, P. L., & Luckmann, T. (1967). The social construction of reality: A treatise in the sociology of knowledge. New York: Anchor Books.

Brinton, M. C., & Nee, V. (1998). The new institutionalism in sociology. Stanford, CA: Stanford University Press.

Butler, T. (2003). An institutional perspective on developing and implementing intranet- and Internet-based information systems. Information Systems Journal, 13(3), 209-232.

Chatterjee, D., Grewal, R., & Sambamurthy, V. (2002). Shaping up for e-commerce: Institutional enablers of the organizational assimilation of Web technologies. MIS Quarterly, 26(2), 65-89.

Coase, R. (1937). The nature of the firm. Economica, 4, 385-405.

Currie, W. (2004). The organizing vision of application service provision: A process oriented analysis. Information and Organization, 14, 237-267.

Damsgaard, J., & Scheepers, R. (1999). Power, influence and intranet implementation: A safari of South African organizations. Information Technology & People, 12(4), 333-358.

DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48, 147-160.

DiMaggio, P. J., & Powell, W. W. (1991). The new institutionalism in organizational analysis. Chicago: University of Chicago Press.

Fountain, J. E. (1995). Enacting technology: An institutional perspective. Cambridge, MA: John F. Kennedy School of Government, Harvard University.

Fountain, J. E. (2001). Building the virtual state: Information technology and institutional change. Washington, DC: Brookings Institution Press.

Garfinkel, H. (1967). Studies in ethnomethodology. Englewood Cliffs, NJ: Prentice Hall.

Garson, G. D. (2003). Technological teleology and the theory of technology enactment. Social Science Computer Review, 21(4), 425-431.

Ghoshal, S. (1993). Organization theory and the multinational corporation. New York: St. Martin's.

Giddens, A. (1979). Central problems in social theory: Action, structure and contradiction in social analysis. Berkeley: University of California Press.

Giddens, A. (1984). The constitution of society. Berkeley: University of California Press.

Gil-García, J. R. (2005). Enacting state Websites: A mixed method study exploring e-government success in multi-organizational settings. Unpublished doctoral dissertation, State University of New York, Albany.

Gil-García, J. R., & Pardo, T. A. (2006, January). Multi-method approaches to digital government research: Value lessons and implementation challenges. Paper presented at the 39th Hawaii International Conference on System Sciences (HICSS), HI.

Heeks, R. (1999). Reinventing government in the information age: International practice in IT-enabled public sector reform. New York: Routledge.

Heeks, R. (2003). Success and failure rates of e-government in developing/transitional countries: Overview. Retrieved from http://www.egov4dev.org/sfoverview.htm

Jepperson, R. (1991). Institutions, institutional effects, and institutionalization. Chicago: University of Chicago Press.

King, J. L., Gurbaxani, V., Kraemer, K., McFarlan, F. W., Raman, K. S., & Yap, C. S. (1994). Institutional factors in information technology innovation. Information Systems Research, 5(2).
Klein, H. K. (2000). System development in the federal government: How technology influences outcomes. Policy Studies Journal, 28(2), 313. Lamb, R., King, J. L., & Kling, R. (2003). Informational environments: Organizational context of online information use. Journal of American Society for Information Science and Technology, 54(2), 97-114.
Orlikowski, W. (2000). Using technology and constituting structures: A practice lens for studying technology in organizations. Organization Science, 11(4), 404-428. Orlikowski, W., & Barley, S. (2001). Technology and institutions: What can research on information technology and research organizations learn from each other? MIS Quarterly, 25(2), 145-165.
Lamb, R., & Kling, R. (2003). Reconceptualizing users as social actors in information systems research. MIS Quarterly, 27(2), 197-235.
Peters, B. G. (2001). Institutional theory in political science: The “new” institutionalism. London: Continuum.
Laudon, K. C. (1985). Environmental and institutional models of system development: A national criminal history system. Communications of the ACM, 28(7), 728-740.
Puron Cid, G., & Gil-García, J. R. (2004). Enacting e-budgeting in Mexico. Public Finance and Management, 4(2), 182-217.
March, J. G., & Olsen, J. P. (1984). The new institutionalism: Organizational factors in political life. American Political Science Review, 78, 734-749. March, J. G., & Olsen, J. P. (1989). Rediscovering institutions: The organizational basis of politics. New York: Free Press. Meyer, J., & Rowan, B. (1977). Institutionalized organizations: Formal structures as myth and ceremony. American Journal of Sociology, 83, 340-363. Meyer, J., & Scott, R. (1992). Organizational environments: Rituals and rationality. Beverly Hills, CA: Sage. Moe, T. (1984). The new economics of organizations. American Journal of Political Science, 28, 739-777. Necolaou, A. (1999). Social control in information systems development. Information Technology & People, 12(2), 130-147. North, D. C. (1999). Institutions, institutional change, and economic performance. New York: Cambridge University Press. Orlikowski, W. (1996). Improvising organizational transformation over time: A situated change perspective. Information Systems Research, 7(1), 63-92.
Robey, D., & Holmstrom, J. (2001). Transforming municipal governance in global context: A case study of the dialectics of social change. Journal of Global Information Technology Management, 4(4), 19-31. Rutherford, M. (1999). Institutions in economics: The old and the new institutionalism. New York: Cambridge University Press. Scott, W. R. (2001). Institutions and organizations (2nd ed.). Thousand Oaks, CA: Sage. Selznick, P. (1949). TVA and the grass roots. Berkeley: University of California Press. Selznick, P. (1957). Leadership in administration. New York: Harper & Row. Silva, L., & Backhouse, J. (2003). The circuitsof-power framework for studying power in institutionalization of information systems. Journal of Association of Information Systems, 4(6), 294-336. Simon, H. A. (1945). Administrative behavior: A study of decision-making processes in administrative organization. New York: Free Press. Swanson, B., & Ramiller, N. (1997). The organizing vision in information systems innovation. Organization Science, 8(5), 458-1997.
Institutional Theory and E-Government Research
Swanson, B., & Ramiller, N. (2004). Innovating mindfully with information technology. MIS Quarterly, 28(4), 553-583.

Teo, H. H., Wei, K. K., & Benbasat, I. (2003). Predicting intention to adopt interorganizational linkages: An institutional perspective. MIS Quarterly, 27(1), 19-49.

Tingling, P., & Parent, M. (2002). Mimetic isomorphism and technology evaluation: Does imitation transcend judgment? Journal of the Association for Information Systems, 3, 113-143.

Wareham, J. (2002). Anthropologies of information costs: Expanding the new-institutional view. Information and Organization, 12, 219-248.

Weick, K. E. (1995). Sensemaking in organizations. Thousand Oaks, CA: Sage.

Williamson, O. (1975). Markets and hierarchies: Analysis and antitrust implications. New York: Free Press.

Williamson, O. (1985). The economic institutions of capitalism. New York: Free Press.

Yang, K. (2003). Neoinstitutionalism and e-government. Social Science Computer Review, 21(4), 432-442.

Zucker, L. G. (1977). The role of institutionalization in cultural persistence. American Sociological Review, 42, 726-743.

Zucker, L. G. (1983). Organizations as institutions. Greenwich, CT: JAI Press.

Zucker, L. G. (1991). Postscript: Microfoundations of institutional thought. Chicago: University of Chicago Press.
FURTHER READING
Coase, R. (1937). The nature of the firm. Economica, 4, 385-405.

DiMaggio, P. J., & Powell, W. W. (1991). The new institutionalism in organizational analysis. Chicago: University of Chicago Press.

Fountain, J. E. (1995). Enacting technology: An institutional perspective. Cambridge, MA: John F. Kennedy School of Government, Harvard University.

Garfinkel, H. (1967). Studies in ethnomethodology. Englewood Cliffs, NJ: Prentice Hall.

Giddens, A. (1984). The constitution of society. Berkeley: University of California Press.

March, J. G., & Olsen, J. P. (1984). The new institutionalism: Organizational factors in political life. American Political Science Review, 78, 734-749.

Moe, T. (1984). The new economics of organizations. American Journal of Political Science, 28, 739-777.

North, D. C. (1999). Institutions, institutional change, and economic performance. New York: Cambridge University Press.

Scott, W. R. (2001). Institutions and organizations (2nd ed.). Thousand Oaks, CA: Sage.

Selznick, P. (1949). TVA and the grass roots. Berkeley: University of California Press.

Selznick, P. (1957). Leadership in administration. New York: Harper & Row.

Simon, H. A. (1945). Administrative behavior: A study of decision-making processes in administrative organization. New York: Free Press.

Williamson, O. (1975). Markets and hierarchies: Analysis and antitrust implications. New York: Free Press.
Avgerou, C. (2002). Information systems and global diversity. Oxford, UK: Oxford University Press.
Williamson, O. (1985). The economic institutions of capitalism. New York: Free Press.
Berger, P. L., & Luckmann, T. (1967). The social construction of reality: A treatise in the sociology of knowledge. New York: Anchor Books.
Zucker, L. G. (1991). Postscript: Microfoundations of institutional thought. Chicago: University of Chicago Press.
KEY TERMS

Agency: Agency refers to the human capacity to produce an effect on, or to change, social structures.

E-Government: E-government is the design, development, and use of information and communication technologies in government settings, with the aims of providing public services, improving managerial effectiveness, and promoting democratic values and mechanisms.

Institutionalization: Institutionalization is the process through which various social structures such as rules, norms, practices, and routines become taken for granted in everyday social life.

Institutions: Institutions can be conceptualized as guidelines for human actions, or the "logic of appropriateness" guiding behavior in society (March & Olsen, 1984). Institutions consist of cultural-cognitive, regulative, and normative elements (Scott, 2001). With associated resources, they provide stability and meaning to social life.
Organizational Fields: An organizational field is a group of organizations with a similar set of products, suppliers, customers, and resources.

Organizational Routines: Organizational routines are standardized procedures or practices in organizations. When they become taken for granted in daily organizational life, they reflect institutional properties.

Technology Enactment: Technology enactment refers to how various micro- and macro-level institutional factors such as norms, values, perceptions, rules, routines, practices, and regulations shape the way information and communication technologies are used, or come to be used, in various organizations.
Chapter XXXV
Structuration Theory and Government IT

J. Ramon Gil-Garcia
Centro de Investigación y Docencia Económicas, Mexico

Shahidul Hassan
University at Albany, SUNY, USA
INTRODUCTION

The relationship between information technologies (IT) and organizational (structural) change has been a topic of interest for public administration and policy scholars for a long time (Dawes, Gregg, & Agouris, 2004; Fountain, 2001; Garson, 2004; Heeks, 1999; Heintze & Bretschneider, 2000; Kling & Lamb, 2000; Kling & Scacchi, 1982; Kraemer & King, 2003; Kraemer, King, Dunkle, & Lane, 1989; Rocheleau, 2000). Initially, most studies were somewhat deterministic in nature, arguing either that IT had the power to transform organizational structures or that organizational and institutional factors largely determined the characteristics and effects of IT. Current research in information systems (W. Orlikowski, 2000; W. J. Orlikowski, 1992; W. J. Orlikowski & Robey, 1991), organizational studies (Barley, 1990; DeSanctis & Poole, 1994), and public administration and policy (Fountain, 2001), however, indicates that the relationships between IT and organizational structures are not so simple. In fact, they are recursive, complex, and somewhat unpredictable.
Employing what has been called the ensemble view of technology (W. J. Orlikowski & Iacono, 2001), these studies argue that research on IT in organizations should focus not only on the technological artifacts themselves, but also on the social relationships around their adoption, development, and use. Thus, they use, and encourage others to use, theoretical approaches that call attention to the social and complex nature of IT in organizations. Structuration theory (Giddens, 1984) is one such theoretical approach that has proved to be useful in studying the dynamic relationship between IT and organizational structure. In this chapter, we present several examples of how structuration theory has been applied to study IT in both public- and private-sector organizations. We highlight the usefulness of this perspective in understanding incremental and swift episodic change in organizational and interorganizational settings. It is, however, only a brief review of the use of structuration theory in information systems and e-government research. We hope this chapter will serve as an introduction to the topic and a useful starting point for scholars interested in
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
using social science perspectives in e-government research. The chapter is organized into six sections, including the foregoing introduction. The second section highlights the characteristics of the ensemble view of IT in organizations and provides a brief overview of structuration theory. The third section presents four influential models that apply structuration theory to information systems research. The fourth section argues that previous models have mainly explained incremental change within organizational settings, and that an important future trend for e-government research should be to understand radical change and interorganizational relationships. The fifth section provides some concluding remarks, and the last section suggests some future research directions.
BACKGROUND: INFORMATION TECHNOLOGIES, ORGANIZATIONS, AND STRUCTURATION THEORY

The Ensemble View of IT

The ensemble view of technology (W. J. Orlikowski & Iacono, 2001) acknowledges that information technologies are not only physical artifacts; they also embody the social relations around their adoption, development, and use. The technical artifact is only one component of a more complex sociotechnical system (Kling & Lamb, 2000; Kraemer, Dutton, & Northrop, 1980; Pasmore, 1988) that includes an ensemble, or web, of tools, techniques, applications, resources, people, organizational arrangements, and policies, among other elements (Kling & Scacchi, 1982). W. J. Orlikowski and Iacono (2001) identified four variants of the ensemble view of IT: IT as a development project, as a production network, as an embedded system, and as structure. To a certain extent, all these variants focus on the dynamic interactions among technology, structure, and human agency. Examples of theoretical endeavors grounded in the ensemble view of IT are sociotechnical systems theory (Bostrom & Heinen, 1977; Kraemer & King, 1986; Mumford, 2000), social informatics (Kling, 2001; Kling, Rosenbaum, & Hert, 1998), the technology enactment framework (Fountain, 1995, 2001; Gil-García, 2005), structurational perspectives (Barley, 1986; W. Orlikowski, 2000; W. J. Orlikowski, 1992; W. J. Orlikowski & Robey, 1991), and adaptive structuration theory (DeSanctis & Poole, 1994; Poole, Jackson, Kirsch, & DeSanctis, 1998). In this chapter, we focus only on the models of technology based on Giddens' structuration theory (Giddens, 1979, 1984).
A Brief Overview of Structuration Theory

According to structuration theory, human actions and social structures are mutually constitutive (Giddens, 1984). More specifically, individual actions are constrained by certain societal rules, but at the same time, these practices shape or reinforce those social structures: "The basic domain of study of the social sciences, according to the theory of structuration, is neither the experience of the individual actor, nor the existence of any form of societal totality, but social practices ordered across space and time" (Giddens, 1984, p. 2). These social practices can refer to relationships among individuals and relationships between individuals and technological artifacts (DeSanctis & Poole, 1994; Gil-García, 2005; W. Orlikowski, 2000; W. J. Orlikowski, 1992). There is no clear causality between social structures and individual actions. In fact, according to Giddens (1984, p. 19), "one of the main propositions of structuration theory is that the rules and resources drawn upon in the production and reproduction of social action are at the same time the means of system reproduction (the duality of structure)." Therefore, a dynamic interaction between individual actions and social structures exists. Previous studies using structuration theory have conceptualized information technologies as both a constitutive part of the structures and the result of the interactions between those structures and individual actors (DeSanctis & Poole, 1994;
Gil-García, 2005; W. Orlikowski, 2000; W. J. Orlikowski, 1992). The concept of structure presented by Giddens is not the static notion that can be found in some versions of structuralism or strategic choice models. In the structuration view, the structure is formed by what individuals use to interact with others in their daily lives. Giddens (1984) expresses this difference clearly: “Let us regard the rules of social life, then, as techniques or generalizable procedures applied in the enactment/reproduction of social practices” (p. 21). Therefore, social practices are not structures, but the result of interactions between individual actions and social structures.
STRUCTURATION THEORY IN IT RESEARCH

According to the models of technology based on structuration theory, information technologies have the potential to impact social and organizational structures, but at the same time, they are shaped by these structures in their design, implementation, and use (Gil-García, 2005). Below, we
discuss some of the main characteristics of four different models that have applied structuration theory to information systems research.
Technology as an Occasion for Structuring

Barley (1986) was one of the first researchers to use structuration theory to study the relationship between technology and organizational structure. His longitudinal field study revealed how the introduction of a radiographic technology, CT (computerized tomography) scanners, in two hospitals triggered similar social dynamics but led to different structural outcomes. He found that in each organization the implementation of CT scanners changed departmental structures by altering institutionalized roles and patterns of social interaction. Surprisingly, however, decision making in one organization became more decentralized than in the other with the adoption of the same technology. Technology, according to Barley (1986), is not a material cause but a material trigger that instantiates certain social dynamics and may
Figure 1. Sequential model of the structuring process (Barley, 1986)1
lead to both intended and unintended organizational consequences. So, technology does not, and probably cannot, totally determine the nature of organizational change. Furthermore, technology should be treated as a social rather than a physical object, and structure should be conceptualized as a dynamic process rather than a static entity (Barley, 1986). Comparing the results of the implementation of CT scanners in each organization, Barley concluded that the introduction of new technology may indeed result in change in organizational structure, but the outcomes depend on the specific historical processes in which they are embedded, as well as the patterns of social interactions between the technology and human agents: “...the [CT] scanners occasioned change because they became social objects whose meanings were defined by the context of their use” (p. 106).
Structurational Model of Technology Similar to Barley’s earlier efforts, W. J. Orlikowski (1992) uses structuration theory to obtain a deeper understanding of the interaction between IT and organizational structure. She is critical of earlier studies that assumed technology to be an objective, external force or an outcome of strategic choice, and considers their approach incomplete. She argues that to understand the divergent definitions and opposing perspectives associated with research on IT, a reconceptualization of the concept is needed. Therefore, using structuration theory, she develops a model to analyze the nature and role of technology in organizations, which she calls the “structurational model of technology” (p. 398). As depicted in Figure 2, there are three components in the structurational model of technology: (a) human agents, (b) technology, and (c) institutional properties. According to the model, IT is created and changed by human actions. Humans also use IT to accomplish certain purposeful tasks. W. J. Orlikowski (1992) calls this recursive aspect of technology “the duality of technology,” which is also the first premise of her model. In simple terms, the duality of technology means technology is nei-
ther an objective force nor a socially constructed product, but a mixture of both. She notes: Technology is a product of human action, while it also assumes structural properties. That is, technology is physically constructed by actors working in a given social context, and technology is socially constructed by actors through the different meanings they attach and the various features they emphasize and use. However, it is also the case that once developed and deployed, technology tends to become reified and institutionalized, losing its connection with the human agents that constructed it or gave it meaning, and it appears to be part of the objective structural properties of the organization. (p. 406) The second premise of the model is the notion of the interpretive flexibility of technology. By that, W. J. Orlikowski (1992) refers to “the degree to which users of a technology are engaged in its constitution (physically and/or socially) during development or use” (p. 409). She also notes that the interpretive flexibility of technology is an attribute of the relationship between technology and human agents. This attribute is influenced by various material properties of the technology; characteristics of human agents such as motivation, experience, and interest; and characteristics of the social context within which the technology is deployed and used. Furthermore, the interpretive flexibility of technology is critical in understanding the consequences of technology adoption and use in various organizational contexts. W. J. Orlikowski and Robey (1991) argued that the structuration approach is particularly useful for information systems research to better understand and go beyond what they consider “several of the false dichotomies (subjective vs. objective, socially constructed vs. material, macro vs. micro, and qualitative vs. quantitative) that persist in investigations of the interaction between organizations and information technology” (p. 143). 
In addition, they propose that structuration theory is potentially useful for at least two areas of information systems research: (a) systems development processes and (b) social consequences
Figure 2. Structurational model of technology (W. J. Orlikowski, 1992)2
of information technology. However, they also recognize that greater value and explanatory power would come from applying structuration theory to a more comprehensive study of the relationship between systems development and the implications of information systems use. Therefore, one of the advantages of the structurational model of technology is its capacity to develop more complete explanations by including technological, organizational, institutional, and political factors, as well as their interrelationships.
Adaptive Structuration Theory

To account for the inconsistencies between the potential of IT in organizational change and actual change, DeSanctis and Poole (1994) propose a model, which they call adaptive structuration theory (AST), that integrates structuration and decision theories. They argue that AST helps to understand IT-based organizational change processes from two vantage points: (a) the kinds of structures that are provided by the technologies and (b) the structures that actually emerge as people interact with these technologies. In the AST model, there are two important theoretical constructs: structures and social interaction (DeSanctis & Poole, 1994). These constructs focus on the dynamic nature of technology adoption and use in various organizational settings. As mentioned before, for DeSanctis and Poole, there are social structures within the technological artifact itself and in the actions of humans in relation to that artifact. In addition, individuals select certain characteristics of the technology to use and decide how they use them. Appropriation is not determined only by the characteristics of the technological artifact; it is also shaped by the interaction processes among individuals and between individuals and the technology (DeSanctis & Poole). From the AST perspective, social structures provided by information technology can be conceptualized in two ways: the structural features of a given technology and "the spirit of this feature set":

Structural features are the specific types of rules and resources, or capabilities, offered by the system…The spirit is the "official line" which the technology presents to people regarding how to act when using the system, how to interpret its features, and how to fill in gaps in procedure which are not explicitly specified. (DeSanctis & Poole, p. 126)

The structural features and the spirit of the technology together provide the "structural potential" of a certain technology that can be drawn upon by various user groups such as designers and managers in their social interaction with the technology and with other individuals. According to this dynamic model, organizational change occurs as various user groups bring the structural potential of a technology into interaction. Figure 3
Figure 3. Adaptive structuration theory (DeSanctis & Poole, 1994)3
shows how different user groups can appropriate different structural features and adopt or change the spirit of the technologies. DeSanctis and Poole (1994) provide an empirical illustration of such structural change by examining the use of a group decision support system (GDSS) over time in small group settings.
Enactment of Technologies in Practice

In her more recent work, W. Orlikowski (2000) proposes an extension to the structurational model of technology. In the new approach, she uses a practice lens to study how people, during their interaction with technology in their ongoing daily practices, enact social structures that shape their emergent and situated use of that technology. She argues that some of the previous efforts, including her own work, are valuable in explaining divergent outcomes associated with the use of technologies in various contexts, but do not adequately explain the ongoing changes in both technologies and their use. Using Giddens' original work and adopting a social constructionist perspective, she tries to resolve two important problems: emergent structures and the ongoing enactment of structures. The basic arguments of what she calls a practice lens are still solidly grounded in structuration theory and the notion of causal recursiveness between social structures and information technology (see Figure 4). However, the new framework allows a better understanding of the emergence of technologies in use as people interact within certain structural properties of the system (W. Orlikowski). The practice approach does not make any assumptions about the stability and relative completeness of the technologies: "Instead, the focus is on what structures emerge as people interact recurrently with whatever properties of the technology are at hand, whether these were built in, added on, modified, or invented on the fly" (W. Orlikowski, 2000, p. 407). Further, the new framework recognizes that as social interactions with technology are situational, different groups can enact different properties of the technologies according to their own facilities, norms, and interpretive schemes (W. Orlikowski). Following this rationale, different enactments can result from what may appear to be the same technological properties (see Figure
Figure 4. Enactment of technologies in practice: One social group (W. Orlikowski, 2000)4
5). Social groups with different professional backgrounds, job positions, and personal interests will enact different technologies in practice even if the technological artifact is essentially the same (Bijker, Hughes, & Pinch, 1987; Bijker & Law, 2000; Gil-García, 2005; W. Orlikowski).
FUTURE TRENDS: UNDERSTANDING RADICAL CHANGE

The models reviewed above were developed through empirical evidence from intra-organizational settings and with the purpose of understanding incremental change or differences across organizations or groups. In these perspectives, organizational change occurs as a result of a series of modifications, adaptations, or improvisations on technological artifacts (DeSanctis & Poole, 1994; Harrison, Pardo, Gil-García, Thompson, & Juraga, in press; W. Orlikowski, 2000; W. J. Orlikowski, 1992; W. J. Orlikowski & Robey, 1991). This type of organizational change characterizes conditions of somewhat routine technology use and has proven to be useful in understanding incremental or gradual
change. However, Giddens (1984) proposes various conditions in which swift or profound change can take place. Examples of such conditions may be wars or threats of attack, which may bring different forms of social organization together and may result in sudden structural change and significant modifications in the way these social systems work (Giddens; Gil-García, Harrison, Juraga, Pardo, & Thompson, 2004; Harrison et al.). Therefore, the application of structuration theory can potentially be extended to study interorganizational change in situations in which critical relationships across organizational boundaries exist, such as responses to terrorist attacks, natural disasters, corporate mergers and acquisitions, and sudden institutional or political changes. A few recent studies have attempted to incorporate this kind of swift, episodic change by studying interorganizational relationships in emergency situations. For instance, Gil-García et al. (2004) and Harrison et al. (in press) argue that the World Trade Center (WTC) crisis was an example of such a change episode (Giddens, 1984), in which groups that were previously relatively isolated from each other were required to collaborate intensively (see Figure 6). In terms of Giddens,
Figure 5. Enactment of technologies in practice: Two social groups (Gil-García, 2005). [The figure shows two social groups whose agency, exercised through facilities (e.g., hardware, software), norms (e.g., protocols, etiquette), and interpretive schemes (e.g., assumptions, knowledge), enacts different technologies-in-practice (X and Y), along with other structures, in the ongoing, situated use of technology.]
the terrorist attacks produced a time-space edge in which multiple social groups (first responders, geographical information system [GIS] experts, emergency managers, policy makers, etc.) needed to exchange their facilities, norms, and interpretive schemes to respond to the crisis situation. At the end of this intensive collaboration and colocated work, the enactments from the different social groups were still different, but they shared more elements in common and were able to understand each other’s position much better (Gil-García et al., 2004). For instance, their conceptualizations of GIS for emergency response became similar enough to allow better communication and effective collaboration among the multiplicity of groups involved in the response efforts (Harrison, Gil-García, Pardo, & Thompson, 2006; Harrison et al., in press). In addition, from an analytical perspective, the concepts of time-space edge and change episode were useful in framing and better understanding the effects of the intensive and close collaboration that took place during the response. Similarly, the concept of technologies in practice (W. Orlikowski, 2000) was essential in describing some emerging uses of information
technologies, such as the emergency management online locator system (EMOLS) and the innovative use of light detection and ranging (LIDAR) and remote sensing technologies, in the response (Harrison et al., in press). Furthermore, crisis situations are not the only social phenomena that can be studied using Giddens’ change-episode concept. For instance, Gil-García, Canestraro, Costello, Baker, and Werthmuller (2006) suggest that change episodes can be induced by providing the conditions to actors from different groups to intensively collaborate for a relatively short period of time. They use a comprehensive prototyping experience as an example of this type of situation in which actors with different organizational and professional backgrounds worked together for about 6 months, sharing their experiences, perspectives, and concerns about using XML (extensible markup language) for Web site content management in government agencies (Gil-García, Pardo, & Baker, 2007). At the end of this period, technical and program staff from all the organizations involved had a greater common understanding of the characteristics, benefits, and barriers of XML
Figure 6. Episodic characterization of the GIS response to the WTC crisis (Gil-García et al., 2004)
than before. From a structuration theory perspective (Giddens, 1984), the intensive collaboration and social exchange required in situations such as the comprehensive prototyping experience can modify actors’ facilities, norms, and interpretive schemes in a relatively short period of time. Organizational changes like these, however, are not necessarily expected in other circumstances. Different interorganizational systems would experience different degrees of change, and the nature of these changes may also be different:
Turning first to inter-societal systems, it is rather easy to see that even if two societies were to be "internally" identical in all relevant aspects, substantially different conditions for change might exist based upon both their respective intersections with inter-societal systems and their time-space distantiation vis-à-vis other societies in the context of inter-societal systems. (Cohen, 1989, p. 274)

Following the same logic, even if two interorganizational systems had very similar internal structural and functional characteristics, their relationships with other organizations and their relative position among other interorganizational systems would affect their potential for change and, if change occurs, the specific characteristics that do change may also be different.
CONCLUSION

The use of IT in organizations has created expectations about structural change and innovation in both private- and public-sector organizations. However, understanding IT-based organizational change seems to be highly complex, and the application of integrative social theories could significantly contribute to our current knowledge about this topic. Structuration theory offers scholars a powerful lens for untangling the complexity of e-government initiatives and the possibility of relating this research to broader social science concerns. Other social science approaches and perspectives such as institutional theory, actor-network theory, and social constructivism have also been applied to information systems research and have proven to be very useful. E-government scholars should be aware of the usefulness of these theoretical and methodological alternatives, which can help develop novel and meaningful ways to study complex social and organizational issues associated with e-government projects. The present chapter is just a small step toward this more ambitious goal.
FUTURE RESEARCH DIRECTIONS

Structuration theory has been very useful for studying gradual change in intra-organizational settings and has provided valuable insights about the recursive relationships between technologies, social structures, and human agency. Within this theoretical tradition, recent research is also applying structurational concepts and frameworks to the study of profound or swift radical change in interorganizational settings. Future studies in e-government can explore this potential by investigating a broad array of situations with a main focus on profound change and interorganizational relationships such as mergers and acquisitions, natural disasters, and cross-boundary information integration. In addition to studying gradual organizational change using structuration theory, we think Giddens' concepts of time-space edge and change episode hold great potential for studying the role of IT in various social and organizational crises. Understanding how actors in these extreme settings make sense of the situation, take action, use IT, and develop a shared cognitive schema is critical for improving the management of those situations. Furthermore, an area that evidently requires more attention is the development of innovative, yet rigorous, methodological strategies for employing structuration theory to study both gradual and swift episodic organizational change.
Some strategies have been proposed in the past few years, but more are needed to take advantage of the full potential of this powerful theoretical lens. Finally, more research using structuration theory to study digital government initiatives is also needed. Most previous studies were based on cases from the private sector, and it is essential to explore the usefulness of this approach in studying IT in government and nonprofit organizations.
Acknowledgment

This work was partially supported by the National Science Foundation under Grant No. 0131923. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
References

Barley, S. R. (1986). Technology as an occasion for structuring: Evidence from observations of CT scanners and the social order of radiology departments. Administrative Science Quarterly, 31(1), 78-108.

Barley, S. R. (1990). The alignment of technology and structure through roles and networks. Administrative Science Quarterly, 35(1), 61-103.

Bijker, W. E., Hughes, T. P., & Pinch, T. (1987). The social construction of technological systems. Cambridge, MA: The MIT Press.

Bijker, W. E., & Law, J. (2000). Shaping technology/building society: Studies in sociotechnical change. Cambridge, MA: The MIT Press.

Bostrom, R. P., & Heinen, S. J. (1977). MIS problems and failures: A socio-technical perspective. Part II: The application of socio-technical theory. MIS Quarterly, 1(4), 11-28.

Cohen, I. J. (1989). Structuration theory: Anthony Giddens and the constitution of social life. New York: St. Martin's Press.
Structuration Theory and Government IT
Dawes, S. S., Gregg, V., & Agouris, P. (2004). Digital government research: Investigations at the crossroads of social and information science. Social Science Computer Review, 22(1), 5-10.

DeSanctis, G., & Poole, M. S. (1994). Capturing the complexity in advanced technology use: Adaptive structuration theory. Organization Science, 5(2), 121-147.

Fountain, J. E. (1995). Enacting technology: An institutional perspective. Cambridge, MA: John F. Kennedy School of Government, Harvard University.

Fountain, J. E. (2001). Building the virtual state: Information technology and institutional change. Washington, DC: Brookings Institution Press.

Garson, G. D. (2004). The promise of digital government. In A. Pavlichev & G. D. Garson (Eds.), Digital government: Principles and best practices (pp. 2-15). Hershey, PA: Idea Group Publishing.

Giddens, A. (1979). Central problems in social theory: Action, structure and contradiction in social analysis. Berkeley: University of California Press.

Giddens, A. (1984). The constitution of society. Berkeley: University of California Press.

Gil-García, J. R. (2005). Enacting state Websites: A mixed method study exploring e-government success in multi-organizational settings. Unpublished doctoral dissertation, University at Albany, State University of New York, Albany.

Gil-García, J. R., Canestraro, D., Costello, J., Baker, A., & Werthmuller, D. (2006, November). Structuration theory and the use of XML for Web site content management in government: Comprehensive prototyping as an induced change episode (poster). Paper presented at the ASIS&T 2006 Annual Meeting "Information Realities: Shaping the Digital Future for All," Austin, TX.

Gil-García, J. R., Harrison, T., Juraga, D., Pardo, T. A., & Thompson, F. (2004, November). The structuring of GIS technologies: The World Trade Center crisis as a change episode (poster). Paper presented at the ASIS&T 2004 Annual Meeting "Managing and Enhancing Information: Cultures and Conflicts," Providence, RI.

Gil-García, J. R., Pardo, T. A., & Baker, A. (2007, January). Understanding context through a comprehensive prototyping experience: A testbed research strategy for emerging technologies. Paper presented at the 40th Hawaii International Conference on System Sciences (HICSS), Mānoa.

Harrison, T., Gil-García, J. R., Pardo, T. A., & Thompson, F. (2006, January). Learning about interoperability for emergency response: Geographic information technologies and the World Trade Center crisis. Paper presented at the 39th Hawaii International Conference on System Sciences (HICSS), Mānoa.

Harrison, T., Pardo, T. A., Gil-García, J. R., Thompson, F., & Juraga, D. (in press). Structuring geographic information technologies: The New York City's response to September 11, 2001. Journal of the American Society for Information Science and Technology.

Heeks, R. (1999). Reinventing government in the information age: International practice in IT-enabled public sector reform. New York: Routledge.

Heintze, T., & Bretschneider, S. (2000). Information technology and restructuring in public organizations: Does adoption of information technology affect organizational structures, communications, and decision making? Journal of Public Administration Research and Theory, 10(4), 801-830.

Kling, R. (2001). Social informatics. In Encyclopedia of library and information science. Kluwer Publishing.

Kling, R., & Lamb, R. (2000). IT and organizational change in digital economies: A sociotechnical approach. In E. Brynjolfsson & B. Kahin (Eds.), Understanding the digital economy: Data, tools, and research. Cambridge, MA: The MIT Press.

Kling, R., Rosenbaum, H., & Hert, C. (1998). Social informatics in information science: An introduction. Journal of the American Society for Information Science, 49(12), 1047-1052.

Kling, R., & Scacchi, W. (1982). The web of computing: Computer technology as social organization. In Advances in computers (Vol. 21, pp. 1-90).

Kraemer, K. L., Dutton, W. H., & Northrop, A. (1980). The management of information systems. New York: Columbia University Press.

Kraemer, K. L., & King, J. L. (1986). Computing and public organizations. Public Administration Review, 46, 488-496.

Kraemer, K. L., & King, J. L. (2003, September). Information technology and administrative reform: Will the time after e-government be different? Paper presented at the Heinrich Reinermann Schrift Festival, Speyer, Germany.

Kraemer, K. L., King, J. L., Dunkle, D. E., & Lane, J. P. (1989). Managing information systems: Change and control in organizational computing. San Francisco: Jossey-Bass.

Mumford, E. (2000). A socio-technical approach to systems design. Requirements Engineering, 5(2), 125-133.

Orlikowski, W. (2000). Using technology and constituting structures: A practice lens for studying technology in organizations. Organization Science, 11(4), 404-428.

Orlikowski, W. J. (1992). The duality of technology: Rethinking the concept of technology in organizations. Organization Science, 3(3), 398-427.

Orlikowski, W. J., & Iacono, C. S. (2001). Research commentary: Desperately seeking the "IT" in IT research: A call to theorizing the IT artifact. Information Systems Research, 12(2), 121-134.

Orlikowski, W. J., & Robey, D. (1991). Information technology and the structuring of organizations. Information Systems Research, 2(2), 143-169.

Pasmore, W. A. (1988). Designing effective organizations: The sociotechnical systems perspective. New York: John Wiley.
Poole, M. S., Jackson, M., Kirsch, L., & DeSanctis, G. (1998). Alignment of system and structure in the implementation of group decision support systems. Paper presented at the 1998 Academy of Management, San Diego, CA.

Rocheleau, B. (2000). Prescriptions for public-sector information management: A review, analysis, and critique. American Review of Public Administration, 30(4), 414-435.
Further Reading

Bajjaly, S. T. (1999). Managing emerging information systems in the public sector. Public Performance & Management Review, 23(1), 40-47.

Barley, S. R., & Tolbert, P. S. (1997). Institutionalization and structuration: Studying the links between action and institution. Organization Studies, 18(1), 93-118.

Barrett, M., & Walsham, G. (1999). Electronic trading and work transformation in the London Insurance Market. Information Systems Research, 10(1), 1-21.

Bryant, C. G. A., & Jary, D. (1991). Giddens' theory of structuration: A critical appreciation. London: Routledge.

Chengalur-Smith, I., & Duchessi, P. (1999). The initiation and adoption of client-server technology in organizations. Information & Management, 35, 77-88.

Cresswell, A. M., & Pardo, T. A. (2001). Implications of legal and organizational issues for urban digital government development. Government Information Quarterly, 18, 269-278.

Cushing, J., & Pardo, T. A. (2005). Research in the digital government realm. IEEE Computer, 38(12), 26-32.

Dawes, S. S., Gregg, V., & Agouris, P. (2004). Digital government research: Investigations at the crossroads of social and information science. Social Science Computer Review, 22(1), 5-10.

Dawes, S. S., Pardo, T., & DiCaterino, A. (1999). Crossing the threshold: Practical foundations for government services on the World Wide Web. Journal of the American Society for Information Science, 50(4), 346-353.

Giddens, A. (1976). New rules of sociological method. New York: Basic Books.

Giddens, A. (1981). Agency, institution, and time-space analysis. In K. Knorr-Cetina & A. V. Cicourel (Eds.), Advances in social theory and methodology: Toward an integration of micro- and macro-sociologies. Boston: Routledge & Kegan Paul.

Gil-García, J. R., & Helbig, N. (2006). Exploring e-government benefits and success factors. In A.-V. Anttiroiko & M. Malkia (Eds.), Encyclopedia of digital government. Hershey, PA: Idea Group Inc.

Gil-García, J. R., & Luna-Reyes, L. F. (2006). Integrating conceptual approaches to e-government. In M. Khosrow-Pour (Ed.), Encyclopedia of e-commerce, e-government and mobile commerce. Hershey, PA: Idea Group Inc.

Hall, R. H. (2002). Organizations: Structures, processes, and outcomes. Upper Saddle River, NJ: Prentice Hall.

Heintze, T., & Bretschneider, S. (2000). Information technology and restructuring in public organizations: Does adoption of information technology affect organizational structures, communications, and decision making? Journal of Public Administration Research and Theory, 10(4), 801-830.

Held, D., & Thompson, J. B. (1989). Social theory of modern societies: Anthony Giddens and his critics. Cambridge: Cambridge University Press.

Jones, M., Orlikowski, W., & Munir, K. (2004). Structuration theory and information systems. In J. Mingers & L. Willcocks (Eds.), Social theory and philosophy for information systems. Chichester, UK: John Wiley & Sons.

Luna-Reyes, L. F., Mojtahedzadeh, M., Andersen, D. F., Richardson, G. P., Pardo, T. A., Burke, B., et al. (2004). Scripts for interrupted group model building: Lessons from modeling the emergence of governance structures for information integration across governmental agencies. Proceedings of the 22nd International System Dynamics Conference, Albany, NY.

Luna-Reyes, L. F., Zhang, J., Gil-García, J. R., & Cresswell, A. M. (2005). Information systems development as emergent socio-technical change: A practice approach. European Journal of Information Systems, 14(1), 93-105.

Lyytinen, K., & Ngwenyama, O. K. (1992). What does computer support for co-operative work mean? A structurational analysis of computer supported co-operative work. Accounting, Management and Information Technology, 2(1), 19-37.

Markus, M. L., & Robey, D. (1988). Information technology and organizational change: Causal structure in theory and research. Management Science, 34(5), 583-598.

Orlikowski, W. J. (1993). CASE tools as organizational change: Investigating incremental and radical changes in systems development. MIS Quarterly, 17(3), 309-340.

Orlikowski, W., & Barley, S. R. (2001). Technology and institutions: What can research on information technology and research on organizations learn from each other? MIS Quarterly, 25(2), 245-265.

Pardo, T. A., Cresswell, A. M., Thompson, F., & Zhang, J. (2006). Knowledge sharing in cross-boundary information system development in the public sector. Information Technology and Management, 7(4), 293-313.

Pozzebon, M., & Pinsonneault, A. (2005). Challenges in conducting empirical work using structuration theory: Learning from IT research. Organization Studies, 26(9), 1353-1376.

Rocheleau, B. (2000). Prescriptions for public-sector information management: A review, analysis, and critique. American Review of Public Administration, 30(4), 414-435.

Sahay, S. (1997). Implementation of information technology: A time-space perspective. Organization Studies, 18(2), 229-260.

Sahay, S., & Robey, D. (1996). Organizational context, social interpretation, and the implementation and consequences of geographic information systems. Accounting, Management and Information Technology, 6(4), 255-282.

Sarker, S., & Sahay, S. (2003). Understanding virtual team development: An interpretive study. Journal of the AIS (JAIS), 4, 1-38.

Star, S. L. (1989). The structure of ill-structured solutions: Boundary objects and heterogeneous distributed problem solving. In L. Gasser & M. Huhns (Eds.), Distributed artificial intelligence (Vol. 2, pp. 37-54). San Mateo, CA: Morgan Kaufmann Publishers.

Turner, J. H. (1991). Structuration theory of Anthony Giddens. In The structure of sociological theory. Wadsworth.

Walsham, G. (2002). Cross-cultural software production and use: A structurational analysis. MIS Quarterly, 26(4), 359-380.

Walsham, G., & Han, C.-K. (1991). Structuration theory and information systems research. Journal of Applied Systems Analysis, 17, 77-85.

Walsham, G., & Sahay, S. (1999). GIS for district-level administration in India: Problems and opportunities. MIS Quarterly, 23(1), 39-65.

Yates, J., & Orlikowski, W. J. (1992). Genres of organizational communication: A structurational approach to studying communication and media. The Academy of Management Review, 17(2), 299-326.

Zhang, J., Cresswell, A. M., & Thompson, F. (2002). Participants' expectations and the success of knowledge networking in the public sector. Paper presented at the AMCIS Conference, TX.
Terms and Definitions

Change Episode: A period with a defined beginning and end in which significant change occurs. It is normally produced by a time-space edge in which two different societies or groups have intensive interaction for a relatively short period of time and, as a consequence, their facilities, norms, and interpretive schemes are transformed.

Duality of Structure: Duality of structure refers to the fact that individual behavior is constrained by existing social structures, but at the same time, this behavior constitutes social structures by reifying or challenging the status quo (see Giddens, 1984).

Duality of Technology: Following the logic of the duality of structure, duality of technology argues that actors use information technologies to constitute structures, but at the same time, information technologies become part of the structures constraining individual actions (see Orlikowski, 1992).

Incremental Change: Change that occurs slowly and without necessarily modifying the essence of social structures or organizational practices.

Radical Change: Change that occurs relatively fast and modifies the essence of social structures or organizational practices. Specifically, this type of change affects the resources, norms, and interpretive schemes of groups and individuals.

Social Structure: The abstract set of rules that individuals use in their daily lives. Social structures do not exist in reality but are instantiated in human actions.

Spirit of Technology: The overall intention of use for which a technological artifact was created (see DeSanctis & Poole, 1994).

Structuration Theory: Originally developed by Anthony Giddens, structuration theory is an attempt to integrate micro and macro approaches to the study of society. Its basic premise is that individual actions are constrained by social structures but, at the same time, these actions affect or constitute social structures.

Technologies in Practice: A technology in practice is the specific enactment of an information technology by a certain social group. Whatever the features of a technological artifact, individuals can perceive it and use it very differently (see Orlikowski, 2000).
Endnotes

1. Reprinted from "Technology as an Occasion for Structuring: Evidence from Observations of CT Scanners and the Social Order of Radiology Departments" by Stephen R. Barley, published in Administrative Science Quarterly (Volume 31, Issue 1), with permission of Administrative Science Quarterly (© 1986 by Cornell University).

2. Reprinted from "The Duality of Technology: Rethinking the Concept of Technology in Organizations" by W. J. Orlikowski, published in Organization Science (Volume 3, Issue 3), with permission (© 1992 by the Institute for Operations Research and the Management Sciences, 7240 Parkway Drive, Suite 310, Hanover, Maryland 21076).

3. Reprinted from "Capturing the Complexity in Advanced Technology Use: Adaptive Structuration Theory" by G. DeSanctis and M. S. Poole, published in Organization Science (Volume 5, Issue 2), with permission (© 1994 by the Institute for Operations Research and the Management Sciences, 7240 Parkway Drive, Suite 310, Hanover, Maryland 21076).

4. Reprinted from "Using Technology and Constituting Structures: A Practice Lens for Studying Technology in Organizations" by W. Orlikowski, published in Organization Science (Volume 11, Issue 4), with permission of Organization Science (© 2000 by the Institute for Operations Research and the Management Sciences, 7240 Parkway Drive, Suite 310, Hanover, Maryland 21076).
Section III
Security and Protection
Massive data theft has become an everyday reality in the world of information technology. In 2006, for instance, it was revealed that personally identifying data, including names and Social Security numbers, on up to 26 million Americans were compromised at the U.S. Department of Veterans Affairs. In the United States, information technology security rose to first place in budget priority after the bombing of the World Trade Center and has remained a top priority to the present day. McAfee Avert Labs, a leader in security software, released a list of top security threats for 2007. Noting over 217,000 known threats, the report outlined several disturbing trends affecting security:

• Malware is increasingly being created by professionalized organizations, including organized crime, characterized by sophisticated programming by development teams who test and automate the production of malware.
• Sophisticated malware characteristics include polymorphism, recurrent parasitic infectors, rootkits, and software using cycling encryption to release new versions on an automated basis.
• The black market for malware is lucrative and growing.
• Trends in technology are making malware easier to transmit and more dangerous (e.g., Bluetooth or Wi-Fi wireless networking, "smart" mobile phones).
• File sharing, particularly in video and music, creates a vulnerable population of users willing to open transmitted files, and malware is more easily concealed in media than in text files.
• Imposter Web sites simulating popular destinations (e.g., eBay) are increasingly employed to steal passwords.
• Identity theft is also increasing through corporate and government data breaches.
More than 4 years after terrorist-controlled airplanes smashed into the World Trade Center and the Pentagon, and after the U.S. Federal Aviation Administration had promised to secure its systems within 3 years, the U.S. Government Accountability Office issued a report finding that air traffic control systems operated by the Federal Aviation Administration contained significant cybersecurity weaknesses and were still vulnerable to attack. Weaknesses cited included outdated security plans, inadequate security awareness training, inadequate system testing and evaluation programs, limited security incident detection capabilities, and shortcomings in providing service continuity for disruptions in operations. These weaknesses are not unique to this particular agency but are commonplace worldwide, making research on the many dimensions of cybersecurity all the more important.

G. David Garson, September 2007
Chapter XXXVI
Intelligence and Security Informatics Jimmie L. Joseph University of Texas at El Paso, USA
Introduction

Intelligence and security informatics (ISI) is the application of information systems (IS), databases, and data-coding schemes to issues of intelligence gathering, security, and law enforcement. ISI differs from other disciplines of informatics because of the critical role played by the general public in data gathering and information dissemination. Informatics provides a means of quantifying and organizing information in a manner that is meaningful to practitioners of a specific field (Sawyer & Rosenbaum, 2000; Yeh, Karp, et al., 2003). Often, informatics is of little value to the general public. Generally, understanding informatics systems requires a significant investment of time in extensive educational preparation or significant work experience in the subject being coded (von Solms, van de Haar, et al., 1994; Brancheau, Janz, et al., 1996; Sawyer & Rosenbaum, 2000). The fields of biology, sociology, medicine, and museum and archival studies employ informatics, but these fields filter the results through professionals who communicate mostly with other professionals in the respective fields (Kling, 1999; Lorence & Ibrahim, 2003). When it is necessary to inform the public of key findings, a basic synopsis of the
information is typically distributed via the general media, with a reference to professionals or organizations that can discuss, explain, or clarify the information. While it is important to consider the user in any information system (Kling, 1999), the time-sensitive nature of intelligence gathering requires that information be collected from or delivered to the public, not just the professional informatics-system user, as quickly as possible. From the perspective of IS, ISI poses unique concerns not pertinent to other disciplines that employ informatics (Atkins, 1996; Kling, 1999; von Solms et al., 1994). The success of ISI depends on informatics systems designed for the rapid dissemination of critical information to the general public to a degree not present in medical, biological, museum and archival, or other areas of informatics (Besser & Trant, 1994; Danchin, 2000). IS can provide a conduit for disseminating the information in a manner that is useful to the general public. Protecting the public from security threats requires the timely dissemination of threat-analysis results in a format that is readily available to (and quickly digestible by) the general public. From an IS perspective, a significant factor differentiating ISI from other fields employing
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
informatics is the need to design bidirectional communication with the general public. Three major differences exist between ISI and other forms of informatics, and these differences make ISI unique in terms of data collection and dissemination. The differences are the following:

1. Data source reliability
2. The need to determine which datum is relevant
3. The need to disseminate the finding to the general public without knowing in advance the appropriate individuals or institutions needing to be informed

Background

Disciplines using informatics traditionally acquire input from medical professionals (Database, 2003; Greiner & Knebel, 2003), scientists (Besser, Trant, et al., 1996), and professional researchers (Besser, 1995; Trant, 2000). The education and training of the contributors allow for a simplification in the coding systems because writers and coders may assume a minimal (albeit relatively high) level of audience sophistication. The results produced are for consideration by individuals and institutions steeped in the vocabulary, traditions, and skills of their specific discipline (Besser et al., 1996; Kling, 1999; Trant, 2000). The research from disciplines such as social science informatics, bioinformatics, and archive and museum informatics impacts our lives through the efforts of professionals and consultants who interpret and implement the advantages found through the research, using coding systems unique to each field. While IS contributes to medical informatics, bioinformatics, and museum and archival informatics, ISI has different characteristics. The differences, relevant to the information input and dissemination requirements for ISI, represent significant hurdles for the design of an ISI system, but also present unique opportunities to design ISI constructs that maximize the advantages offered by modern IS infrastructure. The need for the rapid dissemination of ISI results to the target audience produces public policy issues unique to the field of ISI.

Designing for Dissemination

Reliability of the Data

Reliable data are essential to both scientific and intelligence communities, and determining data reliability is a crucial factor in an intelligence system. Intelligence data may contain intentionally false information (disinformation), unintentionally false information (misinformation), correct data, and noise (information unassociated with the current problem, the correctness of which is immaterial to the current issue). Given the nature of security threats, it is possible that data that are of no value in a low-priority investigation may be lifesaving data of inestimable value in a higher priority threat investigation. The need to gather data that are useful but uncorroborated, and to filter false data and noise, makes ISI different in concept and implementation from other forms of informatics. In other disciplines, peer review's need for replicability of the experiment makes the verification of results a core, standard part of the discipline, not a function of the informatics system (Lorence & Ibrahim, 2003). In ISI, the data input is often raw intelligence or law enforcement data. The data represent phenomena that are not reproducible and are often unverifiable. For example, it is possible that the simple act of verifying the information may provide clues to an opponent that indicate a security leak exists within the organization, and signal that operations have been compromised. Indeed, a concern for intelligence agencies is the placement, by the targets of an investigation, of information designed to isolate the source of a security concern. Once false data are entered into an ISI system, analysts must decide which data are false and which are reliable.

Determination of Relevant Data

The nature of the problem dictates the types of data that are likely to be important to most research endeavors (Besser, 1995; Besser & Trant, 1994; Trant, 2000). For bioinformatics researchers determining the genome of a mouse, the type of data that will be needed is clearly defined in the question (Danchin, 2000). In archive and museum informatics, there is a finite set of information available concerning archived items in the inventory of even the largest museums (Trant, 2000). In other informatics disciplines, the type of data needed also limits the probable number of consumers of the information worldwide to a few hundred, or possibly a few thousand, individuals and organizations. Researchers can meet at professional and academic conferences to disseminate their findings. Researchers at these meetings are also able to define terms in the informatics system, generating a standard vocabulary. The intelligence community must deal with new organizations that may have no record of terrorist or criminal activity. Thus, there would be no a priori reason to monitor the activities of the group or individuals in the group. The same problem applies to splinter cells of existing groups. There may be no way of knowing that a splinter has formed or that new subcells are dangerous until after a violent incident. In ISI, then, there may be sufficient data for a postevent analysis to determine what happened, by whom, and why. Because each security situation may present novel data, these signals may not rise to the level of significance necessary to alert the intelligence community that there is a new and novel threat on the horizon prior to the event taking place.
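The interplay between source reliability and threat priority described earlier can be sketched as a simple filtering rule. The report structure, reliability grades, numeric weights, and thresholds below are illustrative assumptions for demonstration only, not part of any fielded ISI system:

```python
from dataclasses import dataclass

# Illustrative reliability grades mirroring the chapter's taxonomy
# (disinformation, misinformation, correct data). The numeric weights
# are invented for the sketch.
RELIABILITY_WEIGHT = {
    "corroborated": 1.0,     # confirmed by independent sources
    "uncorroborated": 0.5,   # plausible but unverified
    "suspect": 0.2,          # possible misinformation
    "disinformation": 0.0,   # assessed as intentionally false
}

@dataclass
class Report:
    summary: str
    reliability: str  # one of the keys above
    relevance: float  # 0.0-1.0, analyst-assigned fit to the current problem

def retain(report: Report, threat_priority: str) -> bool:
    """Keep a report for analysis if its weighted score clears a
    priority-dependent threshold. A high-priority investigation accepts
    weaker, uncorroborated data that a routine case would discard."""
    threshold = {"low": 0.5, "medium": 0.3, "high": 0.1}[threat_priority]
    score = RELIABILITY_WEIGHT[report.reliability] * report.relevance
    return score >= threshold

r = Report("sighting near transit hub", "uncorroborated", 0.4)
print(retain(r, "low"))   # discarded in a routine investigation
print(retain(r, "high"))  # retained when the threat priority is high
```

The point of the sketch is the asymmetry the chapter describes: the same uncorroborated datum is noise in a low-priority case but potentially lifesaving in a high-priority one, so the filter, not the datum, must change with context.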
Disseminating the Finding to the General Public

Scientific or medical information is not usually disseminated directly to the public. Typically, there is some level of interpretation of the results by experts. Dissemination may come from the family physician, an announcement from the American Medical Association, or press releases (Danchin, 2000; Database, 2003). It is necessary to test and validate any results, replicate experiments, and develop guidelines for new drugs. Thus, it may be years between the announcement of a finding and practical treatment. This interval allows the scientific community time to develop guidelines for disseminating the information to the affected segments of society. There is also time to fine-tune the information's delivery pathway. For ISI, time is critical in preventing tragedy. Information on the clothing being worn by a person of interest to the security community, or concerning a suspect's location, may not be applicable for very long. Information may have value for only a few hours, or possibly a few minutes. Thus, it is critical that ISI include a means for coding information in a manner that makes it easy to disseminate to the general public. Currently, this must be done even though there may be no uniform, tested, trusted, targeted, and institutionalized means of getting the information to the public for the particular threat identified. Granted, there is a means of broadcasting natural disaster warnings, and some localities in danger areas have supplemental warning systems for tornadoes or floods, but this is not specific enough to deliver warnings, sightings, and concerns to the public. Many states have now instituted an Amber Alert system to disseminate information concerning abducted children in a uniform, trusted, and institutionalized manner. One reason for institutionalizing the means of disseminating the information is to ensure public confidence that the warning is authentic. This confidence enables the public to trust the instructions on what to do once they are presented with the information (i.e., run and hide, stand and fight, or ignore the information because it does not pertain to them). Successful security and intelligence-gathering activities necessitate public involvement. Unlike other fields that benefit from informatics, ISI cannot rely solely on highly trained professionals for input. In seeking suspects or clues, security and intelligence professionals must rely on the general public to come forward with information.
Often, the individuals possessing the needed information may not know that they possess critical data. Unfortunately, mass distribution of information to the public may be suboptimal in the context of ISI. Radio and television broadcasts can target a large segment of the population, but are poor at
Intelligence and Security Informatics
targeting specific segments of the population or specific, limited geographic areas. Broadcasting to a large area the fact that a suspect is in a different part of town is inefficient. Widespread broadcasting of security alerts, warnings, or information requests risks desensitizing the population in areas that are not affected yet receive frequent alerts. Radio and television can provide thorough information, and do so in crisis situations such as Hurricane Katrina and the September 11th attacks. Time limitations on scheduled news programs, however, may result in superficial coverage of many issues. As with books, magazines, and newspapers, television currently provides no method for bidirectional communications. Radio shares the advantages and shortcomings of television. Table 1 summarizes current communications media and their characteristics vis-à-vis communicating security concerns to the public. Books, magazines, and newspapers provide thorough and in-depth information, but are neither very timely nor well targeted geographically. Pagers provide a means for ISI systems to target users with warnings, instructions, or information requests. Those with pagers could receive current, applicable security warnings and information in the same manner stock quotes are currently delivered. The timeliness of delivery is high if the pager is turned on. Some pagers are two-way pagers, permitting some bidirectional communications. Unfortunately, the limited memory of current pagers makes their thoroughness of information low. Cell phones provide the advantages of pagers, but their added memory and larger screens increase the thoroughness of information delivery. Cell phones also allow bidirectional communications. The GPS (global positioning system) capabilities currently being built into cell phones, combined with upgrades to cell phone towers to support E911 service, make it feasible to provide security warnings with a high degree of geographical specificity. E-mail provides rapid dissemination of information. The only way to achieve geographical targeting of message delivery is for the recipient to provide a location (e.g., city name or zip code). Unfortunately, there is no way to verify the current location of the person accessing the e-mail. Thus, a person on vacation may receive warnings targeted for his or her home or office, or miss warnings or information targeted for the location he or she currently occupies.
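The GPS-based geographical targeting described above can be sketched as a simple radius filter over last-known device positions. This is an illustrative sketch only; the subscriber IDs and coordinates are made up, and a deployed system would work from live E911 location data.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometers.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def recipients_in_area(subscribers, center, radius_km):
    # Select only devices whose last reported GPS fix falls inside the
    # warning area, avoiding broadcast to unaffected users.
    return [sid for sid, (lat, lon) in subscribers.items()
            if haversine_km(lat, lon, center[0], center[1]) <= radius_km]

# Illustrative phone positions (IDs and coordinates are hypothetical).
subscribers = {"phone-1": (38.90, -77.03), "phone-2": (39.95, -75.16)}
print(recipients_in_area(subscribers, center=(38.89, -77.04), radius_km=5))
```

Only the device a few kilometers from the warning center is selected; the distant one is left undisturbed, which is exactly the desensitization problem the chapter raises with wide-area broadcast.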
Table 1. Characteristics of communications media

Medium | Thoroughness of Information | Timeliness of Delivery | Geographical Specificity | Bi-directional Communications
Books | High | Low | Low | No
Magazines | High | Low | Low | No
Newspapers | High | Low | Low | No
Television | Moderate/High | High | Low | No
Radio | Moderate | High | Low | No
Pager | Low | High | Moderate | Possibly
Cell Phone | Moderate | High | High | Yes
E-mail | High | High | Moderate | Yes
Special Security Device | High | High | High | Yes

Future Trends

Specialized devices operating on an enhanced system similar to the weather warning radio network could enable security warnings to be timely, thorough, and targeted. These devices could provide widespread distribution of localized warnings based on ISI codes of the threat(s). Current technology allows wireless bidirectional communication. Biometrics or other systems could provide nonrepudiation of messages sent, reducing false messages or hoaxes during critical times.
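The hoax-resistance idea mentioned above can be sketched with message authentication. Note the hedge: the HMAC shown here uses a shared key and therefore provides authentication rather than true nonrepudiation (which requires asymmetric digital signatures or, as the chapter suggests, biometrics), but the verify-before-trust flow on the receiving device is the same. The key and message format are invented for the example.

```python
import hmac
import hashlib

# Assumed shared secret for the demo; a real deployment would use
# per-device keys or public-key signatures instead.
KEY = b"demo-key-not-for-production"

def sign(message: bytes) -> str:
    # Compute an authentication tag over the coded alert.
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # Constant-time comparison so receivers can reject forgeries.
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"THREAT-CODE:TORNADO;AREA:27601")
print(verify(b"THREAT-CODE:TORNADO;AREA:27601", tag))    # authentic alert
print(verify(b"THREAT-CODE:ALL-CLEAR;AREA:27601", tag))  # forged/hoax message
```

A receiving device that checks the tag before displaying a warning gives the public the confidence in authenticity that the chapter argues is a precondition for trusting the instructions.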
Conclusion

ISI has requirements that differ from other areas of informatics. The necessity to accept input from the general public, combined with a duty to disseminate the product of analysis to the public, makes the design of ISI systems more complex than that of their more narrowly targeted counterparts. When these dissemination needs are coupled with the time-sensitive nature of the information being distributed, traditional methods of informatics information dispersal can become undesirable. Table 2 depicts some of the characteristics of different disciplines of informatics, showing how ISI differs from other areas of informatics on most of the characteristics of interest. Those differences require that ISI incorporate in its initial design the needs for acquiring information from, and disseminating information to, the targeted components of the general public. Effectively, ISI necessitates a public information system infrastructure.
ISI also will not succeed if the results of analysis are disseminated only to professionals in the field. ISI systems must be planned, designed, and implemented so that the results of analysis can be delivered to those in affected areas. In situations where there may be a security risk, time is a critical factor. Thus, the delay in delivering information to TV or radio stations to be relayed to the public may not be acceptable. ISI systems can provide a means to deliver information to the public directly, more quickly than mass broadcast, and at a lower cost than is possible through current channels. The formulation stage of ISI represents an opportunity to develop tools, procedures, and technologies to ensure that individuals and organizations in the computing public are aware of the role they must play in national security. The characteristics of ISI illustrated in Table 2 make the need to communicate with the public unique to intelligence and security informatics. No other area of informatics has had its success depend upon the time-sensitive ability to receive information from, and pass information to, the general public. This added complexity can be handled by analyzing and planning ISI systems so that they are designed with dissemination in mind. Modern means of segmenting and targeting the population, made possible through GPS systems and specialized devices, make it possible to warn those in areas where trouble is suspected without causing undue concern to the rest of the population.
Table 2. Characteristics of informatics disciplines

Characteristic | Intelligence and Security Informatics | Bioinformatics | Medical Informatics | Archive and Museum Informatics
Reliability of Inputs | Low | High | High | High
Training of the Data Suppliers | Low | High | High | High
Input Subjectivity | High | Low | Low | Moderate
Data Noise | High | Low | Low | Low
Pre-Processing of Data | Low | High | Low | High
Immediacy of Information Dissemination | High | Low | Low | Low
Ambiguity of Needed Data | High | Low | Low | Low
Addition of Unforeseen Data Categories | High | Low | Low | High
Future Research Directions

Future research into ISI and information dissemination could explore the practical applications of real-time dissemination in a targeted manner. This research could survey current network administrators on the value of receiving targeted e-mails from hardware and software vendors on system vulnerabilities. This targeted information could be compared for effectiveness with generally distributed information such as food safety recalls. Researchers could also explore the penetration of existing warning systems and the degree to which the public utilizes current security or computer virus warnings. Computer virus warnings have been sounded for nearly two decades, and research could explore the degree to which users and organizations purchase and maintain their antivirus software. Research in this area could also examine the effectiveness of operating system or application software security vulnerability warnings, and the impact these warnings have on end-user updating of the affected software. A new trend in law enforcement is the posting of photos and videos of suspects or fugitives on news and video Web sites. An exploration of the success and effectiveness of such postings could help security and law enforcement officials determine an optimum mix of dissemination channels for finding suspects or fugitives. Future research could also explore the benefits of a centralized storehouse of such photos and videos on a law enforcement Web site. A large-scale study of Amber Alert systems could provide information on their effectiveness in finding lost or abducted children. This research could also explore the benefits of providing photos to cell phones within a specific area. A study comparing the effectiveness of general Amber Alerts against the effectiveness of NOAA severe storm and tornado warnings could also illuminate dissemination pathways that may target the optimum audience.
The coding of ISI information is also an area for future research. An exploration of the coding options that would allow information to be quickly transformed for public dissemination
would help reduce confusion and speed the flow of information to the public during critical situations. Systems to prevent the distribution of sensitive information, or allow for seamless redaction of critical categories of information, would speed the flow of information from security and law enforcement organizations to the public.
Terms and Definitions

Disinformation: Disinformation is information provided to security or intelligence sources that is intentionally false or intended to deceive.

Informatics: Informatics is a means of quantifying and organizing information in a manner that is meaningful to practitioners of a specific field.

Information Dissemination: Information dissemination is the distribution or broadcast of information.

Intelligence and Security Informatics (ISI): ISI is the application of information systems, databases, and data coding schemes to issues of intelligence gathering, security, and law enforcement.

Nonrepudiation: Nonrepudiation is verification that ensures that the sender cannot deny that he or she sent, or is responsible for sending, a message.

Noise: Noise is information that is unassociated with the current problem.

Relevance: Relevant data are data that are applicable to the current need or situation.

Reliability: Reliability refers to the presumption that information is accurate and can be trusted.
Chapter XXXVII
Practical Measures for Securing Government Networks Stephen K. Aikins University of South Florida, USA
Introduction

Modern network and Internet security vulnerabilities expose state and local government networks to numerous threats such as denial of service (DoS) attacks, computer viruses, unauthorized access, confidentiality breaches, and so forth. For example, in June 2005, the state of Delaware saw a spike of 141,000 instances of “suspicious activity” due to a variant of the Mytob worm, which could have brought the state’s network to its knees had appropriate steps not been taken (Jarrett, 2005; National Association of State Chief Information Officers [NASCIO], 2006b). On an average day, the state of Michigan blocks 22,059 spam e-mails, 21,702 e-mail viruses, 4,239 Web defacements, and six remote computer takeover attempts. Delaware fends off nearly 3,000 attempts at entering the state’s network daily (NASCIO, 2006b). Governments have the obligation to manage their information security risks by securing mission-critical internal resources, such as financial records and sensitive taxpayer information, on their networks. Consequently, public-sector information security officers are faced with the challenge to contain damage from compromised
systems, prevent internally launched and Internet-launched attacks, provide systems for logging and intrusion detection, and build frameworks for administrators to securely manage government networks (Oxlenhandler, 2003). This chapter discusses some of the cost-effective measures needed to address government agency information security vulnerabilities and related threats.
Background

At the 2005 midyear conference of the National Association of State Chief Information Officers, 89% of responding CIOs ranked security among their top three most important issues. However, information technology security initiatives often must compete with other IT resource demands that appear to provide more tangible and immediate business value. The funding and resource constraints facing many state and local governments make it imperative to design and implement an information security model that takes into account the necessary steps and control measures that provide basic information security in the most cost-effective manner. Implementing such a security model implies going beyond the
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
obvious items such as physical security, routers, firewalls, and antivirus software, and looking at several other important issues, including confidentiality, data integrity, content filtering, and incident response. In many ways, a government data network will be designed and constructed in a manner similar to any other business data network. However, unlike many private-sector organizations, most government agencies do not have the resources to implement expensive network architectures. A cost-effective means of securing a public network begins with the documentation of a security policy that reflects the goals of the agency, a realistic assessment of the risks faced by the agency, and identification of the resources (manpower, hardware, budget) that are available (Oxlenhandler, 2003). A state or local government agency can manage its network risks by balancing the need for security against cost effectiveness through information security decisions that restrict access to its network, protect against viruses, control network traffic, and provide information assurance within the confines of the available funding structure.
Managing Government Network Security Risks

Physical Security

As a basic means of securing its information resources, a state or local government agency should be able to restrict access to its network hardware. This includes having a data center with access cards to prevent the bad guys from having “unrestricted physical access to systems” (Microsoft Corporation, 2003), and having other physical devices such as routers located within a locked wiring closet. Routers play a key role by transferring and routing all the data communication across the network. Each router maintains a routing table and address resolution protocol (ARP) cache. ARP caches are mappings that correlate an IP (Internet protocol) address to a media access control (MAC) address (Ruth, Hudson, & Microsoft Corporation, 2003). The use of crypto-capable routers provides connectivity with the ability of session encryption, thereby preventing the snooping of network traffic. A government agency can use internal routers to segment the organization’s network into smaller networks for security reasons, such as isolating networks from each other, and for performance reasons, such as increasing available routes for data to travel and increasing bandwidth for users. In addition, border routers could be used to connect to the local government’s Internet service provider (Pastore, 2003). Beyond physical security for the routers, a government agency should maintain up-to-date network configuration documentation and ensure that logical configurations within the routers, firewalls, and servers are not open to attack and possible compromise. This can be achieved by keeping up to date with vendor patches and ensuring there are complex administrative passwords and settings, as hackers usually scan and probe networks for insecure factory-default passwords and settings (McClure, Scambray, & Kurtz, 2002; Ruth et al., 2003).

Antivirus Protection

One concern that every data network, both governmental and private, needs to address is antivirus protection. Over the past couple of years, much of the damage done to many networks has been caused by virus attacks. For antivirus protection, local or state government agencies can focus on a comprehensive solution that addresses the desktop, server, gateway, firewall, and e-mail servers. This defense-in-depth approach is essential because government agencies cannot trust the desktop solution 100%. Desktops must be kept up to date and protected at all times with an antivirus product that is managed centrally for control. As part of its comprehensive security management program, the state of Michigan installed software that filters spam and serves as the antivirus solution for incoming and outgoing e-mail. Through this solution, the state stopped monthly averages of 74,519 viruses in 2003, 657,271 viruses in 2004, and 181,238 viruses in 2005 (NASCIO, 2006a).
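The router-based segmentation discussed under physical security can be sketched with Python’s `ipaddress` module. The address block and department names below are hypothetical; the point is only that carving one agency block into isolated subnets is a simple, mechanical exercise.

```python
import ipaddress

# Assumed agency address block; real allocations will differ.
agency_net = ipaddress.ip_network("10.0.0.0/16")

# Split the /16 into 256 /24 segments, one per department or function,
# so internal routers can isolate traffic between them.
subnets = list(agency_net.subnets(new_prefix=24))

departments = {"finance": subnets[0], "public-safety": subnets[1]}
print(departments["finance"])  # the first /24 carved from the block
print(ipaddress.ip_address("10.0.1.7") in departments["public-safety"])
```

Membership tests like the last line are what an internal router’s access rules effectively perform when deciding whether traffic belongs inside a segment.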
Firewall

Local and state government agencies can use firewalls to filter the traffic between their networks and the Internet to address the numerous threats resulting from Web-related attacks and unauthorized access. In 2004, 53% of all fraud complaints reported to the Federal Trade Commission (FTC) were Internet related (e.g., spam, phishing, spyware, and other malware threats). In addition, nearly 5% of Americans have been victims of identity theft within the past 5 years, many of the cases Internet related, resulting in estimated financial losses of $48 billion per year (NASCIO, 2006a). It is generally agreed that a properly configured firewall should use a “default-deny rule” (Internet Security Systems, 2002; Nokia, 2002; Pastore, 2003): the agency locks down all ports and then opens ports as needed, enabling the firewall to block any unwanted traffic. Government agencies can consider two firewalls: a border or perimeter firewall to the external world, and an internal firewall to protect various entity servers from internal users in order to reduce or prevent internal data theft (Bridis, 2000; Cisco Systems, 2002; Frederick, 2001; St. Bernard Software, 2003). As a best practice, Michigan uses SurfControl and filtering systems to prevent system users from accessing Web sites that are deemed risks to the state’s network. These are used to prevent possible disclosure of confidential information, help ensure worker productivity by preventing access to sites that are not business related, and protect the network from the diversion of valuable bandwidth and from system infections (e.g., viruses, worms, Trojan programs, spyware, fraud and scam sites, etc.). The state’s SurfControl metrics show that approximately 80,000 connection attempts to spyware Web sites are blocked every month, resulting in an annual cost savings of approximately $3.3 million (NASCIO, 2006a).
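The default-deny posture described above reduces to a short decision rule: nothing is permitted unless an explicit allow entry opens it. The rule table below is illustrative, not a recommended production configuration.

```python
# Minimal sketch of a default-deny policy: every (protocol, port) pair
# is blocked unless it appears in the explicit allow list.
ALLOW_RULES = [
    ("tcp", 80),   # public Web server
    ("tcp", 443),  # HTTPS
    ("udp", 53),   # DNS
]

def permit(protocol: str, port: int) -> bool:
    # Anything not explicitly allowed falls through to deny.
    return (protocol, port) in ALLOW_RULES

print(permit("tcp", 443))  # explicitly opened
print(permit("tcp", 23))   # telnet: denied by default
```

Real firewalls match on far more fields (addresses, direction, state), but the ordering principle is the same: open only what is needed, deny the rest.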
Virtual private network (VPN) access can also be granted to remote users in an effort to protect the entity’s data. To minimize the risk of unauthorized intrusion, the VPN concentrator could be located outside the firewall; this helps avoid the compromise of network security through a tunnel connection. In addition, an intrusion detection system (IDS) or an intrusion prevention system (IPS) may be considered in the effort to provide defense-in-depth security for the data network. For large government entities with complex networks, it is a good idea to have an intrusion detection mechanism in place to quickly detect serious probing or threat activities (Frederick, 2001).
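One common IDS heuristic for the “serious probing” mentioned above is port-scan detection: a single source touching many distinct ports in a short window. The threshold and the sample connection log here are invented for illustration.

```python
from collections import defaultdict

# Illustrative threshold: a source probing this many distinct ports
# is flagged as a likely scanner.
SCAN_THRESHOLD = 10

def flag_scanners(connection_log):
    # connection_log is a list of (source_ip, destination_port) pairs.
    ports_by_src = defaultdict(set)
    for src_ip, dst_port in connection_log:
        ports_by_src[src_ip].add(dst_port)
    return sorted(ip for ip, ports in ports_by_src.items()
                  if len(ports) >= SCAN_THRESHOLD)

log = [("203.0.113.9", p) for p in range(20, 35)]     # 15 distinct ports
log += [("198.51.100.2", 80), ("198.51.100.2", 443)]  # normal traffic
print(flag_scanners(log))  # only the probing source is flagged
```

Production IDSs add time windows, signature matching, and stateful analysis, but this captures the shape of the alert-generating logic.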
Content Filtering

Content filtering can also be used by government agencies to block access to specific types of Web content that may be deemed inappropriate for the entities’ Internet users (Frederick, 2001; N2H2, 2003). A solution for content filtering can be either appliance based or software based. An appliance is an all-in-one “black box” that contains its own software and databases. A software solution would need a separate server running a mainstream operating system. What may be applicable in the local or state government environment can be determined by the size of the network, bandwidth availability, vendor support, degree of testing, and so forth. If implemented properly, content filtering systems have the potential to yield significant benefits to state and local government agencies. To address Internet security vulnerabilities, Michigan installed integrated e-mail and spam filtering software on the state’s gateway. This solution filters spam and serves as the state’s antivirus solution for incoming and outgoing e-mail, blocking an average of 2.6 million spam e-mails per month.
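The core of a software-based content filter is a blocklist lookup on the requested host. A minimal sketch, with entirely made-up blocklist domains:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of risky domains (spyware, scams, etc.).
BLOCKLIST = {"spyware-example.test", "scam-example.test"}

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Match the listed domain itself and any subdomain of it.
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

print(is_blocked("http://ads.spyware-example.test/track"))  # blocked
print(is_blocked("http://www.example.gov/services"))        # allowed
```

Commercial products like the SurfControl deployment described above layer category databases and dynamic classification on top, but the per-request decision is essentially this suffix match.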
Fundamentals of Data Security

For a government agency to have adequate and effective information security, it must adhere to the three cornerstones of information assurance that constitute the fundamentals of data security.
These are confidentiality, data integrity, and system availability.

•	System Availability: The government agency should ensure that its servers are available for the required amount of uptime to meet or exceed the service-level agreement (SLA). To accomplish this, it can purchase or lease servers that have hot-swappable redundant power supplies, mirror its system disk when installing its operating system, and add redundancy to its data drives (Hayden, 2000; Steel, 2002). These kinds of steps reduce the single points of failure that could cause a server to become unavailable. The ability to swap out a failed power supply or failed disk drive without turning off the server is very desirable. Having an uninterruptible power supply (UPS) installed in a data center or in front of a critical application server can also increase the availability of the entity’s network to end users (Steel, 2002). The government entity can also consider redundant network systems, with additional routers and switches available in a fail-over mode in the event of a hardware failure or a DoS attack.

•	Confidentiality: The public expects that their government will keep data such as taxpayer information and other information pertaining to their dealings with government very secure. The records of students in public schools and patient information in hospitals, for example, are all subject to privacy laws. To this end, there is a requirement in any government or its affiliated institutions that data be kept secure and that measures be taken to ensure that this is the case. Local and state government agencies could install auditing or logging software to detect, log, and store any adverse activity for later forensics. In the United States, the Family Educational Rights and Privacy Act (FERPA) lays out the rights and permissions regarding the disclosure of student information (U.S. Department of Education, 2003). The Health Insurance Portability and Accountability Act (HIPAA) requires health organizations, including public health organizations, to take certain measures to protect the health information of patients.

•	Data Integrity: Every government entity has the obligation to protect its data from unauthorized modification in order to ensure the accuracy of the information it provides to the public. Data integrity involves protecting the data from defacement and replacement (NIST, 2001). All documents on the entity’s system should be unalterable by untrusted sources. To accomplish this, file integrity should be maintained and examined on a regular basis. The number of trusted sources with the ability to publish content should be limited to the fewest possible, and the publishing mechanism should be separate from the delivery mechanism (Herzorg et al., 2001). To enhance integrity, access to the entity’s data should follow a least-privilege model. All public access should be restricted and enforced at multiple layers. For example, separate user IDs and passwords can be required when logging on to the network and onto specific applications on the network.
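The regular file-integrity examination recommended under Data Integrity can be sketched with cryptographic digests: record a SHA-256 hash of each published document, then recompute and compare on the next audit pass. The temporary file below stands in for a published page.

```python
import hashlib
import os
import tempfile

def digest(path: str) -> str:
    # Hash the file in chunks so large documents need little memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demonstration with a stand-in for a published document.
fd, path = tempfile.mkstemp()
os.write(fd, b"official notice v1")
os.close(fd)

baseline = digest(path)               # recorded at publication time
ok_before = digest(path) == baseline  # audit pass: unchanged

with open(path, "wb") as f:           # simulate a defacement
    f.write(b"defaced content")
ok_after = digest(path) == baseline   # audit pass: tampering detected

os.remove(path)
print(ok_before, ok_after)
```

Storing the baseline digests on a separate, write-protected system is what makes the comparison trustworthy, consistent with the chapter’s advice to separate publishing from delivery.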
Incident Response

To be able to detect, track, and respond effectively to all computer incidents and alerts, state and local governments and related entities could implement a computer and human reporting system to monitor incidents. The computer system tracks all alerts as they occur and classifies them initially as high, medium, or low priority. An alert such as an automated scan of the file transfer protocol (FTP) port would generate a low-priority alert, whereas a DoS signature would generate a high-priority alert (Frederick, 2001). To ensure effectiveness, all alerts should be reviewed daily by security monitoring personnel, the number of whom will depend on the size of the entity and the complexity of the network, in order to investigate, take appropriate action, and document the entire process for later evidentiary and reference usage.
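The initial classification step described above can be sketched as a lookup-and-sort triage queue. The signature names and their priority assignments are illustrative, not an official scheme.

```python
# Hypothetical mapping from alert signature to initial priority.
PRIORITY = {
    "ftp-port-scan": "low",
    "failed-login-burst": "medium",
    "dos-signature": "high",
}

def triage(alerts):
    # Unknown signatures default to medium so a human still reviews them.
    queue = [(PRIORITY.get(sig, "medium"), sig) for sig in alerts]
    order = {"high": 0, "medium": 1, "low": 2}
    return sorted(queue, key=lambda pair: order[pair[0]])

print(triage(["ftp-port-scan", "dos-signature", "odd-new-event"]))
```

Sorting puts the DoS signature at the head of the daily review queue, which mirrors the high/medium/low workflow the chapter describes for monitoring personnel.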
Future Trends

The existing privacy laws and citizen expectations will continue to oblige public-sector agencies to take appropriate steps to address network security vulnerabilities and threats. However, the public sector is constrained by limited financial resources, the need to justify information security expenses, and inadequate technical personnel to deal with these challenges. Consequently, government information security officers at all levels will be motivated to find practical ways to share information about best practices and new security solutions to help secure agency data in the most cost-effective manner. Information sharing about best practices will be essential given, for example, the interconnected relationships between federal and state information technology along vertical lines of business such as health and law enforcement. In order to secure the needed funding to provide information security solutions, state and local government information security officers will have to analyze their risks and document business cases as a common practice in order to justify the needed funds. By understanding their environment and conducting IT risk assessments to determine their agencies’ information security needs, they will be able to formulate their strategic business intent, which will likely involve the protection of citizen privacy and the agencies’ information resources. As new security vulnerabilities emerge and regulatory requirements abound, public entities will have to explore ways to attract and retain qualified personnel, perform effective requirement analysis, and document prudent IT security business cases outlining the problems to be addressed, the cost-benefit analysis performed, and how the benefits serve the overall strategic interest of their agencies.
With effective requirements and cost-benefit analysis, public agencies should be able to acquire information security solutions that meet their needs and also take into account their financial constraints. To this end, it will be beneficial to request governmental discounts when approaching vendors for information security solutions. Vendors who desire a presence in the state or municipality will often provide sizable discounts in an effort to gain a foothold within the community.
Conclusion

The core of government agency information security management is the ability of information security officers to elicit and analyze measurable levels of risk and apply suitable countermeasures that will eliminate vulnerabilities and mitigate exposure. Implementing adequate safeguards to protect the interdependencies of agency assets requires a comprehensive understanding of the threats that exploit vulnerabilities, the calculable probability that the threats will occur, and the damage and financial impact a security breach will have on governmental effectiveness. Securing public agencies’ information resources can help ensure the integrity of the data utilized for administrative decision making, the confidentiality and privacy of citizen information, and the availability of the agencies’ systems for the required amount of uptime to meet or exceed the SLA. In this regard, the ability of public agency officials to make information security decisions that restrict access to agency networks, protect against viruses, control network traffic, and provide information assurance within the confines of the available funding structure is crucial.
Practical Measures for Securing Government Networks

Future Research Directions

Although numerous studies have examined e-government, very little attention has been paid to information security threats at the state and local government levels and their impact on the effective discharge of government responsibilities. An empirical investigation into the degree of state and local government information security vulnerabilities, threats, and exploits, and their relationship to agencies' ability to accomplish their missions and effectively manage emergencies, is warranted. Homeland Security Presidential Directive HSPD-7 (2003) declares, "It is the policy of the United States to enhance the protection of our Nation's critical infrastructure and key resources against terrorist acts that could…undermine State and local government capacities to maintain order and to deliver minimum essential public services." It also designates "emergency services," most of which are delivered by state and local authorities, as being among the nation's "critical infrastructure sectors." Considering that the nation's information infrastructure is constantly under attack, a study of the interrelationships between information security vulnerabilities, threats, and exploits and the ability to provide essential public services and maintain emergency preparedness would be beneficial.

This chapter outlined the steps that a local government entity can take to implement a network that balances the need for security with cost effectiveness. The concept of an appropriate balance between information security needs and cost effectiveness is relatively new in the public sector and is worthy of empirical investigation. For example, under what conditions might a state or local government agency become preoccupied with information security threats and allocate resources that would otherwise be deployed for competing social services? How could such resource allocations be justified? How and when should a government agency determine the equilibrium between information security needs and cost effectiveness? What are the policy implications of information security management strategies that prioritize addressing threats over cost effectiveness, and vice versa? These and many other questions could be answered through empirical studies. The extent of citizen concerns about privacy, and citizens' expectation that governments secure sensitive personal information and agency mission-critical data, are also worthy of empirical investigation.
As part of such empirical investigation, the policies, protocols, and security management strategies needed to balance citizen privacy protection with effective use of governmental information should be explored. In addition, the role of budgetary resources, agency size, population, and demographics, as well as intergovernmental relations and related fund transfers, in enhancing or constraining public agencies' ability to address information security vulnerabilities also merits study.
References

Bridis, T. (2000). Congressional panel says no to filters. The Wall Street Journal Online. Retrieved August 23, 2004, from http://zdnet.com/2100-11_2524852.html

Cisco Systems. (2002). Design guide: Cisco IOS firewall. Retrieved February 27, 2005, from http://www.cisco.com/warp/public/cc/pd/iosw/prodlit/firew_dg.htm

Cyber-security. (2006). 2006 Michigan IT strategic plan. Retrieved from http://www.michigan.gov/documents/AppendixF_149547_7.pdf

Frederick, K. (2001). Network monitoring for intrusion detection. SecurityFocus. Retrieved March 3, 2005, from http://online.securityfocus.com/infocus/1220

Hayden, M. (2000). National information assurance certification and accreditation process. NSA. Retrieved December 11, 2005, from http://www.nstissc.gov/Assets/pdf/nstissi_1000.pdf

Herzog, P., et al. (2001). The open source security testing methodology manual, v1.5. IdeaHamster. Retrieved May 6, 2005, from http://www.ideahamster.org/osstmm.htm

Homeland security presidential directive, HSPD-7. (2003). Retrieved March 6, 2007, from http://www.whitehouse.gov/news/releases/2003/12/20031217-5.html

Internet Security Systems. (2002). RealSecure network protection. Retrieved January 19, 2004, from http://www.iss.net/products_services/enterprise_protection/rsnetwork/index.php

Jarrett, T. (2005). Securing cyberspace: Efforts to protect national information infrastructures continue to face challenges. Testimony presented before the Subcommittee on Federal Financial Management, Government Information and International Security of the Senate Committee on Homeland Security and Governmental Affairs.

McClure, S., Scambray, J., & Kurtz, G. (2002). Network devices. In Hacking exposed: Network security secrets & solutions (4th ed., chap. 9). McGraw-Hill Osborne Media.

Microsoft Corporation. (2003). The ten immutable laws of security. Microsoft Security Response Center. Retrieved November 21, 2005, from http://www.microsoft.com/technet/treeview/default.asp?url=/technet/columns/security/essays/10imlaws.asp

N2H2. (2003). About Internet filtering. Retrieved January 27, 2006, from http://www.filteringinfo.org/about.php

National Association of State Chief Information Officers (NASCIO). (2006a). Appendix F: Securing the state of Michigan information technology resources. In Findings from NASCIO's strategic cyber security survey.

National Association of State Chief Information Officers (NASCIO). (2006b). The IT security business case: Sustainable funding to manage the risk. Retrieved March 4, 2007, from http://www.nascio.org/publications/documents/NASCIO-IT_Security_Business_Case.pdf

NIST. (2001). International standard ISO/IEC 17799:2000. Retrieved September 9, 2005, from http://csrc.nist.gov/publications/secpubs/otherpubs/reviso-fag.pdf

Nokia. (2002). Firewall solutions, Nokia and Check Point. Retrieved June 20, 2005, from http://www.nokia.com/securitysolutions/network/firewall/html

Oxlenhandler, D. (2003). Designing a secure local area network. SANS Institute.

Pastore, M. (2003). Infrastructure and connectivity. In Security+ study guide (chap. 3). Sybex.

Ruth, A., Hudson, K., & Microsoft Corporation. (2003). Network infrastructure security. In Security+ certification training kit (chap. 4). Microsoft Press.

St. Bernard Software. (2003). The Internet access management solution of choice used in education and libraries, iPrism v3.4. Retrieved February 6, 2004, from http://www.stbernard.com/products/docs/ip_education.ppt

Steel, C. (2002). Case study of a multi-site redundant Web hosting environment. IEEE Internet Computing.

U.S. Department of Education. (2003). Family Educational Rights and Privacy Act (FERPA). Retrieved March 7, 2006, from http://www.ed.gov/policy/gen/guid/fpco/ferpa/index.html
Further Reading

Abbott, M., & Peterson, L. (1993). Increasing network throughput by integrating protocol layers. IEEE/ACM Transactions on Networking, 1(5), 600-610.

Anderson, T., et al. (1996). Serverless network file systems. Proceedings of the Fifteenth Symposium on Operating Systems Principles (pp. 109-126).

Andreas, E. F. (2002). On the necessity of management of information security: The standard ISO17799 as international basis. Retrieved January 27, 2007, from http://noweco.com/wp_iso17799e.htm

Arbaugh, W. A. (2004). A patch in nine saves time. IEEE Computer, pp. 82-83.

Atkinson, R. (1995). Security architecture for the Internet protocol (IETF RFC 1825).

Austin, R. D., & Darby, C. A. R. (2003). The myth of secure computing. Harvard Business Review, 81(6), 120-126.

Authentication policy for federal agencies. (2003). Cisco Press.

Bhagyavati & Hicks, G. (2003). A basic security plan for a generic organization. Journal of Computing Sciences in Colleges, 19(1), 248-256.

Birman, K. P., et al. (1996). Software for reliable networks. Scientific American, 274(5), 64-69.

Blunk, L., & Vollbrecht, J. (1998). PPP extensible authentication protocol (EAP) (IETF RFC 2284).

Braun, T., & Diot, C. (1995). Protocol implementation using integrated layer processing. Proceedings of SIGCOMM-95.

Butler, D. (2003). Experts fear network paralysis as computer worm blasts Internet. Nature.

Buzzard, K. (2003). Adequate security: What exactly do you mean? Computer Law and Security Report, 19(5), 406-410.

Davies, D. W., & Price, W. L. (1984). Security for computer networks. Wiley.

David, J. (2002). Policy enforcement in the workplace. Computers and Security, 21(6), 506-513.

Gripp, F. J., & Siegel, J. G. (2001). Security issues on the Internet. The CPA Journal, 7(10), 64-68.

Horton, M., & Mugge, C. (2003). Network security: Portable reference. McGraw-Hill/Osborne.

Information technology security evaluation criteria (ITSEC), provisional harmonized criteria. (1991). Commission of the European Communities, DG XIII.

IP denial of service attacks, teardrop and land attacks (CERT Advisory CA-1997-28). (n.d.). Retrieved January 29, 2007, from http://cert.org/advisories/CA-1997-28.html

Lane, V. P. (1985). Security of computer based information systems. Macmillan.

Malik, S. (2003). Network security principles and practices. Cisco Press.

Mason, A. G. (2002). Cisco secure virtual private networks. Cisco Press.

Massey, J. L. (1988). An introduction to contemporary cryptology. Proceedings of the IEEE, 76(5).

Meadows, C. (2000, October). A framework for denial of service analysis. Proceedings of the Third Information Survivability Workshop, ISW-2000.

Miller, S. P., et al. (1998). Kerberos authentication and authorization system. Massachusetts Institute of Technology.

Molva, R., et al. (n.d.). KryptoKnight authentication and key distribution system. Proceedings of ESORICS.

National Association of State Chief Information Officers (NASCIO). (2006a). A current view of the state CISO: A national survey assessment. Retrieved March 4, 2007, from http://www.nascio.org/publications/documents/NASCIO-CISOsurveyReport.pdf

National Association of State Chief Information Officers (NASCIO). (2006b). Findings from strategic CyberSecurity survey. Retrieved March 4, 2007, from http://www.nascio.org/publications/documents/NASCIO-CyberSec_Survey_Findings.pdf

National Security Agency. (2002). Router security configuration guide by System and Network Attack Center (SNAC) (Version 1.0k).

NetScreen Technologies. (2003). Layered security: Re-establishing the trusted network (White paper).

NIST. (2002). Procedures for handling security patches. Retrieved February 6, 2007, from http://csrc.nist.gov/publications/nistpubs/800-40/sp800-40.pdf

Nortel Networks. (2003). Unified security architecture for enterprise network security (White Paper No. NN102060-0902).

Nyanchama, M., & Stafania, M. (2003). Analyzing enterprise network vulnerabilities. Information Systems Security, pp. 44-49.

Pall, G., & Zorn, G. (2001). Microsoft point-to-point encryption protocol (MPPE) (IETF RFC 3078).

TCP SYN flooding and IP spoofing attacks (CERT Advisory CA-1996-21). (n.d.). Retrieved May 14, 2006, from http://www.cert.org/advisories/CA-1996-21.html

VeriSign. (2004). Securing wireless local area networks, understanding network vulnerabilities. Retrieved February 19, 2007, from http://www.verisign.com/resources/wp/index.html#networksSecurity
Terms and Definitions

Antivirus Program: A utility that searches a hard disk for viruses and removes any that are found. Most antivirus programs include an automatic update feature that downloads profiles of new viruses so the program can check for them as soon as they are discovered.

Availability: The property that a given resource will be usable during a given period; the state that exists when required automated services or system data can be obtained within an acceptable period, at a level and in the form the system user wants. Availability is one of the three goals of a security program.

Confidentiality: The concept of ensuring that data are disclosed only to authorized subjects (e.g., individuals, processes). Confidentiality protects data from unauthorized disclosure; it may involve protecting all user data or selectively protecting certain fields. Traffic-flow confidentiality may also be provided, protecting the information that could be derived from a traffic analysis. Confidentiality is one of the three goals of a security program.

Data Integrity: The assurance that data or voice transmissions can be maintained in an unimpaired state and are not subject to unauthorized modification, whether that modification is intentional or inadvertent. Integrity protection guards against modification, insertion, deletion, or replay of data. Data integrity is one of the three goals of a security program.

Denial of Service (DoS) Attack: A type of computer attack that prevents any part of a networked system from functioning in accordance with its intended purpose, typically by flooding the network with useless traffic until it is brought to its knees.

Firewall: A system designed to prevent unauthorized access to or from a network. Firewalls can be implemented in hardware, software, or a combination of both. Common firewall techniques include packet filters, application gateways, circuit-level gateways, and proxy servers.

IP Address: An identifier for a computer or device on a TCP/IP (transmission-control protocol/Internet protocol) network. Networks using the TCP/IP protocol route messages based on the IP address of the destination. An IP address is a 32-bit numeric address written as four numbers separated by periods.

Router: A device used to link two networks. Routers play a key role by transferring and routing data communication across the network. Each router maintains a routing table and an address resolution protocol (ARP) cache, keeping a record of network node addresses and current network status.

Service Level Agreement (SLA): A service contract between a network service provider and a subscriber guaranteeing a particular service's quality characteristics. These agreements are typically concerned with network availability and data delivery reliability.

Virus: A program or piece of code that is loaded onto a computer without the user's knowledge and infects existing programs by inserting new code. Viruses can replicate themselves and are dangerous because they can quickly consume all available memory and bring a system to a halt.
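The IP Address definition above notes that a dotted-quad IPv4 address is simply a 32-bit number written as four octets. The standard-library sketch below makes that concrete; the sample address is arbitrary.

```python
import ipaddress

# "192.168.1.10" is the 32-bit integer 192*2**24 + 168*2**16 + 1*2**8 + 10.
addr = ipaddress.IPv4Address("192.168.1.10")
as_int = int(addr)                              # 3232235786
round_trip = ipaddress.IPv4Address(as_int)      # back to dotted-quad form

print(as_int)        # 3232235786
print(round_trip)    # 192.168.1.10
```

Routers compare these 32-bit values against routing-table prefixes when deciding where to forward a packet, which is why the numeric and dotted-quad forms are interchangeable.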
Chapter XXXVIII
Digital Convergence and Cybersecurity Policy Anthony W. Buenger, Jr. National Defense University, USA
Introduction

Digital convergence constitutes the full realization of the Information Age and provides the foundation to link cultural, personal, business, governmental, and economic affairs into a rapidly expanding global digital world called cyberspace. However, this linking of people around the globe is challenging the government to work actively with private industry to ensure that its critical infrastructures and associated information are adequately protected. The purpose of this chapter is to explain how digital convergence is affecting the public sector and why a cybersecurity policy must actively involve both the public and private sectors.

Digital convergence has made incredible inroads thanks to rapidly developing technologies such as the ubiquitous Internet, seemingly endless bandwidth (including wireless), and rapid advances in computer processing power, which together are responsible for processing, transporting, and storing digital information throughout cyberspace. Moreover, these technologies have brought about the collision of three colossal industrial segments within the private sector: (a) computing, (b) consumer electronics, and (c) telecommunications providers. These segments now provide a multitude of compatible services via various digital devices (Figure 1). Without a doubt, the explosion of digital convergence has produced a flourishing multimedia, multidevice, and multitasking environment (Baker & Green, 2004).

A significant impact of a converged society is the empowerment of individuals (consumers) and organizations to collaborate and compete on a global scale. Most importantly, however, these highly mobile and perpetually connected consumers put information at greater risk, because they can access it outside its traditionally protected network boundaries in an environment where that information is increasingly vital to the nation's critical infrastructure assets. The government must be able to effectively secure the information flowing throughout cyberspace.
Background

The idea of digital convergence is not new, but it has grown exponentially over the last 15 to 20 years as digitized capabilities became widely available, affordable, and usable. A significant milestone on the path to digital convergence can be traced back to the late 1970s, when Steve Jobs and Bill Gates gave up their analog typewriters for the digitized personal computer (PC). Around the same time, in 1978, James Martin in his book The Wired Society envisioned the integration of mobile wireless phones, the Internet, small powerful PCs, e-mail, telecommuting, and portable digital devices into a technically savvy society (Martin, 1978). Martin's vision is truly remarkable considering that PC and software gurus like Steve Jobs and Bill Gates may still have been pecking away at their analog typewriters in 1978.

Since then, mobile, digital, multimedia devices have invaded the home, office, and academic environments. The maturation of integrated circuit-board technology allowed for the rapid development of smaller computing devices. PC processing power grew and became easier to produce, manufacturing costs dropped, availability increased, and consumer prices subsequently fell to commodity levels. The ubiquitous presence of the Internet, along with robust bandwidth and mobile devices, now links consumers, employees, and corporations in cyberspace, enabling them to work outside the traditional workplace. The term cyberspace, although overused and not always understood, simply means being connected to the digital global world. The virtual world of cyberspace, fostered by the Internet and collaborative capabilities, gives everyone a feeling of connectedness and provides ubiquitous online capabilities that allow employees to work from virtually anywhere at any time.

Figure 1. Digital convergence environment
[Figure 1 depicts the digitization of the three industrial sectors (computing, consumer electronics, and telecommunications providers) converging through technologies such as bandwidth, the Internet, and processing.]
Rapidly advancing digital-based technologies continue to pave the way for integrated multimedia on a single portable device. Moreover, with these devices, individuals are empowered to collaborate and compete globally on their own behalf more than ever before (Friedman, 2005). Cyberspace is no longer a mutually exclusive computer-led, television-led, Internet-led, or cell-phone-led race to the finish line. Up until very recently, cell phones and personal digital assistants (PDAs) have led the convergence push by incorporating digital cameras, Web browsers, and radio receivers into conveniently small devices. However, the playing field is muddled as these multimedia capabilities are incorporated into digital platforms that can seamlessly collaborate with each other any time of day or night. As a result, individuals have endless access to the information they need, whether at home or in the workplace.
Impacts

Digital convergence has far-reaching implications and is directly tied to the security of the nation's critical infrastructures. It has changed the nature of the industry and how consumers use information that is accessible 24 hours per day (Huston, 1998). However, the dependence of both the private and public sectors on digital information makes it especially critical to protect that information as it is processed, stored, and transported throughout cyberspace. In particular, the private development of the Internet has accelerated the security risks to the nation's critical infrastructure assets, including the transportation, oil, power, energy, health, and information infrastructures. The information infrastructure, of which the Internet is one element and which, along with digitized capabilities, provides the foundation for cyberspace, is traditionally privately owned and operated. Both the public and private sectors depend on a reliable Internet for their mission-critical and vital life functions, and both have a responsibility to ensure that cyberspace is a safe and trusted environment (George Mason University, 2006). Both sectors must ensure that strategic policies, procedures, and guidelines are in place and enforced to secure the nation's information infrastructure, which resides in cyberspace.

Significant disruption within cyberspace could wreak havoc on the nation's critical infrastructures and, subsequently, on the lives of many Americans. Consequently, it is critical for the nation to protect its information infrastructure. Relying on the public sector alone or the private sector alone will not bring about the coordinated approach needed to protect the nation's critical infrastructures. Table 1 lists the nation's critical infrastructures and the agencies responsible for protecting each.
Impact to the Private Sector

Today's consumers are not afraid of information technology and are very comfortable using it. More importantly, these consumers bring this state of mind to the workforce with them and, in turn, are changing the business model of today's companies (Huston, 2002). The face of digital convergence looks much like an ordinary household appliance or utility providing easily accessible services. In the home, for example, convergence has moved digital capabilities from the den, to the living room, to every teenager's bedroom, to schools, and now to the workplace. Many consumers feel it is no longer practical to walk to the living room to watch television; they feel they should be able to watch television from anywhere in the house, most notably from a mobile device, whether a PC or another personal handheld device. In the workplace, employees are mobile and now have the capability to work outside the physical walls of their company, including from home. As these consumer-employees become accustomed to ready access from virtually anywhere, their expectations increase and force a change in their company's business model. Conversely, these converged digital capabilities have moved from the workplace to the home as well.
Table 1. National critical infrastructures (Adapted from Department of Homeland Security [DHS], 2006, p. 20)

Sector-Specific Agency | Critical Infrastructure
Department of Agriculture | Agriculture and Food
Department of Health and Human Services | Disease Control and Prevention
Department of Defense | Defense Industrial Base
Department of Energy | Electrical
Department of the Interior | National Monuments and Icons
Department of the Treasury | Banking and Finance
Environmental Protection Agency | Drinking Water and Water Treatment Systems
Department of Homeland Security | Chemical; Commercial Facilities; Dams; Emergency Services; Commercial Nuclear Reactors, Materials, Waste
Office of Cyber Security and Telecommunications | Information Technology; Telecommunications
Transportation Security Administration | Postal and Shipping
Immigration and Customs Enforcement / Federal Protective Service | Government Facilities
Individuals, as tech-savvy consumers, are empowered more than ever before and have made cyberspace a global social phenomenon. Individuals are always connected and understand how to use the free marketplace to their advantage (Friedman, 2005). They have an ever-present feeling that they are living in cyberspace and no longer acknowledge the physical barriers between work and home, as digital convergence has blurred the distinctive boundaries between the consumer and business environments (Covell, 2004). Employees no longer work strictly in the office and consumers no longer play strictly at home, which raises the question, "What is the difference between the consumer and the employee?" It appears that the traditional business-to-business (B2B), business-to-employee (B2E), and business-to-consumer (B2C) components of the communications world are converging as well. In fact, Covell argues that digital convergence has borne a new component called the creator-collaborator. Interaction with customers, suppliers, and employees has changed to the point that organizations must create new opportunities to improve (e.g., cost reduction, performance, customer satisfaction). Furthermore, the private sector must understand the fundamental changes taking place in an ever-connected environment where the consumer as creator-collaborator has taken a firm foothold within the organization. Understanding, and being able to track, the impact of the commingling of business and consumer-generated content is critical (PriceWaterhouseCoopers, 2006). Digital convergence introduces a fundamental risk for current business models still operating under paradigms built on controlling information consumption through vertical organizational structures and predefined telecommunications capabilities or traditional devices such as the PC (PriceWaterhouseCoopers, 2006). Additionally, protecting information as it flows throughout cyberspace is an ongoing battle.
Perhaps the most significant challenge for the private sector, besides information crime, lies within its own protective network boundaries. The consumer-employee is increasingly roaming outside the organization's traditional security boundaries, possibly exposing critical information to additional vulnerabilities as data are taken beyond the organization's protective firewalls and antivirus capabilities. As the consumer-employee becomes accustomed to ready access from virtually anywhere, expectations increase and force a change in the organization's business model. Businesses within the private sector must be ready to tackle the challenges of a changed work environment in which an individual or small company has as much competitive power as a large corporation (Friedman, 2005). Additionally, since the nation's critical infrastructures rely on cyberspace, it is critical that the security of information is maintained as consumers access it from practically anywhere in the world at any time.
The Impact to the Public Sector

The private sector owns and operates most of the nation's critical infrastructure assets, but the government is expected to develop strategic policies to protect those assets. The information infrastructure consists of the three industrial segments previously discussed. The challenge is to find a balance between the private (i.e., business) and public (i.e., government) sectors in protecting the nation's critical infrastructure assets, which are put at risk by the changing geopolitical environment and by tech-savvy consumers working in both sectors.

The advent of digital convergence and the rapid growth of the Internet have made cyberspace security a national security challenge. The nation's critical infrastructures drive the essential functions on which its society depends, yet people tend to ignore this fact until it is too late (George Mason University, 2006). The private sector tends to balk at spending funds on information security, not just from a profit perspective, but also from the perspective that the government is most responsible for national strategy and has the most clout to protect the infrastructures at a national level (Lewis & Darken, 2005). However, this could not be further from the truth: the private sector has just as much at stake as the public sector does when it comes to protecting information traversing cyberspace.

In particular, the telecommunications, computing, and consumer-electronics industries operate under a long history of traditional government policy. The government still regulates private industry via extensive legislation, such as the Telecommunications Act of 1996 and the Sarbanes-Oxley Act of 2002. In other words, the government exercises considerable control over the critical infrastructure assets that the private sector depends upon for its very existence. For example, the Telecommunications Act of 1996 readjusted government control over the telecommunications companies. Telecommunications companies such as Verizon and AT&T had to pay billions of dollars in license fees to the national government for the right to broadcast cellular telephone signals (Lewis & Darken, 2005). Additionally, state governments subsequently set the actual prices for telephone service, which left very little profit for the telecommunications sector.

While the federal government historically exercised strong control over the telecommunications industry, it still has a great deal of work to do when it comes to regulating cybersecurity for critical infrastructure assets. The DHS has responsibility for protecting the nation's cyberspace and interdependent critical infrastructures. However, the government has been slow to develop and enforce forward-looking cyberspace strategies that directly affect the telecommunications, computing, and consumer-electronics industries. While other critical infrastructures, such as the electrical power infrastructure, have information security policies and standards, the information infrastructure as an entity does not. For example, the Department of Energy and the Federal Energy Regulatory Commission (FERC) regulate the electrical power infrastructure.
On the other hand, the Federal Communications Commission does not regulate the entire information infrastructure, which includes the telecommunications, computing, and consumer-electronics industries. The cable TV market is one example.
Although the private sector appears to be ahead of the public sector and associated government policy, it does not mean all is lost. The United States has a rich history of public policy catching up with rapid private-sector development, such as with the railroad industry during the American Civil War or the early growth of the telecommunications infrastructure during the first half of the 20th century. Accordingly, the information infrastructure has followed suit.
Cyberspace Security Policy

Consumers expect the Internet to be affordable, accessible, and ubiquitous. The purpose of public policy is to protect the people's interests for the common good; in this case, public policy must recognize the Internet's relevance as a national resource (Department of Homeland Security, 2006). In particular, a cyberspace security policy must provide guidance that reflects the current nature of digital convergence as well as a vision of its future. According to Huston (2002), it would not be appropriate to continue developing policies based solely on past policies.

The National Strategy to Secure Cyberspace (NSSC), published in February 2003, articulates a vision for the nation's need to secure cyberspace for both the public and private sectors (White House, 2003). DHS has a daunting task as it continues to develop a cyberspace security policy that covers both sectors. The NSSC outlines top-level objectives but does not specifically address the challenges and action-oriented plans for securing information in a digitally converged environment (McAdams, 2006). As a whole, the strategy is reactive or responsive in nature, and many of its much-needed specifics were omitted at the prodding of the private sector (McAdams, 2006).

What appears to be missing in the NSSC is covered in a subsequent public policy published in January 2006. The National Infrastructure Protection Plan (NIPP) builds on the basic elements of
Digital Convergence and Cybersecurity Policy
the NSSC. It adds action-oriented details needed for the public and private sectors to cooperate and develop effective cyberspace standards. Finally, sector-specific plans that provide additional detail from a risk management perspective are currently under development. These plans are expected to address the unique characteristics and risk environments of each sector, but most importantly, they will be developed in collaboration with all sectors. The critical infrastructures that support the nation's daily way of life are complex, vulnerable, and highly interdependent. In addition, they face threats from all directions, not just from terrorist sources. The government sector, private sector, academic institutions, and consumers all have a vested interest in the protection of the information infrastructure.
Future Trends
We are still on a fantastic journey on the road to a richer digitally converged society. Bandwidth availability will continue to increase, wireless capabilities will become more prevalent, processing power will continue to increase, power consumption will decrease, and, as a result, digital devices will continue to become smaller with more converged capabilities. Simultaneously, the three industrial sectors are in a heated race to leverage their own competitive advantages in the home and workplace. However, digitization should allow these sectors to coexist, especially as their capabilities and services continue to blur as they cross their traditional boundaries. Moreover, the transparency of these boundaries induces an inherent security challenge in which forward-looking policies must be implemented to ensure the information traversing cyberspace is adequately protected for all the nation's critical infrastructures. The computing sector continues to develop microprocessor chips to meet the need for improved processing performance and power consumption on smaller mobile multimedia devices. Consumers and businesses expect the efficient passing of
digital information, whether it is video, audio, or data, on mobile devices, smaller devices, cooler devices, and quieter devices (Otellini, 2006). The advent of dual-core processors allows for improved performance on smaller devices. This dual-core technology has been successful on PCs and network servers throughout 2005 and 2006, but is now being implemented on smaller devices, such as laptops, to provide very powerful on-the-go digital capabilities (Shapiro, 2006). These new chips are 100 times more powerful than the first Pentium chip and are expected to blur the differences between the desktop PC and mobile digital devices (Otellini). Not to be outdone, the consumer-electronics sector continues to work hard to gain a foothold in the digitally converged society. Specifically, the consumer-employee expects fast and inexpensive access to multimedia information using any device from any location. Consumer-electronics companies are taking advantage of the Internet to ensure that multimedia information is available across all devices. In particular, Microsoft is working on what it calls a "cross-device approach" that puts the consumer's information on the Internet so it can be accessed from any device (Gates, 2006). Additionally, some companies are taking advantage of the broadband capabilities just now becoming available in the United States. Today's cell phones and PDAs no longer suffer from very slow download speeds. Some consumer-electronics companies have capitalized on a new technology called evolution data optimized (EVDO, also expanded as evolution data only) to provide fast wireless broadband capabilities for mobile devices. This technology provides high-speed Internet access from anywhere without a wireless hotspot (Evdo.com, 2006). The telecommunications sector is also working to compete in a digitally converged environment.
The telecommunications sector has traditionally been known as a communications path provider where separate communications channels were used for different media content. For example, up until about 5 years ago, cable television companies had always provided cable TV service into the home via coaxial cable channels, telephone
companies had always provided voice service into the home via separate communications channels, and Internet service providers had always provided e-mail service into the home via borrowed telephone lines. Today this has changed. There is a significant blurring of services within the telecommunications sector. Cable TV providers are now providing telephone service, while telephone providers are now providing Internet access services, and so forth. Furthermore, these three industrial sectors have embraced a strategy to directly cross sectors to gain a competitive advantage over the others. As the volatile marketplace continues to grow rapidly, each sector will attempt to position itself to ensure it remains competitive in a converged environment. For example, the crossing of boundaries has already happened between the computing and consumer-electronics sectors. Cisco, a longtime provider of computer network equipment to the corporate world (i.e., routers, firewalls), has broken into the consumer-electronics sector through the acquisition of Linksys, a traditional provider of consumer-electronics computer devices such as the home router and wireless access point (Cisco, 2006). In some cases, cross-sector partnerships have developed, such as between DirecTV and Microsoft (Gates, 2006). These companies have teamed to integrate DirecTV programming with Windows-based PCs and some portable devices (Gates). This partnership will allow for the convergence of two capabilities: consumers will no longer need the unsightly DirecTV set-top boxes on their TVs, as the PC or notebook will function as the DirecTV receiver (Bangeman, 2006). It is reasonable to expect more cross-sector activity in the near future. The roles and responsibilities of the consumer-employee sector have changed, and its influence has significantly increased in a digitally converged world.
The consumer-employee is increasingly fluent with technology and understands how to apply new digital capabilities to gain a competitive advantage within cyberspace. As the consumer-employee or creator-collaborator continually has access to information within cyberspace, this
information, along with the critical infrastructures that depend on it, is increasingly put at risk. Finally, the greatest impact is on the nation's strategy and the associated roles and responsibilities of the public and private sectors in ensuring the protection of cyberspace in a digitally converged environment. The nation's cyberspace strategies to date have had some successes, which include the creation and development of (a) the U.S. Computer Emergency Readiness Team, which analyzes and distributes cyber-threat information; (b) the National Cyber Response Coordination Group, which consists of public- and private-sector memberships to coordinate cyber preparedness, response, and recovery efforts; (c) the Government Forum of Incident Response and Security Teams, responsible for securing government information technology systems; and (d) the NIPP, which is providing cybersecurity guidance for both the public and private sectors (McAdams, 2006). Additionally, the United States developed the first international cybersecurity exercise, called Cyber Storm, in which over 100 public and private organizations participated. However, there is still much more that needs to be done to ensure the protection of the nation's critical infrastructures that rely on cyberspace.
Conclusion
The rapid changes in digital information technology have driven the convergence of the computing, consumer-electronics, and telecommunications industries. This digital convergence directly affects the public and private sectors, as well as the personal (consumer-employee) sector that directly affects both. With the rise of the ubiquitous Internet, successful Internet companies, online collaboration capabilities, tech-savvy consumer-employees, and electronic access virtually anytime and anywhere, the nation's critical infrastructures have become more complex and increasingly interdependent. The consumer-employee has higher expectations in both the workplace and at home, and as a result, the transformation of management strategies, including information
security strategies, in the workplace is critical to keep up with the increased pace and complexity of business processes and operations in a globally connected, converged environment. The advent of digital convergence and the rapid growth of the Internet have projected cyberspace security as a national security challenge. Both the private and public sectors, along with the international community, have a responsibility to protect the access facilities and the information that flows throughout cyberspace.
Future Research Directions
There is still much work to do beyond national cyberspace policy. Physical protection of the nation's critical infrastructures is of paramount importance. Priority II in the National Strategy to Secure Cyberspace states that the federal government must improve the physical security of its cyber and telecommunications systems. As the cyber and telecommunications systems have become more complex, the information that rides on and is stored in this infrastructure has become significantly more vulnerable. Protection of the information from cyber threats has come to the forefront, possibly at the expense of physical protection. With the three colossal industrial segments within the private sector (i.e., computing, consumer-electronics, and telecommunications providers) apparently coming together, and with deregulation increasing, it appears more than ever that there is a widening gap of cooperation between the public and private sectors. Moreover, while digital convergence is bringing consumers and their devices together, there is no single information infrastructure entity in which one unified organization can protect the information and telecommunications network at both the physical and cyber levels. In the United States, the private sector and government agencies assume that their government owns and controls a unified global telecommunications network. Whether it is for the sake of convenience or something else, these organizations leave it up to the federal government to develop and enforce strategic policies to
protect cyberspace. As a result, these policies lack authority and enforcement value. Consequently, many highly vulnerable physical nodes exist in today's cyberspace, and most remain unprotected. For example, various estimates state that most of the Department of Defense's unsecured telecommunications traffic traverses networks through unprotected, commercially run Internet transit points. While many of the popular Internet service providers have a global presence using these transit points, they are basically a web of independent entities with no central controlling authority (Fonow, 2006). These independent networks are interconnected via unprotected access facilities that are vulnerable and open to physical attack. A partnership of key stakeholders from international governments and corporations is required to look at the issues associated with the cyber and physical threats to the Internet. This consortium should research the significant issues with regard to these threats. Additionally, this consortium should use the NIPP as a baseline to not only develop but also carry out the necessary actions to improve physical and cyber security. Specifically, at the strategic level, it should consider the scale of the physical security problem with the telecommunications access points that reside in major commercial facilities throughout the world. At the operational level, it should consider the growing physical security problem as the consumer-employee continues to bring down the physical barriers between work and home while digital convergence further takes hold in both the public and private sectors. It is reasonable to expect that a consortium of government leaders, telecommunications and network security experts, economists, and strategic policy makers could develop an enforceable framework to protect and manage an international telecommunications network.
References
Baker, S., & Green, H. (with Einhorn, B., Ihlwan, M., Reinhardt, A., Greene, J., & Edwards, C.). (2004). Big bang! Business Week, 3880, 68-76.
Bangeman, E. (2006). DirecTV and Microsoft make love connection. ArsTechnica. Retrieved January 5, 2006, from http://arstechnica.com
Cisco. (2006). Linksys announces its latest home and office monitoring solution [Press release]. Retrieved January 6, 2006, from http://www.cisco.com
Covell, A. (2004). Digital convergence phase 2: A field guide for creator-collaborators. IL: Stipes Publishing.
Department of Homeland Security. (2006). National infrastructure protection plan. Washington, DC.
Evdo.com. (2006). What is evdo? Retrieved April 12, 2006, from http://www.evdoinfo.com/EVDO/
Fonow, R. (2006). Cybersecurity demands physical security. Signal Magazine. Retrieved February 5, 2007, from http://www.afcea.org/signal/default/
Friedman, T. (2005). The world is flat: A brief history of the twenty-first century. New York: Farrar, Straus & Giroux.
Gates, W. (2006). Keynote remarks. 2006 International Consumer Electronics Show (pp. 1-25).
George Mason University. (2006). What is CIPP? Critical Infrastructure Protection Program Online. Retrieved October 6, 2006, from http://cipp.gmu.edu
Huston, G. (1998). ISP survival guide: Strategies for running a competitive ISP. New York: John Wiley & Sons.
Huston, G. (2002). Telecommunications policy and the Internet. The Internet Society. Retrieved July 6, 2006, from http://www.isoc.org
Lewis, T., & Darken, R. (2005). Potholes and detours in the road to critical infrastructure protection policy. Homeland Security Affairs, 1(2). Retrieved October 26, 2006, from http://www.hsaj.org/hsa
Martin, J. (1978). The wired society: A challenge for tomorrow. Englewood Cliffs, NJ: Prentice-Hall.
McAdams, J. (2006, June 12). The best-laid plan? Federal Computer Weekly. Retrieved July 10, 2006, from http://www.fcw.com
Otellini, P. (2006). Keynote address. 2006 International Consumer Electronics Show (pp. 1-48).
PricewaterhouseCoopers. (2006). The rise of the lifestyle media: Achieving success in the digital convergence era. Tech Center's Convergence Series. Retrieved March 26, 2006, from http://www.pwc.com
Shapiro, G. (2006). Keynote address. 2006 International Consumer Electronics Show (pp. 1-48).
White House. (2003). National strategy to secure cyberspace. Washington, DC. Retrieved from http://www.whitehouse.gov
Further Reading
And then there was one. (2007, January 19). Investors Chronicle, p. 1.
Baldwin, H. (2005). The road to convergence is still blocked. Electronic Business, 31(10), 41.
Compaine, B. (2000). Media mergers, divestitures, and the Internet: Is it time for a new model for interpreting competition? In I. Vogelsang & B. Compaine (Eds.), The Internet upheaval: Raising questions, seeking answers in communications policy (Telecommunications Policy Research Conference) (1st ed., pp. 199-230). Cambridge, MA: MIT Press.
Computer Science and Telecommunications Board. (2002). Cybersecurity today and tomorrow: Pay now or pay later. Washington, DC: National Academy Press. Retrieved from http://www.cstb.org
Cordesman, A., & Cordesman, J. (2002). Cyber-threats, information warfare, and critical infrastructure protection: Defending the U.S. homeland. Westport, CT: Praeger Publishers.
Erbschloe, M. (2005). Physical security for IT. Burlington, MA: Elsevier Digital Press.
Fischer, H. (2006). Digital shock: Confronting the new reality. Quebec, Canada: McGill-Queen's University Press.
Fisher, D. (2003, February 24). Cyber plan's future bleak. eWeek. Retrieved February 12, 2007, from http://www.eweek.com/print_article/0,3668,a=37497,00.asp
Fonow, R. (2006). Cybersecurity demands physical security. Signal Magazine. Retrieved February 5, 2007, from http://www.afcea.org/signal/default/
Gorman, S. (2005). Networks, security and complexity: The role of public policy in critical infrastructure protection. Northampton, MA: Edward Elgar Publishing.
Gow, G. (2005). Policymaking for critical infrastructure: A case study on strategic interventions in public safety telecommunications. Hampshire, UK: Ashgate Publishing.
Handley, M. (2006). Why the Internet only just works. BT Technology Journal, 24(3), 119.
Hart, J. (2006). The continuing role of state policy. Federal Communications Law Journal, 58(1), 215-221.
Hundley, R., & Anderson, R. (1997). Emerging challenge: Security and safety in cyberspace. In J. Arquilla & D. Ronfeldt (Eds.), In Athena's camp: Preparing for conflict in the information age (pp. 231-251). Santa Monica, CA: Rand.
Negroponte, N. (1995). Being digital (1st ed.). New York: Knopf Publishing.
Nelson, M. (2002). Next generation Internet: Where technologies converge and policies collide. In W. Lehr & L. Pupillo (Eds.), Cyber policy and economics in an Internet age (topics in regulatory economics and policy) (pp. 27-42). Norwell, MA: Kluwer Academic Publishers.
Nichiporuk, B., & Builder, C. (1997). Societal implications. In J. Arquilla & D. Ronfeldt (Eds.), In Athena's camp: Preparing for conflict in the information age (pp. 295-314). Santa Monica, CA: Rand.
Noam, E. (2002). The three digital divides. In W. Lehr & L. Pupillo (Eds.), Cyber policy and economics in an Internet age (topics in regulatory economics and policy) (pp. 19-26). Norwell, MA: Kluwer Academic Publishers.
Pagani, M. (2003). Multimedia and interactive digital TV: Managing the opportunities created by digital convergence. Hershey, PA: IRM Press.
Park, S. (2007). Strategies and policies in digital convergence. Hershey, PA: Idea Group Publishing.
Personick, S., & Patterson, C. (Eds.). (2003). Critical information infrastructure protection and the law: An overview of key issues (Committee on Critical Information Infrastructure Protection and the Law, National Research Council). Washington, DC: The National Academies Press.
Pilati, A. (2002). Globalization and the Internet challenge. In W. Lehr & L. Pupillo (Eds.), Cyber policy and economics in an Internet age (topics in regulatory economics and policy) (pp. 61-70). Norwell, MA: Kluwer Academic Publishers.
Pipkin, D. (2000). Information security: Protecting the global enterprise. Upper Saddle River, NJ: Prentice-Hall.
Reiter, D. (2005). DON CIP: A comprehensive solution to improve cyber and physical security of DON critical assets. CHIPS, pp. 35-37.
Shin, D. (2006). Convergence of telecommunications, media and information technology, and implications for regulation. The Journal of Policy, Regulation and Strategy for Telecommunications, Information and Media, 8(1), 42-56.
White House. (2002). The national strategy for homeland security. Washington, DC. Retrieved from http://www.whitehouse.gov
White House. (2003). The national strategy for the protection of critical infrastructures and key assets. Washington, DC. Retrieved from http://www.whitehouse.gov
Williams, P. (1997). Transnational criminal organizations and international security. In J. Arquilla
& D. Ronfeldt (Eds.), In Athena's camp: Preparing for conflict in the information age (pp. 315-337). Santa Monica, CA: Rand.
Yoffie, D. (1997). Competing in the age of digital convergence. Boston: Harvard Business School Press.
Zimmer, M. (2004). The tensions of securing cyberspace: The Internet, state power & the national strategy to secure cyberspace. First Monday, 9(3). Retrieved February 14, 2007, from http://firstmonday.org/issues/issues9_3/zimmer/index.html
Terms and Definitions
Bandwidth: Bandwidth is the amount of data that can be transported within a specific amount of time; it is usually expressed in bits per second for digital transmission.
Broadband: This applies to the high-speed transport of data over a single transmission line, such as a cable TV modem.
Creator-Collaborator: A creator-collaborator is one who aggressively uses technology to enhance his or her lifestyle and career.
Critical Infrastructure: The critical infrastructure refers to the physical or virtual systems and assets so vital to the United States that the incapacity or destruction of these systems and assets would have a debilitating impact on national security, national economics, national public health or safety, or any combination.
Cyberspace: It is a term used to describe the virtual, global network of connected digital devices.
Digital: The term digital describes data or information stored as a series of ones and zeros for a computing device to understand and manipulate.
Digital Convergence: Digital convergence is the ability to access all forms of information in cyberspace.
Digitization: Digitization is the process of converting data or information into a digital format, usually from analog format.
Dual Core: This technology combines two microprocessors onto a single integrated computer chip to handle digital computations in parallel, which significantly improves speed and performance.
Evolution Data Optimized (EVDO): EVDO is a high-speed network protocol used for wireless Internet communications. Much like other broadband Internet technologies (i.e., cable modem), EVDO is an always-on service that does not need establishment of a slow dial-up connection.
Information Age: The term refers to the period after the industrial age; it is applied to the period beginning around the 1980s, when the movement of information became faster than physical movement.
Information Infrastructure: This is the system of public and private communications networks, interactive capabilities, hardware, software, computers, and consumer electronics that provide information to users. The Internet is one element of the information infrastructure.
MP3: MP3 is short for MPEG-1 Audio Layer-3. It is a popular compressed audio file format that efficiently stores and plays on computers and many portable digital devices.
Telecommunications: It is the long-distance transmission of voice, video, text, or images over a communications line, such as a modem, fax machine, telephone line, or fiber optic cable.
Chapter XXXIX
Bioterrorism Response and IT Strategies David A. Bray Emory University, Goizueta Business School, USA
Introduction
Most analyses of possible future bioterrorism events predict they may be similar to the anthrax events of 2001. Specifically, a limited population of individuals may experience morbidity or mortality, but the concern, panic, and worry stirred up by the threat will catch the attention of the entire nation. If public health IT is to help with bioterrorism preparedness, it needs to not only address the mitigation of civilian illnesses and deaths, but also help to manage individual and societal fears springing from the real or threatened occurrence of such an event.
Background
In order to understand how public health information technology can aid public health preparedness in terms of bioterrorism preparedness and associated emergency response, it is apt to start with a definition of bioterrorism. In 1998, the U.S. Department of Health and Human Services (DHHS) appointed the Centers for Disease Control and Prevention (CDC) to coordinate and lead the overall planning effort to upgrade national public
health capabilities to respond to biological and chemical terrorism. This action complemented the CDC’s established mission, specifically, to promote health and quality of life by preventing and controlling disease, injury, and disability (Gerberding, 2005). In establishing the Bioterrorism Preparedness and Response Program (BPRP) within the CDC, DHHS framed the context of bioterrorism as the use or threatened use of biological agents or toxins against civilians, with the objective of causing fear, illness, or death. The CDC was charged by DHHS to coordinate and assist local and state public health and medical officials with the detection, identification, and response to a bioterrorism event. BPRP has developed several public health IT solutions to support these objectives (Kun & Bray, 2003; RAND Science and Technology Policy Institute, 2001). Moreover, the CDC has distributed federal funds to support the Laboratory Response Network (LRN). The LRN serves as a network of labs prepared to respond to biological and chemical terrorism, and includes state and local public health, veterinary, military, and international labs. LRN labs are designated as either national, reference, or sentinel labs depending on the types of tests they
can perform and their containment facilities (i.e., lab capability). Since its launch, the LRN has been involved with the anthrax events of 2001, the severe acute respiratory syndrome (SARS) events of 2003, and the ricin events of 2003 to 2004. The LRN represents a partnership that includes the Association of Public Health Laboratories (APHL), the CDC, the Department of Defense (DoD), the Federal Bureau of Investigation, and other collaborating entities. There are several key public health IT components to the LRN (Bray, 2003; Morse et al., 2003).
IT Approaches Used in Response to Bioterrorism
Public health IT can help with the detection and investigation of a bioterrorism event by advocating standards in the electronic reporting of collected epidemiological data and results at the local, state, and federal levels. Furthermore, public health IT can advocate standards in the routing and security of data. Routing and security are distinct concerns: routing addresses how data are transported from one location to another, while security addresses how data are made available only to authorized users. Currently, while there are several efforts to standardize
hospital and public health data, several areas key to public health surveillance efforts lack definitive standards for electronic collection, routing, and security of data (Devadoss, Pan, & Singh, 2005; U.S. General Accounting Office [GAO], 2003). BPRP has invested public health IT resources into an aberration detection solution known as the Early Aberration Reporting System (EARS). The EARS tool was developed to facilitate the analysis of public health surveillance data. The primary purpose of the EARS tool was to provide state and local health departments with the ability to apply aberration detection methods of proven degrees of sensitivity to either collected count or rate data. As the EARS tool is neutral when it comes to the specific data type or format being collected, public health professionals can analyze different types of data (e.g., 911 calls, ER [emergency room] data, school absenteeism data) using the same system (CDC, 2002; Kun & Bray, 2003). In addition to providing the code for the EARS tool freely to state and local public health departments, BPRP also has provided drop-in surveillance assistance for high-profile events, chiefly via site-specific implementations of the EARS tool and other tools. Such events included the 1999 World Trade Organization Meeting in Seattle, the 2001 Democratic National Convention, and the 2001
Figure 1. Timeline of a hypothetical bioterrorism response
Figure 2. Comparison of electronic laboratory reporting for commercial vs. LRN lab test data
Super Bowl. BPRP has championed efforts to develop computer-based tools to collect data rapidly in a standardized fashion. Such tools need to be flexible to meet the needs of epidemiologists, yet map back to a standardized, logical data model that allows data to be stored and made available to multiple partners without having to merge data manually (CDC, 2002). Moreover, public health IT can help with lab reporting of test results for possible bioterrorism agents, specifically in support of LRN activities. Electronic test requests and results must be collected and stored in a syntactical fashion that facilitates rapid aggregation of data across multiple data sources. For LRN lab test results, it is essential that linkages between lab specimens and samples collected at the local level be maintained at the state and federal levels. Collisions may occur if a lab assigns an identifier to a specimen or sample that has already been assigned by a separate lab, thus rendering the identifier nonunique (Bray & Konsynski, 2006; RAND Science and Technology Policy Institute, 2001). In terms of public health IT, BPRP has provided the infrastructure and support for the LRN's secure, restricted Web site. The restricted Web site is available only to organizational users who have demonstrated the need to access information available through the LRN. The LRN restricted Web site provides select users with access to agent identification protocols, the ability to order the necessary reagents to perform lab tests, announcements and other information sheets, a directory of member labs and users, the ability to report proficiency testing results electronically, and other key features (Institute of Medicine [IOM], 2001; U.S. GAO, 2002). The need for electronic laboratory reporting (ELR) for bioterrorism preparedness and response was demonstrated by the anthrax events of 2001. The LRN restricted Web site implemented an ad hoc tool that allowed member labs to report their capacity levels and indicate the number and type of tests performed. Additionally, labs could indicate if they had a confirmed positive anthrax specimen or sample. Within the October to December 2001 time frame, over 120,000 environmental samples were tested nationally, amounting to an estimated 1 million individual tests total. These collected results demonstrated that the LRN had succeeded in its goal of preparedness and response, as 94% of the samples were tested by either the LRN public health labs or the LRN military labs within the DoD. Had the CDC been forced to handle the entire volume of the requested lab tests alone, demand probably would have exceeded its capacity (Morse et al., 2003; U.S. GAO, 2003).
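The specimen-identifier collision risk noted in this section is commonly avoided by qualifying every locally assigned specimen number with an identifier for the assigning lab, so that aggregated records remain unique. A minimal sketch in Python follows; the lab and specimen identifiers are hypothetical, not actual LRN values.

```python
# Sketch: avoid specimen-ID collisions across labs by qualifying every
# locally assigned identifier with the assigning lab's own identifier.
# All identifiers below are illustrative, not actual LRN field values.

def qualified_specimen_id(lab_id: str, local_id: str) -> str:
    """Combine a lab identifier and a locally assigned specimen number
    into one identifier that stays unique when data are aggregated
    at the state or federal level."""
    return f"{lab_id}:{local_id}"

# Two labs may both assign local ID "S-0001"; the qualified IDs differ.
a = qualified_specimen_id("GA-PHL-01", "S-0001")
b = qualified_specimen_id("TX-PHL-07", "S-0001")
assert a != b
```

The same idea underlies composite primary keys in relational designs: uniqueness is guaranteed by construction rather than by coordination among independent labs.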
Future Research Trends
In addition to advocating electronic data standards, public health IT can help develop solutions for the
replication and reconciliation of locally stored data with de-identified data routed to state or federal partners. While local and state health departments may be allowed to collect data containing personal identifiers, other state and federal partners, under the Health Insurance Portability and Accountability Act of 1996 (HIPAA), may be explicitly prohibited from storing or accessing such personal data. As such, while data should be shared electronically among local, state, and federal partners during a suspected bioterrorism event, locally stored data, when sent to external partners outside of the state collecting the data, must be stripped of such identifiers. This poses a problem, however, if the data, at a later date, are updated in the local data store, and said data store needs to inform other partners of the update. What unique identifier is shared among all partners at the local, state, and federal level that cannot be linked back to personally identifying data, yet allows for such reconciliation to occur? Such challenges remain open to new solutions (IOM, 2002; Levin & Cross, 2004). The same problem of preserving linkages also occurs in reverse if a federal partner adds associated data to a de-identified data record of an individual, and then wishes to send the updated data back to the originating data store: By what mechanism is the updated data to be reconciled with the personally identifying data stored at the local level? How will federal partners eliminate duplicate data collection if personal identifiers cannot be collected? Furthermore, if two states have data on the same individual, yet cannot share personal identifiers either among themselves, state to state, or with the federal government, how then can the data be identified as referring to the same, singular individual and thus linked? Attempts to explore these legal and technological challenges continue (Kerr, Wolfe, Donegan, & Pappas, 2006; U.S. National Intelligence Council [NIC], 2000).
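One commonly discussed approach to the reconciliation question posed above is a crosswalk held only at the local level: the local store maps each personal identifier to a random surrogate key, sends only the surrogate outward, and can later map a partner's update back to the person. The sketch below is hypothetical, not an actual CDC or NEDSS mechanism; note that purely random surrogates still do not solve the state-to-state linkage problem raised in the final question.

```python
# Sketch: a locally held crosswalk between personal identifiers and
# random surrogate keys. Only the surrogate key ever travels to state
# or federal partners; the crosswalk itself is never shared.
# This is an illustration, not an actual public health data standard.
import uuid

class LocalDataStore:
    def __init__(self):
        self._crosswalk = {}  # personal ID -> surrogate key (never shared)

    def surrogate_for(self, personal_id: str) -> str:
        """Return a stable surrogate key for a person, creating one on
        first use. The key carries no personally identifying content."""
        if personal_id not in self._crosswalk:
            self._crosswalk[personal_id] = uuid.uuid4().hex
        return self._crosswalk[personal_id]

    def resolve(self, surrogate: str):
        """Map an externally updated, de-identified record back to the
        local person, or return None if the surrogate is unknown."""
        for pid, key in self._crosswalk.items():
            if key == surrogate:
                return pid
        return None

store = LocalDataStore()
key = store.surrogate_for("patient-0042")  # hypothetical local identifier
# A de-identified record sent to partners would carry only `key`;
# an update coming back with `key` is reconciled locally:
assert store.resolve(key) == "patient-0042"
```

Because each jurisdiction generates its own random surrogates, two states holding data on the same individual still cannot link their records without sharing identifiers, which is exactly the open challenge the text describes.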
Several public health IT efforts have focused recently on syndromic surveillance. Syndromic surveillance requires ongoing, routine monitoring of incoming data streams from multiple data sources (e.g., 911 calls, ER data, school absenteeism data). Public health IT can help to promote
the creation and analysis of said data streams for anomalous trends. Further, public health IT can help by developing tools for information management and integration from multiple data sources, including the possibility of integrating multiple surveillance systems for different diseases or jurisdictions (Gerberding, Hughes, & Koplan, 2002; Singh, 2005). Since 2000, BPRP has championed the need for the ability of the LRN member labs to report lab test results electronically. Such an effort is challenging as the states themselves currently do not have standards for sharing electronic lab data. A few states have internally reached agreement on a common set of standards. Public health has already made some efforts to generate standards, including the National Electronic Disease Surveillance System (NEDSS). NEDSS, however, was designed for disease reporting and surveillance, and in its initial design stages did not incorporate specific elements of use for bioterrorism preparedness and response. Thus, for bioterrorism IT, attention has been focused on working with the states themselves to identify a set of standards connecting public health departments, the LRN labs, and possibly major hospital electronic data systems. For epidemiological surveillance and ELR, such efforts are centered on expanding the set of standards contained in Health Level Seven (HL7) to incorporate elements of public health importance (U.S. GAO, 2004).
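The kind of anomaly screening that syndromic surveillance performs can be illustrated with a simplified control-chart rule in the spirit of the EARS C1 method: flag any day whose count exceeds the recent baseline mean by several standard deviations. The data, threshold, and window size below are illustrative only.

```python
from statistics import mean, stdev

def flag_aberrations(counts, baseline_days=7, threshold=3.0):
    """Flag days whose count exceeds mean + threshold * std of the
    preceding baseline window (a simplified C1-style control chart)."""
    flags = []
    for i in range(baseline_days, len(counts)):
        window = counts[i - baseline_days:i]
        sd = max(stdev(window), 0.5)  # floor the std so a flat baseline cannot trigger on noise
        flags.append(counts[i] > mean(window) + threshold * sd)
    return flags

# Hypothetical daily counts of ER respiratory complaints
daily = [12, 10, 11, 13, 12, 11, 10, 38]
print(flag_aberrations(daily))  # [True]: the final spike stands out from its baseline
```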
Conclusion

Given that detection and response will occur initially at the local level for a suspected bioterrorism event, local public health preparedness is essential. Should a bioterrorism event occur, mitigation of mortality is dependent upon rapidly detecting such an event and identifying the extent and nature of the agent(s) used. Investigation and implementation of control measures will require coordination among local, state, and federal partners. To meet the needs of bioterrorism preparedness and response, public health IT inherently must be agile, flexible, and adaptive to address these unique challenges. Several positive public health IT solutions have been generated to date, with future opportunities for additional solutions.
Future Research Directions

What is needed for bioterrorism preparedness and response is not too different from what is needed for most of public health (Bray, 2003). The differences between bioterrorism preparedness and response and outbreak preparedness (e.g., for West Nile virus or for SARS) include the associated criminal and malicious intent with release of a bioterrorism agent, the federal funds provided to the states explicitly for bioterrorism preparedness, and the national security and DHS (Department of Homeland Security) aspects of bioterrorism response. However, for outbreak preparedness and other prevention efforts, a similar need for a standardized method of storing public health data electronically and sharing it across state lines exists. Public health has long needed an independent data standards organization. Furthermore, public health IT systems need to adopt common standards in electronic data exchange, routing, receipt, and encryption that respect the state-centric nature of public health, but transcend the borders involved (CDC, 2002). These standards could be developed using a complex adaptive systems approach, akin to either a folksonomy or Wikipedia-like efforts. Such research directions remain possible but, as of yet, unproven.
References

Bray, D. (2003). IT needs of CDC's Bioterrorism Preparedness & Response Program. Proceedings of the Public Health Information Network (PHIN) Annual Conference.

Bray, D., & Konsynski, B. (2006). Fighting fear of a bioterrorism event with information technology: Real-world examples and opportunities. In Lecture notes in computer science: IEEE intelligence and security informatics.

Centers for Disease Control and Prevention. (2002). Supplemental guidance, technical review criteria focus areas B and C. Atlanta, GA: U.S. Department of Health and Human Services.

Devadoss, P., Pan, S., & Singh, S. (2005). Managing knowledge integration in a national health-care crisis: Lessons learned from combating SARS in Singapore. IEEE Transactions, 9(2), 266-275.

Gerberding, J. (2005). Protecting health: The new research imperative. JAMA, 294, 1403-1406.

Gerberding, J., Hughes, J., & Koplan, J. (2002). Bioterrorism preparedness and response: Clinicians and public health agencies as essential partners. JAMA, 287, 898-900.

Institute of Medicine. (2001). Chemical and biological terrorism: Research and development to improve civilian medical response. Washington, DC: Author.

Institute of Medicine. (2002). Making the nation safer: The role of science and technology in countering terrorism. Washington, DC: Author.

Kerr, R., Wolfe, T., Donegan, R., & Pappas, A. (2006). A holistic vision for the analytic unit. Studies in Intelligence, 50(2).

Kun, L., & Bray, D. (2002). Information infrastructure tools for bioterrorism preparedness: Building dual- or multiple-use infrastructures is the task at hand for state and local health departments. IEEE Transactions, 21(5), 69-85.

Levin, D., & Cross, R. (2004). The strength of weak ties you can trust: The mediating role of trust in effective knowledge transfer. Management Science, 50(11), 1477-1490.

Morse, S., Kellogg, R., Perry, S., Meyer, R., Bray, D., Nichelson, D., et al. (2003). Detecting biothreat agents: The laboratory response network. American Society of Microbiology News, 69(9), 433-437.

RAND Science and Technology Policy Institute. (2001). A framework for information technology infrastructure for bioterrorism, results of the 1st summit. Washington, DC: RAND.

Singh, J. (2005). Collaborative networks as determinants of knowledge diffusion patterns. Management Science, 51(5), 756-770.

U.S. General Accounting Office. (2002). Homeland security: New department could improve coordination but transferring control of certain public health programs raises concerns (GAO-02-954T).

U.S. General Accounting Office. (2003). Bioterrorism: Information technology strategy could strengthen federal agencies' abilities to respond to public health emergencies (GAO-03-139).

U.S. General Accounting Office. (2004). HHS bioterrorism preparedness programs: States reported progress but fell short of program goals for 2002 (GAO-04-360R).

U.S. National Intelligence Council. (2000). Global trends 2015: A dialogue about the future with nongovernment experts (NIC 2000-02).

Further Reading

Alavi, M., & Leidner, D. (2001). Review: Knowledge management and knowledge management systems: Conceptual foundations and research issues. MIS Quarterly, 25(1), 107-136.

Anderson, P. (1999). Complexity theory and organization science. Organization Science, 10(3), 216-232.

Argote, L., & Ingram, P. (2000). Knowledge transfer: A basis for competitive advantage in firms. Organizational Behavior and Human Decision Processes, 82(1), 150-169.

Bray, D. (2006). Exploration, exploitation, and knowledge management strategies in multi-tier hierarchical organizations experiencing environmental turbulence. Proceedings of the North American Association for Computational Social and Organizational Science 2006 Conference, South Bend, IN.

Carley, K., & Lin, Z. (1997). A theoretical study of organizational performance under information distortion. Management Science, 43(7), 976-997.

Clippinger, J. (Ed.). (1999). The biology of business: Decoding the natural laws of enterprise. San Francisco: Jossey-Bass.

Cummings, J. (2004). Work groups, structural diversity, and knowledge sharing in a global organization. Management Science, 50(3), 352-364.

Daft, R., & Weick, K. (1984). Toward a model of organizations as interpretation systems. Academy of Management Review, 9(2), 284-295.

Dawes, R., Orbell, J., Simmons, R., & Van De Kragt, A. (1986). Organizing groups for collective action. American Political Science Review, 80(4), 1171-1185.

Frey, S., & Iris, B. (1996). Cooperation, communication and communitarianism: An experimental approach. Journal of Political Philosophy, 4(4), 322-336.

Galbraith, J. (1982). Designing the innovating organization. Organizational Dynamics, 10(3), 4-25.

Heckscher, C., & Donnellson, A. (Eds.). (1994). The post-bureaucratic organization: New perspectives on organizational change. Thousand Oaks, CA: Sage Publications.

Kean, T., & Hamilton, L. (2004). The 9/11 commission report: Final report of the National Commission on Terrorist Attacks upon the United States. Washington, DC: U.S. Government Printing Office.

Kerr, R., Wolfe, T., Donegan, R., & Pappas, A. (2005). Issues for the U.S. intelligence community. Studies in Intelligence, 49(3).

Kerr, R., Wolfe, T., Donegan, R., & Pappas, A. (2006). A holistic vision for the analytic unit. Studies in Intelligence, 50(2).

Kling, R. (1991). Cooperation, coordination and control in computer-supported work. Communications of the ACM, 34(12), 83-88.

Nonaka, I. (1994). A dynamic theory of organizational knowledge creation. Organization Science, 5(1), 14-37.

Orbell, J., & Dawes, R. (1991). A "cognitive miser" theory of cooperators' advantage. American Political Science Review, 85(2), 515-528.

Ostrom, E., et al. (Eds.). (2002). The drama of the commons. Washington, DC: National Academy Press.

Singh, J. (2005). Collaborative networks as determinants of knowledge diffusion patterns. Management Science, 51(5), 756-770.

U.S. General Accounting Office. (2003). Public health response to anthrax incidents of 2001 (GAO-04-152).

U.S. General Accounting Office. (2006). More comprehensive national strategy needed to help achieve U.S. goals and overcome challenges (GAO-06-953T).

U.S. National Intelligence Council. (2000). Global trends 2015: A dialogue about the future with nongovernment experts (NIC 2000-02).

U.S. National Intelligence Council. (2003). SARS: Down but still a threat (NIC ICA 2003-09).

Wade-Benzoni, K., Tenbrunsel, A., & Mazerman, M. (1996). Egocentric interpretations of fairness in asymmetric, environmental social dilemmas: Explaining harvesting behavior and the role of communication. Organizational Behavior and Human Decision Processes, 67(2), 111-126.

Terms and Definitions

Aberration Detection: Aberration detection is the analysis of data (e.g., 911 calls, ER data, school absenteeism data) for anomalous patterns in support of public health. Analysis can be either for sentinel or syndromic surveillance.

Bioterrorism Preparedness and Response Program (BPRP): This is a program located under the Centers for Disease Control and Prevention. BPRP is responsible for several aspects of the detection, identification, and response to a national bioterrorism event.

Centers for Disease Control and Prevention (CDC): The CDC is a federal government agency located under the U.S. Department of Health and Human Services. The CDC's mission is to promote health and quality of life by preventing and controlling disease, injury, and disability.

Early Aberration Reporting System (EARS): The EARS is a freely available software tool developed to provide state and local health departments with the ability to apply aberration detection methods of proven degrees of sensitivity.

Electronic Laboratory Reporting (ELR): ELR is the reporting of test results among multiple partners, including local health departments, hospitals, and federal entities.

Health Insurance Portability and Accountability Act of 1996 (HIPAA): HIPAA was passed by the U.S. Congress. Part of HIPAA requires the removal of personal identifiers from data sent to state and federal partners.

Health Level Seven (HL7): HL7 is a set of standards for the electronic exchange of health-related data.

Laboratory Response Network (LRN): The LRN is a network of labs prepared to respond to biological and chemical terrorism. The LRN has responded to the anthrax events of 2001, the Severe Acute Respiratory Syndrome events of 2003, and the ricin events of 2003 to 2004, as well as other outbreak events.

National Electronic Disease Surveillance System (NEDSS): The NEDSS is an effort to generate standards for the sharing of public health data primarily focused on the exchange of state health department data with the CDC.

Syndromic Surveillance: Syndromic surveillance refers to the ongoing, routine monitoring of incoming data streams from multiple data sources for reported syndromes of interest.
Chapter XL
Federal Public-Key Infrastructure Ludwig Slusky California State University–Los Angeles, USA Parviz Partow-Navid California State University–Los Angeles, USA
Introduction

All branches of the federal government are required to change their business practices to paperless operation. Privacy and information security are critical for the protection of information shared over networks internally between U.S. government agencies and externally with nonfederal organizations (businesses; state, local, and foreign governments; academia; etc.) or individuals. The public-key infrastructure (PKI) is the simplest, most widely used architecture for secure data exchange over unsecured networks. It integrates computer hardware and software, cryptography, information and network security, and policies and procedures to facilitate trust in distributed electronic transactions and mitigate the associated risks. Federal PKI (FPKI) is PKI designed for implementation and use by government agencies. Federal PKI research has been under way since 1991, and by the end of 2005, the federal PKI included 13 cross-certified federal entities, three approved
shared service providers (SSPs; Verisign, CyberTrust, National Finance Center/U.S. Department of Agriculture [USDA]), one state, and three foreign countries (Canada, UK, and Australia; Alterman, 2005). Initially envisioned as an interoperability mechanism for federal organizations exclusively, the federal PKI is now positioned for trust interoperability and cross-certification internally among federal agencies and externally with other organizations.
Background

The federal PKI encompasses an interoperable public-key infrastructure that utilizes the capabilities of commercial-off-the-shelf (COTS), standards-based products and services as well as new solutions. Both PKI and federal PKI are based on the same four fundamental principles of information security (CIA+N/R):
• Confidentiality: Assurance that the information has been hidden during transport.
• Integrity: Assurance that the information has not been altered.
• Availability: Assurance of reliable and timely access to data.
• Nonrepudiation: Assurance of irrefutable evidence that an action took place.
Implementation of federal PKI is supported by public-key encryption technology with digital signatures, cross-certification, defined assurance levels, and personal identity verification cards designed to extend the federal PKI services to end users (i.e., employees and contractors of federal agencies).
Public-Key Encryption Technology

At the core of PKI is the enabling cryptographic technology for the sharing of secure information. Contrary to symmetric encryption schemes operating with a single key for both encryption and decryption, PKI encryption technology is based on an asymmetric encryption schema with two mathematically related keys, of which one, the public key, is used to encrypt a message and another one, the private key, is used to decrypt it (Merlow & Brethaupt, 2006; National Institute of Standards and Technology, 2001). An asymmetric schema has an important advantage: It does not share a secret key. Public keys can be distributed and published openly. A private key, however, is only known to the owner, so it is much easier to keep it secret. The PKI process takes place as follows:

1. The recipient makes its public key known to others while keeping its private key secured.
2. A sender encrypts a message with the known recipient's public key.
3. The recipient decrypts the message with its secret private key.
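The three steps can be demonstrated with textbook RSA, the classic asymmetric schema. The primes below are deliberately tiny so the arithmetic is visible; real systems use vetted cryptographic libraries and keys of 2048 bits or more, never bare textbook RSA.

```python
# Textbook RSA with toy primes -- for illustration only.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (kept secret by the recipient)

# Step 1: the recipient publishes (n, e) and keeps d secret.
# Step 2: the sender encrypts with the recipient's public key.
message = 65
ciphertext = pow(message, e, n)
# Step 3: the recipient decrypts with its private key.
recovered = pow(ciphertext, d, n)
print(recovered == message)  # True
```

Note that `pow(e, -1, phi)` (modular inverse) requires Python 3.8 or later.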
Digital Signature

The mechanism to assure the accuracy of the public key is a public-key certificate. It binds the public-key value to the entity (also called a PKI subscriber) that owns this public key. The entity that receives the certificate (also called a PKI relying party) relies on the accuracy of the public key in that certificate. It does so by verifying the digital signature in the received subscriber's certificate (Federal Public Key Infrastructure Steering Committee, 2005). So, a public-key certificate assures that the contained public key is accurate and belongs to the subscriber. A digital signature combines one-way secure hash functions with public-key cryptography as follows. A hash function is applied to the message to generate a fixed-length hash value (unique to the encrypted document). This value is then encrypted with the private key of the sender and attached to the message. Thus, the hash value assures data integrity of the message, and encryption of this value with the sender's private key assures nonrepudiation of the message. However, by itself, a digital signature does not protect confidentiality of the message. That is accomplished by encrypting the message as discussed above (Stewart, Tittel, & Chapple, 2005). Upon receiving the message, the recipient validates the digital signature by generating and matching two hash values: One is produced by the hash function applied to the message, and another is produced by decrypting the digital signature with the sender's public key. The PKI allows for multiple digital signatures on PKI transaction records, representing cosigning or single signing of a part of a large document that has to be split between several transactions.
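A minimal sketch of that hash-sign-verify flow, using toy RSA numbers (the SHA-256 digest is reduced modulo the tiny modulus only so it fits; a real scheme signs the full padded digest):

```python
import hashlib

# Toy RSA key pair -- illustration only, never use keys this small.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)

def digest(message: bytes) -> int:
    # Fixed-length hash value, shrunk to fit the toy modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    return pow(digest(message), d, n)  # encrypt the hash with the private key

def verify(message: bytes, signature: int) -> bool:
    # Decrypt the signature with the public key and compare hash values.
    return pow(signature, e, n) == digest(message)

sig = sign(b"lab result: negative")
print(verify(b"lab result: negative", sig))            # True
print(verify(b"lab result: negative", (sig + 1) % n))  # False: corrupted signature rejected
```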
PKI Architecture

A basic PKI architecture consists of five components: the certificate authority (CA), registration authority (RA), repository, archive, and users (Microsoft, 2005).
The certificate authority acts as a trusted third party. It issues and signs certificates, and administers keys. To keep users informed about the valid certificates, a CA provides four services: It publishes current certificates, lists invalid (revoked) certificates (the certificate revocation list, CRL), offers online certificate status, and maintains archives of status information. The registration authority verifies certificate contents (identity proofing and public key) for the CA. The repository stores and distributes certificates and CRLs signed by the CA, and other PKI information and policies. The archive is a long-term storage for a CA. It permits verification of old expired signatures, which were valid at the time of signing. As stated above, the two categories of PKI users are the subscriber (a certificate holder) and relying party (certificate recipient). The process of issuance of certificates involves interactions among all five PKI components as follows:

1. A subscriber defines two keys: a private key, which it keeps secret, and a public key, which will be disclosed.
2. Then, the subscriber sends its credentials to the RA and in return receives a pass code. Both the subscriber credentials and the pass code are also forwarded to an associated CA.
3. The subscriber sends its public key and the pass code to the CA and in return receives a certificate, signed by the CA, containing the subscriber's public key. The certificate is also forwarded to an associated repository.

After a certificate is issued, it can be used to send or receive a signed message as follows:

• The subscriber sends a message with its attached certificate to a relying party, which will then validate the certificate by checking it in the CRL provided by the repository.
• The subscriber can also receive a message from a relying party, which gets the subscriber's certificate from the repository and then encrypts its message to the subscriber with the subscriber's public key found in the certificate.
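The interplay of CA, repository, CRL, and relying party can be sketched as a toy model. An HMAC over the certificate body stands in for the CA's public-key signature (a simplification: real relying parties verify with the CA's public key, not a shared secret), and all names are illustrative.

```python
import hashlib
import hmac

CA_KEY = b"ca-signing-key"   # stands in for the CA's private signing key
repository = {}              # certificates published by the CA
crl = set()                  # certificate revocation list

def ca_issue(subscriber: str, public_key: str) -> dict:
    """CA signs the binding between a subscriber identity and its public key."""
    body = f"{subscriber}|{public_key}"
    sig = hmac.new(CA_KEY, body.encode(), hashlib.sha256).hexdigest()
    cert = {"body": body, "sig": sig}
    repository[subscriber] = cert  # published for relying parties
    return cert

def relying_party_validate(cert: dict) -> bool:
    """Relying party checks the CA signature and consults the CRL."""
    expected = hmac.new(CA_KEY, cert["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(cert["sig"], expected) and cert["body"] not in crl

cert = ca_issue("alice", "alice-public-key")
print(relying_party_validate(cert))  # True: signed by the CA, not revoked
crl.add(cert["body"])                # the CA revokes the certificate
print(relying_party_validate(cert))  # False: now listed in the CRL
```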
Cross-Certification

Initially, the PKI solutions were enterprise specific, not interoperable among themselves, and involved easy-to-manage certificates. Today, however, messaging and certificate exchange much more often occur across the boundaries of several enterprises (over large national and international networks) and require multiple interconnected CAs and cross-certification among them. This is the case for the interoperable federal PKI serving federal agencies. Cross-certification establishes a trust relationship between two PKIs by issuing and relying on certificates from each other as if they had issued them themselves. By reviewing each other's policies and mapping the policy information of one CA into another CA, trust can be propagated despite differences among policies of cross-certified CAs. In a multiple-CA environment, interoperability can be achieved with two types of PKI topologies: a hierarchical PKI with a common CA anchor and a nonhierarchical cross-certified CA. The hierarchical PKI is based on a certification trust path of subordination among CAs (from the subscriber's CA up to a root CA), where every higher level CA issues certificates to subordinate CAs. So, trust in the subscriber's public key propagates through the certificate trust path from the certificate issuer to the relying party with one or more CAs as intermediate trusted parties. The nonhierarchical cross-certified CA can be organized in two ways.
• A mesh PKI architecture based on peer-to-peer cross-certification.
• A bridge PKI architecture centered on a Bridge CA, which acts as an anchor for cross-certification, provides policy management (policy mapping), and maintains the CRL for all attached CAs.
In the nonhierarchical PKI topology, validation of a certification path for a given PKI is essential. Both the formats for public-key certificates and a certification path validation algorithm are defined in the X.509 standard for PKI. The latest version of X.509 supports the hierarchical and nonhierarchical topologies of PKI.
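Building a certification trust path in a hierarchy amounts to walking issuer links from the end entity up to a trusted anchor. The toy walk below checks only the chain of issuers; X.509 path validation additionally verifies each signature, validity period, name constraints, and policy mapping. The entity names are illustrative.

```python
# Toy certificate store: subject -> issuer. A real X.509 certificate also
# carries a signature, validity dates, and extensions.
certs = {
    "User 2":      {"issuer": "CA Entity 2"},
    "CA Entity 2": {"issuer": "FBCA"},
    "FBCA":        {"issuer": "root"},
}

def validate_path(subject: str, trusted_anchor: str, max_depth: int = 10) -> bool:
    """Follow issuer links until the trusted anchor is reached (or give up)."""
    for _ in range(max_depth):
        cert = certs.get(subject)
        if cert is None:
            return False
        if cert["issuer"] == trusted_anchor:
            return True
        subject = cert["issuer"]
    return False

print(validate_path("User 2", "root"))   # True: User 2 -> CA Entity 2 -> FBCA -> root
print(validate_path("User 2", "other"))  # False: no path to that anchor
```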
Federal PKI

Federal PKI Architecture

The FPKI is a specific development and implementation of the bridge PKI architecture for the U.S. federal government. The FPKI architecture (FPKIA) is the central element of the FPKI implementation; it also provides support to the e-authentication service component (ASC) of the federal enterprise architecture. It encompasses multiple CAs, each implementing a separate FPKI or ASC certificate policy (U.S. General Services Administration [GSA], 2005b). A CA of the federal PKI is designed to support the Federal Identity Credentialing Initiative and implement the federal PKI certificate (U.S. General Accounting Office [GAO], 2003). The federal PKI architecture consists of the following four primary authorities (Alterman, 2005):

• The Federal Bridge Certification Authority (FBCA).
• The Federal PKI Policy Authority (Common Policy CA).
• The Citizen and Commerce Class Common Certification Authority (C4CA).
• The E-Governance Certification Authority (EGCA).
The Federal Bridge Certification Authority is the bridge CA, which acts as an anchor for related CAs (Cooper, 2004). The FBCA allows discrete PKIs to trust digital certificates issued by other entities whose policies have been mapped and cross-certified with the FBCA. It is two-way cross-certified with government agencies and external PKIs. A trusted federal PKI CA can issue certificates that are accepted nationwide for government, commercial, and financial transactions. The certification trust path can be built as User 1 → CA Entity 1 → FBCA → CA Entity 2 → User 2. To obtain a cross-certificate, the entity requesting cross-certification, the applicant, needs to apply to the FBCA, which authenticates subscribers, issues and manages certificates, and publishes the certificate policy statement (CPS) and the certificate status for relying parties. The FBCA-certified service includes a key manager, which enables centralized key generation, private-key backup, and distributed-key recovery (Verisign, 2005). The FBCA provides services to other bridges such as federal agencies, the private sector, foreign governments, and states, and provides access certificates for electronic services (ACES). The Federal PKI Policy Authority, also known as the Federal PKI Common Policy Framework Certification Authority (FCPFCA), administers all policy issues of the FPKI and is charged with responsibility to oversee the cross-certification and interoperability of nonfederal PKIs and the FPKI. The FCPFCA facilitates inclusion of the PKI services from shared service providers (e.g., Verisign). This is in direct response to the recommendations of the E-Government Act to acquire services rather than build them. The SSPs are
subordinate to the FCPFCA and, therefore, are inherently one-way (optionally two-way) cross-certified with the FBCA. The FCPFCA provides policy mapping to enable common policy object identifiers (common policy OIDs) and interoperability between two organizations that apply similar issuance and application policies but have deployed different policy OIDs. The CP defines requirements and standards for the issuance and management of keys and certificates. It determines the level of trust the certificate provides and the requirements for the CA operations in order to maintain the trustworthiness of its certificates' status information. The CP also defines users' responsibilities for requesting and using certificates and keys. There can be more than one certificate policy adopted by the PKI. Many existing user certificates at the federal and nonfederal levels do not yet comply with the FPKI policy. The Citizen and Commerce Class Common Certification Authority operates at the FBCA rudimentary level (see below) of assurance using a memorandum of agreement rather than a detailed review of the certificates for compliance. It authenticates citizens and commercial enterprises as subordinates (with the option of two-way cross-certification) for many electronic services with the U.S. federal government. An E-Governance Certification Authority supports the ASC, which represents a common infrastructure for electronically authenticating, government-wide, the identity of users of federal e-government services (U.S. GSA, 2005a). A PKI transaction process typically proceeds as follows:

1. The digital signature signer (PKI subscriber) creates a digital signature with PKI digital signature software, applies the signature to the transaction, applies the relying party's public key, and sends the transaction to the relying party.
2. The PKI architecture, including the CA, RA, repository, cross-certification CAs, and Federal Bridge CA, authenticates the credentials of the subscriber for the relying party.
3. The relying party's PKI environment receives and authenticates the subscriber's transaction, accepts or rejects the document, and saves the document with a digital certificate.
The internal practices and procedures for handling certificates are administered by the Federal Public Key Infrastructure Operational Authority (FPKI OA) and are summarized in a CPS. It describes the practices concerning certificate life cycle, such as issuance, management, revocation, and renewal.
Assurance Levels

Trust in distributed electronic transactions with identity authentication is the cornerstone of e-business, e-government, and e-health. Identity authentication (but not authorization or access control), addressed in E-Authentication Guidance for Federal Agencies (2003), defines four identity e-authentication assurance levels for e-government transactions as follows:

• E-Authentication Level 1: Asserts little or no confidence.
• E-Authentication Level 2: Provides some confidence about an initial identity assertion, but the details need to be verified independently.
• E-Authentication Level 3: Satisfies high confidence required to access restricted Web services without the need for details to be verified independently.
• E-Authentication Level 4: Attests very high confidence appropriate to gain access to highly restricted Web resources with no other identity assertion controls.
These e-authentication levels are based on (a) the degree of confidence in the process of establishing the identity of the individual to whom the verifiable credential was issued, and (b) the degree of confidence that the individual who presented the credential is the individual to whom the credential was issued. The FBCA facilitates transaction interoperability by allowing the relying party to create a certificate trust path to the certificate issuer and determine the certificate's e-authentication level of trust (FBCA: Federal Bridge Certification Authority, 2006). Similar to e-authentication levels, FBCA policy defines four assurance levels: rudimentary, basic, medium, and high (Spencer, 2004). To comply with Federal Information Processing Standard 201 (FIPS 201), an organizational entity needs to have an enterprise PKI cross-certified with the FBCA at the medium assurance level or higher, or have an approved SSP.
Record Keeping

Record keeping for the required period of time is the responsibility of the subscriber, the recipient, and the trusted PKI service provider(s). FPKI-authenticated transaction records contain the following (Federal Public Key Infrastructure Steering Committee, 2005):

• A human-readable version of the signer's name or its metadata representation.
• A human-readable version of the transaction signing date and time.
• The purpose (explicit or implicit) for applying the PKI digital signature to the transaction, derived implicitly from the context or stated explicitly as a purpose statement.
The retention period of record keeping is from the record creation to its final disposition. However, the retention period for public-key certificates could be shorter, typically 12 to 36 months. Validity of a certificate issued by the
FPKI is limited to a maximum of 3 years. For comparison, FIPS 201 stipulates a maximum of a 5-year lifetime for ID cards. After its expiration, revalidation of a digital signature may present some problems: The old CRLs may no longer be available, technology may become obsolete, and PKI transaction format changes will result in a new digital signature hash value different from the old one. Furthermore, over a long period of time, key management of a multitude of unrelated individually encrypted transaction records can become a significant record-keeping burden. Consequently, the National Archives and Records Administration (NARA) currently has no intention of revalidating digital signatures to establish the authenticity of archived and permanent electronically signed records. Instead, NARA (2000) stipulates that these records must include “the printed name of the electronic signer, as well as the date…as part of any human readable form.” Security requirements for a CA are stringent (Microsoft, 2005; National Institute of Standards and Technology, 2001). A CA can attract a higher risk of external attacks, while some shortcomings are internal and procedural in nature. For example, CRLs issued by a CA must be updated at least every 18 hours. This creates a vulnerability: A certificate cannot be validated if the CRL has not been published within the past 18 hours.
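The 18-hour CRL rule reduces to a freshness check that a relying party must apply before trusting revocation data. A sketch (the 18-hour figure comes from the text; the function name and timestamps are illustrative):

```python
from datetime import datetime, timedelta

CRL_MAX_AGE = timedelta(hours=18)  # CRLs must be reissued at least this often

def crl_is_usable(issued_at: datetime, now: datetime) -> bool:
    """A certificate cannot be validated against a CRL older than 18 hours."""
    return now - issued_at <= CRL_MAX_AGE

now = datetime(2007, 6, 1, 12, 0)
print(crl_is_usable(datetime(2007, 6, 1, 0, 0), now))   # True: 12 hours old
print(crl_is_usable(datetime(2007, 5, 31, 10, 0), now)) # False: 26 hours old
```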
PIV Card

The validation of FPKI certificates, signatures, and asymmetric key pairs is now extended to personal identity verification (PIV; Information Technology Laboratory of the National Institute of Standards and Technology, 2005). PIV, developed by the National Institute of Standards and Technology, is an implementation of the federal ID standard for a secure and reliable form of identification issued by all federal agencies to their employees and contractors (White House, 2004). The Homeland
Federal Public-Key Infrastructure
Security Presidential Directive 12 (HSPD-12) requires the use of a PIV card to gain logical access to federally controlled information systems. FIPS 201 requires that PIV card authentication be supported by certificates issued by CAs under the FCPFCA. PIV cards carry one or more asymmetric keys. To support interoperable PIV authentication, the associated public keys are managed through CA certificates. The FPKI validates signatures on all PIV data elements, the integrity of the asymmetric key pairs, and the integrity of PKI-related mandatory and optional PIV data elements present on the PIV card.

The performance, interoperability, and security of the PIV system are enhanced through a set of requirements covering identity proofing, registration, and issuance of a PIV card. For example, the requirements define acceptable cryptographic algorithms and key sizes, biometric data, and the interface and data elements of a PIV card (Information Technology Laboratory, 2005). The standard also requires that the cardholder authenticate to the PIV card each time before performing a private-key computation with the digital signature key. The PIV card is subject to FIPS 140-2 regulation, which is a mandatory standard for "all Federal agencies that use cryptographic-based security systems to protect sensitive information in computer and telecommunication systems (including voice systems)" (National Institute of Standards and Technology, 2001). The PIV standard supports several levels of security to give agencies flexibility in selecting the appropriate level of security for each application, including handling PKI certificates from multiple vendors, capturing and installing biometric data, managing cryptographic keys, and issuing one-pass contact and contactless cards. The PIV card's smart chip stores the following data:

•	The cardholder unique identifier, which identifies the cardholder to a computer and can be used as a badge for physical access control systems.
•	Three digital certificates: an authentication certificate, a digital signature certificate (optional), and an e-mail encryption key certificate (optional).
•	A numeric password protecting the PKI keys and biometrics.
•	Biometric data for authentication (e.g., fingerprints and facial images).
futurE trEnds

The federal PKI is maturing, with many more applications being implemented. Despite this success, some concerns about the implementation of FPKI remain: privacy, agencies' internal politics, vendor battles for market space, and cost. PKI architecture continues to evolve. Future FPKI extensions may include concepts like OpenCA, "a full-featured and Open Source out-of-the-box Certification Authority implementing the most used protocols with full-strength cryptography world-wide," and technological issues related to outsourcing (OpenCA Research and Development Labs, 2006; Xenitellis, 2000). Outsourcing is another aspect of PKI evolution. For example, IBM offers outsourcing "for clients who wish to outsource their PKI. Such an arrangement allows clients to use local Registration Authorities for administering certificate registration requests…[and] certificate services from a trusted Certification Authority" (IBM, 2006).
conclusion

The use of FPKI improves internal government operations and enhances the delivery of services to citizens. The use of FPKI supports four major areas of e-government initiatives: government to citizen, government to business, government to government, and internal effectiveness and efficiency. Interoperability remains the most critical issue for further FPKI growth. FPKI interoperability is built on two levels: the Federal PKI Policy Authority for policy interoperability and the COTS-based FBCA for technical interoperability. PKI interoperability becomes much more complex as the number of PKI domains increases. Overall, the federal PKI helps to accomplish several goals: increased security for the routine exchange of sensitive data using cryptographic technologies and applications, increased security for remote access log-on over unsecured networks using PIV cards, reduced certificate costs through the PKI services available from SSPs and commercial enterprises, ease of manageability with defined standards, and scalability to meet the demands of a growing infrastructure.
futurE rEsEarch dirEctions

Future FPKI research directions are focused on innovative methods and technologies. Some of them are listed:

•	Analysis of FPKI and desktop solutions to facilitate bridge-enabled certificate validation (Federal PKI Policy Authority).
•	Analysis of methods for control, recovery, and reissuance of a certificate with a different public key.
•	Analysis of methods for key backup to enable deciphering of old messages.
•	Analysis of ways to enhance security with FPKI and to enhance authorization services.
•	Development of a multidisciplinary case study and methodology of security measured in terms of costs and scalability, not only in terms of cryptography (Rundgren, 2005b).
•	Development of methods for the exchange of more information about a user than just identity to make more intelligent access control decisions.
•	Evaluation of new tools and technologies for FPKI.
•	Further research on adding new token management functions to the growing list of functions provided by the FPKI.
•	Identifying, resolving, and documenting outstanding technical and policy-related issues associated with cross-certification; cross-certification of various enterprises and systems with the Federal Bridge (test and validation) in areas such as e-learning, e-health, e-government, and e-business.
•	Investigation of methods for X.509 path discovery (including a path-discovery test suite) in a hierarchy and in a mesh network with one bridge and validation capabilities at multiple levels (e.g., enterprise, bridge enabled).
•	Investigation of the possible impact of Internet2 on FPKI.
•	Investigation of the use of biometrics in conjunction with FPKI and smart cards for authentication of users (e.g., to authenticate workers at U.S. ports, scheduled to start in March 2007).
•	Research on a scalable and secure architecture for information exchange and collaboration.
•	Research on a single-root FPKI; proponents of such a solution argue that in the long term, the FBCA will be replaced by a single-root FPKI and that having a neutral federal identity issued according to a certain policy will reduce the need for having "agency affiliation" in a federal identity certificate (Rundgren, 2005a). Others (Alterman, 2007) emphasize that such a solution may be resisted as a significant challenge to individual privacy.
•	Research on cross-certification of the higher education bridge (EDUCAUSE) with the Federal Bridge.
•	Research on the expansion of FPKI toward a service-oriented architecture (SOA). SOA is an architectural approach to building business applications as a collection of loosely coupled services. SOA and process integration offer organizations an effective way to increase their business productivity. Service-oriented architecture is one of the hottest topics in IT today. SOA goes beyond the concepts of object-oriented analysis and design (OOAD) and component-based development (CBD), defining an independent, interoperable, scalable, and flexible service description (Infogain, 2007; Watson, 2006).
•	Research on the possible architecture and functionality of the next generation of FPKI.
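The path-discovery problem raised in the research list above can be pictured as a graph search over cross-certification links: in a bridge topology, every agency PKI cross-certifies with the bridge CA rather than pairwise with every other agency. The sketch below is conceptual only; the CA names and edge list are invented for illustration, and real X.509 path discovery must additionally check policy mappings, constraints, and validity:

```python
from collections import deque

# Cross-certification edges: who has issued a cross-certificate to whom.
# Hypothetical topology with one federal bridge and one external bridge.
CROSS_CERTS = {
    "AgencyA-CA": ["FBCA"],
    "AgencyB-CA": ["FBCA"],
    "FBCA": ["AgencyA-CA", "AgencyB-CA", "HigherEd-Bridge"],
    "HigherEd-Bridge": ["FBCA", "University-CA"],
    "University-CA": ["HigherEd-Bridge"],
}

def discover_path(start: str, trust_anchor: str) -> list:
    """Breadth-first search for a certification path from the CA that
    issued an end-entity certificate back to a relying party's anchor."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == trust_anchor:
            return path
        for nxt in CROSS_CERTS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []  # no trust path exists
```

In this toy topology, a certificate issued by University-CA is validated by an AgencyA relying party through the chain University-CA → HigherEd-Bridge → FBCA → AgencyA-CA, which is precisely the bridge-enabled validation the research items above concern.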
rEfErEncEs

Alterman, P. (2005). Federal PKI policy authority overview and current status, FPKIPA. Retrieved May 29, 2006, from http://middleware.internet2.edu/pki06/proceedings/alterman-fpki_status.ppt

Alterman, P. (2007). The U.S. Federated PKI and the Federated Bridge Certification Authority. Retrieved March 8, 2007, from http://www.cio.gov/fbca/documents/altermanpaper.pdf

Cooper, D. (2004). Public key infrastructures: PKI research. Security Technology Group. Retrieved May 29, 2006, from http://csrc.nist.gov/pki/PKIResearch.html

E-authentication guidance for federal agencies: Office of Management & Budget. (2003). Retrieved May 26, 2006, from http://www.whitehouse.gov/omb/memoranda/fy04/m04-04.pdf

FBCA: Federal Bridge Certification Authority. (2006). Retrieved May 28, 2006, from http://www.cio.gov/fbca/

Federal Public Key Infrastructure Steering Committee. (2005). PKI transaction records management guidance: Records management guidance for PKI digital signature authenticated and secured transaction records. Retrieved May 20, 2006, from http://www.cio.gov/fpkisc/library.htm

IBM. (2006). Identrus financial institutions and their customers: PKI outsourcing and assessments. Retrieved September 30, 2006, from http://www-03.ibm.com/security/identrus/

Infogain. (2007). SOA-design concepts: The story of evolution (Executive summary, white paper). Retrieved from http://www.infogain.com/downloads/whitepapers/WP_SOA-DesignConcepts.pdf

Information Technology Laboratory of National Institute of Standards and Technology. (2005). FIPS Pub 201, Federal Information Processing Standards publication: Personal identity verification (PIV) of federal employees and contractors. Computer Security Division. Retrieved May 25, 2006, from http://www.smartcardalliance.org/pdf/industry_info/FIPS_201_022505.pdf

Merlow, M., & Brethaupt, J. (2006). Information security principles and practices. Upper Saddle River, NJ: Pearson Prentice Hall.

Microsoft. (2005). Deploying PKI inside Microsoft (Technical white paper). Retrieved May 20, 2006, from http://www.microsoft.com/technet/itsolutions/msit/security/deppkiin.mspx

National Archives and Records Administration. (2000). Records management guidance for agencies implementing electronic signature technologies. Retrieved May 29, 2006, from http://www.archives.gov/records-mgmt/faqs/pdf/electronic-signiture-technology.pdf

National Institute of Standards and Technology. (2001). FIPS Pub 140-2: Security requirements for cryptographic modules (supersedes FIPS Pub 140-1, January 11, 1994; change notices 12-03-2002). Information Technology Laboratory.

OpenCA Research and Development Labs. (2006). OpenCA PKI development project. Retrieved September 30, 2006, from http://www.openca.org/

Rundgren, A. (2005a). FPKI challenges. Retrieved from http://cio.nist.gov/esd/emaildir/lists/pki-twg/msg00659.html

Rundgren, A. (2005b). NIST provides guidance for NHIN security. Retrieved from http://cio.nist.gov/esd/emaildir/lists/pki-twg/msg00686.html

Spencer, J. (2004, October). U.S. Federal Bridge CA: State of play. I-CIDM Forum, London.

Stewart, J. M., Tittel, E., & Chapple, M. (2005). CISSP certified information systems security professional: Study guide. San Francisco: Sybex.

U.S. General Accounting Office (GAO). (2003, December). Information security: Status of federal public key infrastructure activities at major federal departments and agencies (GAO-04-157, Report to the Committee on Government Reform and the Subcommittee on Technology, Information Policy, Intergovernmental Relations and the Census, House of Representatives).

U.S. General Services Administration (GSA). (2005a). Establishment of e-authentication service component: AGENCY. Office of Governmentwide Policy (C-05-N01). Retrieved May 29, 2006, from http://a257.g.akamaitech.net/7/257/2422/01jan20051800/edocket.access.gpo.gov/2005/pdf/05-15515.pdf

U.S. General Services Administration (GSA). (2005b). Public X.509: U.S. federal PKI architecture. Certification practice statement. Retrieved May 29, 2006, from http://www.cio.gov/fpkipa/documents/FPKIA_CPS.pdf#search=%22FCPFCA%22

Verisign. (2005). VeriSign® managed PKI service for the Federal Bridge Certification Authority. Retrieved May 29, 2006, from http://www.verisign.com/static/005352.pdf

Watson, B. (2006). SOA: At the forefront of integrating applications. Fusion Zone. Retrieved from http://www.fusion-zone.com/article_soa_integrating_apps.html

White House. (2004). Homeland security presidential directive/HSPD-12. Retrieved May 29, 2006, from http://www.whitehouse.gov/news/releases/2004/08/20040827-8.html

Xenitellis, S. (2000). A guide to PKIs and open-source implementations: The open-source PKI book. Retrieved May 29, 2006, from http://ospkibook.sourceforge.net/docs/OSPKI-2.4.7/OSPKIhtml/ospki-book.htm

furthEr rEading

Adams, C., & Lloyd, S. (2002). Understanding PKI: Concepts, standards, and deployment considerations (2nd ed.). Addison-Wesley Professional.

Adams, C., & Lloyd, S. (2003). Public-key certificates and certification. Addison-Wesley Professional.

Bragg, R., Rhodes-Ousley, M., & Strassberg, K. (2004). Network security: The complete reference (1st ed.). McGraw-Hill.

Brewer, D. C. (2005). Security controls for Sarbanes-Oxley section 404 IT compliance: Authorization, authentication, and access. Wiley.

Chadwick, D., & Zhao, G. (Eds.). (2005). Public key infrastructure: Second European PKI workshop. Research and applications (Lecture Notes in Computer Science). Springer.

Choudhury, S. (2002). Public key infrastructure implementation and design (1st ed.). Wiley.

Conklin, W. A., White, G. B., Cothren, C., Williams, D., & Davis, R. L. (2005). Principles of computer security: Security+ and beyond (1st ed.). McGraw-Hill.

Feghhi, J. (1998). Digital certificates: Applied Internet security. Addison-Wesley Professional.

Hack, C. (2005). Reloaded: A step-by-step guide to computer attacks and effective defenses (2nd ed.). Prentice Hall PTR.

Hansche, S., Berti, J., & Hare, C. (2004). Official (ISC)2 guide to the CISSP exam. Auerbach Publications.

Harris, S. (2005). CISSP all-in-one exam guide (3rd ed.). McGraw-Hill Osborne Media.

Herrmann, D. S. (2007). Complete guide to security and privacy metrics: Measuring regulatory compliance, operational resilience, and ROI. Auerbach.

Housley, R., & Polk, T. (2001). Planning for PKI: Best practices guide for deploying public key infrastructure. John Wiley.

Lomow, G., & Newcomer, E. (2005). Introduction to SOA with Web services. Addison-Wesley Professional.

McGraw, G. (2006). The role of architectural risk analysis in software security. Addison-Wesley Professional.

Nadel, B. A. (2004). Building security: Handbook for architectural planning and design (1st ed.). McGraw-Hill.

Nardelli, E., & Talamo, M. (Eds.). (2006). Certification and security in inter-organizational e-services: IFIP. Springer.

Newcomer, E., & Lomow, G. (2005). Understanding SOA with Web services. Addison-Wesley Professional.

Oppliger, R. (2005). Contemporary cryptography. Artech House Publishers.

Peltier, T. R. (2004). Information security policies and procedures: A practitioner's reference (2nd ed.). Auerbach.

Peltier, T. R. (2005). Information security risk analysis (2nd ed.). Auerbach.

Raina, K. (2003). PKI security solutions for the enterprise: Solving HIPAA, E-Paper Act, and other compliance issues (1st ed.).

Rescorla, E. (2000). SSL and TLS: Designing and building secure systems (1st ed.). Addison-Wesley Professional.

Schellekens, M. H. M. (2004). Electronic signatures: Authentication technology from a legal perspective (Information technology and law). Asser Press.

Tipton, H. F., & Krause, M. (2006). CISSP: Information security management handbook (5th ed., Vol. 3). Auerbach Publications.

Tittel, E., & Lindros, K. (2003). The computer security bookshelf: Part 1. Que.

Utting, M., & Legeard, B. (2006). Practical model-based testing: A tools approach. Morgan Kaufmann.

Vacca, J. R. (2004). Public key infrastructure: Building trusted applications and Web services. Auerbach Publications.

Vacca, J. R. (2005). The U.S. government goes wireless: Read these two case studies to see how the U.S. government is using wireless technology to be more efficient. Mobile Business Advisor.

Vacca, J. R. (2007). Biometric technologies and verification systems. Butterworth-Heinemann.

West, D. M. (2005). Digital government: Technology and public sector performance. Princeton University Press.
tErms and dEfinitions

Certification Authority (CA): An authority trusted by one or more users to create and assign certificates.

Digital Signature: An encrypted message digest created with a private key to authenticate the sender of a computer file or a message.

Federal Bridge Certification Authority (FBCA): The certification authority designed to create trust paths among individual agency PKIs by allowing discrete PKIs to trust digital certificates issued by other entities whose policies have been mapped and cross-certified with the FBCA.

Federal PKI: FPKI is PKI for secure data exchange over unsecured networks and is used in support of statutory mandates for e-government and implementing electronic-signature technology.

Federal PKI Policy Authority: An authority in a network that determines participants and levels of cross-certification among participating individual agency PKIs, and administers a certificate policy.

Personal Identity Verification (PIV) Card: The PIV card is a form of identification for U.S. federal government employees and contractors to meet the requirements of Homeland Security Presidential Directive 12 and Federal Information Processing Standard 201.

Private Key: In PKI, the private key is a privately held (secret) key used with a corresponding public key to encrypt and decrypt messages.

Public Key: In PKI, the public key is a publicly known key used with a corresponding private key to encrypt and decrypt messages.

Public-Key Infrastructure (PKI): PKI is a technical and organizational infrastructure for secure data exchange over unsecured networks using an asymmetric encryption scheme (public and private keys) with public-key certificates to protect the confidentiality, integrity, and nonrepudiation of transmitted messages.

Registration Authority (RA): An RA is an authority in a network that verifies user requests for a digital certificate and tells the certificate authority to issue it.
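The Digital Signature definition above (an encrypted message digest created with a private key) can be made concrete with textbook RSA on deliberately tiny numbers. This is a toy for exposition only, an assumption-laden sketch: real systems use vetted cryptographic libraries and key sizes of 2048 bits or more, and the key values below are the standard classroom example, not real parameters:

```python
import hashlib

# Toy RSA key pair: n = 61 * 53 = 3233, public exponent e = 17,
# private exponent d = 2753. Far too small to be secure; chosen
# only so the arithmetic stays visible.
N, E, D = 3233, 17, 2753

def digest(message: bytes) -> int:
    # Reduce the SHA-256 digest mod N so it fits the toy modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

def sign(message: bytes) -> int:
    # "Encrypt" the digest with the private exponent -- the signature.
    return pow(digest(message), D, N)

def verify(message: bytes, signature: int) -> bool:
    # Recover the digest with the public exponent and compare it to
    # a freshly computed digest of the received message.
    return pow(signature, E, N) == digest(message)
```

The signer publishes (N, E) and keeps D secret; anyone can check the signature, but only the holder of D could have produced it, which is the nonrepudiation property the FPKI definitions above rely on.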
Chapter XLI
Radio Frequency Identification (RFID) Technology

David C. Wyld
Southeastern Louisiana University, USA
introduction

We are in the midst of what may become one of the true technological transformations of our time. RFID (radio frequency identification) is by no means a new technology. RFID is fundamentally based on the study of electromagnetic waves and radio, pioneered in the 19th-century work of Faraday, Maxwell, and Marconi. The idea of using radio frequencies to reflect waves from objects dates back as far as 1886, to experiments conducted by Hertz. Radar was invented in 1922, and its practical applications date back to World War II, when the British used the IFF (Identify Friend or Foe) system to identify enemy aircraft (Landt, 2001). Stockman (1948) laid out the basic concepts for RFID. However, it would take decades of development before RFID technology would become a reality. Since 2000, significant improvements in functionality, decreases in both size and cost, and agreements on communication standards have combined to make RFID technology viable for commercial and governmental purposes. Today, RFID is positioned as an alternative to the ubiquitous bar code for identifying objects.
background

Automatic identification, or Auto-ID, represents a broad category of technologies that are used to help machines identify objects, humans, or animals. Auto-ID is a means of identifying items and gathering data on them without human intervention or data entry. As can be seen in Figure 1, RFID is a type of Auto-ID technology. Sometimes referred to as dedicated short-range communication (DSRC), RFID is "a wireless link to identify people or objects" (d'Hont, 2003, p. 1). RFID is, in reality, a subset of the larger radio frequency (RF) market, which encompasses an array of RF technologies, including the following:

•	Cellular phones
•	Digital radio
•	The Global Positioning System (GPS)
•	High-definition television (HDTV)
•	Wireless networks (Malone, 2004)
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Figure 1. The family of automatic identification technologies: Auto-ID technology encompasses bar-code systems, passive RFID, active RFID, biometric systems, smart cards, and optical character recognition (OCR)
RFID is a technology that already surrounds us. If you have an Exxon/Mobil SpeedPassTM in your pocket, you are using RFID. If you have a toll tag on your car, you are using RFID. If you have checked out a library book, you have likely encountered RFID. If you have been shopping in a department store or an electronics retailer, you have most certainly encountered RFID in the form of an EAS (electronic article surveillance) tag.
rfid tEchnology

To best understand the power of radio frequency identification, it is first useful to compare RFID with bar-code technology, which is omnipresent today. The specific differences between bar-code technology and RFID are summarized in Table 1. The principal difference lies in the potential of RFID to provide unique identifiers for objects. While the bar code and the UPC (Universal Product Code) have become all pervading and enabled a host of applications and efficiencies (Brown, 1997), they only identify an object as belonging
to a particular class, category, or type. Due to its structure (as shown in Figure 2), a bar code cannot uniquely identify a specific object: It can identify only the product and its manufacturer. Thus, a bar code on any one package of sliced meat in a grocery store is the same as on any other of a particular type or size from a particular firm. Likewise, the bar code on a case or pallet of military supplies cannot tell one shipment from another. The two technologies also differ in the way in which they read objects. With bar coding, the reading device scans a printed label with optical laser or imaging technology. However, with RFID, the reading device scans, or interrogates, a tag using radio frequency signals. There are three necessary elements for an RFID system to work. These are tags, readers, and the software necessary to link to a larger information processing system. In a nutshell, the technology works as follows. The tag is the unique identifier for the item it is attached to. The reader sends out a radio signal, and the tag responds to identify itself. The reader then converts the radio waves returned from the tag into data that can be passed
Table 1. RFID and bar codes compared

bar-code technology | rfid technology
Bar codes require line of sight to be read. | RFID tags can be read or updated without line of sight.
Bar codes can only be read individually. | Multiple RFID tags can be read simultaneously.
Bar codes cannot be read if they become dirty or damaged. | RFID tags are able to cope with harsh and dirty environments.
Bar codes must be visible to be logged. | RFID tags are ultrathin, can be printed on a label, and can be read even when concealed within items.
Bar codes can only identify the type of item. | RFID tags can identify a specific item.
Bar-code information cannot be updated. | Electronic information can be overwritten repeatedly on RFID tags.
Bar codes must be manually tracked for item identification, making human error an issue. | RFID tags can be automatically tracked, eliminating human error.
on to an information processing system to filter, categorize, analyze, and enable action based on the identifying information. There are three essential components that combine to form an RFID tag: the chip, the antenna, and the packaging that contains them. An RFID tag has an integrated circuit (IC) that contains the unique identifying data about the object to which it is attached. One of the identifiers, but not the only one, that can be used to identify the item uniquely with an RFID tag is the EPC (Electronic Product Code). The IC is attached to a small antenna, which most commonly is a small coil of wires. The third element is the packaging of the tag, which protects the IC and the antenna. RFID tags can take on a variety of forms for specific applications, including smart labels, keys or key fobs, watches, smart cards, disks and coins (which can be attached to an item with a fastening screw), and implantable glass transponders (for animal or human use). Hitachi has developed the mu-chip, a very tiny (0.4 mm²) RFID tag the size of a grain of rice (Anonymous, 2004). There are two basic categories of tags: passive and active. A summary of the differences between the two general categories is presented in Table
Figure 2. Anatomy of a bar code: header, manufacturer's identification number, item identifier, and check digit
2. Passive tags are already familiar to us in the form of the simple EAS tags used throughout the retail industry. Without a power source, a passive tag is only able to transmit information when it is within range of an RFID reader. Passive tags function through a process known as energy harvesting, wherein energy from the reader is gathered by the tag, stored momentarily, and then transmitted back to the reader at a different frequency. RFID readers have three essential components: the antenna, the transceiver, and the decoder. RFID readers can differ quite considerably in their complexity, form, and price, depending upon the type of tags being supported and the functions to be fulfilled. Readers can be large and fixed, or small handheld devices. However, the read range for a portable reader will be less than the range that can be achieved using a fixed reader. The reader, either continuously (in the case of a fixed-position reader) or on demand (as with a handheld reader), sends out an electromagnetic wave to inquire if there are any RFID tags present in its active read field. When the reader receives any signal from a tag, it passes that information
on to the decoding software and processes it for forwarding to the information system it is a part of. It has been forecast that, as soon as 2007, RFID reading capabilities will be integrated into cell phones, PDAs (personal digital assistants), and other electronic devices (Thomas, 2005).

In brief, the science of a passive RFID system works as follows. The reader sends out electromagnetic waves, and a magnetic field is formed when the signal from the reader couples with the tag's antenna. The passive RFID tag draws its power from this magnetic field, and it is this power that enables the tag to send back an identifying response to the query of the RFID reader. Singel (2004) likened passive RFID to a "high-tech version of the children's game 'Marco Polo.'" In a passive RFID system, the reader sends out a signal on a designated frequency, querying if any tags are present in its read field (the equivalent of yelling out "Marco" in a swimming pool). If a chip is present, the tag takes the radio energy sent out by the reader to power itself up and respond with the electronic equivalent of kids yelling "Polo" to identify their position.
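The "Marco Polo" exchange described above can be sketched as a small simulation. This is pure Python with no real RF physics; the tag IDs, positions, and read range are invented for the example, and real readers additionally run anticollision protocols so simultaneous replies do not interfere:

```python
class PassiveTag:
    """A tag that responds only when it receives enough energy from a reader."""
    def __init__(self, tag_id: str, position: float):
        self.tag_id = tag_id
        self.position = position  # distance along a line from the reader, in meters

    def interrogated(self, reader_position: float, read_range: float):
        # Energy harvesting: the tag can power up only inside the read field.
        if abs(self.position - reader_position) <= read_range:
            return self.tag_id  # the "Polo" reply
        return None  # unpowered tags stay silent

class Reader:
    def __init__(self, position: float = 0.0, read_range: float = 3.0):
        self.position = position
        self.read_range = read_range

    def inventory(self, tags):
        """Broadcast a query (the "Marco") and collect every reply in range."""
        replies = [t.interrogated(self.position, self.read_range) for t in tags]
        return [r for r in replies if r is not None]
```

Calling `Reader().inventory(tags)` returns the IDs of exactly those tags inside the read field, mirroring how only powered-up tags answer the reader's broadcast.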
Table 2. Differentiating passive and active RFID tags

active tags | passive tags
Powered by an internal battery | Operate without a battery
More expensive | Less expensive
Finite lifetime (because of battery) | Unlimited life (because of no battery)
Greater weight (because of battery) | Less weight (because of no battery)
Greater range (up to 100 meters) | Lesser range (a few meters, usually less)
Better noise immunity | Subject to noise
Internal power to transmit signal to the reader | Derive power from the electromagnetic field generated by the reader
Can be effective with less powerful readers | Require more powerful readers
Higher data transmission rates | Lower data transmission rates
More tags can be read simultaneously | Fewer tags can be read simultaneously
Less orientation sensitivity | Greater orientation sensitivity
All of this happens almost instantaneously. In fact, today's RFID readers are capable of reading up to 1,000 tags per second. Smart labels are a particularly important form of passive RFID tag. A smart label is an adhesive label that is human readable and quite often also machine readable via a printed bar code; in addition, the label is embedded with an ultrathin RFID tag inlay. Smart labels thus combine the functionality of passive RFID tags with the convenience of a printed label. Looking ahead, analysts have predicted that the vast majority of all RFID tags will come in the form of smart labels. In fact, it has been estimated that smart labels will constitute 99.5% of the trillion tags forecast to be in use a decade from now (Anonymous, 2005).

An active tag functions in the same manner as its passive counterpart, but it contains a fourth element: an internal battery that continuously powers the tag. As such, the tag is always on and transmitting its information. The active tag is only readable, however, when it is in the reading field of an RFID reader. The battery significantly boosts the effective operating range of the tag: While a passive tag can only be read at a range of a few yards, active tags can be read at a distance of 10 to 30 yards. However, the
useful life of an active tag is limited by the life of the onboard battery (typically 5 years at present). Furthermore, due to the need for a battery, active tags will always cost more and weigh more than a passive tag.

RFID tags are intentionally designed to not be the repository of item information. Rather, through a coding system known as EPC, the tag serves as an electronic license plate for each tagged item, directing the user via the Internet to the database where complete descriptive information about the item is housed (Aitoro, 2005). As can be seen in Figure 3, there are four elements that comprise the 96-bit-capacity Electronic Product Code. These are the following.

1.	Header (or Version): This section identifies the length of the EPC number, including the code type and version in use (up to 8 bits).
2.	EPC Manager (or Manufacturer): This section identifies the company or entity responsible for managing the next two EPC elements (up to 28 bits).
3.	Object Class (or Product): This section identifies the class of the item (for example, the stock-keeping unit [SKU] or consumer unit; up to 24 bits).
Figure 3. The electronic product code: version bits, manufacturer bits, product bits, and serial-number bits
Table 3. RFID applications

traditional rfid applications: security/access control; electronic article surveillance; asset/fleet management; mass transit; library access; toll collection; animal identification
4.	Serial Number: This section identifies a unique serial number for all items in a given object class (up to 36 bits).
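The four-field, 96-bit layout enumerated above can be expressed as straightforward bit packing. The field widths below follow the chapter's description (8 + 28 + 24 + 36 = 96 bits); the official EPCglobal encodings differ in detail, so treat this as a schematic of the idea, not the real tag data format:

```python
# Field widths in bits, in most-significant-first order, per the
# header / EPC-manager / object-class / serial-number split above.
FIELD_WIDTHS = {"header": 8, "manager": 28, "object_class": 24, "serial": 36}

def pack_epc(header: int, manager: int, object_class: int, serial: int) -> int:
    """Concatenate the four fields into a single 96-bit integer."""
    epc = 0
    for name, value in (("header", header), ("manager", manager),
                        ("object_class", object_class), ("serial", serial)):
        width = FIELD_WIDTHS[name]
        assert 0 <= value < (1 << width), f"{name} overflows {width} bits"
        epc = (epc << width) | value
    return epc

def unpack_epc(epc: int) -> dict:
    """Recover the four fields from a 96-bit EPC integer."""
    fields = {}
    for name in reversed(list(FIELD_WIDTHS)):  # serial occupies the low bits
        width = FIELD_WIDTHS[name]
        fields[name] = epc & ((1 << width) - 1)
        epc >>= width
    return fields
```

Packing a manager number, an object class, and a serial number and then unpacking them returns the original fields, which is exactly the round trip an RFID middleware layer performs between the tag's raw bits and the database lookup the text describes.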
With the 96-bit EPC structure, manufacturers should not have to worry about running out of EPC numbers for many decades. In fact, the EPC data structure can generate approximately 33 trillion different unique combinations, which according to Helen Duce of Cambridge University, would be enough to label all of the atoms in the universe (as cited in Anonymous, 2003). According to projections from the National Research Council’s Committee on Radio Frequency Identification Technologies (2004), this will allow for the billions of people on earth to have billions of tags each. Table 3 outlines some of the present and emerging RFID applications. Over time, RFID will likely supplement, rather than supplant, barcode technology for tracking items in supply chain management and other applications in organizations. They are by no means mutually exclusive technologies. Indeed, some of the most creative and cost-beneficial applications may come from combining RFID and bar codes together, where
0
Emerging rfid applications Warehouse Management Supply Chain Management Reverse Logistics Shipment Tracking Asset Tracking Retail Management Document Tracking Anticounterfeit Advance Access Control Mass Transit Monthly and Single Trip Airline Baggage Handling Aircraft Parts and Tools Health Care Applications Regulatory Compliance Payments
RFID tags and labels may be used to identify large groups of items and bar codes remain the tracking device for individual items.
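The four EPC fields described above can be sketched as simple bit-field packing. The snippet below is an illustration using only the field widths given in the text (8 + 28 + 24 + 36 = 96 bits); it is not an implementation of any official EPC encoding scheme, which partitions the bits differently depending on the tag data format in use.

```python
# Pack the four EPC fields into one 96-bit integer. Field widths follow
# the text: header 8, EPC manager 28, object class 24, serial number 36.
# Illustrative only; official EPC encodings use scheme-specific layouts.

FIELDS = [("header", 8), ("manager", 28), ("object_class", 24), ("serial", 36)]

def encode_epc(header, manager, object_class, serial):
    values = [header, manager, object_class, serial]
    epc = 0
    for (name, width), value in zip(FIELDS, values):
        if not 0 <= value < (1 << width):
            raise ValueError(f"{name} does not fit in {width} bits")
        epc = (epc << width) | value
    return epc  # a 96-bit integer

def decode_epc(epc):
    values = {}
    for name, width in reversed(FIELDS):  # unpack from the low bits up
        values[name] = epc & ((1 << width) - 1)
        epc >>= width
    return values

tag = encode_epc(header=0x01, manager=12345, object_class=678, serial=9999)
assert decode_epc(tag) == {"serial": 9999, "object_class": 678,
                           "manager": 12345, "header": 0x01}
```

The 36-bit serial field alone distinguishes the individual items within a single object class, which is what allows the tag to act as a unique "license plate" rather than a data store.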
FUTURE TRENDS

There are five principal drivers behind the recent upswing of interest in RFID across the American economy, and indeed globally. These are depicted in Figure 4. First, as we have come to expect with all electronics, the cost and size of the technology have sharply decreased, and concomitantly, its capabilities and applications have rapidly increased. Thus, the increased accessibility of RFID tags and labels has made it possible to utilize them in new, innovative ways. Second, emerging open, common standards in RFID technology will enable greater data sharing and collaboration between supply chain partners. Third, with increased competitive pressures and customer expectations, organizations throughout the supply chain need better, actionable information on which to make decisions that can impact successful operations in real time.
Figure 4. The driving forces behind RFID: technological advances and cost declines; information technology infrastructure; emergence of common standards; need for actionable, real-time intelligence; RFID mandates
Fourth, the investments that organizations have made in their IT infrastructures over the past decade now make it possible to capture and use this information. The final reason is simply that leading-edge organizations have recognized these four drivers of RFID technology and created a fifth driver by mandating its use in their inbound supply chains and seeking to integrate RFID into their internal operations. These mandates include those of major retailers, including the following.

• Wal-Mart
• Target
• Best Buy
• Albertson's
• Metro (Germany)
• Tesco (United Kingdom; Goodman, 2005)
However, it is the U.S. Department of Defense (DoD) that has issued the largest and most sweeping RFID mandate. While the RFID mandates from Wal-Mart, Target, Albertson's, and other retailers will be important, the DoD's RFID mandate is far more far-reaching than that of any retailer due to the sheer size and scope of the military supply chain. The U.S. military's supply chain is a worldwide operation, moving almost $29 billion worth of items each year (Sullivan, 2005). The military supply chain involves not just bullets, bombs, and uniforms, but a wide panoply of goods, the majority of them consumer goods. The DoD's directive will ultimately affect approximately 60,000 suppliers, which are not necessarily the Lockheeds and Boeings of the world, but mostly small businesses, many employing only a few people (Wyld, 2005). As such, the DoD's RFID mandate has rightly been categorized as the likely tipping point for widespread use of RFID in supply chains (Roberti, 2003). The forces working against RFID's ascension are threefold, surrounding issues of cost, technology, and privacy. First, the unit price for passive RFID tags is projected to decline steeply over the next decade as cost efficiencies in production take hold and volumes grow from millions of tags today to trillions of tags by 2015 (see Figures 5 and 6). However, even as prices approach a penny per tag, there will still be applications for low-cost, non-mission-critical items where RFID-based identification will not be cost effective or practical in comparison to the almost free cost of bar-code identification (Webster, 2006). Thus, it is unrealistic to forecast that every item will someday bear an RFID tag. In the area of technology, the read rates for RFID tags suffer in environments where a great deal of water or metal is present.
With a need to have 100% accuracy in reading tagged items for the RFID equation to work, this can be a very significant limitation to the use of the technology. However, the newest generation of RFID
Figure 5. The projected price of RFID tags, 2005-2015: $0.23 (2005), $0.06 (2010), $0.01 (2015). Source data: IDTechEx (2005).
Figure 6. The number of RFID tags in use, 2005-2015: 6.3 million (2005), 80 billion (2010), 10 trillion (2015). Source data: IDTechEx (2005).
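The trajectories in the IDTechEx projections cited above can be checked with a little arithmetic. The sketch below computes the implied compound annual rates from the endpoint figures; this is a back-of-the-envelope calculation on the cited data points, not IDTechEx's own methodology.

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end / start) ** (1 / years) - 1

# Tag volumes: 6.3 million (2005) growing to 10 trillion (2015).
volume_growth = cagr(6.3e6, 10e12, 10)

# Tag prices: $0.23 (2005) falling to $0.01 (2015).
price_change = cagr(0.23, 0.01, 10)

print(f"volume growth: {volume_growth:.0%} per year")  # roughly +317% per year
print(f"price change: {price_change:.0%} per year")    # roughly -27% per year
```

Even a sustained 27% annual price decline leaves the tag meaningfully more expensive than a printed bar code for a decade, which is the economic core of the "supplement, not supplant" argument made earlier.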
tag technology (GEN 2) seems to perform better around liquids and in metallic environments (Swedberg, 2006). Also, RFID technology has not advanced to the point where it is simply a matter of plug and play to obtain 100% accuracy in tag reading in any environment. Thus, organizations must continually tinker with the positioning of readers and the placement of tags and smart labels on pallets, cartons, and items to obtain optimal accuracy with their RFID systems. Finally, there are serious concerns over the privacy aspects of RFID. As O'Shea (2003) reminds us, RFID is a technological tool, and "as with all technology, it can be used to manipulate our world or be abused for unwarranted control." The fears of a Big Brother use of the technology and the data generated by it are widespread. These apprehensions are only inflamed by references to the Biblical mark of the beast (Jones, 2005) and to Orwellian popular culture examples, such as the movies A Beautiful Mind and Minority Report. Fears of the privacy aspects of RFID have led to the introduction of proposed legislation, at both the state and federal levels, to regulate or prohibit the use of RFID at the consumer level (Trebilcock, 2006). There has also been an escalating outcry over efforts to integrate RFID into state-issued driver's licenses and ID cards (Lipowicz, 2006) and into U.S. passports (Evers & McCullagh, 2006). Thus, there will be a continuing need for debate in both the public policy arena and in industry over the privacy aspects of RFID.
FUTURE RESEARCH DIRECTIONS

In the end, the current push for RFID may be a small part of a larger mosaic. Indeed, futurist Paul Saffo foresees that much of the focus on RFID today is on doing old things in new ways, but the truly exciting proposition is the new ideas and new ways of doing things that will come from RFID. Building upon the previously discussed ideas of RFID as making possible an "Internet of things" (Schoenberger, 2002) or a "wireless Internet of artifacts" (Gadh, 2004), Saffo sees RFID as making possible what he terms "the sensor revolution." This is based on viewing RFID as a media technology, making possible what he categorizes as "'smartifacts' or intelligent artifacts, that are observing the world on our behalf and increasingly manipulating it on our behalf" (as cited in O'Connor, 2005). Saffo thus stresses the importance of thinking outside the box on RFID and looking beyond today's problems to find "unexpected applications," which is where "the greatest potential for RFID lies" (as cited in O'Connor, 2005). Indeed, Saffo urges people to take a 20-year perspective on RFID, according to which we are in the early stages of "a weird new kind of media revolution" in that "RFID will make possible new companies that do things we don't even dream about" (as cited in Van, 2005, p. B1). If we indeed take the long view of history, we can see that some of today's biggest industries, most pedestrian technologies, and most indispensable parts of our lives come from sparks of imagination on how to use a technology in unimagined ways. Indeed, we have seen bar coding itself used in applications far beyond the supply chain functions it was created for (Brown, 1997). Thus, we are still in the early stages of this technological revolution, and we should be mindful of the advice of U.S. Undersecretary of Defense Alan Estevez (2005), who observed, "The real value of RFID lies not in what it can do today but in what it will do in the future."

CONCLUSION

At this early stage in the widespread use of RFID technology, there are far more questions than answers, far more pilots than implementations, far more interested observers than users of RFID, and far more skeptics than enthusiasts among the general public about the value and integrity of the technology. Indeed, we are early on in the life
span of RFID technology. In fact, many leading industry experts expect full-fledged implementation of RFID to take 10 to 15 years, or more. There will be a great need for a continuing academic research agenda as the RFID revolution moves forward. This research will generally fall into three categories.

1. "Nuts-and-Bolts" Research
2. "Big-Picture" Research
3. "Ramifications and Permutations" Research
Of course, all of these areas are interdependent, and none can or even should be conducted in isolation of the others. One of the hallmarks of the development of RFID to date has been the openness of companies, executives, and academics in sharing their research, lessons learned, and best-practice findings. Hopefully, this will continue to be a hallmark of this area of technology.

The first area of research will be perhaps the most active in the short term, and this will fall in what can be best described as nuts-and-bolts research. For the next 5 to 10 years, and perhaps longer, there will be a great need for basic research into just how to make RFID technology work in various settings. This will involve such fundamental questions as the following.

• How and where should tags be applied to pallets, cases, and individual items to maximize their readability?
• How should individual readers be positioned to maximize their ability to scan tags, and how should arrays of readers be stationed to best ensure coverage of a specific type of area?
• What can be done to mitigate the effects of metals, water, and other environmental conditions on the ability to read tags?
• What are the environmental consequences of RFID tags, and what measures will need to be taken in the future to mitigate the pollution and landfill problems that might be created through widespread use?

Certainly, much of this research is being conducted by individual organizations, and hopefully, companies and governmental agencies will continue to be open in sharing both their best practices and lessons learned with the wider RFID community, through presentations, written reports, and case studies, as they have been in the formative stages of the RFID revolution. If such research begins to be held as proprietary, the rising tide for RFID will be held back. There is a great need for academic work as well in this area of RFID research. At this early point, even if corporate and governmental interests wanted university involvement, there are very few true RFID experts in academia and few schools that have placed emphasis to date on such research. Right now, perhaps the leading center for such research is housed at the University of Arkansas. Certainly, other entrepreneurial-minded universities with similar capabilities in their business, engineering, and even public administration programs will follow suit in the near future.

The second category of research will address the impact RFID can and will have on the big picture of organizations, in both the private and public sectors. This will focus on how RFID has affected and will affect organizations, both in terms of their internal systems, operations, and capabilities, and in their interorganizational relationships. In the latter regard, research should focus not only on supply chain relationships, but on how real-time data sharing impacts areas such as service delivery, finance and payments, and customer service. This research should be carried out by discipline specialists in the following areas.

• Strategic Management
• Marketing
• Health Care Administration
• Supply Chain Management
• Public Administration
• Engineering
• Communications
Again, there is great need for cross-disciplinary research and communication as concepts, theories, models, and cases from one area may apply equally well, if not better, in different application areas.

The final area of research should be in what can be categorized as the ramifications and permutations of the technology, examining RFID's impact on society, business, law, privacy, and ethics. Less applied than either of the prior two areas, this may be the toughest category of research to find funding and support for. However, it may well be the most important area of research. This area should draw upon the wealth of many disciplines, including, but by no means limited to, the following.

• Law
• Ethics
• Psychology
• Sociology
• Anthropology
• Computer Science
• Information Management
• Strategic Management
• Health Care Administration
The area would encompass research into how RFID technology is challenging and changing the boundaries, norms, and laws in specific areas of business, government, and society. It may at times be controversial and may bring to light varying perspectives on the impact of this new technology on all of us.
REFERENCES

Aitoro, J. (2005, February 25). The government and RFID. VARBusiness. Retrieved June 8, 2005,
from http://www.varbusiness.com/sections/governmentvar/govt.jhtml?articleId=60403591 Anonymous. (2003). Pushing the envelope. Management Today, p. 39. Anonymous. (2004). Micro tracker. Technology Review, 107(3), 18. Anonymous. (2005, March 23). RFID unites the supply chain. Business Process Management Today. Retrieved March 30, 2005, from http://bpm-today.newsfactor.com/scm/story. xhtml?story_title=RFID-Unites-the-SupplyChain&story_id=31672&category=scm#storystart Brown, S. (1997). Revolution at the checkout counter. Cambridge: Harvard University Press. Committee on Radio Frequency Identification Technologies, National Research Council. (2004). Radio frequency identification technologies: A workshop summary. Retrieved April 30, 2005, from http://www.nap.edu/catalog/11189.html d’Hont, S. (2003). The cutting edge of RFID technology and applications for manufacturing and distribution. Retrieved July 10, 2003, from http://www.ti.com/tiris/docs/manuals/whtPapers/ manuf_dist.pdf Estevez, A. F. (2005). RFID vision in the DOD supply chain. Army Logistician. Retrieved May 7, 2005, from http://www.almc.army.mil/alog/rfid. html Evers, J., & McCullagh, D. (2006, August 5). Researchers: E-passports pose security risk. CNET. Retrieved August 17, 2006, from http://news.com. com/Researchers+E-passports+pose+security+ri sk/2100-7349_3-6102608.html?tag=st.ref.goo Gadh, R. (2004, August 11). The state of RFID. Computerworld. Retrieved August 30, 2004, from http://www.computerworld.com/mobiletopics/mobile/story/0,10801,95179,00.html
Goodman, B. (2005, March 17). Is RFID taking off, or just taking its time? Integrating the Enterprise. Retrieved March 24, 2005, from http:// www.itbusinessedge.com/content/3Q/3qpub220050317.aspx IDTechEx. (2005, April 10). RFID market to reach $7.26Bn in 2008. Retrieved April 18, 2005, from http://www.idtechex.com/products/en/articles/00000169.asp Jones, J. (2005, April 3). Is the RFID chip the mark of the beast? Political Gateway. Retrieved April 8, 2005, from http://www.politicalgateway. com/main/columns/read.html?col=323 Landt, J. (2001). Shrouds of time. Retrieved August 1, 2003, from http://www.aimglobal.org/technologies/rfid/resources/shrouds_of_time.pdf Lipowicz, A. (2006, January 19). Group objects to driver’s license RFID. Washington Technology. Retrieved April 17, 2006, from http://www. washingtontechnology.com/news/1_1/daily _ news/27794-1.html Malone, R. (2004). Reconsidering the role of RFID. Inbound Logistics. Retrieved September 11, 2004, from http://www.inboundlogistics. com/articles/supplychain/sct0804.shtml O’Connor, M. (2005, April 13). RFID and the media revolution. RFID Journal. Retrieved April 20, 2005, from http://www.rfidjournal.com/article/articleview/1508/1/1/ O’Shea, P. (2003). RFID comes of age for tracking everything from pallets to people. ChipCenter. Retrieved July 12, 2003, from http://www.chipcenter.com/analog/ed008.htm Roberti, M. (2003, October 6). The tipping point. RFID Journal. Retrieved October 26, 2003, from http://www.rfidjournal.com/article/articleprint/607/-1/2/ Schoenberger, C. (2002, March 18). RFID: The Internet of things. Forbes. Retrieved September
18, 2003, from http://www.mindfully.org/Technology/RFID-Things-Forbes18mar02.htm Singel, R. (2004, October 21). American passports to get chipped. Wired. Retrieved December 12, 2004, from http://www.wired.com/news/privacy/0,1848,65412,00.html Stockman, H. (1948, October). Communication by means of reflected power. Proceedings of the Institute of Radio Engineers (pp. 1196-1204). Sullivan, L. (2004, October 25). IBM shares RFID lessons. InformationWeek. Retrieved November 9, 2004, from http://www.informationweek.com/shared/printableArticle. jhtml?articleID=51000091 Swedberg, C. (2006, April 19). University of Kansas’ tag for metal, liquids. RFID Journal. Retrieved May 7, 2006, from http://www.rfidjournal. com/article/articleview/2275/1/1/ Thomas, L. (2005, January 26). RFID cell phones? Maybe in 2007. Mobile Magazine. Retrieved January 30, 2005, from http://www.mobilemag. com/content/100/102/C3673/ Trebilcock, B. (2006, July 25). RFID goes to Washington. Modern Materials Handling. Retrieved August 3, 2006, from http://www.mmh. com/article/CA6355944.html Van, J. (2005, April 16). RFID spells media revolution, futurist says. Chicago Tribune, p. B1. Webster, J. (2006, January 2). RFID: Cost and complexity continue to block enterprise use. Computerworld, Retrieved February 28, 2006, from http://www.computerworld.com/managementtopics/management/story/0,10801,107308,00. html?source=NLT_EB&nid=107308 Wyld, D. C. (2005, February 24). Supporting the “warfighter.” RFIDNews. Retrieved March 1, 2005, from http://www.rfidnews.org/library/2005/02/24/supporting-the-warfighter/
FURTHER READING

Anonymous. (2007, January 18). Social Security Administration uses Intermec RFID technology to improve data collection accuracy, reduce labor costs. MSN/Money. Retrieved January 19, 2007, from http://news.moneycentral.msn.com/provider/providerarticle.aspx?Feed=BW&Date=20070118&ID=6354695 Best, J. (2004, October 8). Senior management “clueless” about RFID. Silicon.com. Retrieved December 30, 2004, from http://news.zdnet.co.uk/business/management/0,39020654,39169620,00.htm Best, J. (2005, January 25). 2015: RFID is all over. Make way for super RFID. Silicon.com. Retrieved February 2, 2005, from http://networks.silicon.com/lans/print.htm?TYPE=story&AT=3912733639024663t-40000017c Clarke, R. (2005). Assessing readability problems with RFID systems. RFID Product News. Retrieved March 12, 2005, from http://www.rfidproductnews.com/issues/2005.01/feature/readability.php Collins, J. (2005, April 8). Consumers more RFID-aware, still wary: A recent survey finds that more U.S. consumers have heard about RFID, but worries about privacy remain. RFID Journal. Retrieved April 15, 2005, from http://www.rfidjournal.com/article/articleview/1491/1/1/ Committee on Radio Frequency Identification Technologies, National Research Council. (2004). Radio frequency identification technologies: A workshop summary. Retrieved April 30, 2005, from http://www.nap.edu/catalog/11189.html Douglas, R. (2005, February 14). Bar codes vs. RFID. London Globe and Mail. Retrieved February 16, 2005, from http://www.globetechnology.com/servlet/story/RTGAM.20050111.gtflbarcodejan11/BNStory/Technology/
Fox, R., & Rychak, L. (2004). The potential and challenges of RFID technology. Retrieved June 9, 2004, from http://www.mintz.com/publications/ detail/264/Communications_Advisory_The_Potential_and_Challenges_of_RFID_Technology/ Government Accountability Office (GAO). (2005). Report to Congressional requesters: Information security. Radio frequency identification technology in the federal government. Retrieved June 1, 2005, from http://www.gao.gov/new.items/d05551. pdf Greenemeier, L. (2004, December 13). Uncle Sam’s guiding hand: Government mandates increasingly translate directly into IT initiatives, setting the top priorities at many companies. InformationWeek. Retrieved December 29, 2004, from http://www.informationweek.com/story/ showArticle.jhtml?articleID=55301001 Grosso, B. (2004, November 15). Intelligent objects versus unhappy objects. RFID Buzz. Retrieved December 30, 2004, from http://www. rfidbuzz.com/news/2004/intelligent_objects_versus_unhappy_objects.html Hardgrave, B., & Miller, R. (2006). The myths and realities of RFID. International Journal of Global Logistics & Supply Chain Management, 1(1). Retrieved August 6, 2006, from http://spears. okstate.edu/msis/ijglscm/articles/volume01/number01/Hardgrave-Miller%20Article%20.pdf Harrop, P. (2006, August 23). The price-sensitivity curve for RFID. Smart Labels Analyst. Retrieved September 1, 2006, from http://www.idtechex. com/products/en/articles/00000488.asp Hasson, J. (2004, September 27). The next big thing for government. Federal Computer Week. Retrieved December 20, 2004, from http://www. fcw.com/fcw/articles/2004/0927/news-nextthing-09-27-04.asp Hesseldahl, A. (2004, July 29). A hacker’s guide to RFID. Forbes. Retrieved September 11, 2004,
from http://www.forbes.com/2004/07/29/cx_ah_ 0729rfid_print.html Jones, J. (2005, April 3). Is the RFID chip the mark of the beast? Political Gateway. Retrieved April 8, 2005, from http://www.politicalgateway. com/main/columns/read.html?col=323 Kuchinskas, S. (2005, January 12). RFID tags a booming biz. Internetnews.com. Retrieved January 16, 2005, from http://www.internetnews. com/wireless/article.php/3458331 Maenza, T. (2005). Supply chain visibility exposes weak links, hidden costs. Insights. Retrieved June 6, 2005, from http://www.unisys.com/ commercial/insights/insights__compendium/ supply__chain__visibility__exposes__weak__ links__hidden__costs.htm Microsoft. (2006, May 15). White paper: RFID in the retail industry. Retrieved June 18, 2006, from http://www.microsoft.com/industry/retail/ businessvalue/rfidoverview.mspx Moore, J. (2005, April 18). RFID’s positive identification. Federal Computer Week. Retrieved April 20, 2005, from http://www.fcw.com/article8860304-18-05-Print O’Connor, M. C. (2005, April 13). RFID and the media revolution: Renowned futurist Paul Saffo predicts that RFID’s biggest impact will come from surprising applications. RFID Journal. Retrieved April 20, 2005, from http://www.rfidjournal.com/article/articleview/1508/1/1/ O’Connor, M. C. (2006a, July 28). DOD getting Gen 2-ready: The Department of Defense is expanding its RFID requirements and infrastructure while it takes steps toward transitioning its requirements to support the EPC UHF Gen 2 standard. RFID Journal. Retrieved December 13, 2006, from http://www.rfidjournal.com/article/articleview/2530/1/1/ O’Connor, M. C. (2006b, August 16). Will China’s RFID standards support EPC protocols, systems?
China has yet to release its much-anticipated RFID standards; some observers say it has produced too little, too late. RFID Journal. Retrieved December 1, 2006, from http://www.rfidjournal. com/article/articleview/2593/ Ricadela, A. (2005, January 24). Sensors everywhere: A “bucket brigade” of tiny, wirelessly networked sensors someday may be able to track anything, anytime, anywhere. InformationWeek. Retrieved February 1, 2005, from http:// www.informationweek.com/story/showArticle. jhtml?articleID=57702816 Rothfeder, J. (2004, August 1). What’s wrong with RFID? CIO Insight. Retrieved September 5, 2004, from http://www.cioinsight.com/print_ article/0,1406,a=133044,00.asp Saffo, P. (2002, April 15). Smart sensors focus on the future. CIO Insight. Retrieved July 6, 2003, from http://www.cioinsight.com/print_article/ 0,3668,a=25588,00.asp Schoenberger, C. R. (2002, March 18). RFID: The Internet of things. Forbes. Retrieved September 18, 2003, from http://www.mindfully.org/Technology/RFID-Things-Forbes18mar02.htm Shepard, S. (2005). RFID: Radio frequency identification. New York: McGraw-Hill. Sirico, L. (2005, February 3). Numbers that please the palate. RFID Operations. Retrieved February 12, 2005, from http://www.rfidoperations. com/newsandviews/20050203.html Sullivan, L. (2005, June 20). Where’s RFID going next? Supply-chain projects spurred development. Now chips are turning up in ever-more-innovative uses. InformationWeek. Retrieved June 21, 2005, from http://www.informationweek. com/story/showArticle.jhtml?articleID=164900 910&tid=5978 A summary of RFID standards. (2006). RFID Journal. Retrieved August 18, 2006, from http:// www.rfidjournal.com/article/articleprint/1335/1/1
Swedberg, C. (2006, April 19). University of Kansas’ tag for metal, liquids. RFID Journal. Retrieved May 7, 2006, from http://www.rfidjournal. com/article/articleview/2275/1/1/ Van Osten, E. (2006). Fast forward: The past, present, and future of Metro’s experimental future store. RFID Product News. Retrieved August 30, 2006, from http://www.rfidproductnews.com/issues/2006.07/ff.php Wasserman, E. (2005, July 18). Agencies affirm privacy policies for RFID: A panel of government officials explained how agencies are trying to build privacy safeguards into potential U.S.-issued RFID-enabled IDs. RFID Journal. Retrieved September 10, 2005, from http://www.rfidjournal. com/article/articleview/1747/1/1/ Welsh, W. (2005, March 21). Growin’ on empty: RFID’s many uses outpace available funds. Washington Technology. Retrieved April 11, 2005, from http://www.washingtontechnology. com/news/20_6/statelocal/25834-1.html Wikipedia. (2006). Radio frequency identification. Retrieved August 8, 2006, from http:// en.wikipedia.org/wiki/RFID Wyld, D. C. (2005). RFID: The right frequency for government. The IBM Center for the Business of Government. Retrieved November 1, 2005, from http://www.businessofgovernment. org/pdfs/WyldReport4.pdf Wyld, D. C. (2006a). Better than advertised: The early results from Wal-Mart’s RFID efforts are in, and the technology may be outperforming even some of the most optimistic forecasts for improving retail…And the best is yet to come. Global Identification, pp. 10-13. Wyld, D. C. (2006b). Delivering at the “moment of truth” in retail: How RFID can reduce out-ofstocks and improve supply chain performance to store shelves, benefiting both retailers and product manufacturers. Global Identification, pp. 50-53.
Wyld, D. C. (2006c). The National Animal Identification System: Ensuring the competitiveness of the American agriculture industry in the face of mounting animal disease threats. Competition Forum, pp. 110-115. Wyld, D. C. (2006d). Sports 2.0: A look at the future of sports in the context of RFID’s “weird new media revolution.” The Sport Journal. Retrieved November 4, 2006, from http://www.thesportjournal.org/2006Journal/Vol9-No4/Wyld.asp Wyld, D. C., & Jones, M. A. (in press). RFID is no fake: The adoption of radio frequency identification technology in the pharmaceutical supply chain. International Journal of Integrated Supply Management. Zappone, C. (2006, July 13). E-passports: Ready or not here they come. The state department expresses confidence in “e-passports” while technologists fret about their security risks. CNN/Money. Retrieved November 30, 2006, from http://money. cnn.com/2006/07/13/pf/rfid_passports/
TERMS AND DEFINITIONS

Active Tag: An active tag is a type of RFID tag that has its own power supply (battery or external power) and, when interrogated by a reader, the tag emits its own signal. Active tags have far greater read distances than passive tags, and they can be combined with sensors to provide information on the environment and condition of the item. They are also more expensive than passive tags and, due to the battery, have a limited life span.

Automatic Identification (Auto-ID): Auto-ID is a broad term encompassing technologies used to help machines identify objects. A host of technologies fall under the automatic-identification umbrella, including bar codes, biometrics, smart cards, voice recognition, and RFID.

Electronic Product Code (EPC): An EPC is a unique number, stored in the chip on an RFID
tag, that identifies an item in the supply chain, allowing for tracking of that item.

Frequency: Frequency is the number of repetitions of a complete wave within 1 second; 1 Hz equals one complete waveform per second, and 1 kHz equals 1,000 waves per second.

Passive Tag: A passive tag is a type of RFID tag that does not have its own power supply. Instead, the tag draws power from the reader, which sends out electromagnetic waves that induce a current in the tag's antenna. Without an onboard power source, passive tags have a shorter read range than active tags. However, they cost less than active tags and have an unlimited life span.
Radio Frequency Identification (RFID): RFID is an automatic identification technology that uses radio waves to identify objects.

Semipassive Tag: This type of tag is similar to an active tag in that there is an onboard battery, which is used to run the microchip's circuitry and boost the effective read range of the tag. It is also called a battery-assisted tag.

Smart Label: This is a printed label that contains printed information, a bar-code identifier, and an RFID tag. It is considered to be smart because of its ability to communicate with an RFID reader.
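The qualitative distinctions drawn in these definitions (power source, signal emission, relative read range and cost, life span) can be summarized in a small data structure. The attribute names below are shorthand of my own for the properties described in the definitions, not standard RFID terminology.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TagType:
    name: str
    onboard_battery: bool     # active and semipassive tags carry a battery
    emits_own_signal: bool    # only active tags transmit their own signal
    relative_read_range: str  # qualitative ranking from the definitions above
    relative_cost: str
    limited_life_span: bool   # battery-bound tags eventually die

ACTIVE = TagType("active", True, True, "longest", "highest", True)
SEMIPASSIVE = TagType("semipassive", True, False, "boosted", "moderate", True)
PASSIVE = TagType("passive", False, False, "shortest", "lowest", False)

# Passive tags trade read range for cost and an effectively unlimited life span.
for tag in (ACTIVE, SEMIPASSIVE, PASSIVE):
    print(tag.name, "| battery:", tag.onboard_battery,
          "| limited life:", tag.limited_life_span)
```

The table-like structure makes the central trade-off explicit: adding a battery buys read range and sensing capability at the cost of price and longevity.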
Chapter XLII
Roaming-Agent Protection for E-Commerce Sheng-Uei Guan Brunel University, UK
INTRODUCTION

There has been a lot of research done in the area of intelligent agents. Some of the literature (Guilfoyle, 1994; Johansen, Marzullo, & Lauvset, 1999) only proposes certain features of intelligent agents, while some of it attempts to define a complete agent architecture. Unfortunately, there is no standardization in the various proposals, resulting in vastly different agent systems. Efforts are being made to standardize some aspects of agent systems so that different systems can interoperate with each other. Knowledge representation and exchange is one of the aspects of agent systems for which KQML (knowledge query and manipulation language; Finin & Weber, 1993) is one of the most widely accepted standards. Developed as part of the knowledge sharing effort, KQML is designed as a high-level language for run-time exchange of information between heterogeneous systems. Unfortunately, KQML was designed with few security considerations: no security mechanism is built in to address common security concerns, not to mention the specific security concerns introduced
by mobile agents. Agent systems using KQML will have to implement security mechanisms on top of KQML to protect themselves. While KQML acts as a sufficient standard for agent representation, it does not touch upon the security aspects of agents. In an attempt to equip KQML with built-in security mechanisms, Secret Agent is proposed by Thirunavukkarasu, Finin, and Mayfield (1995). Another prominent transportable agent system is Agent TCL developed at Dartmouth College (Gray, 1997; Kotz, Gray, Nog, Rus, Chawla, & Cybenko, 1997). Agent TCL addresses most areas of agent transport by providing a complete suite of solutions. It is probably one of the most complete agent systems under research. Its security mechanism aims at protecting resources and the agent itself. In terms of agent protection, the author acknowledges that “it is clear that it is impossible to protect an agent from the machine on which the agent is executing…it is equally clear that it is impossible to protect an agent from a resource that willfully provides false information” (Gray). As a result, the author “seeks to implement a verification mechanism so that each machine can check
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Roaming-Agent Protection for E-Commerce
whether an agent was modified unexpectedly after it left the home machine" (Gray, 1997). Other areas of security, such as nonrepudiation, verification, and identification, are not carefully addressed. Compared with the various agent systems discussed above, the SAFE (secure roaming agent for e-commerce) transport protocol is designed to provide a secure agent roaming mechanism for e- or m-commerce; the other mobile agent systems are either too general or too specific to a particular application. By designing SAFE with mobile application concerns in mind, the architecture is made suitable for m-commerce. The most important concern is security, as discussed previously: Due to the nature of m-commerce, security is a prerequisite for any successful m-commerce application. Other concerns are mobility, efficiency, and interoperability. In addition, the design allows a certain flexibility to cater to different application needs.
background

The introduction of the mobile Internet is probably one of the most significant revolutions of the 20th century. With a simple click, one can connect to almost every corner of the world, thousands of kilometers away. This presents a great opportunity for m-commerce. Despite its many advantages over traditional commerce, however, m-commerce has not taken off successfully, and one of the major hindrances is security. The focus of this chapter is the secure transport of mobile agents. A mobile agent is useful for handheld devices like palmtops or PDAs (personal digital assistants). Such m-commerce devices usually have limited computing power, so it would be useful if their users could send an intelligent mobile agent to remote machines to carry out complex tasks like product brokering, bargain hunting, and information collection. When it comes to online transactions, security becomes the primary concern. The Internet was
developed without much security in mind. Information flows from hub to hub before it reaches its destination, and by simply tapping into wires or hubs, one can easily monitor all traffic transmitted. For example, when Alice uses her Visa credit card to purchase an album from Virtual CD Mall, the information about her card may be stolen if it is not carefully protected. This information may then be used maliciously to make other online transactions, causing damage to both the card holder and the credit card company. Besides security concerns, current m-commerce lacks the intelligence to locate the correct piece of information. The Internet is like the world's most complete library collection, unsorted by any means. To make things worse, there is no competent librarian to help readers locate the books they want. Existing popular search engines are attempts to provide librarian assistance; however, as the collection of information is huge, none of these librarians is competent enough at the moment. The use of intelligent agents is one solution for providing intelligence in m-commerce. However, having an agent that is intelligent is insufficient. Certain tasks are unrealistic for agents to perform locally, especially those that require a huge amount of information. Therefore, it is important to equip intelligent agents with roaming capability. Unfortunately, with the introduction of roaming capability, more security issues arise. As the agent moves among external hosts to perform its tasks, the agent itself becomes a target of attack: The data collected by an agent may be modified, the credit it carries may be stolen, and its mission statement may be changed. As a result, transport security is an immediate concern for agent roaming.
agent protection SAFE is a protocol designed to provide a secure roaming mechanism for intelligent agents.
Here, both general and roaming-related security concerns are addressed carefully. Furthermore, several protocols are designed to address different requirements, and an m-commerce application can choose the protocol that is most suitable for its needs. As a prerequisite, each SAFE entity must carry a digital certificate issued by the SAFE certificate authority (SCA). The certificate establishes the identity of a SAFE entity, and because the private key corresponding to the certificate has signing capability, the certificate owner can authenticate itself to the SAFE community. An assumption is made that the agent's private key can be protected by function hiding (Sander & Tschudin, 1998). (Other techniques have also been discussed in the literature but will not be elaborated in this chapter, e.g., Bem, 2000; Westhoff, 2000.) From the host's viewpoint, an agent is a piece of foreign code that executes locally. To prevent a malicious agent from abusing host resources, the host should monitor the agent's usage of resources (e.g., computing resources, network resources). An agent receptionist acts as a middleman to facilitate and monitor agent communication with external parties.
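As an illustration of the receptionist's metering role, the monitoring described above can be sketched as follows. This is our own sketch, not part of the SAFE specification: the quota fields and method names are hypothetical.

```python
class AgentReceptionist:
    """Proxies and meters a visiting agent's use of host resources.

    Quota fields are illustrative; a real host would track many more
    resource classes (files, threads, local services, etc.).
    """

    def __init__(self, cpu_quota_ms: int, net_quota_bytes: int) -> None:
        self.cpu_remaining = cpu_quota_ms
        self.net_remaining = net_quota_bytes

    def charge_cpu(self, ms: int) -> None:
        # Deduct computing time; a misbehaving agent hits its quota.
        if ms > self.cpu_remaining:
            raise PermissionError("CPU quota exceeded; agent suspended")
        self.cpu_remaining -= ms

    def relay_message(self, payload: bytes) -> bytes:
        # All external communication flows through the receptionist,
        # so network usage can be metered at the same choke point.
        if len(payload) > self.net_remaining:
            raise PermissionError("network quota exceeded")
        self.net_remaining -= len(payload)
        return payload
```

Routing every external interaction through one proxy object is what makes the monitoring enforceable: the agent has no other channel to the outside world.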
general message format In SAFE, agent transport is achieved via a series of message exchanges. The format of a general message is as follows. SAFE Message = Message Content + Time Stamp + Sequence Number + MD(Message Content + Time Stamp + Sequence Number) + Signature(MD) The main body of a SAFE message comprises the message content, a time stamp, and a sequence number. The message content is defined by individual messages. Here, MD stands for the message digest function. The first MD is the function
applied to the message content, time stamp, and sequence number to generate a message digest; Signature(MD) denotes the sender's digital signature over that digest. A time stamp contains the issue and expiry time of the message. To prevent replay attacks, message exchanges between entities during agent transport are labeled according to each transport session, and a running sequence number is included in the message body whenever a new message is exchanged. To protect the integrity of the main message body, a message digest is appended to the main message. The formula of the message digest is as follows.

Message Digest = MD5(SHA(message_body) + message_body)

Here, SHA (secure hash algorithm) refers to a family of cryptographic hash functions; the most commonly used member, SHA-1, is employed in a large variety of security applications. The message digest alone is not sufficient to protect the integrity of a SAFE message: a malicious party can modify the message body and recalculate the digest using the same formula, producing a seemingly valid message digest. To ensure the authenticity of the message, a digital signature on the message digest is therefore generated for each SAFE message. Besides ensuring message integrity, the signature serves as proof for nonrepudiation as well. If the message content is sensitive, it can be encrypted using a symmetric key algorithm (e.g., triple DES); the secret key used for encryption has to be agreed at a higher level. To cater to different application concerns, three transport protocols are proposed: supervised agent transport, unsupervised agent transport, and bootstrap agent transport. These three protocols are discussed in detail in the following sections.
Figure 1. Supervised agent transport
supervised agent transport Supervised agent transport is designed for applications that require the close supervision of agents. Under this protocol, an agent has to request a roaming permit from its owner or butler before roaming. The owner has the option to deny the roaming request and prevent its agent from roaming to undesirable hosts. Without the agent owner playing an active role in the transport protocol, it is difficult to have tight control over agent roaming. The procedure for supervised agent transport is shown in Figure 1.
Agent Receptionist

Agent receptionists are processes running at every host to facilitate agent transport. If an agent wishes to roam to a host, it should communicate with the agent receptionist at the destination host to complete the transport protocol. Every host keeps a pool of agent receptionists to service incoming agents. Whenever an agent roaming request arrives, an idle agent receptionist from the pool will be activated to entertain the request. In this way, a number of agents can be serviced concurrently.

Request through Source Receptionist for Entry Permit

To initiate supervised agent transport, an agent needs to request an entry permit from the destination receptionist. Communication between the visiting agent and foreign parties (other agents outside the host, the agent owner, etc.) is done using an agent receptionist as a proxy.

Request for Roaming Permit

Once the source receptionist receives the entry permit from the destination receptionist, it simply forwards it to the requesting agent. The next step is for the agent to obtain a roaming permit from its owner or butler. The agent sends the entry permit and the address of its owner or butler to the source receptionist. Without processing, the source receptionist forwards the entry permit to the address specified in the agent request.
The agent owner or butler can decide whether the roaming permit should be issued based on its own criteria. If it decides to issue the roaming permit, it generates a session number, a random challenge, and a freeze-unfreeze key pair. The roaming permit contains the session number, random challenge, freeze key, time stamp, and entry permit, together with a signature on all of the above from the agent owner or butler. To verify that the agent has indeed reached the intended destination, the random challenge is placed into the roaming permit; a digital signature on this random challenge is required for the destination to prove its authenticity.

Agent Freeze

With the roaming permit and entry permit, the agent is now able to request roaming from the source receptionist. To protect the agent during roaming, sensitive functions and code inside the agent body are frozen using the freeze key in the roaming permit. Even if the agent is intercepted during transmission, its capability is restricted such that it cannot be run, due to the freezing of agent functions, so not much harm can be done to the agent owner or butler. To ensure a smooth roaming operation, the agent's life-support systems cannot be frozen: Functions that are critical to the agent's roaming capability must remain functional while the agent is roaming.

Agent Transport

Once frozen, the agent is ready for transmission over the Internet. To activate roaming, the agent sends a request containing the roaming permit to the source receptionist, which can verify the validity and authenticity of the permit. If the agent's roaming permit is valid, the source receptionist transmits the frozen agent to the destination receptionist as specified in the entry permit. Once the transmission is completed, the source receptionist terminates the execution of the original agent and makes itself available to other incoming agents.

Agent Preactivation

When the frozen agent reaches the destination receptionist, the receptionist will inspect the agent's roaming permit and the entry permit (contained in the roaming permit) carefully. By doing so, the destination receptionist can establish the following.

1. The agent has been granted permission to enter the destination.
2. The entry permit carried by the agent has not expired.
3. The agent has obtained sufficient authorization from its owner or butler for roaming.
4. The roaming permit carried by the agent has not expired.
If the destination receptionist is satisfied with the agent’s credentials, it will activate the agent partially and allow it to continue the agent transport process.
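The four checks above can be sketched as a single validation routine. The permit field names here are hypothetical, and signature verification is elided for brevity.

```python
import time
from dataclasses import dataclass
from typing import Optional


@dataclass
class EntryPermit:
    agent_id: str
    destination: str
    expiry: float  # epoch seconds


@dataclass
class RoamingPermit:
    session_number: int
    entry_permit: EntryPermit
    owner_signature: bytes  # signature by the agent owner or butler
    expiry: float


def preactivation_checks(permit: RoamingPermit, agent_id: str,
                         local_host: str,
                         now: Optional[float] = None) -> bool:
    now = time.time() if now is None else now
    entry = permit.entry_permit
    return (entry.agent_id == agent_id           # 1. entry granted to this agent
            and entry.destination == local_host  #    ...for this destination
            and entry.expiry > now               # 2. entry permit not expired
            and bool(permit.owner_signature)     # 3. owner/butler authorization present
            and permit.expiry > now)             # 4. roaming permit not expired
```

Only if all four conditions hold does the receptionist partially activate the agent; a real implementation would additionally verify the owner's signature against the SCA-issued certificate.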
Request for Unfreeze Key and Agent Activation

Although the agent has been activated, it is still unable to perform any operation since all sensitive code and data remain frozen. To unfreeze the agent, it has to request the unfreeze key from its owner or butler. To prove the authenticity of the destination, the destination receptionist is required to sign the random challenge in the roaming permit; the request for the unfreeze key thus contains the session number, the certificate of the destination, and the signature on the random challenge. Once the owner or butler verifies this signature and releases the unfreeze key, the agent is fully activated, and the direct agent transport process is completed.
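The freeze-unfreeze mechanism can be illustrated with a toy stream cipher. This is strictly an illustration of the idea of rendering sensitive code inert in transit: we derive a keystream from SHA-256 in counter mode and use a symmetric key, so here the freeze and unfreeze keys coincide, whereas SAFE specifies a freeze-unfreeze key pair and a production system would use a vetted cipher such as AES.

```python
import hashlib


def _keystream(key: bytes, n: int) -> bytes:
    # Toy SHA-256 counter-mode keystream -- illustration only.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]


def freeze(sensitive_code: bytes, freeze_key: bytes) -> bytes:
    # XOR with the keystream renders sensitive functions unexecutable
    # while the agent is in transit; life-support code is left untouched.
    ks = _keystream(freeze_key, len(sensitive_code))
    return bytes(a ^ b for a, b in zip(sensitive_code, ks))


# With this symmetric toy cipher, unfreezing is the same XOR operation.
unfreeze = freeze
```

An interceptor who captures the frozen agent gains nothing executable; only the party holding the unfreeze key (the owner or butler in supervised transport, the destination in unsupervised transport) can restore the agent.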
Figure 2. Unsupervised agent transport
unsupervised agent transport The supervised agent protocol is not a perfect solution for agent transport. Although it provides tight supervision for an agent owner or butler, it has its limitations. Since the agent owner is actively involved in the transport, the protocol inevitably incurs additional overhead and network traffic. This results in lower efficiency of the protocol. This is especially significant when the agent owner or butler is located behind a network with lower bandwidth, or the agent owner is supervising a large number of agents. In order to provide flexibility between security and efficiency, unsupervised agent transport is proposed. The steps involved in unsupervised agent transport are shown in Figure 2.
Request for Entry Permit

In supervised agent transport, the session ID and key pair are generated by the agent butler. In unsupervised agent transport, however, these are generated by the destination receptionist, because the agent butler no longer participates actively in the transport.
Preroaming Notification

Unlike in supervised agent transport, the agent does not need to seek explicit approval to roam from its owner or butler. Instead, a preroaming notification is sent to the agent owner or butler first, which serves to inform the owner that the agent has started roaming. The agent does not need to wait for the owner's or butler's reply before roaming.
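Because the preroaming notification (and the later postroaming one) is fire-and-forget, it can be dispatched on a background channel while the agent continues. A minimal sketch, with an in-memory list standing in for the real indirect channel to the butler:

```python
import queue
import threading

delivered = []  # stand-in for messages received on the butler's side


def send_to_butler(address: str, event: str) -> None:
    # Placeholder for the real indirect channel (e.g., a message relayed
    # through the host's agent receptionist).
    delivered.append((address, event))


notifications: queue.Queue = queue.Queue()


def notify(butler_address: str, event: str) -> None:
    # Fire-and-forget: enqueue the notification and return immediately,
    # so the agent never blocks waiting for a reply.
    notifications.put((butler_address, event))


def _dispatcher() -> None:
    while True:
        address, event = notifications.get()
        send_to_butler(address, event)
        notifications.task_done()


threading.Thread(target=_dispatcher, daemon=True).start()
```

The agent simply calls `notify(...)` at the pre- and postroaming steps; delivery happens asynchronously on the dispatcher thread.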
Agent Freeze

This step is essentially the same as the corresponding step under supervised agent transport, except that the encryption key is generated by the destination instead of the agent butler.
Agent Transport This step is the same as that in the supervised agent transport protocol.
Request for Unfreeze Key

The identification and verification processes are the same as in supervised agent transport, except that the unfreeze key comes from the destination receptionist.

Agent Activation

This step is the same as that in supervised agent transport.

Postroaming Notification

Upon full activation, the agent must send a postroaming notification to its owner or butler. This informs the agent owner or butler that the roaming has been completed successfully. Again, the notification takes place through an indirect channel so that the agent does not need to wait for any reply before continuing with its normal execution.

bootstrap agent transport

Both supervised and unsupervised agent transport make use of a fixed protocol for agent transport; the procedures in these two protocols are clearly defined without much room for variation. However, some applications require special transport mechanisms for their agents. To allow this flexibility, a third transport protocol is provided: bootstrap agent transport, under which agent transport is completed in two phases. Bootstrap agent transport is illustrated in Figure 3.

Figure 3. Bootstrap agent transport

In the first phase, the transport agent is sent to the destination receptionist using either supervised or unsupervised agent transport with some modifications. The original supervised and unsupervised agent transport protocols require agent authentication and destination authentication to make sure that the right agent reaches the right destination. Under bootstrap agent transport,
the transmission of the transport agent does not require both agent authentication and destination authentication. Once the transport agent reaches the destination, it starts execution in a restricted environment. It is not given the full privileges of a normal agent because it has yet to authenticate itself to the destination; this prevents the transport agent from attempting to attack the local host. Under the restricted environment, the transport agent is not allowed to interact with local host services. It is only allowed to communicate with its parent until the parent reaches the destination. SAFE allows individual transport agents to be customized to use any secure protocol for parent-agent transmission. When the parent agent reaches the destination, it can continue the handshake with the destination receptionist and perform mutual authentication directly. The authentication scheme is similar to that in supervised and unsupervised agent transport.
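The restricted environment can be modeled as an action whitelist that widens only after authentication succeeds. The action names below are illustrative, not part of the SAFE specification.

```python
class TransportAgentSandbox:
    """Execution guard for a bootstrap transport agent.

    Before authentication, the transport agent may only talk to its
    parent; local host services stay off limits.
    """

    RESTRICTED = {"communicate_with_parent"}
    FULL = RESTRICTED | {"query_local_service", "access_resource",
                         "send_message"}

    def __init__(self) -> None:
        self.authenticated = False

    def authenticate(self, credentials_ok: bool) -> None:
        # Called once the parent agent arrives and mutual
        # authentication with the destination receptionist succeeds.
        self.authenticated = bool(credentials_ok)

    def perform(self, action: str) -> str:
        allowed = self.FULL if self.authenticated else self.RESTRICTED
        if action not in allowed:
            raise PermissionError(
                f"{action} denied in restricted environment")
        return f"{action}: ok"
```

Gating every action through one `perform` method mirrors the receptionist-as-proxy design: the host never has to trust the foreign code directly, only the guard in front of it.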
implementation considerations for public services

The proposed agent transport would be useful for public services where agents can be customized upon users' instructions to search public information. For such a service to be realizable, authentication and authorization of user agents need to be in place (Yeo, Guan, & Zhu, 2002). Facilities like passports and visas (Guan, Wang, & Ong, 2003) can be incorporated into agents so that they can be verified upon entry to any public information portal or server. If service charges are incurred during access to public information, then a mechanism for payment via the agent or user also needs to be in place (Guan & Hua, 2003). Furthermore, the fabrication of reliable and trustworthy agents should be taken into consideration for such services to be made available to users (Guan & Zhu, 2002; Guan, Zhu, & Maung, 2004).
future trends

There has recently been active research on the use of intelligent agents to mine user preferences: so-called personalization agents. Such agents, when equipped with inference engines, are able to derive personal interests by observing Web or mobile user interactions or click streams during online transactions. They carry sensitive personal data that should not be disclosed to outsiders, so the protection of such agents is crucial. The migration of such agents may be necessary when the service platform consists of multiple servers. For now, such agents usually reside on the server side, where strict security may already be in place. In the near future, they could be deployed on the client side under names such as personal secretary, personal agent, or butler agent. Such an agent may be dispatched by the user to run errands such as online shopping, information gathering, or even price negotiation. An agent that carries user preference data is vulnerable to attack precisely because it holds sensitive data; the protection of this type of mobile agent will then be necessary.
conclusion SAFE is designed as a secure agent transport protocol for m-commerce. The foundation of SAFE is the agent transport protocol, which provides intelligent agents with roaming capability without compromising security. General security concerns as well as security concerns raised by agent transport have been carefully addressed. The design of the protocol also takes into consideration differing concerns for different applications. Instead of standardizing on one transport protocol, three different transport protocols are designed, catering to various needs. Based on the level of control desired, one can choose between supervised agent
transport and unsupervised agent transport. For applications that require customized transport mechanisms during agent roaming, bootstrap agent transport is provided so that individual applications can tailor their own transport protocols. A prototype of the SAFE agent transport protocol has been developed and tested.
future research directions

As an evolving effort to deliver a more complete architecture for agents, the SAFER (secure agent fabrication, evolution and roaming) architecture is being proposed to extend the SAFE architecture. In SAFER, agents not only have roaming capability, but can also make electronic payments and evolve to perform better. Security has been a prime concern from the first day of our research (Guan et al., 2003; Guan & Yang, 1999). By building strong and efficient security mechanisms, SAFER aims to provide a trustworthy framework for mobile agents, increasing trust factors for end users by providing trustworthiness, predictable performance, and a communication channel.

references

Bem, E. Z. (2000). Protecting mobile agents in a hostile environment. Proceedings of the ICSC Symposium on Intelligent Systems and Applications (ISA 2000).

Corley, S. (1998). The application of intelligent and mobile agents to network and service management. Proceedings of the Fifth International Conference on Intelligence in Services and Networks, IS&N'98, Antwerp, Belgium.

Finin, T., & Weber, J. (1993). Draft specification of the KQML agent communication language. Retrieved from http://www.cs.umbc.edu/kqml/kqmlspec/spec.html

Gray, R. (1997). Agent TCL: A flexible and secure mobile-agent system. Unpublished doctoral dissertation, Department of Computer Science, Dartmouth College.

Guan, S.-U., & Hua, F. (2003). A multi-agent architecture for electronic payment. International Journal of Information Technology and Decision Making (IJITDM), 2(3), 497-522.

Guan, S.-U., Wang, T., & Ong, S.-H. (2003). Migration control for mobile agents based on passport and visa. Future Generation Computer Systems, 19(2), 173-186.

Guan, S.-U., & Yang, Y. (1999). SAFE: Secure roaming agent for e-commerce. Proceedings of the 26th International Conference on Computers & Industrial Engineering (pp. 33-37).

Guan, S.-U., & Zhu, F. (2002). Agent fabrication and its implementation for agent-based electronic commerce. International Journal of Information Technology and Decision Making (IJITDM), 1(3), 473-489.

Guan, S.-U., Zhu, F., & Maung, M. T. (2004). A factory-based approach to support e-commerce agent fabrication. Electronic Commerce and Research Applications, 3(1), 39-53.

Guilfoyle, C. (1994). Intelligent agents: The new revolution in software. London: OVUM.

Johansen, D., Marzullo, K., & Lauvset, K. J. (1999). An approach towards an agent computing environment. Proceedings of the ICDCS'99 Workshop on Middleware.

Kotz, D., Gray, R., Nog, S., Rus, D., Chawla, S., & Cybenko, G. (1997). Agent TCL: Targeting the needs of mobile computers. IEEE Internet Computing, 1(4), 58-67.

Odubiyi, J. B., Kocur, D. J., Weinstein, S. M., Wakim, N., Srivastava, S., Gokey, C., et al. (1997). SAIRE: A scalable agent-based information retrieval engine. Proceedings of the Autonomous Agents '97 Conference (pp. 292-299).

Rus, D., Gray, R., & Kotz, D. (1996). Autonomous and adaptive agents that gather information. Proceedings of the AAAI '96 International Workshop on Intelligent Adaptive Agents.

Rus, D., Gray, R., & Kotz, D. (1997). Transportable information agents. In M. Huhns & M. Singh (Eds.), Readings in agents. San Francisco: Morgan Kaufmann Publishers.

Sander, T., & Tschudin, C. F. (1998). Protecting mobile agents against malicious hosts. In Lecture notes in computer science: Vol. 1419. Mobile agents and security (pp. 44-60).

Schneider, F. B. (1997). Towards fault-tolerant and secure agentry. Proceedings of the 11th International Workshop on Distributed Algorithms, Saarbrücken, Germany.

Schneier, B. (1996). Applied cryptography: Protocols, algorithms, and source code in C (2nd ed.). New York: John Wiley & Sons, Inc.

Schoonderwoerd, R., Holland, O., & Bruten, J. (1997). Ant-like agents for load balancing in telecommunications networks. Proceedings of the 1997 1st International Conference on Autonomous Agents (pp. 209-216).

Thirunavukkarasu, C., Finin, T., & Mayfield, J. (1995). Secret agents: A security architecture for the KQML agent communication language. Proceedings of the CIKM'95 Intelligent Information Agents Workshop, Baltimore.

Westhoff, D. (2000). On securing a mobile agent's binary code. Proceedings of the ICSC Symposium on Intelligent Systems and Applications (ISA 2000).

White, D. E. (1998). A comparison of mobile agent migration mechanisms. Unpublished senior honors thesis, Dartmouth College.
Yeo, W. C., Guan, S.-U., & Zhu, F. (2002). An architecture for authentication and authorization of mobile agents in e-commerce. In S. Nansi (Ed.), Architectural issues of Web-enabled electronic business (pp. 348-361). Idea Group Publishing.
further reading

Evans, A., Fernandez, M., Vallet, D., & Castells, P. (2006). Adaptive multimedia access: From user needs to semantic personalization. Proceedings of the 2006 IEEE International Symposium on Circuits and Systems (ISCAS 2006).

Guan, S.-U., & Yang, Y. (1999). SAFE: Secure roaming agent for e-commerce. Proceedings of the 26th International Conference on Computers & Industrial Engineering (pp. 33-37).

Guan, S.-U., & Zhu, F. (2002). Agent fabrication and its implementation for agent-based electronic commerce. International Journal of Information Technology and Decision Making (IJITDM), 1(3), 473-489.

Guan, S.-U., Zhu, F. M., & Ko, C. C. (2000). Agent fabrication and authorization in agent-based electronic commerce. Proceedings of the International ICSC Symposium on Multi-Agents and Mobile Agents in Virtual Organizations and E-Commerce (pp. 528-534).

Gunupudi, V., & Tate, S. R. (2004). Performance evaluation of data integrity mechanisms for mobile agents. Proceedings of the International Conference on Information Technology: Coding and Computing, 1, 62-69.

Jorstad, I., van Thanh, D., & Dustdar, S. (2005). The personalization of mobile services. IEEE International Conference on Wireless and Mobile Computing, Networking and Communications, 4, 59-65.
Koutrika, G., & Ioannidis, Y. (2004). Personalization of queries in database systems. Proceedings of the 20th International Conference on Data Engineering (pp. 597-608). Panayiotou, C., Andreou, M., Samaras, G., & Pitsillides, A. (2005). Time based personalization for the moving user. International Conference on Mobile Business (pp. 128-136). Park, J. Y., Lee, D. I., & Lee, H. H. (2001). Data protection in mobile agents: One-time key based approach. Proceedings of the 5th International Symposium on Autonomous Decentralized Systems (pp. 411-418). Poh, T. K., & Guan, S.-U. (2000). Internet-enabled smart card agent environment and applications. In S. M. Rahman & M. Raisinghani (Eds.), Electronic commerce: Opportunities and challenges. Idea Group Publishing. Sim, L. W., & Guan, S.-U. (2002). An agent-based architecture for product selection and evaluation under e-commerce. In S. Nansi (Ed.), Architectural issues of Web-enabled electronic business (pp. 333-346). Idea Group Publishing. Specht, G., & Kahabka, T. (2000). Information filtering and personalisation in databases using Gaussian curves. 2000 International Database Engineering and Applications Symposium (pp. 16-24). Tam, K. Y., & Ho, S. Y. (2003). Web personalization: Is it effective? IT Professional, 5(5), 53-57. Tan, X., Yao, M., & Xu, M. (2006). An effective technique for personalization recommendation based on access sequential patterns. IEEE AsiaPacific Conference on Services Computing, APSCC ’06 (pp. 42-46). Treiblmaier, H., Madlberger, M., Knotzer, N., & Pollach, I. (2004). Evaluating personalization and customization from an ethical point of view: An empirical study. Proceedings of the 37th An-
nual Hawaii International Conference on System Sciences, HI. Wang, Y., Kobsa, A., van der Hoek, A., & White, J. (2006). PLA-based runtime dynamism in support of privacy-enhanced Web personalization. Proceedings of the 10th International Software Product Line Conference. Wang, Y. H., Wang, C. L., & Liao, C. H. (2004). Mobile agent protection and verification in the Internet environment. The Fourth International Conference on Computer and Information Technology (pp. 482-487). Wu, D., Im, I., Tremaine, M., Instone, K., & Turoff, M. (2003). A framework for classifying personalization scheme used on e-commerce Websites. Proceedings of the 36th Annual Hawaii International Conference on System Sciences (pp. 12-23). Yang, Y. (2006). Provisioning of personalized pervasive services: Daidalos personalization functions. 1st International Symposium on Pervasive Computing and Applications (pp. 110-115). Yang, Y., & Guan, S.-U. (2000). Intelligent mobile agents for e-commerce: Security issues and agent transport. In S. M. Rahman & M. Raisinghani (Eds.), Electronic commerce: Opportunities and challenges. Idea Group Publishing. Yee, G. (2006). Personalized security for e-services. Proceedings of the First International Conference on Availability, Reliability and Security, ARES 2006. Yu, P. S. (1999). Data mining and personalization technologies. Proceedings of the 6th International Conference on Database Systems for Advanced Applications (pp. 6-13). Zhao, Y., Yao, Y., & Zhong, N. (2005). Multilevel Web personalization. Proceedings of the 2005 IEEE/WIC/ACM International Conference on Web Intelligence (pp. 649-652).
Zhou, J., Onieva, J. A., & Lopez, J. (2004a). Analysis of a free roaming agent result-truncation defense scheme. Proceedings of the IEEE International Conference on E-Commerce Technology (pp. 221-226).

Zhou, J., Onieva, J. A., & Lopez, J. (2004b). Protecting free roaming agents against result-truncation attack. Proceedings of the 2004 IEEE 60th Vehicular Technology Conference, 5, 3271-3274.
key terms

Agent: An agent is a piece of software that acts to accomplish tasks on behalf of its user.

Agent Freeze: In order to protect the agent during its roaming, sensitive functions and codes inside the agent body will be frozen, that is, encrypted using the freeze key in the roaming permit.

Agent Transport: This is the shipping of agents across machines in the Internet.

Digital Certificate: This is a certificate that uses a digital signature to bind together a public key with an identity: information such as the name of a person or an organization, an address, and so
forth. The certificate can be used to verify that a public key belongs to an individual.

E-Commerce: Electronic commerce or e-commerce consists primarily of the distributing, buying, selling, marketing, and servicing of products or services over electronic systems such as the Internet and other computer networks.

Knowledge Query and Manipulation Language (KQML): KQML is one of the most widely accepted agent communication standards. Developed as part of the knowledge sharing effort, KQML is designed as a high-level language for run-time exchange of information between heterogeneous systems.

M-Commerce: M-commerce or mobile commerce stands for electronic commerce made through mobile devices.

Protocol: A protocol is a convention or standard that controls or enables the connection, communication, and data transfer between two computing endpoints. Protocols may be implemented by hardware, software, or a combination of the two. At the lowest level, a protocol defines a hardware connection.

Security: Security is the effort to create a secure computing platform designed so that agents (users or programs) can only perform actions that have been allowed.
Chapter XLIII
Integrity Protection of Mobile Agent Data Sheng-Uei Guan Brunel University, UK
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.

Introduction

One hindrance to the widespread adoption of mobile-agent technology is the lack of security. Security is an issue that must be addressed carefully if mobile agents are to be used in the field of electronic commerce. SAFER (secure agent fabrication, evolution, and roaming) is a mobile-agent framework designed specifically for electronic commerce (Guan & Hua, 2003; Guan, Zhu, & Maung, 2004; Zhu, Guan, Yang, & Ko, 2000). Security has been a prime concern from the first day of our research (Guan & Yang, 2002; Yang & Guan, 2000). By building strong and efficient security mechanisms, SAFER aims to provide a trustworthy framework in which mobile agents assist users in conducting mobile- or electronic-commerce transactions. Agent integrity is one area crucial to the success of agent technology (Wang, Guan, & Chan, 2002). Despite various attempts in the literature, there is as yet no satisfactory solution to the problem of data integrity. Common weaknesses of current schemes are vulnerability to revisit attacks, in which an agent visits two or more collaborating malicious hosts during one roaming session, and to illegal modification (deletion or insertion) of agent data.

The agent monitoring protocol (AMP; Chionh, Guan, & Yang, 2001), an earlier proposal under SAFER to address agent data integrity, does address some of the weaknesses in the current literature. Unfortunately, its extensive use of PKI (public-key infrastructure) technology introduces too much overhead. AMP also requires the agent to deposit the data it has collected with the agent owner or butler before it roams to another host. While this is a viable and secure approach, the proposed approach, the Secure Agent Data Integrity Shield (SADIS), provides an alternative by allowing the agent to carry its data itself without depositing the data (or a data hash) with the butler. Besides addressing the common vulnerabilities in the current literature (revisit attacks and data-modification attacks), SADIS strives for maximum efficiency without compromising security: it minimizes the use of PKI technology and relies on symmetric-key encryption as much as possible. Moreover, the data encryption
key and the communication session key are both derivable from a key seed that is unique to the agent's roaming session in the current host. As a result, the butler can derive the communication session key and the data encryption key directly. Another feature of SADIS is strong security. Most existing research focuses on detecting integrity compromise (Esparza, Muñoz, Soriano, & Forné, 2006) or on avoiding integrity attacks by requiring a cooperating agent that executes within a trusted platform (Ouardani, Pierre, & Boucheneb, 2006). However, these works neglect the need to identify the malicious host. With SADIS, the agent butler will not only detect any compromise to data integrity, but will also identify the malicious host effectively.
Background

Agent data integrity has been an active research topic for some time. SADIS addresses the problem of data integrity protection via a combination of techniques discussed by Borselius (2002): execution tracing, encrypted payload, environmental key generation, and undetachable signatures. One recent work is the security architecture of Borselius, Hur, Kaprynski, and Mitchell (2002), which aims to define a complete security architecture for mobile-agent systems. It categorizes security services into the following: agent management and control, agent communications service, agent security service, agent mobility service, and agent logging service. SADIS addresses the agent communication service as well as agent security services (integrity protection), while previous research on SAFER addresses the agent mobility service. While many of these security services are still under active research, security mechanisms for protecting agents against malicious hosts were
described by Borselius, Mitchell, and Wilson (2001). The paper proposes a threshold scheme to protect mobile agents. Under this mechanism, a group of agents is dispatched to carry out the task, with each agent carrying a vote. Each agent is allowed to contact a merchant independently and gathers a bid based on the given criteria. Each agent votes for the best bid (under a trading scenario) independently. If more than n out of m (m > n) agents vote for the transaction, the agent owner will agree to the transaction. Such a mode of agent execution effectively simplifies agent roaming by allowing one agent to visit only one merchant. While the approach avoids the potential danger of having the agent compromised by a subsequent host, it does not employ a mechanism to protect the agent against the current host. Most importantly, the threshold mechanism's security rests on the probability that no more than n hosts out of m are malicious. In other words, its security is established on probability. Different from this approach, SADIS's security is based entirely on its own merits, without any assumption about the probability of hosts being benign or malicious, because the author believes that in an e-commerce environment, security should not depend on probability.

Other than the research by Borselius (2002), Borselius et al. (2002), and Borselius et al. (2001), there are related research works in the area. One such work on agent protection is SOMA (Secure and Open Mobile Agent), developed by Corradi, Cremonini, Montanari, and Stefanelli (1999). It is a Java-based mobile-agent framework that provides scalability, openness, and security on the Internet. One of the research focuses of SOMA is protecting the mobile agent's data integrity. To achieve this, SOMA makes use of two mechanisms: the multihop (MH) protocol and the trusted third party (TTP) protocol. The MH protocol works as follows.
At each intermediate site the mobile agent collects some data and appends them to the previous ones collected.
Each site must provide a short proof of the agent computation, which is stored in the agent. Each proof is cryptographically linked with the ones computed at the previous sites, so there is a chaining relation between proofs. When the agent moves back to the sender, the integrity of the chained cryptographic proofs is verified, allowing the sender to detect any integrity violation. The advantage of the MH protocol is that it does not require any trusted third party, or even the agent butler, for its operation. This is a highly desirable feature for an agent integrity protection protocol. Unfortunately, the MH protocol does not hold up well against revisit attacks, in which the agent visits two or more collaborating malicious hosts during one roaming session (Chionh et al., 2001). Roth (2001) provides more detailed descriptions of potential flaws in the MH protocol.

Another agent system that addresses data integrity is Ajanta (Tripathi, 2002). Ajanta is a platform for agent-based applications on the Internet developed at the University of Minnesota. It makes use of an append-only container for agent data integrity protection. The main objective is to allow a host to append new data to the container while preventing anyone from modifying previous data without being detected. Similar to the MH protocol, such an append-only container suffers from revisit attacks. These attacks on existing schemes make the importance of protecting the agent itinerary obvious. In SADIS, the agent's itinerary is implicitly updated in the agent butler during key seed negotiation. This prevents any party from modifying the itinerary recorded on the butler and guards against all itinerary-related attacks. One recent work on agent data integrity protection is the One-Time Key Generation System (OKGS), studied at the Kwang-Ju Institute of Science and Technology, South Korea (Park, Lee, & Lee, 2002).
OKGS does protect the agent data, to a certain extent, against a number of attack scenarios under revisit attacks, such as data-insertion attacks and data-modification attacks. However, it does not protect the agent against deletion attacks, as two collaborating malicious hosts can easily remove the roaming records between them. Inspired by OKGS's innovative one-time encryption key concept, SADIS extends this property to the communication between agent and butler as well: not only is the data encryption key one-time, but so is the communication session key. Using efficient hash calculations, the dynamic communication session key can be derived separately by the agent butler and the agent with minimum overhead. Despite the fact that all keys are derived from the same session-based key seed, SADIS also ensures that there is little correlation between these keys. As a result, even if some of the keys are compromised, the key seed will still remain secret.
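The chained-proof idea used by the MH protocol (and, similarly, Ajanta's append-only container) can be illustrated with a short sketch. The SHA-256 hash and the field layout below are assumptions made for illustration, not the exact constructions used by SOMA or Ajanta.

```python
import hashlib

def chain_proof(prev_proof: bytes, site_id: str, data: bytes) -> bytes:
    # Each site's proof binds its collected data to the previous proof,
    # so altering any earlier hop changes every later proof.
    h = hashlib.sha256()
    h.update(prev_proof)
    h.update(site_id.encode())
    h.update(data)
    return h.digest()

def verify_chain(seed: bytes, hops: list[tuple[str, bytes, bytes]]) -> bool:
    # The sender recomputes each proof from the seed and compares it
    # against the proof stored in the agent at that hop.
    proof = seed
    for site_id, data, stored_proof in hops:
        proof = chain_proof(proof, site_id, data)
        if proof != stored_proof:
            return False
    return True
```

Note that this simple chain is exactly what collaborating hosts can exploit in a revisit attack: a host that saw an earlier proof can truncate the hops recorded after it and regenerate a consistent suffix, which is why SADIS additionally anchors the itinerary at the butler.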
Protection of Agent Data Integrity

SADIS is designed based on the SAFER framework. The proposal itself rests on a number of assumptions that were implemented under SAFER. First, entities in SAFER, including agents, butlers, and hosts, should have globally unique identification numbers (IDs). These IDs will be used to uniquely identify each entity. Second, each agent butler and host should have a digital certificate issued by a trusted certificate authority (CA) under SAFER. Each entity with a digital certificate will be able to use the private key of its certificate to perform digital signatures and, if necessary, encryption. Third, while the host may be malicious, the execution environment of mobile agents should be secure and the execution integrity of the agent should be maintained. This assumption is made because protecting the agent's execution environment is a completely separate area of research that is independent of this chapter. Without a secure execution environment and execution integrity,
no agent data protection scheme will be effective. The last assumption is that the entities involved respect and cooperate with the SADIS protocol. Finally, SADIS does not require the agent to have a predetermined itinerary; the agent is able to decide which host is the next destination independently.
Key Seed Negotiation Protocol

When an agent first leaves the butler, the butler will generate a random initial key seed, encrypt it with the destination host's public key, and deposit it into the agent before sending the agent to the destination host. It should be noted that agent transmission is protected by the supervised agent transport protocol (Guan & Yang, 2002). Otherwise, a malicious host (a "man in the middle") could perform an attack by replacing the encrypted key seed with a new key seed encrypted with the destination's public key. In that case, the agent and the destination host would not know the key seed had been manipulated. When the agent starts to communicate with the butler using the wrong key seed, the malicious host can intercept all the messages, re-encrypt them with the correct key derived from the correct key seed, and forward them to the agent butler. In this way, a malicious host can compromise the whole protocol.

The key seed carried by the agent is session based; it is valid until the agent leaves the current host. When the agent decides to leave the current host, it must determine the destination host and start the key seed negotiation process with the agent butler. The key seed negotiation process is based on the Diffie-Hellman (DH) key exchange protocol (Diffie & Hellman, 1976), with a variation. The agent will first generate a private DH parameter a and its corresponding public parameter x. The value x, together with the ID of the destination host, will be encrypted using a communication session key and sent to the agent butler.
The agent butler will decrypt the message using the same communication session key (the derivation of the communication session key will be discussed later in this section). It, too, will generate its own DH private parameter b and corresponding public parameter y. With the private parameter b and the public parameter x from the agent, the butler can derive the new key seed and use it for communications with the agent in the new host. Instead of sending the public parameter y to the agent as in normal DH key exchange, the agent butler will encrypt the value y, the host ID, the agent ID, and the current time stamp with the destination host's public key to get message M. Message M will be sent to the agent after being encrypted with the communication session key.

M = E(y + host ID + agent ID + time stamp, HpubKey)

At the same time, the agent butler updates the agent's itinerary and sends it to the agent. When the agent receives the double-encrypted DH public parameter y, it can decrypt the outer layer with the communication session key. Subsequently, the agent will store M in its data segment and request the current host to send it to the destination host using the agent transport protocol (Guan & Yang, 2002). On arriving at the destination host, the agent will be activated. Before it resumes normal operation, the agent will request the new host to decrypt message M. If the host is the right destination host, it will be able to use its private key to decrypt message M and thus obtain the DH public parameter y. As a result, the decryption of message M not only completes the key seed negotiation process, but also serves as a means to authenticate the destination host. Once message M is decrypted, the host will verify that the agent ID in the decrypted message matches the incoming agent and that the host ID in the decrypted message matches that of the current host.
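The DH portion of the key seed negotiation can be sketched as follows. The toy 64-bit modulus and the direct use of the shared value as the new key seed are simplifying assumptions of this sketch; SADIS additionally wraps y, the host ID, the agent ID, and a time stamp into message M under the destination host's public key.

```python
import secrets

# Toy DH group for illustration only: P is the largest 64-bit prime.
# A real deployment would use a standardized large prime group.
P = 2**64 - 59
G = 2

def dh_keypair() -> tuple[int, int]:
    priv = secrets.randbelow(P - 2) + 1   # private parameter
    return priv, pow(G, priv, P)          # (private, public)

# The agent generates (a, x); the butler generates (b, y).
a, x = dh_keypair()
b, y = dh_keypair()

# Each side combines its own private parameter with the other's
# public parameter; both arrive at the same new key seed without
# the seed itself ever crossing the network.
seed_at_butler = pow(x, b, P)
seed_at_agent = pow(y, a, P)
assert seed_at_agent == seed_at_butler
```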
With the plain value of y, the agent can derive the key seed by using its previously generated private parameter a. With the new key seed derived, the key seed negotiation process is completed. The agent can resume normal operation in the new host. Whenever the agent and the butler need to communicate with each other, the sender will first derive a communication session key using the key seed and use this communication session key to encrypt the message. The receiver can make use of the same formula to derive the communication session key from the same key seed to decrypt the message. The communication session key KCSK is derived using the formula below.
KCSK = Hash(key_seed + host ID + seqNo)

The sequence number is a running number that starts with 1 for each agent roaming session. Whenever the agent reaches a new host, the sequence number will be reset to 1. Given the varying communication session keys, if one of the messages is somehow lost without being detected, the butler and agent will not be able to communicate afterward. As a result, SADIS makes use of TCP/IP (transmission-control protocol/Internet protocol) as a communication mechanism so that any loss of messages can be immediately detected by the sender. In the case of an unsuccessful message, the sender will send ping messages to the recipient in plain format until the recipient or the communication channel recovers. Once communication is reestablished, the sender will resend the previous message (encrypted using the same communication session key). When the host provides information to the agent, the agent will encrypt the information with a data encryption key KDEK. The data encryption key is derived as follows.

KDEK = Hash(key_seed + host ID)

Data Integrity Protection Protocol

The key seed negotiation protocol lays the necessary foundation for integrity protection by establishing a session-based key seed between the agent and its butler. Agent data integrity is protected through the use of this key seed and the digital certificates of the hosts. Our data integrity protection protocol comprises two parts: chained signature generation and data integrity verification. Chained signature generation is performed before the agent leaves the current host. The agent gathers the data provided by the current host, di, and constructs Di as follows.

Di = E(di + IDhost + IDagent + time stamp, KDEK)

or

Di = di + IDhost + IDagent + time stamp
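The two key derivations can be sketched as follows; SHA-256 and the byte encodings of the host ID and sequence number are assumptions of this sketch, since the chapter does not fix a particular hash function or serialization.

```python
import hashlib

def _hash(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()

def comm_session_key(key_seed: bytes, host_id: str, seq_no: int) -> bytes:
    # KCSK = Hash(key_seed + host ID + seqNo); seqNo restarts at 1 on
    # every new host, so each message is encrypted under a fresh key.
    return _hash(key_seed, host_id.encode(), seq_no.to_bytes(4, "big"))

def data_encryption_key(key_seed: bytes, host_id: str) -> bytes:
    # KDEK = Hash(key_seed + host ID); one data key per visited host.
    return _hash(key_seed, host_id.encode())
```

Because both sides hold the same key seed, the butler and the agent can derive identical keys independently, and a compromise of any single derived key does not reveal the seed.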
The inclusion of the host ID, agent ID, and time stamp is to protect the data from possible replay attacks, especially when the information is not encrypted with the data encryption key. For example, if the agent ID is not included in the message, a malicious host can potentially replace the data provided for one agent with that provided for a bogus agent. Similarly, if the time stamp is not included in the message, earlier data provided to the same agent can be used at a later time to replace current data provided to the agent from the same host. The inclusion of the IDs of the parties involved and a time stamp essentially creates an unambiguous memorandum between the agent and the host. After constructing Di, the agent will request the host to perform a signature on the following:

ci = Sig(Di + ci-1 + IDhost + IDagent + time stamp, kpriv)

where c0 is the digital signature on the agent code by its butler.
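Chained signature generation can be sketched as below. For brevity, an HMAC under a per-host key stands in for the host's private-key signature; this stand-in and the byte layout are assumptions of the sketch, as SADIS actually uses the signing key of the host's digital certificate.

```python
import hashlib
import hmac

def chained_signature(D_i: bytes, prev_sig: bytes, host_id: str,
                      agent_id: str, timestamp: int, host_key: bytes) -> bytes:
    # ci = Sig(Di + c(i-1) + host ID + agent ID + time stamp, kpriv).
    # Feeding c(i-1) into ci means any change to earlier data breaks
    # every subsequent signature in the chain.
    msg = (D_i + prev_sig + host_id.encode() + agent_id.encode()
           + timestamp.to_bytes(8, "big"))
    return hmac.new(host_key, msg, hashlib.sha256).digest()
```

The butler's signature over the agent code serves as c0, seeding the chain before the agent visits its first host.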
There are several advantages to chained digital signatures over the conventional, independent-signature approach. In the scenario where a malicious host attempts to modify the data from an innocent host i and somehow manages to produce a valid digital signature ci, data integrity would have been broken if the digital signatures were independent rather than chained to each other. Independent digital signatures also open a window for host i itself to modify data provided to the agent at a later time (for example, when the agent later visits one of host i's collaborating partners). Regardless of the message format used, so long as the messages are independent of each other, host i will have no problem reproducing a valid signature for the modified message, and data integrity can thus be compromised. With chained digital signatures, even if the malicious host (or host i itself) produces a valid digital signature after modifying the data, the new signature ci' is unlikely to be the same as ci. If the new signature differs from the original, then, because the previous signature is provided as input to the next signature, the subsequent signature verification will fail, thereby revealing the compromise to data integrity. The inclusion of the host ID, agent ID, and time stamp prevents anyone from performing a replay attack.

When the agent reaches a new destination, the host must perform an integrity check on the incoming agent. In the design of SADIS, even if the new destination host does not perform an immediate integrity check on the incoming agent, any compromise to the data integrity can still be detected when the agent returns to the butler. The drawback, however, is that the identity of the malicious host may not be established. One design focus of SADIS is not only to detect data integrity compromise, but more importantly, to identify malicious hosts.
To achieve malicious-host identification, all hosts are obliged to verify the incoming agent's data integrity before activating the agent for execution. In the event of a data integrity verification failure, the previous host will be identified as the malicious host.
Future Trends

Besides agent data integrity and agent transport security, there are other security concerns to be addressed in SAFER. One such concern is a mechanism to assess the agent's accumulated risk level as it roams; the agent battery concept was considered for this purpose during earlier stages of the research. Furthermore, in order to establish the identities of agents from different agent communities, a certain level of certification by trusted third parties, or agent passports, is required (Guan, Wang, & Ong, 2003). More research can be conducted in these areas.
Conclusion

In this chapter, a new data integrity protection protocol, SADIS, is proposed under the SAFER research initiative. Besides being secure against a variety of attacks and robust against vulnerabilities pointed out in related work in the literature, the research objectives of SADIS include efficiency. This is reflected in the minimized use of PKI operations and reduced message exchanges between the agent and the butler. The introduction of a variation on DH key exchange and of evolving communication session keys further strengthens the security of the design. Unlike solutions suggested in some existing literature, the data integrity protection protocol aims not only to detect data integrity compromise, but more importantly, to identify the malicious host. With security, efficiency, and effectiveness as its main design foci, SADIS works with other security mechanisms under SAFER (e.g., the agent transport protocol) to provide mobile agents with a secure platform.
Future Research Directions

Recently there has been active research on the use of intelligent agents to mine user preferences: so-called personalization agents. Such agents, when equipped with inference engines, would be able to infer personal interests by observing Web or mobile-user interactions or click streams during online transactions. They would carry sensitive personal data that should not be disclosed to outsiders, so the protection of data in such agents is crucial. The migration of such agents or personalized data may be necessary when the service platform consists of multiple servers. For now, such agents usually reside on the server side, where strict security may already be in place. In the near future, such agents could be deployed on the client side, under a different name such as personal secretary, personal agent, and so forth. Such an agent may be dispatched by the user to run errands such as product brokering, information collection, or even transaction negotiation. An agent that carries user preference data is therefore vulnerable to attacks because it holds sensitive data, and protecting the data in this type of agent will be necessary.
References

Borselius, N. (2002). Mobile agent security. Electronics & Communication Engineering Journal, 14(5), 211-218.

Borselius, N., Hur, N., Kaprynski, M., & Mitchell, C. J. (2002). A security architecture for agent-based mobile systems. Proceedings of the Third International Conference on Mobile Communications Technologies (pp. 312-318).

Borselius, N., Mitchell, C. J., & Wilson, A. T. (2001). On mobile agent based transactions in moderately hostile environments. Advances in Network and Distributed Systems Security: Proceedings of the IFIP TC11 WG11.4 First Annual Working Conference on Network Security (pp. 173-186).

Chionh, H. B., Guan, S.-U., & Yang, Y. (2001). Ensuring the protection of mobile agent integrity: The design of an agent monitoring protocol. Proceedings of the IASTED International Conference on Advances in Communications (pp. 96-99).

Corradi, A., Cremonini, M., Montanari, R., & Stefanelli, C. (1999). Mobile agents and security: Protocols for integrity. Proceedings of the Second IFIP WG 6.1 International Working Conference on Distributed Applications and Interoperable Systems (DAIS'99).

Diffie, W., & Hellman, M. E. (1976). New directions in cryptography. IEEE Transactions on Information Theory, 22, 644-654.

Esparza, O., Muñoz, J. L., Soriano, M., & Forné, J. (2006). Secure brokerage mechanisms for mobile electronic commerce. Computer Communications, 29(12), 2308-2321.

Guan, S.-U., & Hua, F. (2003). A multi-agent architecture for electronic payment. International Journal of Information Technology and Decision Making (IJITDM), 2(3), 497-522.

Guan, S.-U., Wang, T., & Ong, S.-H. (2003). Migration control for mobile agents based on passport and visa. Future Generation Computer Systems, 19(2), 173-186.

Guan, S.-U., & Yang, Y. (2002). SAFE: Secure agent roaming for e-commerce. Computer & Industrial Engineering Journal, 42, 481-493.

Guan, S.-U., Zhu, F., & Maung, M. T. (2004). A factory-based approach to support e-commerce agent fabrication. Electronic Commerce and Research Applications, 3(1), 39-53.

Ouardani, A., Pierre, S., & Boucheneb, H. (2006). A security protocol for mobile agents based upon the cooperation of sedentary agents. Journal of Network and Computer Applications.
Park, J. Y., Lee, D. I., & Lee, H. H. (2002). One-time key generation system for agent data protection. IEICE Transactions on Information and Systems (pp. 535-545).

Roth, V. (2001). On the robustness of some cryptographic protocols for mobile agent protection. Mobile Agents 2001 (MA'01) (pp. 1-14).

Tripathi, A. R. (2002). Design of the Ajanta system for mobile agent programming. Journal of Systems and Software, 62(2), 123-140.

Wang, T., Guan, S.-U., & Chan, T. K. (2002). Integrity protection for code-on-demand mobile agents in e-commerce. Journal of Systems and Software, 60(3), 211-221.

Yang, Y., & Guan, S.-U. (2000). Intelligent mobile agents for e-commerce: Security issues and agent transport. In Electronic commerce: Opportunities and challenges. Idea Group Publishing.

Zhu, F., Guan, S.-U., Yang, Y., & Ko, C. C. (2000). SAFER e-commerce: Secure agent fabrication, evolution and roaming for e-commerce. In Electronic commerce: Opportunities and challenges. Idea Group Publishing.
Further Reading

Boll, S. (n.d.). Modular content personalization service architecture for e-commerce applications. Proceedings of the Fourth IEEE International Workshop on Advanced Issues of E-Commerce and Web-Based Information Systems (WECWIS 2002) (pp. 213-220).

Evans, A., Fernandez, M., Vallet, D., & Castells, P. (2006). Adaptive multimedia access: From user needs to semantic personalization. Proceedings of the 2006 IEEE International Symposium on Circuits and Systems (ISCAS 2006).

Guan, S.-U., Tan, S. L., & Hua, F. (2004). A modularized electronic payment system for agent-based e-commerce. Journal of Research and Practice in Information Technology, 36(2), 67-87.

Guan, S.-U., & Yang, Y. (1999). SAFE: Secure-roaming agent for e-commerce. 26th International Conference on Computers & Industrial Engineering, Australia.

Guan, S.-U., & Zhu, F. (2002). Agent fabrication and its implementation for agent-based electronic commerce. International Journal of Information Technology and Decision Making (IJITDM), 1(3), 473-489.

Guan, S.-U., Zhu, F. M., & Ko, C. C. (2000). Agent fabrication and authorization in agent-based electronic commerce. Proceedings of International ICSC Symposium on Multi-Agents and Mobile Agents in Virtual Organizations and E-Commerce (pp. 528-534).

Gunupudi, V., & Tate, S. R. (2004). Performance evaluation of data integrity mechanisms for mobile agents. Proceedings of the International Conference on Information Technology: Coding and Computing, ITCC 2004, 1, 62-69.

Jorstad, I., van Thanh, D., & Dustdar, S. (2005). The personalization of mobile services. IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob 2005), 4, 59-65.

Koutrika, G., & Ioannidis, Y. (2004). Personalization of queries in database systems. Proceedings of the 20th International Conference on Data Engineering (pp. 597-608).

Panayiotou, C., Andreou, M., Samaras, G., & Pitsillides, A. (2005). Time based personalization for the moving user. International Conference on Mobile Business (ICMB 2005) (pp. 128-136).

Park, J. Y., Lee, D. I., & Lee, H. H. (2001). Data protection in mobile agents: One-time key based approach. Proceedings of the 5th International Symposium on Autonomous Decentralized Systems (pp. 411-418).
Poh, T. K., & Guan, S.-U. (2000). Internet-enabled smart card agent environment and applications. In S. M. Rahman & M. Raisinghani (Eds.), Electronic commerce: Opportunities and challenges. Idea Group Publishing.

Sim, L. W., & Guan, S.-U. (2002). An agent-based architecture for product selection and evaluation under e-commerce. In S. Nansi (Ed.), Architectural issues of Web-enabled electronic business (pp. 333-346). Idea Group Publishing.

Specht, G., & Kahabka, T. (2000). Information filtering and personalisation in databases using Gaussian curves. 2000 International Database Engineering and Applications Symposium (pp. 16-24).

Tam, K. Y., & Ho, S. Y. (2003). Web personalization: Is it effective? IT Professional, 5(5), 53-57.

Tan, X., Yao, M., & Xu, M. (2006). An effective technique for personalization recommendation based on access sequential patterns. IEEE Asia-Pacific Conference on Services Computing, APSCC '06 (pp. 42-46).

Treiblmaier, H., Madlberger, M., Knotzer, N., & Pollach, I. (2004). Evaluating personalization and customization from an ethical point of view: An empirical study. Proceedings of the 37th Annual Hawaii International Conference on System Sciences.

Tseng, B. L., Lin, C.-Y., & Smith, J. R. (2002). Video personalization and summarization system. 2002 IEEE Workshop on Multimedia Signal Processing (pp. 424-427).

Wang, Y., Kobsa, A., van der Hoek, A., & White, J. (2006). PLA-based runtime dynamism in support of privacy-enhanced Web personalization. 10th International Software Product Line Conference.

Wang, Y. H., Wang, C. L., & Liao, C. H. (2004). Mobile agent protection and verification in the Internet environment. The Fourth International Conference on Computer and Information Technology (pp. 482-487).

Wu, D., Im, I., Tremaine, M., Instone, K., & Turoff, M. (2003). A framework for classifying personalization scheme used on e-commerce Websites. Proceedings of the 36th Annual Hawaii International Conference on System Sciences (pp. 12-23).

Yang, Y. (2006). Provisioning of personalized pervasive services: Daidalos personalization functions. 2006 1st International Symposium on Pervasive Computing and Applications (pp. 110-115).

Yang, Y., & Guan, S. U. (2000). Intelligent mobile agents for e-commerce: Security issues and agent transport. In S. M. Rahman & M. Raisinghani (Eds.), Electronic commerce: Opportunities and challenges. Idea Group Publishing.

Yee, G. (2006). Personalized security for e-services. The First International Conference on Availability, Reliability and Security (ARES 2006).

Yu, P. S. (1999). Data mining and personalization technologies. Proceedings of the 6th International Conference on Database Systems for Advanced Applications (pp. 6-13).

Zhao, Y., Yao, Y., & Zhong, N. (2005). Multilevel Web personalization. Proceedings of the 2005 IEEE/WIC/ACM International Conference on Web Intelligence (pp. 649-652).
Terms and Definitions

Agent: An agent is a piece of software that acts to accomplish tasks on behalf of its user.

Cryptography: Cryptography is the art of protecting information by transforming it (encrypting it) into an unreadable format, called cipher text. Only those who possess a secret key can decipher (or decrypt) the message into plain text.
Flexibility: Flexibility is the ease with which a system or component can be modified for use in applications or environments other than those for which it was specifically designed.

Integrity: Integrity concerns the protection of data or program code from being modified by unauthorized parties.

Mobile Agent: Also called a roaming agent, a mobile agent is an agent that can move from machine to machine for the purpose of data collection or code execution.

Protocol: A protocol is a convention or standard that controls or enables the connection, communication, and data transfer between two computing endpoints. Protocols may be implemented by hardware, software, or a combination of the two. At the lowest level, a protocol defines a hardware connection.

Security: Security involves the effort to create a secure computing platform designed so that agents (users or programs) can only perform actions that have been allowed.
About the Editors
G. David Garson is full professor of public administration at North Carolina State University, where he teaches courses on American government, research methodology, computer applications, and geographic information systems. In 1995 he was recipient of the Donald Campbell Award from the Policy Studies Organization, American Political Science Association, for outstanding contributions to policy research methodology and in 1997 of the Aaron Wildavsky Book Award from the same organization. He is author of the Guide to Writing Quantitative Papers, Theses, and Dissertations (Dekker, 2001), editor of Social Dimensions of Information Technology (2000), Information Technology and Computer Applications in Public Administration: Issues and Trends (1999), and the Handbook of Public Information Systems (1999), and author of Neural Network Analysis for Social Scientists (1998), Computer Technology and Social Issues (1995), and is author, co-author, editor, or co-editor of 17 other books and author or co-author of more than 50 articles. For the last 20 years he has also served as editor of the Social Science Computer Review and is on the editorial board of four additional journals. Mehdi Khosrow-Pour, DBA, is currently executive director of the Information Resources Management Association (IRMA). Previously, he served 20 years on the Faculty of The Pennsylvania State University as an associate professor of information systems. He has written or edited more than 20 books in information technology management and is also editor of the Information Resources Management Journal, Journal of Cases on Information Technology, Journal of Electronic Commerce in Organizations, and International Journal of Cases on Electronic Commerce.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
About the Contributors
Esharenana E. Adomi had his university education at the University of Ibadan, Ibadan, and Delta State University, Abraka, both in Nigeria. He holds BEd, MEd, MLS, and PhD degrees. He lectures at the Department of Library and Information Science, Delta State University, Abraka, Nigeria. His research interests lie in Web and Internet technology and ICT development and applications in different settings. Amalia Agathou is currently in the final year of her undergraduate studies in the Department of Information and Communication Systems Engineering of the University of the Aegean in Greece. Her research interests include computer security, data modeling, and databases. Stephen K. Aikins received a PhD from the University of Nebraska at Omaha, where he has taught graduate-level courses in public-sector information management. He is a certified public accountant (CPA) and a certified information systems auditor (CISA). Dr. Aikins has several years of experience working in various capacities in government and in the private sector. He has been a state revenue auditor, a municipal government auditor for a public accounting firm, a Medicare auditor for a federal government contractor, and an assistant vice president for a multinational bank. His research interests are electronic government, Internet-based citizen participation, electronic democracy, information security, public-sector economics, public financial management, and public budgeting. Eugene J. Akers is the senior director for the Center for Advanced Technology in the University Outreach Program of Auburn University Montgomery. The purpose of the center is to extend the resources and expertise of Auburn University Montgomery (AUM) and Auburn University to individuals, organizations, the community, and the environment in a manner that enhances productivity and the quality of life for individuals in Alabama. 
The center provides consulting services, training programs, economic development endeavors, technological advancements, and community service. As the senior director, Dr. Akers is responsible for establishing the strategic direction of the center. He is also responsible for business development, contract management, and client relationships. Dr. Akers currently serves as an adjunct assistant professor in the Department of Information Systems and Decision Science and the Department of Political Science and Public Administration of Auburn Montgomery. Micah Altman (PhD, California Institute of Technology, 1998) is an associate director of the Harvard-MIT Data Center, archival director at the Henry R. Murray Research Archive, and senior research scientist in the Institute for Quantitative Social Science at Harvard University. Dr. Altman has
served as co-investigator on a number of research projects promoting the collection, sharing, citation, and preservation of research data through the development of open-source software tools and methods. His extensively reviewed book, Numerical Issues in Statistical Computing for the Social Scientist, corrects common computational errors made across the range of social sciences. His more than two dozen publications and four open-source software packages span many disciplines in the social and information sciences. Andrea Baker is a PhD candidate in information studies at the College of Computing and Information and a graduate research assistant at the Center for Technology in Government, University at Albany, State University of New York. She is also an adjunct professor in the Department of English at the University at Albany and Siena College. Baker is the co-author of a chapter in The Internet Election: Perspectives on the Web in Campaign 2004 and of several papers presented at conferences of prestigious professional associations such as the American Society for Information Science and Technology and the American Society for Public Administration. Her research interests include computer-mediated communication, the impact of new media on the press, open-source technology and community, and collaborative e-government. David A. Bray is currently with the Goizueta Business School, Emory University. He will be a visiting fellow with Oxford University’s Internet Institute in 2007. Bray’s research focuses on bottom-up (i.e., grassroots) sociotechnological approaches for fostering interindividual knowledge exchanges. Before academia, he served as the IT chief for the Bioterrorism Preparedness and Response Program at the Centers for Disease Control, where he led the technology response to 9/11, anthrax, WNV, SARS, and other major outbreaks. Lia Bryant is deputy director of the Hawke Institute of Sustainable Societies, University of South Australia. 
She is also director of the Research Centre for Gender Studies and a senior lecturer in the School of Social Work and Social Policy. Her research interests include gender and technologies, gender and sexuality, organizations, work, rurality, and space and place. Anthony W. Buenger, Jr. (Lt. Col., USAF, retired), is a professor of systems management (Military Faculty) and information operations and assurance at the Information Resources Management College, National Defense University, Fort McNair, Washington, DC. Buenger received his bachelor’s degree in electrical engineering from the University of Maryland in 1988. In 1992 he received an MA in space systems management from Webster University, Colorado. He is currently working on a PhD in business administration from Northcentral University, Arizona. He served his entire 21-year career in the communications, information technology, and information security fields. Donna Canestraro brings more than 25 years of professional experience to her role as program manager at the Center for Technology in Government (CTG). Prior to joining CTG, Canestraro worked with both public- and private-sector organizations, including Unisys Corporation and General Electric. Her most recent position prior to CTG was as program manager with the Professional Development Program at the University at Albany’s Nelson A. Rockefeller College of Public Affairs and Policy. Her current work focuses on the policy, management, and technology issues related to inter- and intraorganizational information integration. As project manager for the XML Testbed, she has orchestrated
the development of the XML Testbed, as well as various conference presentations and academic papers about XML. Elena Castro received an MS in mathematics from Universidad Complutense of Madrid in 1995. Since 1998 she has been working at the Advanced Databases Group in the Department of Computer Science, Universidad Carlos III of Madrid. She recently obtained a PhD in information science from Universidad Carlos III of Madrid. She is currently teaching relational and object-oriented databases. Her research interests include database conceptual and logical modeling, advanced database CASE environments, and information and knowledge engineering. Mirko Cesarini is currently working as a professor’s assistant with the Faculty of Statistical Science, University of Milano Bicocca. He is also affiliated with CRISP (Interuniversity Research Center on Public Utility Services for the Individual). His research focuses on information systems. He has published several papers in refereed journals and in proceedings of international conferences. He earned a PhD in computer engineering in 2005 and a master’s degree in the same field in 2001, both at the Politecnico di Milano, Milan, Italy. Ioannis P. Chochliouros is a telecommunications electrical engineer who graduated from the Polytechnic School of the Aristotle University of Thessaloniki, Greece, and also holds an MS and a PhD from the University Pierre et Marie Curie, Paris VI. He has extensive research and practical experience in a wide range of areas in modern electronic communications. He currently works as the head of the research programs section of the Hellenic Telecommunications Organization S.A. (OTE) in Greece, where he has been involved in different national, European, and international R&D projects and market-oriented activities, many of which have received international awards. 
He has published more than 80 distinct scientific and business papers and reports in the international literature (books, magazines, conferences, and workshops), especially on technical, business, and regulatory options arising from innovative e-infrastructures and e-services in a global converged environment. He is a member of several national and international scientific committees and fora. He also works as a university lecturer with the Faculty of Science and Technology, Departments of Telecommunication Science, Computer Science and Technology, University of Peloponnese, Greece. Stergios P. Chochliouros is an independent consultant who specializes in environmental studies. He holds a PhD from the Department of Biology, University of Patras, Greece, and a university degree as an agriculturist. He has gained extensive experience as an academic researcher and has been involved in various research activities, especially in options for the extended use and/or applicability of modern technologies. In particular, he has participated as an expert in many European research projects relevant to a variety of environmental studies. Moreover, he has gained significant experience as both an educator and an advisor, and he is the author of several papers, studies, and reports. Gabriel Puron Cid received a master’s degree in public administration and policy at the Center for Research and Teaching in Economics (CIDE) in Mexico. After the conclusion of this program, he worked at the Ministry of Finance of Mexico’s federal government on the design and implementation of budgeting for results. Today he is a doctoral student of public administration at the Rockefeller College of Public Affairs & Policy of the University at Albany with a specialization in information management
and policy in the public sector. His research interests are information technology applications in public finance and budgeting, digital government, IT investment and failures in intergovernmental systems, and public management of information systems in governmental contexts. David Cook is an associate professor in the Department of Information Systems and Decision Sciences, College of Business and Public Administration at Old Dominion University. His educational and work experience is in the area of operations management. He graduated from the University of Kentucky with a PhD in production and operations management. Dr. Cook has published in such journals as Production and Inventory Management, Human Systems Management, Production and Operations Management, The Journal of Computer Information Systems, and the e-Service Journal. His research interests include quality management, service operations, and electronic commerce. He is a member of the Decision Sciences Institute and INFORMS. James Costello serves as Webmaster and Web application developer at the Center for Technology in Government. Prior to joining CTG, Costello operated his own Web design and development company and worked for several private and public organizations, including KeyCorp, the Professional Development Program of Rockefeller College, Coopers & Lybrand, and the Office of Data Processing, Human Resources Administration, City of New York. Based on his experience at CTG, he developed a curriculum on the benefits and challenges of using XML for Web site content management, which he has presented to over 30 different NYS agencies. He and Derek Werthmuller were the two chief architects of the XML Toolkit (http://www.thexmltoolkit.org), a product of the XML Testbed. Ron Craig is a professor of business in the operations and decision sciences area of the School of Business & Economics, Wilfrid Laurier University, Waterloo, Canada. His research interests cover small business and information systems. 
He has published several articles and cases in these areas. Dolores Cuadra received an MS in mathematics from Universidad Complutense of Madrid in 1995. Since 1997, she has worked as an assistant lecturer at the Advanced Databases Group in the Department of Computer Science, Carlos III University of Madrid. In 2003 she obtained a PhD in computer science from Carlos III University of Madrid. She is currently teaching file organization, database design, and data models. Her research interests include data models, conceptual and logical modeling, and advanced database CASE environments. She has been working in the Department of Computer Science at Purdue University in West Lafayette (Indiana) for nearly a year, where she has applied her research in spatiotemporal databases. Alex Dunayev worked at the St. John IT Department developing the National Events Management System (NEMS), which won a national award for excellence in the not-for-profit organizations category. While working at St. John, Dunayev was also finishing degrees at the University of Auckland. His BCom (Hons) dissertation, which also received an award, forms the basis for his chapter in this book. Dunayev is CEO of AQXI Creative Software at the University of Auckland’s ICEHOUSE. He founded AQXI, which is largely staffed by other graduates from Auckland. Antonina Durfee (Kloptchenko) is an assistant professor at Appalachian State University in Boone. She holds a PhD from Abo Akademi University. Her current research is in text mining, knowledge
discovery, human issues in technology adoption, and seeking behavior. She has published in the International Journal of Intelligent Systems in Accounting, Finance, and Management and the International Journal of Digital Accounting Research. William D. “Denny” Eshee, Jr., attorney-at-law, is professor of business law at Mississippi State University. He received his Juris Doctor (JD) degree from the University of Mississippi. He has been admitted to the Mississippi Bar Association, Federal Bar Association, Alabama Bar Association, United States Court of Military Appeals, and the United States Supreme Court. He has authored or co-authored four books, Mississippi Small Claims Court: A Debt Collection Guide for Mississippi Businesses, The Mississippi Entrepreneur’s Guide, The American Entrepreneur’s Guide, and The Legal Environment of Business, as well as a host of journal articles and other business publications. Vesile Evrim is currently a PhD student in computer science at the University of Southern California and a research assistant in the Semantic Information Representation Laboratory at USC. She received BS and MS degrees in applied mathematics and computer science at the Eastern Mediterranean University in Cyprus and another MS degree in computer science from USC. Her current research focuses on trust-based information retrieval, search engines, recommender systems, and social networks. Lixin Fu has been an assistant professor at the University of North Carolina at Greensboro since he joined the university in 2001. He has published more than a dozen papers in refereed journals and international conferences in the past 5 years. His main research areas include data warehousing, data mining, databases, and algorithms. Dr. Fu obtained a PhD in computer and information sciences at the University of Florida in 2001. He earned a master’s degree in electrical engineering at Georgia Institute of Technology in 1997. He is a member of IEEE, ACM, and the Upsilon Pi Epsilon Honor Society. 
Mariagrazia Fugini is an associate professor of computer engineering at Politecnico di Milano. Previously, she was an assistant professor at the Department of Industrial Automation, Faculty of Engineering, Università di Brescia (1983-1991), and an associate professor with the Department of Computer Science, University of Pavia. She has been a visiting professor at the University of Maryland (1985-1986), the Technical University of Vienna (1986-1991), and the University of Stuttgart. Her research interests are in information system security, information systems development, software reuse, information retrieval, cooperative information systems, and reengineering in public administrations. She has published several books, papers in refereed journals, and papers in proceedings of international conferences and symposia. J. Ramon Gil-Garcia is an assistant professor in the Department of Public Administration at the Centro de Investigación y Docencia Económicas in Mexico City and a research fellow at the Center for Technology in Government, University at Albany, State University of New York. Dr. Gil-Garcia is the author or co-author of articles in numerous academic journals including The International Public Management Journal, Government Information Quarterly, Journal of the American Society for Information Science and Technology, European Journal of Information Systems, Public Finance and Management, and Gestión y Política Pública. His research interests include collaborative e-government, interorganizational information integration, adoption and implementation of emergent technologies, digital divide policies, public management, public policy evaluation, and multimethod research approaches.
Sheng-Uei Guan received an MS and PhD from the University of North Carolina at Chapel Hill. He is currently a professor and chair in intelligent systems in the School of Engineering and Design at Brunel University, UK. Professor Guan worked in a prestigious R&D organization for several years, serving as a design engineer, project leader, and manager. After leaving industry, he joined Yuan-Ze University in Taiwan for three and a half years, where he served as deputy director of the Computing Center and chairman of the Department of Information & Communication Technology. Later he joined the Electrical & Computer Engineering Department at the National University of Singapore as an associate professor. Shahidul Hassan is a doctoral student in public administration and policy at the University at Albany, State University of New York. His research interests are interorganizational collaboration and technology and work-practice transformation in government agencies. Wen-Chen Hu received a PhD in computer science from the University of Florida, Gainesville, in 1998. He is currently an assistant professor in the Department of Computer Science at the University of North Dakota. Dr. Hu has advised more than 50 graduate students and has published over 60 articles in refereed journals, conference proceedings, books, and encyclopedias. His current research interests are in handheld computing, electronic and mobile commerce systems, Web technologies, and databases. He is a member of the IEEE Computer Society, ACM, and IRMA (Information Resources Management Association). Xiaohua (Tony) Hu is currently an assistant professor at the College of Information Science and Technology, Drexel University, Philadelphia. He received a PhD (computer science) from the University of Regina, Canada (1995) and an MS (computer science) from Simon Fraser University, Canada (1992). 
His current research interests are biomedical literature data mining, bioinformatics, text mining, rough sets, information extraction, and information retrieval. He has published more than 100 peer-reviewed research papers in the above areas. He is the founding editor-in-chief of the International Journal of Data Mining and Bioinformatics. Marijn Janssen is an assistant professor in the field of information systems at the Faculty of Technology, Policy, and Management, Delft University of Technology, The Netherlands. His research is focused on designing service architectures for public service networks. He received a PhD (2001) from Delft University of Technology in the field of management information systems and has been a consultant and information architect for the Ministry of Justice. Anton Joha is a consultant at Morgan Chambers in The Netherlands and an affiliate of Delft University of Technology. His projects are mainly in the field of outsourcing and shared service centers. He holds an MS in management information systems from Delft University of Technology, The Netherlands. Jimmie L. Joseph is an assistant professor in the Department of Information and Decision Sciences, College of Business, University of Texas at El Paso. He earned a BS with a triple major in biology, computer science, and natural sciences from Indiana University of Pennsylvania, an MBA from the University of Pittsburgh’s Joseph M. Katz Graduate School of Business, an MS in MoIS from the University of Pittsburgh’s Joseph M. Katz Graduate School of Business, and a PhD in MIS from the University of
Pittsburgh’s Joseph M. Katz Graduate School of Business. Dr. Joseph has published in such journals as the Journal of Management Information Systems, Human Systems Management, and the International Journal of Electronic Business. His research interests include human computer interaction, electronic commerce, and social impacts of information systems. He is a member of the Association for Information Systems and the Decision Sciences Institute. Rhoda C. Joseph is an assistant professor of information systems and technology at The Pennsylvania State University – Harrisburg. She has earned her PhD in business with a specialization in information systems from the City University of New York (CUNY), and her MBA from Baruch College (CUNY). Dr. Joseph’s research interests are in the area of technology nonadoption, e-government, and human factors in information systems. Kalu N. Kalu is an associate professor of political science at Auburn University Montgomery. He is also a research fellow at The Macmillan Center for International and Area Studies, Yale University. His research emphasis is in the areas of institutional development and democratic theory, citizenship and administrative theory, civil-military relations, IT-leadership interface, national security and intelligence policy, e-government, and health care politics and policies. Dr. Kalu has published articles in the Journal of Political and Military Sociology, Administrative Theory & Praxis, Public Administration Review, Administration & Society, International Review of Administrative Sciences, Social Science Computer Review, the Yale Political Quarterly, and others. His book Rentier Politics: State Power, Autarchy, and Political Conquest in Post-War Nigeria is forthcoming. Atreyi Kankanhalli is an assistant professor in the Department of Information Systems, School of Computing, National University of Singapore (NUS). She received a PhD in information systems from NUS. Dr. 
Kankanhalli’s research interests include knowledge management, e-government, virtual teams, and information systems security. Her work has been published in journals such as MIS Quarterly, Journal of the American Society for Information Science & Technology, Journal of Management Information Systems, International Journal of Human Computer Studies, Communications of the ACM, Decision Support Systems, and International Journal of Information Management, and in conferences including ICIS, HICSS, and WITS. Dimitris K. Kardaras is an assistant professor in information systems management in the Department of Business Administration, Athens University of Economics and Business (AUEB), Greece. He holds a BS (Hons) in informatics and a BS (Hons) in management, both from the Athens University of Economics and Business, and an MS in information systems engineering and a PhD in information systems from the Department of Computation at the University of Manchester Institute of Science and Technology (UMIST), England. Dr. Kardaras has participated in many research projects in IS and IT since 1990 and has been teaching IS courses for over 12 years. He has published journal and conference papers in the areas of IS planning, fuzzy cognitive maps, IS modeling, and e-commerce. David P. Kitlan is an instructor in information sciences and technology at The Pennsylvania State University – Harrisburg. He is a PhD candidate in the public administration program at The Pennsylvania State University – Harrisburg. Kitlan holds an MS in information systems, an MBA, and an MEng from The Pennsylvania State University, Harrisburg. Mr. Kitlan completed his undergraduate degree in mechanical
engineering, also at The Pennsylvania State University. He has over 20 years of corporate and industry experience in engineering, manufacturing, management, marketing, consulting, six-sigma methods, training, and project management. Kitlan’s research interests include management, work teams, distance learning, e-government, data mining, information security, and the use of information systems in private, public, and nonprofit organizations. Hsiang-Jui Kung is an assistant professor of information systems at Georgia Southern University. He received a PhD in management information systems from Rensselaer Polytechnic Institute in 1997. He joined Georgia Southern University full time in 2001. His research interests include systems analysis and design, databases, e-business, IS education, and software evolution. Giorgos Laskaridis holds a degree in informatics from the Department of Informatics, University of Athens, Greece, and is now a PhD candidate in the same department. As a research fellow, he has participated in several European and national R&D projects. His research interests are in the fields of electronic services, e-government, software engineering, and system analysis and design. He has published journal and conference papers in the area of e-services and e-government. He is currently advisor to the secretary general, general secretariat for information systems of the Hellenic Ministry of Economy and Finance. Vincent Lasnik is currently an independent knowledge architect and instructional design consultant in Oregon. His professional experience includes 18 years of post-PhD work in cross-disciplinary technical and creative communications; instructional design and training; hybrid, blended, distance, and/or online learning; information design; interactive learning and multimedia production; research and development; and knowledge generation and dissemination. 
He holds a BA in history and psychology (Case Western Reserve University), an MA in humanities education and a PhD in instructional design and technology (both from the Ohio State University), and an MS in human factors in information design (Bentley College, McCallum Graduate School of Business). Richard A. Leipold received a BA in English literature from Washington and Jefferson College, and an MA and PhD in mathematics from the University of Pittsburgh. After working as a computer engineer for Westinghouse Electric Corporation, he joined the faculty of Waynesburg College. He is a professor of computer science and chair of the Department of Mathematics and Computer Science. He is a member of ACM, MAA, and Phi Beta Kappa. In addition to teaching, he directs a project to standardize training modules for digital forensics. Gloria Liddell, attorney-at-law (presently inactive), is an assistant professor of business law at Mississippi State University. Dr. Liddell obtained a Juris Doctor degree from Howard University in Washington, DC. She was admitted to the bars of the District of Columbia and the states of Maryland, Florida, and Mississippi. Her experience includes practicing in the Division of Market Regulation at the headquarters of the Securities and Exchange Commission and at the Federal Reserve Board. She has authored several publications on bankruptcy law and received numerous honors and awards from civic, nonprofit, and academic organizations.
Pearson Liddell, Jr., attorney-at-law, is an associate professor of business law at Mississippi State University. He obtained a Juris Doctor degree from Howard University, Washington, DC. He was admitted to the bars of Washington, DC, Maryland, Florida, and Mississippi, as well as admitted to practice before the federal courts, including the United States Supreme Court. Dr. Liddell is a co-author of the textbook The Legal Environment of Business. He has published several articles on such diverse business law topics as taxation, bankruptcy, the Digital Millennium Copyright Act, and legal concepts of Internet marketing. Chad Lin is a research fellow at Curtin University of Technology. Dr. Lin has conducted extensive research in the areas of IS and IT investment evaluation and benefits realization, IS and IT outsourcing, electronic commerce, e-health, virtual teams, and strategic alliances. Dr. Lin has published more than 80 refereed journal articles (e.g., in Information and Management, International Journal of Electronic Commerce, Information Technology and People, Industrial Management and Data Systems, and Journal of Research and Practice in IT) and conference papers, as well as book chapters. Dr. Lin has also served as a member of the editorial review boards of several prestigious international journals. Luis Felipe Luna-Reyes is a professor of business at the Universidad de las Américas in México. He holds a PhD in information science from the University at Albany. Luna-Reyes is also a member of the Mexican National Research System. His research focuses on electronic government and on modeling collaboration processes in the development of information technologies across functional and organizational boundaries. Xenia J. Mamakou is a PhD candidate in the Department of Business Administration, Athens University of Economics and Business, and her field of study is the development of methodologies for Web site evaluation. 
She received an MS in business information technology from UMIST in 2003. She has published articles on privacy and is doing research on e-commerce, online privacy, management information systems, and related topics. She works at the Business Informatics Laboratory of AUEB, teaching lab courses on business informatics, Web design, and the design and development of management information systems. Konstantinos Markellos is an electrical and computer engineer and researcher in the Department of Computer Engineering and Informatics, University of Patras. He obtained a diploma from the Department of Electrical and Computer Engineering (1999) and an MS in hardware and software integrated systems (2003) from the Department of Computer Engineering and Informatics. Today, he is a PhD candidate in the latter department. His research interests lie in the area of Internet technologies, especially e-commerce and e-government, and he has published several research papers in national and international journals and conferences; he is co-author of one book and three book chapters. Penelope Markellou is a computer engineer and researcher in the Department of Computer Engineering and Informatics, University of Patras. She obtained a PhD in techniques and systems for knowledge management in the Web (2005) and an MS in usability models for e-commerce systems and applications (2000) from the same university. Her research interests focus on algorithms, techniques, and approaches for the design and development of usable e-applications, including e-commerce, e-learning,
e-government, and business intelligence. She has published several research papers in national and international journals and conferences and is co-author of two books and eight book chapters. Paloma Martínez Fernández earned a degree in computer science from Universidad Politécnica of Madrid in 1992. Since 1992, she has been working at the Advanced Databases Group in the Department of Computer Science, Universidad Carlos III of Madrid. In 1998 she obtained a PhD in computer science from Universidad Politécnica of Madrid. She is currently teaching database design and advanced databases in the Department of Computer Science, Universidad Carlos III de Madrid. She has worked on several European and national research projects on natural language processing, information retrieval, advanced database technologies, and software engineering. Dennis McLeod is currently a professor of computer science at the University of Southern California (USC) and director of the Semantic Information Representation Laboratory at USC. He received PhD, MS, and BS degrees in computer science and electrical engineering from MIT. Dr. McLeod has published widely in the areas of data and knowledge base systems, federated databases, database models and design, and ontologies. His current research focuses on dynamic ontologies, user-customized information access, database semantic heterogeneity resolution and interoperation, personalized information management environments via cooperative immersipresence, information management environments for geoscience and homeland security information, crisis management decision support systems, and privacy and trust in information systems. Kostas Metaxiotis is a lecturer at the National Technical University of Athens and senior advisor to the secretary for the information society in the Greek Ministry of Economy and Finance. 
He has wide experience in knowledge management, expert systems design and development, artificial intelligence, object-oriented knowledge modeling, inference mechanisms, and e-business. He has published more than 60 scientific papers in journals such as Journal of Knowledge Management, Journal of Information and Knowledge Management, Knowledge Management & Practice, Journal of Intelligent Manufacturing, Applied Artificial Intelligence, Industrial Management & Data Systems, and Journal of Computer Information Systems. He is a member of the editorial boards of several leading journals in the field and of the program committees of international conferences. Since 1996 he has participated in various EC-funded projects within the Tacis, Phare, MEDA, and IST programmes as a senior ICT consultant and manager.

Mario Mezzanzanica is currently an associate professor with the Faculty of Statistical Science, University of Milano Bicocca. His research and professional interests focus on information technology management. He has been chairman of the scientific committee of CRISP (Interuniversity Research Center on Public Utility Services for the Individual) since July 2005, and a member of the scientific committee of CEFRIEL at the ICT Center of Excellence for Research, Innovation, Education and Industrial Labs since March 2006. His scientific research focuses on information systems, on which he has carried out several research activities published in scientific journals, books, and conference proceedings, both international and national. He was awarded a degree in electronic engineering from the Politecnico di Milano (1985).
Michael Middleton, PhD, is a senior lecturer in the School of Information Systems at Queensland University of Technology. His interests include information management, information use analysis and evaluation, and digital libraries. His publications include the books Information Management (published by Charles Sturt University Centre for Information Studies) and Integrative Document and Content Management, with Len Asprey (published by Idea Group). Middleton also has extensive experience as a consultant, with investigations in information services areas including multicultural affairs, parliamentary services, library portals, and controlled vocabularies.

Melissa Moore (PhD, University of Connecticut) is an associate professor of marketing at Mississippi State University. Dr. Moore's research interests concentrate on understanding the development and maintenance of customer-firm relationships. Her research has appeared in the Journal of Internet Law, Journal of Business Research, Transportation Journal, Journal of Consumer Psychology, Marketing Management Journal, and the American Journal of Agricultural Economics, and has been presented at both domestic and international conference venues.

Robert S. Moore (PhD, University of Connecticut) is an associate professor of marketing at Mississippi State University. He has presented at numerous conferences and published his research in various outlets including Journal of Internet Law, Journal of Advertising, Transportation Journal, Journal of Public Policy and Marketing, Journal of Services Marketing, Journal of End User Computing, Advances in Consumer Research, Marketing Management Journal, Journal for the Advancement of Marketing Education, Albany Law Journal of Science and Technology, and Seton Hall Legislative Review. His research interests center on consumer behavior in e-commerce settings.
Lourdes Moreno López received an MS in mathematics from Universidad Complutense of Madrid in 1999. She has worked in the R&D departments of several IT companies on infometrics (information measurement), especially in Web channels. Since 2002, she has been working at the Advanced Databases Group in the Department of Computer Science, Universidad Carlos III of Madrid, where she teaches database design. She is currently developing her doctoral thesis, and her research interests include design and accessibility in Web applications.

Krysnaia Nanini was awarded a degree in economics, management, and industrial engineering from the Politecnico di Milano, Milan, Italy, in 2005. She has published several papers in proceedings of international conferences.

Dale K. Nesbary serves as vice president and dean for academic affairs at Adrian College, where he also holds an appointment as professor of political science. His past positions include membership on the Oakland University faculty and administrative/research positions with the City of Boston, the National Conference of State Legislatures, and the Michigan Senate. His research interests include public-sector technology, state tax policy, and policing. He has published in a wide range of outlets, including The Journal of Public Affairs Education, The Journal of Contemporary Criminology, Social Science Computer Review, The British Journal of Educational Technology, The International Journal of MS Care, and State Legislatures.
Bruce Neubauer teaches in the public administration program in Government & International Affairs, University of South Florida. His areas include information systems, e-government, e-democracy, and the modeling and simulation of complex systems. His present research projects relate to the local provision of emergency medical services in the instance of a bird flu pandemic or similar event, and to knowledge management among members of emergency response teams.

Donald F. Norris is the director of the Maryland Institute for Policy Analysis and Research and professor of public policy at the University of Maryland, Baltimore County. He is a specialist in urban politics, public management, and the application, management, and impacts of information technology, including e-government, in public organizations. Dr. Norris' works have been published in a number of scholarly journals. He holds a BS in history from the University of Memphis and both an MA and PhD in government from the University of Virginia.

Carlos Nunes Silva, PhD, is a professor auxiliar in the Department of Geography, University of Lisbon, Portugal. His research concerns local government, urban planning, e-planning, urban and metropolitan governance, and planning ethics.

Craig Orgeron currently serves as the enterprise architect for the Mississippi Department of Information Technology Services, where he has participated in numerous government information technology task forces and committees, such as the Digital Signature Committee, the Electronic Government Task Force, and the Governor's Commission on Digital Government, which led to the implementation of the Enterprise Electronic Government in Mississippi. He holds a BBA degree in MIS and a master's degree in public policy and administration from Mississippi State University. He is a certified public manager and a graduate of the Senator John C.
Stennis State Executive Development Institute.

Angeliki Panayiotaki obtained a diploma in computer engineering and informatics from the University of Patras (1996) and an MS in advanced information systems from the University of Athens (2000). She is currently working as a researcher (PhD student) at the Computer Engineering and Informatics Department of the University of Patras and also at the General Secretariat for Information Systems of the Hellenic Ministry of Economy and Finance. Her research interests focus on personalization, Web mining, and interoperability techniques applied in the e-commerce, e-government, and e-health domains. She has published several research papers in international and national conferences.

Haralambos Papageorgiou is a professor of statistics in the Department of Mathematics, University of Athens, and is currently chairman of the department. He has also served as a professor at the University of Kansas, and as a visiting professor at the Universities of Lille in France and Vienna in Austria and at the City University of New York. His main research interests over the last 8 years are in the area of official statistics, in particular statistical metadata, statistical harmonization, quality issues, and data mining. He has authored more than 50 research articles published in international statistical journals, and for more than 12 years he has participated in European research projects.

Eleutherios A. Papathanassiou is a professor in MIS and director of the Business Informatics Lab at the Department of Business Administration, Athens University of Economics and Business (AUEB),
Greece. He holds a BS (Hons) in mathematics from the University of Athens and a PhD in computer science from the University of St. Andrews, Scotland. He is scientific advisor to Greek companies and head of the center for distance learning (TeleEducation Centre) of the AUEB. He has published in the areas of IS modeling, e-commerce, and computer science.

Parviz Partow-Navid has been at CSU, LA, since 1983. He earned his MBA and PhD from the University of Texas at Austin. He is currently director of graduate programs for the College of Business. He has published in journals such as Computers and Operations Research, Journal of Systems Management, Journal of Information Technology Management, and Software Engineering. Dr. Partow-Navid's interests are in decision support systems, intelligent systems, e-commerce, cybersecurity, and distance learning.

John Paynter was formerly a lecturer at the University of Auckland, where he supervised over 30 postgraduate students, including Alex Dunayev. Paynter has also worked in government elections and for the national census, assignments that allowed him to pursue his interest in democracy services and particularly in e-government. He is currently working on contracts to document and improve the processes around local government elections.

Gabrielle Peko is a lecturer in operations management within the Department of Information Systems and Operations Management, University of Auckland. She teaches process modeling, project management, and enterprise systems. Her research focuses on supply chain management, in this instance on the processes around government systems involving supply chains and how they may be mediated electronically.

Emery M. Petchauer is an instructor of education at Lincoln University, Pennsylvania, a historically black university. His research and writing surround hip-hop as an emerging worldview in formal and informal educational settings and its implications for teaching and learning.
He is a former high school teacher who has contributed to local communities of hip-hop in San Diego, California, and Norfolk, Virginia.

Iolanda Principe is a director with the South Australian Department of Health. She has extensive experience in management and continues to undertake research. Her research interests include social capital and the digital divide, social justice, information and communication technologies, and primary health care.

Imad Rahal is an assistant professor of computer science at the College of Saint Benedict & Saint John's University, Minnesota. He earned PhD and MS degrees in computer science from North Dakota State University (2005 and 2003, respectively). He graduated summa cum laude with a bachelor's degree from the Lebanese American University, Beirut. He was awarded a doctoral dissertation assistantship from the National Science Foundation. His research interests are largely in the broad areas of data mining, machine learning, databases, artificial intelligence, and bioinformatics. He is an active member of the ACM and ISCA professional societies.
Christopher G. Reddick is an assistant professor of public administration at the University of Texas at San Antonio. Dr. Reddick regularly teaches courses in information technology, public administration, and public-sector financial management. His research interests are in the tools and techniques that public managers can use to make their organizations efficient and effective. Some of his publications can be found in various public administration and information technology journals such as Government Information Quarterly, Public Budgeting & Finance, and the Review of Public Personnel Administration.

Jean-Philippe Rennard is a senior professor at Grenoble Graduate School of Business and head of the Department of Management and Technology. He received his PhD from the University Pierre Mendès France of Grenoble. An economist, he specializes in industrial development; a computer scientist, he is deeply involved in biologically inspired computing. He now mainly works on the applications of biologically inspired computing to economics and management.

Anne Rouse is an associate professor of IT and business strategy at the Deakin Business School, Deakin University, Melbourne, Australia. She has been researching outsourcing since 1997, and her doctoral thesis on outsourcing risks and benefits won the 2003 ACPHIS medal for best Australasian PhD thesis in information systems. Dr. Rouse conducted a longitudinal study of the Australian government's "whole of government" IT outsourcing initiative, which had to be abandoned after it became clear that it was not meeting government expectations.

Alfred P. Rovai is a professor of education at Regent University in Virginia and teaches research design, statistics, program evaluation, and assessment of student learning in a mostly online doctor-of-education program. Previously, he taught similar courses at Old Dominion University and instructional technology courses at UCLA in an online teaching program.
He has written extensively on distance education and multicultural topics and recently coedited a book titled Closing the African American Achievement Gap in Higher Education.

Jeffrey Roy is an associate professor in the School of Public Administration at Dalhousie University in Halifax, Nova Scotia, Canada. He specializes in models of democratic and multistakeholder governance and electronic government reforms. He is also a member of the Organization for Economic Cooperation and Development's E-Government Network, associate editor of the International Journal of E-Government Research, featured columnist in CIO Government Review (a Canadian publication devoted to the nexus between technology and government), and author of the 2006 book E-Government in Canada: Transformation for the Digital Age (University of Ottawa Press).

Giovanni Maria Sacco is currently an associate professor of information systems and human-computer interaction with the Department of Informatics, University of Torino, Italy. He is the author of two U.S. patents (one is an international IBM patent) and has held positions in industry for over 25 years as the chief designer and engineer for information retrieval and content management systems. He has published several scientific papers in major international journals. In addition to dynamic taxonomies, his work on security with Dorothy Denning is one of the bases of the MIT Kerberos security architecture, and his work at IBM on buffer management and the invention of recursive hash join are fundamental results for high-performance database systems.
Rodrigo Sandoval-Almazán is an assistant professor in the Department of Business and Economics, Instituto Tecnológico de Estudios Superiores de Monterrey (ITESM) in Toluca City and a research fellow at the Center for Development of Information Technologies and Electronics of the ITESM. He has lectured on topics such as information systems for business, information systems strategy for business, political marketing strategy, electronic commerce development, political sciences foundations, the digital divide in emergent countries, policy analysis, organization theory, database applications, statistics, Web development, quantitative analysis and modeling, research methods, public administration theory, and local government management. His research interests include electronic government, information technologies and organizations, digital divide technology, online political marketing, new public management, and multimethod research approaches.

Yuan Sherng Tay received a BEng from the National University of Singapore, where he is currently an MEng student.

Giuseppe Sindoni is currently a national seconded expert at Eurostat, the Statistical Office of the European Commission, where he leads the Statistical Data and Metadata eXchange project. He has been a technologist at the Italian National Statistics Institute and assistant professor at Roma Tre University. He received a PhD in computer system engineering from La Sapienza University of Rome in 1999. He has published numerous research papers, contributed a chapter to the book Multidimensional Databases and two entries to the Encyclopaedia of Data Warehousing and Data Mining, and coauthored a book on problem solving in economics with office automation tools.

Ludwig Slusky is a professor of information systems at California State University, Los Angeles. He has published in Software Engineering, Information and Software Technology, Data Management, Idea Group Publishing, and others.
Dr. Slusky's latest research interests are in information security, with emphasis on trustworthy health care networks, and in capability maturity model integration (CMMI), with emphasis on software engineering. He is a certified information systems security professional (CISSP) and has completed certified training at the Carnegie Mellon Software Engineering Institute in the intermediate concepts of CMMI. His other professional interests are in database administration, systems development, e-commerce, and HIPAA.

Anastasia S. Spiliopoulou is a lawyer, LLM, and member of the Athens Bar Association. In recent years, she has participated extensively in matters related to telecommunications and broadcasting policy in Greece and abroad within the wider framework of the information society. She has been involved in legal, research, and business activities as a specialist in e-commerce and e-business, electronic signatures, e-contracts and e-procurement, e-security, and other modern information society applications. She has published more than 70 scientific and business papers in the international literature (books, magazines, conferences, and workshops), with specific emphasis on regulatory, business, commercial, and social aspects. She has mainly focused her activities on recent aspects of European regulatory policies and their implications for the competitive development of the converged telecommunications market. She currently works as an OTE (Hellenic Telecommunications Organization S.A.) lawyer for the Department of Regulatory Issues of the General Directorate for Regulatory Affairs.
Larry Stillman, PhD, is a senior research fellow at the Centre for Community Networking Research, Monash University, Australia. His main interest is working with community-based organisations to better use technology for social and community change, as well as theoretical aspects of human-computer relationships. He is particularly interested in how we know what is valued by communities and people (usually government) who support community initiatives—they are not always the same thing. Other areas of interest include gender, culture, and their relationships with technology use. He is very active in international community informatics research networks.

Randy Stoecker is an associate professor in the Department of Rural Sociology, University of Wisconsin, with a joint appointment at the University of Wisconsin – Extension Center for Community and Economic Development. He moderates and edits COMM-ORG: The On-Line Conference on Community Organizing and Development (http://comm-org.wisc.edu), conducts trainings, and speaks frequently on community organizing and development, participatory research and evaluation, and community information technology. Stoecker has written extensively on community organizing and development and community-based research, including the books Defending Community (Temple University Press, 1994) and Research Methods for Community Change (Sage Publications, 2005), and he co-authored the book Community-Based Research in Higher Education (Jossey-Bass, 2003).

Leonardo Tininini is currently a researcher at the CNR Institute of Systems Analysis and Computer Science and a lecturer at the Campus Bio-medico University in Rome. He received a PhD in computer science engineering from La Sapienza University of Rome in 2000. He has published numerous scientific papers on statistical databases, aggregate data, query languages, and spatiotemporal databases, and has been a referee for prestigious international conferences and journals.
He also worked at the Italian National Institute of Statistics (ISTAT), mainly on the design and implementation of statistical dissemination systems on the Web.

Athanasios Tsakalidis obtained a diploma in mathematics from the University of Thessaloniki, Greece (1973), and a diploma in computer science (1981) and PhD (1983) from the University of Saarland, Saarbrücken, Germany. He is currently a full professor in the Department of Computer Engineering and Informatics, University of Patras, and R&D coordinator of the Research Academic Computer Technology Institute (RACTI). His research interests include data structures, graph algorithms, computational geometry, expert systems, medical informatics, databases, multimedia, information retrieval, and bioinformatics. He has published several research papers in national and international journals and conferences and is co-author of the Handbook of Theoretical Computer Science and other book chapters.

Hui-Lien Tung is an assistant professor of management information systems in the Division of Business at Paine College. Her teaching and research interests include database management, system analysis and design, and Web-based learning systems.

Theodoros Tzouramanis received a 5-year BEng (1996) in electrical and computer engineering and a PhD (2002) in informatics from the Aristotle University of Thessaloniki. Currently, he is a lecturer in the Department of Information and Communication Systems Engineering, University of the Aegean. His research interests include access methods and query processing for databases, database security and privacy, and geographical information systems.
Maria Vardaki holds a BS in mathematics, an MS in management, and a PhD in public administration. Her current research interests are in the area of official statistics and data mining, and she is the author of 20 research papers published in international journals, encyclopedias, and conferences in research areas such as statistical metadata, modeling, quality issues, and harmonization of time series data. Since 1996 she has participated in various European and national research projects.

Baoying Wang received a PhD in computer science from North Dakota State University and a master's degree from Minnesota State University, St. Cloud. She is currently an assistant professor at Waynesburg College, Pennsylvania. Her research interests include data mining, data warehousing, bioinformatics, and high-performance computing. She serves as a reviewer and committee member for several international conferences. She is a member of ACM, ISCA, and SIGMOD.

John Wang is a full professor at Montclair State University. Having received a scholarship award, he came to the U.S. and completed his PhD in operations research at Temple University in 1990. He has published over 100 refereed papers and four books. He has also developed several computer software programs based on his research findings. He is the editor of the Encyclopedia of Data Warehousing and Mining (1st and 2nd editions) and editor in chief of both the International Journal of Information Systems and Supply Chain Management and the International Journal of Information Systems in the Service Sector.

Chee Wei Phang is currently a doctoral candidate in the Department of Information Systems, School of Computing, National University of Singapore (NUS). His research interests include e-government, e-participation, IT-induced organizational change, and IT innovation adoption.
His work has been published or is forthcoming in IEEE Transactions on Engineering Management, Communications of the ACM, and top-tier information systems conferences, such as the International Conference on Information Systems (ICIS), the International Federation for Information Processing (IFIP) WG8.2 Conference, the European Conference on Information Systems (ECIS), the Hawaii International Conference on System Sciences (HICSS), and the Americas Conference on Information Systems (AMCIS).

Derek Werthmuller has managed the Technology Solutions Laboratory and the Technology Services Unit at the Center for Technology in Government (CTG) for over 10 years. Prior to joining CTG, Werthmuller spent 6 years at Siena College as a computer and network specialist. His accomplishments at Siena include establishing an Internet presence for the college, expanding the campus network, and designing a multilayer computer and network security system. Werthmuller is the co-author, along with Jim Costello, of the white paper XML: A New Web Site Architecture (September 2002), which detailed CTG's migration to an XML-based Web site.

Anne Wiggins recently completed a PhD in the Department of Information Systems, London School of Economics and Political Science. She also holds an undergraduate degree from the University of Sydney and a master's degree from Birkbeck, University of London. As a consultant in the fields of ICTs and the Internet, she has worked with public and commercial organizations in Australia, the U.S., and the UK. Her research focuses on EU and UK SME (small- to medium-sized enterprise) e-business adoption and implementation.
David C. Wyld is the Robert Maurin professor of management at Southeastern Louisiana University, where he directs the College of Business' Strategic e-Commerce/e-Government Initiative. He is a noted RFID speaker, consultant, and writer, and a frequent contributor to both academic and industry publications. He is a contributing editor to both RFID News and Global Identification and the author of the recent research report RFID: The Right Frequency for Government, the most downloaded report in the history of the IBM Center for the Business of Government. In 2006, he was named a Rising Star in Government Information Technology by Federal Computer Week.

Feng Xu received a BS in electronic engineering from Lanzhou University. She is currently a PhD candidate in the Department of Electronic Engineering at Tsinghua University. In 2005, she worked as an intern at Microsoft Research Asia for more than 3 months. Her research interests include content-based image retrieval, automatic image annotation, and machine learning.

Yu-Jin Zhang (PhD, State University of Liège, Belgium) has been professor of image engineering at Tsinghua University, Beijing, China, since 1997. He has published 300 research papers and a dozen books, including two monographs, Image Segmentation and Content-Based Visual Information Retrieval (Science Press), and two edited collections, Advances in Image and Video Segmentation and Semantic-Based Visual Information Retrieval (IRM Press). He is vice president of the China Society of Image and Graphics and director of the academic committee of the society. He is deputy editor-in-chief of the Journal of Image and Graphics and on the editorial board of several scientific journals.

Dan Zhu is an associate professor at Iowa State University. She obtained a PhD from Carnegie Mellon University. Dr.
Zhu’s research has been published in the Proceedings of National Academy of Sciences, Information System Research, Naval Research Logistics, Annals of Statistics, Annals of Operations Research, and so forth. Her current research focuses on developing and applying intelligent and learning technologies to business and management.
Index

A
accountability, bias 724
accountability, definition of 723
accountability, in e-budgeting 724
adaptive structuration theory (AST) 365
administration to business (A2B) 578
administration to citizen (A2C) 578
agent freeze 445
agent integrity 453
agent preactivation 445
agent receptionist 444
agent transport 445
agent transport, bootstrap 447
agent transport, supervised 444
Australia 711–721
authentication, e- 417
B
Beyond2020 584
blogosphere 86
blogs 16, 81–87
blogs, as therapy 83
blogs, definition 81
blogs, executives and 83
blogs, history of 82
blogs, public officials and 84
blogs, public sector and 84–86
blogs, statistics 83
blogs, U.S. military and 85
Bluetooth technology 62, 69
Britain's National Mapping Agency 833–853
BSD license 49
business-to-business (B2B) 115
business-to-consumer (B2C) 115
C
causal-loop diagram 479
certificate authority (CA) 414, 455
change management 922
chief technology officer (CTO) 522
citizen participation, Internet-based, challenges 34–35
citizen participation, Internet-based 40
citizen participation, Internet-based, issues 32–35
citizen participation, traditional 31–32
civic skills 80
clustering, hierarchical 571
clustering, hybrid 573
clustering, in text mining 595
clustering, partitioning 570
commentariat 93
commercial off the shelf (COTS) 525
communality 80
community, structure of 52–53
community-based research (CBR) 54, 59
community development 59
community informatics 50–60, 59
community informatics, definition 51
community informatics, ethical issues of 53
community informatics, sustainability 54
community informatics, technology concepts of 53
community organization 59
community portal 103
computerized tomography (CT) 363
confidentiality 414
connectivity 80
consumer-to-consumer (C2C) 116
consumers-to-business (C2B) 115
content-based image retrieval (CBIR) 615
culture 500
customer relationship management (CRM) 494
customer relations management 558
cybercrime 689
D
data-cube technology 630
database management system (DBMS) 776
database management system (DBMS), heterogeneity 779
data clustering, categorization 568
data longevity 49
data mining, and public administration 557
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
DaWinciMD 585
decision support system (DSS) 627
deliberation 80
Department of Administrative Services (DOAS) 522
developing nations, adoption of IT 905–924
Diffie-Hellman (DH) protocol 456
digital democracy 854–872
digital democracy, in public administration 855
digital divide 7, 34, 51–52, 60, 62, 80
digital government 790–804
digital signature 414
digital signature, and budgetary transactions 725
digital signature signer 417
dissemination process 497
E
e-authentication 417
e-business 116
e-business, adoption in UK 107–108
e-business, benefits 105
e-business, SME adoption of 106
e-commerce 39, 116
e-democracy 39
e-distribution 116
e-governance 96, 103
e-government 22, 39, 69, 103, 116
e-government, and KM 508–519
e-government, evolutionary approach 30
e-government, services of 96
e-government, SME support 95–97
e-government, stages 23–27
e-government evolution, integration 521
e-government evolution, interaction 521
e-government evolution, publishing 521
e-government evolution, transaction 521
e-government to business (e-G2B) 94
e-management 39
e-marketplace 116
e-Mexico 873–888
e-Mexico initiative 495
e-participation, best practice initiatives 74–80
e-participation, ICT feature effects 73–74
e-participation, individual participation factors 72–73
e-participation, initiative implications 74
e-participation, systems and tools 72
e-participation initiatives 80
e-participation research 70–75
e-procurement 116
e-services 39
early adopters 11 electronic budgeting (e-budgeting) 722–734 electronic business (e-business), and organisational change 834 electronic commerce (e-commerce) 2 electronic data interchange (EDI) 116, 532 electronic government (e-government) 1–2, 11, 30 electronic government (e-government), definition of 750 electronic government (e-government), developing a generic framework 748–774 electronic government (e-government), evolutionary approaches 23 electronic government (e-government), evolutionary approaches, limitations of 23–27 electronic government (e-government), growth stages 3 electronic government (e-government), in Mexico 873–888 electronic health record (EHR) 673 Electronic Health Record Solution 673 electronic health (e-health) 672–688 enterprise application integration (EAI) 523 enterprise resource planning (ERP) 560 evolutionary models 23 executive information systems (EIS) 627 extensible markup language (XML) 49, 368
F federal PKI (FPKI) 413 First Amendment 13, 19–21 Firstgov.gov 13 flaming 93 forum 21 forum, nonpublic 21 forum, relevant 21 fraud detection 560 full-time equivalent (FTE) 523
G geographical information system [GIS] 368 geospatial Web-based information system 775 GNU general public license 49 governance structure 45–46, 49 governance structure, hierarchy 49 governance structure, hybrid 49 government-to-business (G2B) initiatives 2 government-to-citizen (G2C) initiatives 2 government-to-employee (G2E) initiatives 2 government-to-government (G2G) initiatives 2
government innovation 493 Greek municipality 513 group decision support system (GDSS) 366
H heterogeneity 777 human resource management system (HRMS) 559 hyperlinks, embedded 12–13, 15–16, 18, 21
I ICTs, effective use of 60 identification number (ID) 455 IEEE 69 IEEE 802.11 62 information and communication technologies (ICTs) 1, 6, 22, 103, 117 information retrieval (IR) 593 information systems (IS) 117 information technologies (IT) 361 information technology cost 7 innovation 11, 106 innovation diffusion theory 6 integration stage 30 integrity, agent 453 intelligent agent 441 inter-agency agreement (IAA) 526 Internet 11, 69 Internet, and its impact on political activism 889–904 Internet deliberative features 40 Internet use 896 interoperability 49 intra-business 117 intranet 11 intrusion detection, and public domain 464 intrusion detection system (IDS) 463 intrusion detection system (IDS), and data mining 463–473
K knowledge 494 knowledge discovery 516 knowledge elicitation 497 knowledge elite 890 knowledge identification 495 knowledge management, and e-government 508–519 knowledge management, and leadership 500 knowledge management process 495 knowledge query and manipulation language (KQML) 441 knowledge transfer 497 knowledge utilization 498
L land use, in Wisconsin 779 late adopters (laggards) 11 leadership, gatekeeper 642 leadership, networked 641 leadership, organic 642 leadership role 641 lesser GPL 49 licensing components 44–45
M make-or-buy decision 547 memory, organizational 516 micro business 104 Mississippi, IT personnel 737 MIT license 49 Mozilla Firefox 819 multidimensional navigation 581
N National Archives and Records Administration (NARA) 418 navigation, in government Web site 805–817 network 49 new technology, resistance to 6 nonpublic forum 13, 14
O online analytical processing (OLAP) 581, 627 online analytical processing (OLAP), and data-cube technology 630 online development community 49 online repositories 41 Ontario, e-health 672–688 open communities 41 openness 30 OpenOffice.org 818–832 open source library 49 open source license 49 open source software 41, 42–43, 49 open source software (OSS) 818–832 open source software, benefits of 42–43 open source software, challenges of 43 open standards 41, 42–43, 49 open standards, benefits of 42–43
open standards, challenges of 43 organizational culture 500 organizational leadership 638 Organization for Economic Cooperation and Development (OECD) 790 outsourcing, in IT 664 outsourcing, in public-sector agencies 662–671
P paper reduction 4 partnering, private-sector 682 partnering, public-private 675 permalink 93 personal digital assistant (PDA) 592 podcasting 93 political efficacy 80 political participation 30 positive branding effect 13 privacy 7 privacy, government Web site 703 public-key infrastructure (PKI) 413, 453 public-key infrastructure (PKI), federal 413 public administration 1, 11 public administration, advantages of e-government 4–5 public administration, challenges of e-government 5–7 public administration, definition 3 public administration, e-government initiatives 3 public forum, designated 13, 14, 21 public forum, traditional 13, 14 public information technology, and human-factors design (UCD) 650–661 public managers 12–13, 19 public wireless internet 61–69
Q quality 604 quality assurance framework, in public administration 606 quality function deployment (QFD) 607
R radio frequency identification (RFID) 425–440 radio frequency identification (RFID), readers 428 really simple syndication (RSS) 81, 93 reasonableness test 14, 21 record keeping 418 reference mode 477 regional economic marketplace 104
registration authority (RA) 414 resource description framework (RDF) 499 risk, shared 676 risk assessment 548 roaming, and security 442 roaming permit, request 444
S SAFE protocol 442 SAFER 453 security 7 security, and certificate authority 418 security, elements of a program 690 security, government Web site 703 security, lack of 453 service-oriented architecture (SOA) 531 servicecenter (SC) 526 shared-service center (SSC) 544 shared-service decisions 547 shared services, risks of 549 sidebar 93 small- and medium-sized enterprises (SMEs) 117 small and medium-sized enterprises (SME) 104 small and medium-sized enterprises (SMEs) 94–95 smart label 429 SMEs 105–110 SMEs, UK policies of 108–110 social capital 53, 60, 80 source credibility 21 South Africa 749 South Korea, online policy forums 854–872 state government 735 state portal 30 State Technology Authority (STA) 522 statistical databases (SDBs) 579 statistical dissemination 581 StatLine 584 stock-and-flow diagram 479 strict scrutiny test 14, 21 structuration theory 362 structuration theory in IT 363 sustainability 60
T terrorist attacks 561 text mining 592–603 thread 93 trackback 93 traditional public forum 21 transaction efficiency 5
transaction stage 30 transparent governance 5 trust 6 trust, dimensions of 6
U universal mobile telecommunications systems (UMTS) 69 user-centered design (UCD) 650 user help, and research questions 808 user help, in government Web site 805–817 user interface (UI) 525
V very small, small/medium-sized enterprise (vSME) 104 very small SMEs (vSMEs) 96 virtual value chain three-stage development process 97
W Web-enabled governance 40 Web-enabled governance, trends 32–34 Web application, custom 525 Web application, EAI 525 Web application, static 523 Web application classification 522 Web browser 520 Web browser, Mozilla Firefox 819 Web services, government 533 Web site, assessment of municipal 791 Web site, government 805 Web site, usability in public sector 806 Web site evaluation 701 Web sites, as government property 15–16 Web sites, as nonpublic forums 16 Web sites, government 13 WiFi 69 WiFi standard 62 wiki 93 WiMax 69 WiMax standard 62 wireless Internet 69 wireless Internet, and e-government 63–64 wireless Internet, in Michigan 64–66 wireless networks 62 Wisconsin Land Information System (WLIS) 776 World Wide Web 69
X XML, for Web site management 49 XML tool kit 42, 43
Handbook of Research on Public Information Technology Volume II G. David Garson North Carolina State University, USA Mehdi Khosrow-Pour Information Resources Management Association, USA
Editorial Advisory Board
Annie Becker, Florida Institute of Technology, USA
George Ditsa, University of Wollongong, Australia
Yair Levy, Nova Southeastern University, USA
Mahesh S. Raisinghani, TWU, USA
Edward Szewczak, Canisius College, USA
Table of Contents
Foreword ....................................................................................................................................... xxxv Preface .......................................................................................................................................... xxxvii Acknowledgment ............................................................................................................................xliii
Volume I Section I E-Government and E-Commerce Chapter I Key Issues in E-Government and Public Administration / Rhoda C. Joseph and David P. Kitlan ....................................................................................................................................1 Chapter II Government Web Sites as Public Forums / Pearson Liddell, Jr., Robert S. Moore, Melissa Moore, William D. Eshee, and Gloria J. Liddell ...................................................................12 Chapter III Limitations of Evolutionary Approaches to E-Government / Rodrigo Sandoval-Almazán and J. Ramon Gil-Garcia ..........................................................................................................................22 Chapter IV Issues and Trends in Internet-Based Citizen Participation / Stephen K. Aikins ..................................31 Chapter V Public Sector Participation in Open Communities / Andrea B. Baker, J. Ramon Gil-Garcia, Donna Canestraro, Jim Costello, and Derek Werthmuller .................................................................41
Chapter VI Community Informatics / Larry Stillman and Randy Stoecker ..........................................................50 Chapter VII Public Wireless Internet / Dale Nesbary ............................................................................................61 Chapter VIII The Current State and Future of E-Participation Research / Chee Wei Phang and Atreyi Kankanhalli ..............................................................................................................................70 Chapter IX Blogging / David C. Wyld ...................................................................................................................81 Chapter X E-Government and SMEs / Ron Craig ...............................................................................................94 Chapter XI EU E-Business and Innovation Policies for SMEs / Anne Wiggins ....................................................105 Chapter XII Exploitation of Public Sector Information in Europe / Ioannis P. Chochliouros, Anastasia S. Spiliopoulou, and Stergios P. Chochliouros ...................................................................118 Chapter XIII Information Technology Among U.S. Local Governments / Donald F. Norris..................................132 Chapter XIV Public Sector Human Resources Information Systems / Christopher G. Reddick .............................145 Chapter XV Digital Libraries / Micah Altman ........................................................................................................152 Chapter XVI An Exploratory Study of the E-Government Services in Greece / Dimitrios K. Kardaras and Eleutherios A. Papathanassiou ...........................................................................................................162 Chapter XVII e-Government’s Barriers and Opportunities in Greece / Giorgos Laskaridis, Konstantinos Markellos, Penelope Markellou, Angeliki Panayiotaki, and Athanasios Tsakalidis ..........................................................................................................................175 Chapter XVIII E-Lections in New Zealand Local Government / Alex Dunayev and John Paynter ..........................192
Chapter XIX E-Census 2006 in New Zealand / John Paynter and Gabrielle Peko .................................................201 Chapter XX Security Challenges in Distributed Web Based Transactions: An Overview on the Italian Employment Information System / Mirko Cesarini, Mariagrazia Fugini, Mario Mezzanzanica, and Krysnaia Nanini .......................................................................................209 Chapter XXI Interactive Personalized Catalogue for M-Commerce / Sheng-Uei Guan and Yuan Sherng Tay .......218 Chapter XXII Trust Based E-Commerce Decisions / Vesile Evrim and Dennis McLeod ..........................................229 Chapter XXIII Using Partial Least Squares in Digital Government Research / J. Ramon Gil-Garcia ......................239 Section II Privacy, Access, Ethics, and Theory Chapter XXIV Privacy Issues in Public Web Sites / Eleutherios A. Papathanassiou and Xenia J. Mamakou ..........256 Chapter XXV A Framework for Accessible and Usable Web Applications / Lourdes Moreno, Elena Castro, Dolores Cuadra, and Paloma Martinez ..............................................................................................265 Chapter XXVI Intelligent User-Centric Access to Public Information / Giovanni Maria Sacco ...............................274 Chapter XXVII Open Access to Scholarly Publications and Public Policies / Jean-Philippe Rennard .......................284 Chapter XXVIII The Digital Divide and Social Equity / Alfred P. Rovai and Emery M. Petchauer ............................294 Chapter XXIX Africa and the Challenges of Bridging the Digital Divide / Esharenana E. Adomi ...........................303 Chapter XXX Research Ethics in E-Public Administration / Carlos Nunes Silva .....................................................314
Chapter XXXI Medical Ethical and Policy Issues Arising from RIA / Jimmie L. Joseph and David P. Cook ...........323 Chapter XXXII Social Capital and the Gendering of Differential IT Use / Lia Bryant and Iolanda Principe ............333 Chapter XXXIII Technology Diffusion in Public Administration / Eugene J. Akers ....................................................339 Chapter XXXIV Institutional Theory and E-Government Research / Shahidul Hassan and J. Ramon Gil-Garcia .....349 Chapter XXXV Structuration Theory and Government IT / J. Ramon Gil-Garcia and Shahidul Hassan...................361 Section III Security and Protection Chapter XXXVI Intelligence and Security Informatics / Jimmie L. Joseph ..................................................................378 Chapter XXXVII Practical Measures for Securing Government Networks / Stephen K. Aikins ....................................386 Chapter XXXVIII Digital Convergence and Cybersecurity Policy / Anthony W. Buenger, Jr..........................................395 Chapter XXXIX Bioterrorism Response and IT Strategies / David A. Bray .................................................................406 Chapter XL Federal Public-Key Infrastructure / Ludwig Slusky and Parviz Partow-Navid ..................................413 Chapter XLI Radio Frequency Identification (RFID) Technology / David C. Wyld ................................................425 Chapter XLII Roaming-Agent Protection for E-Commerce / Sheng-Uei Guan .......................................................441 Chapter XLIII Integrity Protection of Mobile Agent Data / Sheng-Uei Guan ...........................................................453
Volume II Chapter XLIV The Role of Data Mining in Intrusion Detection Technology / Amalia Agathou and Theodoros Tzouramanis ......................................................................................................................463 Section IV System Design and Data Processing Chapter XLV System Dynamics to Understand Public Information Technology / Luis Felipe Luna-Reyes ............476 Chapter XLVI Government Innovation Through Knowledge Management / Luis Felipe Luna-Reyes.....................493 Chapter XLVII A Framework for Knowledge Management in E-Government / Kostas Metaxiotis ...........................508 Chapter XLVIII Web Application Classification: A Maintenance/Evolution Perspective / Hsiang-Jui Kung and Hui-Lien Tung ..............................................................................................................................520 Chapter XLIX Web Services and Service-Oriented Architectures / Bruce J. Neubauer ............................................531 Chapter L The Strategic Determinants of Shared Services / Anton Joha and Marijn Janssen ...........................544 Chapter LI Data Mining in Public Administration / John Wang, Xiaohua Hu, and Dan Zhu ..............................556 Chapter LII Categorization of Data Clustering Techniques / Baoying Wang, Imad Rahal, and Richard Leipold...................................................................................................................................568 Chapter LIII Statistical Dissemination Systems and the Web / Sindoni Giuseppe and Tininini Leonardo .............578 Chapter LIV Text Mining / Antonina Durfee ...........................................................................................................592
Chapter LV Statistical Data and Metadata Quality Assessment / Maria Vardaki and Haralambos Papageorgiou .................................................................................................................604 Chapter LVI Probability Association Approach in Automatic Image Annotation / Feng Xu and Yu-Jin Zhang .......................................................................................................................................615 Chapter LVII Online Analytical Processing and Data-Cube Technologies / Lixin Fu and Wenchen Hu..................627 Section V Project Management and IT Evaluation Chapter LVIII Managing People and Information in Complex Organizations / Kalu N. Kalu ..................................638 Chapter LIX Human-Factors Design for Public Information Technology / Vincent E. Lasnik ...............................650 Chapter LX An Overview of IT Outsourcing in Public Sector Agencies / Anne C. Rouse ....................................662 Chapter LXI E-Health, Local Governance, and Public-Private Partnering in Ontario / Jeffrey Roy .......................672 Chapter LXII Implementing a Sound Public Information Security Program / Stephen K. Aikins ............................689 Chapter LXIII Evaluation of E-Government Web Sites / Michael Middleton............................................................699 Chapter LXIV IT Evaluation Issues in Australian Public-Sector Organizations / Chad Lin ......................................711 Chapter LXV Performance and Accountability in E-Budgeting Projects / Gabriel Puron-Cid and J. Ramon Gil-Garcia ...........................................................................................................................722 Chapter LXVI A Model for Reengineering IT Job Classes in State Government / Craig P. Orgeron .......................735
Section VI Selected Readings Chapter LXVII Developing a Generic Framework for E-Government / Gerald Grant and Derek Chau ...................748 Chapter LXVIII A Web Query System for Heterogeneous Government Data / Nancy Wiegand, Isabel F. Cruz, Naijun Zhou, and William Sunna ........................................................................................................775 Chapter LXIX Digital Government Worldwide: An E-Government Assessment of Municipal Web Sites / James Melitski, Marc Holzer, Seang-Tae Kim, Chan-Gun Kim, and Seung-Yong Rho ..................................................................................................................................790 Chapter LXX User Help and Service Navigation Features in Government Web Sites / Genie N.L. Stowers ...........805 Chapter LXXI An Empirical Study on the Migration to OpenOffice.org in a Public Administration / B. Rossi, M. Scotto, A. Sillitti, and G. Succi .......................................................................................818 Chapter LXXII Organisational Challenges of Implementing E-Business in the Public Services: The Case of Britain’s National Mapping Agency / Francesca Andreescu ..............................................................833 Chapter LXXIII Public Administrators’ Acceptance of the Practice of Digital Democracy: A Model Explaining the Utilization of Online Policy Forums in South Korea / Chan-Gon Kim and Marc Holzer .................854 Chapter LXXIV E-Mexico: Collaborative Structures in Mexican Public Administration / Luis F. Luna-Reyes, J. Ramon Gil-Garcia, and Cinthia Betiny Cruz ..................................................................................873 Chapter LXXV The Impact of the Internet on Political Activism: Evidence from Europe / Pippa Norris .................889 Chapter LXXVI Adoption and Implementation of IT in Developing Nations: Experiences from Two Public Sector Enterprises in India / Monideepa Tarafdar and Sanjiv D. Vaidya ...............................905
Detailed Table of Contents
Foreword ....................................................................................................................................... xxxv Preface .......................................................................................................................................... xxxvii Acknowledgment ............................................................................................................................xliii
Volume I
Section I E-Government and E-Commerce
Chapter I Key Issues in E-Government and Public Administration / Rhoda C. Joseph and David P. Kitlan ...................................................................................................................................... 1 This chapter examines the impact of e-government on public administration from both the constituent and service perspectives. The chapter presents a holistic view of both challenges and advantages of implementing e-government in the area of public administration. Chapter II Government Web Sites as Public Forums / Pearson Liddell, Jr., Robert S. Moore, Melissa Moore, William D. Eshee, and Gloria J. Liddell ................................................................... 12 In countries around the globe, the public availability of information through technologies, such as the Internet, has increased the average citizen’s ability to access documents, resources and solutions with unprecedented ease. As a result, governments must adapt their systems and Internet-based or electronic communication to offer the most relevant services to their citizenry. In this chapter, we employ a legal perspective to examine the ramifications of public information strategies that allow firms to have hyperlinks embedded within the content of public information systems. This perspective allows the public information manager to make informed decisions when developing government portal strategies.
Chapter III Limitations of Evolutionary Approaches to E-Government / Rodrigo Sandoval-Almazán and J. Ramon Gil-Garcia .......................................................................................................................... 22 This chapter examines the advancement of e-government, primarily through the use of information and communication technologies (ICT). State and local governments are using ICTs in the creation of Web sites and portals, which provide information about the government agencies and, in some cases, electronic transactions such as tax payment systems, online communities, job search, licensing, and vehicle registration, among others. It is through these innovations that government systems are reaching a higher level of sophistication. Chapter IV Issues and Trends in Internet-Based Citizen Participation / Stephen K. Aikins .................................. 31 This chapter reviews the opportunities and challenges of Internet-based citizen participation, the trend noted in the findings of some of the empirical studies and attempts to explain the reason the Internet has failed in its putative potential to bring citizens closer to their governments. The use of Internet technology to further citizen participation is believed to hold great promise to enhance democratic governance by allowing citizens to access public information and interact with government officials. Chapter V Public Sector Participation in Open Communities / Andrea B. Baker, J. Ramon Gil-Garcia, Donna Canestraro, Jim Costello, and Derek Werthmuller ................................................................. 41 This chapter examines the advantages and challenges associated with open source software, particularly for public sector organizations. As new Internet-based products, services and resources are developed, private companies and government agencies are exploring the use of open standards and open source software for their daily operations. 
Advantages discussed include interoperability, re-usability of code, and data longevity. Challenges discussed include technical training and support services, as well as participation in online development communities and how this is constrained by the current legal framework and personnel practices. Chapter VI Community Informatics / Larry Stillman and Randy Stoecker .......................................................... 50 Researchers and practitioners use a wide range of terms when they discuss community involvement with information and communications technologies (ICT). Common (English-language) terms include ‘community networks,’ ‘community computing,’ ‘community information networks,’ ‘civic networking,’ ‘community technology,’ ‘community computer networks,’ ‘online neighborhood network,’ ‘virtual community,’ ‘online community,’ ‘community e-business,’ and most recently, ‘community informatics.’ Since at least the late 1990s, the term ‘community informatics’ has come into use amongst many academic researchers as an overarching label for the academic study of projects and initiatives which deliberately engage community groups and organizations with ICTs. However, community informatics has not yet achieved a stable set of findings or core questions which are commonly used to conduct research.
Chapter VII Public Wireless Internet / Dale Nesbary ............................................................................................ 61 There exists a growing controversy over whether government should be in the business of providing wireless broadband Internet. Public sector entities, particularly counties and cities, are developing the physical and intellectual infrastructure designed to provide wireless broadband Internet to their residents. Opponents of government entry into the wireless broadband market argue that existing private broadband vendors are fully capable of providing wireless Internet in an efficient manner. Supporters argue that government is uniquely capable of building and supporting, at least initially, wireless broadband at a lower cost and in a more pluralistic and efficient manner than private vendors have done thus far. Chapter VIII The Current State and Future of E-Participation Research / Chee Wei Phang and Atreyi Kankanhalli .............................................................................................................................. 70 In this chapter, the authors use the term “e-participation” initiatives to refer to government’s use of ICT to engage citizens in democratic processes. The term “e-participation” is chosen because it is sufficiently general to encompass all such efforts by governments. Instances of e-participation initiatives can be found globally, such as Denmark’s Nordpol.dk, the United States’ Regulations.gov, and Singapore’s Government Consultation Portal. The past decade has witnessed an increasing trend of information and communication technologies (ICT) exploitation by governments around the world to enhance citizen participation. This is reflected in the emergence of a plethora of terms associated with the phenomenon, such as e-consultation or online consultation, online rule-making, online deliberation, online public engagement, and e-participation. Chapter IX Blogging / David C. 
Wyld ................................................................................................................... 81 This chapter focuses on applications in blogging. A blog can be simply defined in the following manner: “A blog is an easy-to-use content management tool. When you ‘blog,’ you are instantly adding new content to your site via a Web interface. No technical or programming skills are necessary” (Weil, 2004, n.p.). In a nutshell, a blog is a “do-it-yourself” Web site. Gone are the days (of say 2003) when one would have to be knowledgeable in html or xml programming or make use of complex, and often expensive, Web creation software to create or update a Web site. With a blog, your Web site can be constantly added to and updated, without having to do anything more than typing (or cutting and pasting) into a text box. Through posting links, you can link your blog to any other site on the Web. You can even add audio and visual material to your blog site by uploading them, much as you would add an attachment to an e-mail. Others who find your site of interest can use RSS (really simple syndication) or sign-up for e-mail alerts to be notified when you post or add material to your blog. Chapter X E-Government and SMEs / Ron Craig ............................................................................................... 94 This chapter looks at a particular focus of e-government, that of support for business in general and SMEs (small and medium-sized enterprises) in particular. While this is only one segment of PIT (public
information technology), it is an important one. The chapter starts with an overview of the importance of SMEs to regional and national economies, showing why governments encourage their start-up, survival, and growth. Following this, an overview of e-G2B (e-government to business) initiatives around the world is provided, with particular attention directed to the SME perspective. Chapter XI EU E-Business and Innovation Policies for SMEs / Anne Wiggins .................................................. 105 This chapter explores the academic and government bodies of literature related to e-business and policy initiatives for EU SMEs (small and medium-sized enterprises). Definitions of SMEs are explained, the unique characteristics of SMEs and of entrepreneurs are outlined, and the case is made that there is a clear need for more comprehensive research on SMEs in the EU. Chapter XII Exploitation of Public Sector Information in Europe / Ioannis P. Chochliouros, Anastasia S. Spiliopoulou, and Stergios P. Chochliouros ..................................................................118 This chapter focuses on the challenges affecting public sector information (PSI) in the European markets. The gradual "penetration" of an innovative, digitally oriented information society, in the scope of the actual convergence among telecommunications, broadcasting, and information technology, creates primary opportunities for the access and exploitation of PSI in the context of a fully competitive and liberalized European electronic communications market. Significant challenges now exist in improving mutual communication between the public sector and private companies, thus creating chances for exploiting new opportunities to the benefit of the broader European market(s). 
However, the nonexistence of an appropriate legal framework governing the conditions and/or terms for the commercial use of PSI constitutes a serious drawback for any serious attempt at evolution, and for the effective development of a European e-communications market. Chapter XIII Information Technology Among U.S. Local Governments / Donald F. Norris................................ 132 The purpose of this chapter is to provide an overview of the adoption, uses, and impacts of information technology (IT), including electronic government, among local governments in the United States. In the 1950s, these governments began to adopt IT for a variety of purposes and functions, and they continue to do so today. Since the mid-1970s, a small but prolific group of scholars has conducted a large body of research on various aspects of IT and local government. It is on that research and the author's own studies of this subject that the chapter is based (regarding e-government, see also Norris, 2006). Given the constraint of space, this chapter can only highlight aspects of this important topic. Readers who wish to delve more deeply into the subject of information technology and local government may wish to avail themselves of the works found in the bibliography, as well as of the related works referenced through them.
Chapter XIV Public Sector Human Resources Information Systems / Christopher G. Reddick ........................... 145 This chapter examines the impacts of human resources information systems (HRIS) on the operations, relationships, and transformations of local government organizations. An HRIS is any technology that is used to attract, hire, retain, and maintain talent, support workforce administration, and optimize workforce management. Examples include computers, the Internet (Web and e-mail), and other technological means of acquiring, storing, manipulating, analyzing, retrieving, and distributing pertinent information regarding human resources (HR). Chapter XV Digital Libraries / Micah Altman ...................................................................................................... 152 This chapter presents an overview of the history, advantages, disadvantages, and design principles relating to digital libraries, and highlights important controversies and trends. Digital libraries are collections of digital content and services selected by a curator for use by a particular user community. In the last decade digital libraries have rapidly become ubiquitous because they offer convenience, expanded access, and search capabilities not present in traditional libraries. This has greatly altered how library users find and access information, and has put pressure on traditional libraries to take on new roles. However, information professionals have raised compelling concerns about the sizeable gaps in the holdings of digital libraries, about the preservation of existing holdings, and about sustainable economic models. Chapter XVI An Exploratory Study of the E-Government Services in Greece / Dimitrios K. Kardaras and Eleutherios A. Papathanassiou ......................................................................................................... 
162 The goal of this chapter is to evaluate e-government services in Greece with a set of carefully chosen criteria, in a manner that can be used for evaluating e-government services worldwide. The impact of "e-business" on the public sector is the main source of the government's transformation towards "e-government," which refers to the public sector's efforts to use information and communication technologies to deliver government services and information to the public. E-government allows citizens to interact more directly with the government, transforming multiple operational and bureaucratic procedures and employing a customer-centric approach to service delivery; it allows intra-governmental communication; it also offers numerous possibilities for using the Internet and other Web-based technologies to extend online government services (Gant, 2002). Chapter XVII e-Government's Barriers and Opportunities in Greece / Giorgos Laskaridis, Konstantinos Markellos, Penelope Markellou, Angeliki Panayiotaki, and Athanasios Tsakalidis ........................................................................................................................ 175 This chapter presents the e-government efforts in Greece. Its aim is to point out the necessity of designing and implementing efficient e-government applications. The vision of an electronically modernized Greek public administration will be realized only if a series of key strategic aspects is considered, as well
as international best practices and experiences. Moreover, the chapter demonstrates the opportunities that arise and the key challenges. Chapter XVIII E-Lections in New Zealand Local Government / Alex Dunayev and John Paynter ........................ 192 Governments worldwide are investing in initiatives to open access to information, resources, communication, and services via channels typically used for electronic commerce. Government agencies are often among the first users of communication technology, which is commonly developed primarily for military use and later adopted by the general public. Since its inception, the Internet has gained widespread usage, prompting governments to provide online services to the public. The broad category for this type of information and service provision is called "e-government"; it is the general description of a way to provide better access to government information and services. This chapter presents New Zealand's e-government strategy, in which the Internet will be used to improve the quality of services and give citizens greater opportunities to participate in the democratic process. Chapter XIX E-Census 2006 in New Zealand / John Paynter and Gabrielle Peko ............................................... 201 This chapter explores the use of e-census technologies in New Zealand. In New Zealand, the census is held every five years. A snapshot is taken on the chosen day, and from that the number of people and housing units (houses, flats, apartments) is counted. Everyone in the country on that day is asked to fill in census forms. For the 2006 census, an option was introduced to complete the forms on the Internet. Other initiatives included sending text messages about this process, amongst other things, to the enumerators (collectors) whose job it is to collate the information in the field. 
The use of information technology, primarily via the Internet, offers the opportunity to distribute information and deliver services on a very large scale. Chapter XX Security Challenges in Distributed Web Based Transactions: An Overview on the Italian Employment Information System / Mirko Cesarini, Mariagrazia Fugini, Mario Mezzanzanica, and Krysnaia Nanini ..................................................................................... 209 This chapter examines the objectives and challenges of the Italian plan of e-government, within which the Italian job workfair is conceived. During the last few years, public administrations have modernized public service delivery. In particular, this modernization involves service digitalization and automation, thanks to the massive introduction of information and communication technologies into public offices. This paved the way for internal and external organizational and technological changes, in that a new approach is required to leverage the new technologies. Moreover, Internet technologies began to play an important role in public service delivery, and many transactions are Web-based nowadays. As a result, several governments in Europe, and others all over the world, started their own plans of e-government with the goal of increasing the amount and quality of the services offered via the Internet to their customers (citizens, enterprises, for-profit and non-profit organizations).
Chapter XXI Interactive Personalized Catalogue for M-Commerce / Sheng-Uei Guan and Yuan Sherng Tay ..... 218 M-commerce possesses two distinctive characteristics that distinguish it from traditional e-commerce: the mobile setting and the small form factor of mobile devices. Of these, the size of a mobile device will remain largely unchanged due to the tradeoff between size and portability. Small screen size and limited input capabilities pose a great challenge for developers to conceptualize user interfaces that have good usability while working within the size constraints of the device. In response to the limited screen size of mobile devices, there has been an unspoken consensus that certain tools must be made available to aid users in coping with the relatively large volume of information. Recommender systems have been proposed to narrow down choices before presenting them to the user (Feldman, 2000). The authors propose a product catalogue in which browsing is directed by an integrated recommender system. Chapter XXII Trust Based E-Commerce Decisions / Vesile Evrim and Dennis McLeod ........................................ 229 Over time, with the increase in human-computer interaction, trust has become one of the most challenging topics in computer science. Today, trust-based e-commerce decisions are becoming more valuable as Internet services are increasingly used in business-to-consumer e-commerce applications. E-commerce provides a new way of shopping for customers by offering more choices and transforming economic activity into a digital medium. It also provides an opportunity for businesses to extend their sales to a larger community. However, the success of achieving higher profits and improved services is based on better communication. As in the real world, a critical understanding of users' behavior in cyberspace cannot be achieved without an analysis of the factors affecting purchase decisions (Limayem, Khalifa, & Frini, 2000). 
Having many options in an environment that lacks face-to-face interaction forces users to make trust-aware decisions to better protect their privacy and satisfy their expectations, such as quality of service. Chapter XXIII Using Partial Least Squares in Digital Government Research / J. Ramon Gil-Garcia .................... 239 This chapter examines how to use partial least squares (PLS) and argues that this technique could help to incorporate more realistic assumptions and better measurements into digital government research. It does so through a commented example of a digital government research study (Gil-García, 2005b). It is important to clarify that the intention is not to suggest that every research project should use PLS, but to encourage scholars and practitioners to seriously consider this technique as an alternative when designing and carrying out their research. PLS is a structural equation modeling (SEM) technique similar to covariance-based SEM as implemented in LISREL, EQS, or AMOS. Therefore, PLS can simultaneously test the measurement model (relationships between indicators and their corresponding constructs) and the structural model (relationships between constructs).
Section II Privacy, Access, Ethics, and Theory Chapter XXIV Privacy Issues in Public Web Sites / Eleutherios A. Papathanassiou and Xenia J. Mamakou ........ 256 The advent of the Internet has altered the way that individuals find information and has changed how they engage with many organizations, such as government, health care, and commercial enterprises. The emergence of the World Wide Web has also resulted in a significant increase in the electronic collection and processing of individuals' information, which has led to consumer concerns about privacy issues. Many studies have reported customers' worries about the possible misuse of their personal data during their transactions on the Internet (Earp & Baumer, 2003; Furnell & Karweni, 1999), and investigations have measured individuals' concerns about organizational information privacy practices (Smith, Milberg & Burke, 1996). Information privacy, which "concerns an individual's control over the processing— i.e., the acquisition, disclosure, and use— of personal information" (Kang, 1998), has been reported as one of the most important "ethical issues of the information age" (Mason, 1986). Chapter XXV A Framework for Accessible and Usable Web Applications / Lourdes Moreno, Elena Castro, Dolores Cuadra, and Paloma Martinez ............................................................................................ 265 The growth of the Internet makes feasible its use by an increasing number of people around the world. This chapter examines several approaches introduced in order to create universal access for all types of users, independent of their capabilities. Nowadays, disabled people face several problems using the Web in the same way that non-disabled people do; yet the use of this technology is a right for everybody, all the more so in the public administration sphere, in which many services must be made available to users in a proper way. 
Universal access may be obtained through the integration of usability and accessibility concepts into the software engineering discipline. These design methodologies allow every user, regardless of whether or not they have disabilities, to participate in all phases of Web application development. Chapter XXVI Intelligent User-Centric Access to Public Information / Giovanni Maria Sacco ............................. 274 The quantity and diversity of information available from public government sources is now quite large. Governments, especially local ones, are using the Web to provide a number of services that are mainly informative and aim at improving the quality of life of citizens and at promoting the local community, for example job placement services, tourist information, and so on. Finally, government e-services available to citizens represent one of the most frequent and critical points of contact between public administrations and citizens. In addition to common services such as ID cards and permits, e-services represent the only practical way of providing incentives and support to specific classes of citizens. The key problem is that information must be findable (Morville, 2002). Easy and effective user-centric access to complex information is therefore one of the most critical functionalities of e-government. Since the goal is
end-user interactive access, a holistic approach, in which modeling, interface, and interaction issues are considered together, must be used and will be discussed in this chapter. Chapter XXVII Open Access to Scholarly Publications and Public Policies / Jean-Philippe Rennard ..................... 284 "If I have seen further it is by standing upon the shoulders of giants." The famous statement of Sir Isaac Newton demonstrates that the progress of science relies on the dissemination of discoveries and scientific knowledge. Even though scientific progress is not strictly cumulative (Kuhn, 1970), information sharing is at the heart of this progress. Nowadays, scientific knowledge is mainly spread through scholarly journals, that is, highly specialized journals in which quality control and certification are achieved through peer review. The first section of this chapter presents the specificity of the current economic model of scientific publications. The second section introduces the open access movement and its emerging economic model. The third section shows the growing involvement of governments in that movement. Chapter XXVIII The Digital Divide and Social Equity / Alfred P. Rovai and Emery M. Petchauer .......................... 294 As the Internet becomes increasingly central to living in today's society, it becomes important that certain groups are not systematically excluded. This chapter examines the digital divide with an emphasis on critical perspectives that recognize power, racism, and social stratification, and the challenges faced by public officials in promoting information technology policies and programs that support social equity. Chapter XXIX Africa and the Challenges of Bridging the Digital Divide / Esharenana E. Adomi ......................... 
303 In this chapter, efforts are made to define the digital divide, unravel the status of Africa in the global digital map, and enumerate the causes of the low level of ICT adoption there. The provision of communication services in developing regions (like Africa) is an essential aspect of enhancing and facilitating the rate of economic and social development (Yavwa & Kritzinger, 2001). There is thus the need for African countries to make concerted efforts to ensure that ICTs are provided adequately and consistently in order to close the divide and reap the benefits of economic and social development. Chapter XXX Research Ethics in E-Public Administration / Carlos Nunes Silva ................................................... 314 The purpose of this chapter is to discuss professional ethical issues in research activities conducted in e-public administration, most of which are common to the private and non-profit sectors. It offers an overview of key ethical issues in this field and identifies ethical challenges raised by the application of information and communication technologies (ICT) in public administration research activities. The evidence available shows that ICT poses new ethical challenges but does not radically change the nature of the ethical problems characteristic of paper-based and face-to-face public administration.
Chapter XXXI Medical Ethical and Policy Issues Arising from RIA / Jimmie L. Joseph and David P. Cook ......... 323 New technologies can lead to social upheaval and ethical dilemmas that are unrecognized at the time of their introduction. Medical care technology has advanced rapidly over the course of the past two decades and has frequently been accompanied by unforeseen consequences for individuals, the medical profession, and government budgets, with concomitant implications for society and public policy (Magner, 1992; Marti-Ibanez, 1962). Advances in information technology (IT) during the last decade and a half are now affecting the medical profession, and the delivery of medical advances, in ways that will shape public policy debates for the foreseeable future. The World Wide Web makes information that was once the exclusive domain of medical professionals available to average citizens, who are increasingly demanding medical treatments from the leading edge of medical technology. Chapter XXXII Social Capital and the Gendering of Differential IT Use / Lia Bryant and Iolanda Principe .......... 333 Public information technology, as a term, implicitly suggests universal access by citizens to information through the use of technology. The concepts of social capital and the digital divide intersect in access to public information technology. Social inclusion or exclusion occurs as a consequence of the ways in which societies are stratified according to race, gender, (dis)ability, ethnicity, and class. This chapter focuses especially on one aspect of stratification, gender, and theorizes the gendering of differential access to and use of information technologies. An understanding of gendered participation relevant to access to public information technology within the policy contexts of electronic government and social inclusion is important to inform public information technology policy, as well as service planning and delivery premised on the notion of universal access. 
Chapter XXXIII Technology Diffusion in Public Administration / Eugene J. Akers .................................................. 339 This chapter examines the diffusion of information technology in the public sector and assesses the appropriateness of diffusion theory in the combined context of information technology and public policy innovation. The ability to understand the salient aspects of innovations as perceived by the members of a social system is essential to the success of planned change. Chapter XXXIV Institutional Theory and E-Government Research / Shahidul Hassan and J. Ramon Gil-Garcia ... 349 This chapter provides a brief overview of institutional theory in various disciplinary traditions, with an emphasis on institutional theory in sociology. The authors identify various patterns in the use of institutional theory in information systems and e-government research. They also discuss future trends in e-government based on institutional theory. Additionally, based on their analysis of the current state of the art, the authors suggest some research directions for using institutional theory in future e-government research.
Chapter XXXV Structuration Theory and Government IT / J. Ramon Gil-Garcia and Shahidul Hassan................. 361 This chapter presents several examples of how structuration theory has been applied to study IT in both public and private sector organizations. The authors highlight the usefulness of this perspective for understanding incremental and radical change in organizational and inter-organizational settings. The chapter highlights the characteristics of the ensemble view of IT in organizations and provides a brief overview of structuration theory. Also presented are four influential models that apply structuration theory to information systems research. Additionally, the chapter argues that previous models have mainly explained incremental change within organizational settings and that an important future trend for public information technology research should be to understand radical change and inter-organizational relationships. Section III Security and Protection Chapter XXXVI Intelligence and Security Informatics / Jimmie L. Joseph ................................................................ 378 Intelligence and security informatics (ISI) is the application of information systems (IS), databases, and data coding schemes to issues of intelligence gathering, security, and law enforcement. This chapter examines the differences between ISI and other disciplines of informatics. ISI differs from other disciplines because of the critical role played by the general public in data gathering and information dissemination. Three major differences exist between ISI and other forms of informatics, and these differences make ISI unique in terms of data collection and dissemination. The differences are: (1) data source reliability, (2) the need to determine which data are relevant, and (3) the need to disseminate findings to the general public without knowing in advance the appropriate individuals or institutions needing to be informed. 
Chapter XXXVII Practical Measures for Securing Government Networks / Stephen K. Aikins .................................. 386 Governments have an obligation to manage their information security risks by securing mission-critical internal resources, such as financial records and sensitive taxpayer information, on their networks. Consequently, public sector information security officers are faced with the challenge of containing damage from compromised systems, preventing internally and Internet-launched attacks, providing systems for logging and intrusion detection, and building frameworks for administrators to securely manage government networks (Oxenhandler, 2003). This chapter discusses some of the cost-effective measures needed to address information security vulnerabilities and related threats.
Chapter XXXVIII Digital Convergence and Cybersecurity Policy / Anthony W. Buenger, Jr........................................ 395 The purpose of this chapter is to explain how digital convergence is affecting the public sector and the need for a cybersecurity policy that includes the active involvement of both the public and private sectors. Digital convergence constitutes the full realization of the information age and provides the foundation to link cultural, personal, business, governmental, and economic affairs into a rapidly expanding global digital world called cyberspace. However, this linking of people around the globe is challenging the government to actively work with private industry to ensure that its critical infrastructures and associated information are adequately protected. Chapter XXXIX Bioterrorism Response and IT Strategies / David A. Bray ............................................................... 406 This chapter examines how public health information technology (IT) can aid public health preparedness in terms of bioterrorism preparedness and the associated emergency response. Most analyses of possible future bioterrorism events predict they may be similar to the anthrax events of 2001: a limited population of individuals may experience morbidity or mortality, but the concern, panic, and worry stirred up by the threat will catch the attention of the entire nation. If public health IT is to help with bioterrorism preparedness, it needs not only to address the mitigation of civilian illnesses and deaths, but also to help manage individual and societal fears springing from the real or threatened occurrence of such an event. Chapter XL Federal Public-Key Infrastructure / Ludwig Slusky and Parviz Partow-Navid ................................ 413 All branches of the federal government are required to migrate their business practices to a paperless operation. 
Privacy and information security (InfoSec) are critical for the protection of information shared over networks internally between U.S. Government agencies and externally with non-federal organizations (businesses; state, local, and foreign governments; academia; etc.) or individuals. This chapter will examine public key infrastructure (PKI), which is the simplest, most widely used architecture for secure data exchange over insecure networks. It integrates computer hardware and software, cryptography, information and network security, and policies and procedures to facilitate trust in distributed electronic transactions and mitigate the associated risks. Chapter XLI Radio Frequency Identification (RFID) Technology / David C. Wyld .............................................. 425 We are in the midst of what may become one of the true technological transformations of our time. RFID (radio frequency identification) is by no means a "new" technology. This chapter examines several dimensions of RFID, which is fundamentally based on the study of electromagnetic waves and radio, pioneered in the nineteenth-century work of Faraday, Maxwell, and Marconi. The idea of using radio frequencies to reflect waves from objects dates back as far as 1886, to experiments conducted by Hertz. Radar was invented in 1922, and its practical applications date back to World War II, when the British
used the IFF (identify friend or foe) system to distinguish friendly from enemy aircraft (Landt, 2001). Stockman (1948) laid out the basic concepts for RFID. However, it would take decades of development before RFID technology would become a reality. Since 2000, significant improvements in functionality, decreases in both size and cost, and agreements on communication standards have combined to make RFID technology viable for commercial and governmental purposes. Today, RFID is positioned as an alternative to the ubiquitous bar code for identifying objects. Chapter XLII Roaming-Agent Protection for E-Commerce / Sheng-Uei Guan ..................................................... 441 There has been a great deal of research in the area of intelligent agents. Unfortunately, there is no standardization across the various proposals, resulting in vastly different agent systems. Efforts are being made to standardize some aspects of agent systems so that different systems can interoperate with each other. This chapter will examine some of the leading standards in agent representation, including KQML and agent TCL, their security vulnerabilities, and the application of safety protocols. The chapter will examine these standards in the context of e-commerce and m-commerce and efforts to protect transactions through SAFE (secure roaming agent for e-commerce), and offer a look at interoperability in roaming systems. Chapter XLIII Integrity Protection of Mobile Agent Data / Sheng-Uei Guan ......................................................... 453 This chapter discusses security and integrity issues facing agent technology. Various frameworks are discussed, including SAFER (secure agent fabrication, evolution and roaming), a mobile agent framework that is specially designed for the purpose of electronic commerce (Guan & Hua, 2003; Guan et al., 2004; Zhu et al., 2000). 
By building strong and efficient security mechanisms, SAFER aims to provide a trustworthy framework for mobile agents to assist users in conducting mobile or electronic commerce transactions. Agent integrity is another area crucial to the success of agent technology (Wang et al., 2002). Despite the various attempts in the literature, there is no satisfactory solution to the problem of data integrity so far. Some of the common weaknesses of current schemes are vulnerability to a revisit attack, in which an agent visits two or more collaborating malicious hosts during one roaming session, and illegal modification (deletion/insertion) of agent data. The agent monitoring protocol (AMP) (Chionh et al., 2001), an earlier proposal under SAFER to address agent data integrity, is examined; it does address some of the weaknesses in the current literature.
Volume II Chapter XLIV The Role of Data Mining in Intrusion Detection Technology / Amalia Agathou and Theodoros Tzouramanis .................................................................................................................... 463 This chapter examines several important contributions and improvements that data mining has introduced to the field of IDS (intrusion detection system) technology. Over the past few years, the Internet has
changed computing as we know it. The more possibilities and opportunities develop, the more systems are subject to attack by intruders. Thus, the big question is how to recognize and handle subversion attempts. One answer is to prevent subversion itself by building a completely secure system. However, the complete prevention of security breaches does not yet appear to be achievable. Therefore, intrusion attempts need to be detected as soon as possible (preferably in real time) so that appropriate action can be taken to repair the damage. This is what an IDS does. IDSs monitor and analyze the events occurring in a computer system in order to detect signs of security problems. However, intrusion detection technology has not yet reached perfection. Section IV System Design and Data Processing Chapter XLV System Dynamics to Understand Public Information Technology / Luis Felipe Luna-Reyes .......... 476 This chapter presents system dynamics as a method for gaining a better understanding of mismatches between technologies and organizations in public information technology. The method has already been used successfully in the planning and evaluation of both public and private IT applications (Madachy & Tarbet, 2000; Abdel-Hamid & Madnick, 1991; Wolstenholme, 2003; Wolstenholme, Henderson, & Gavine, 1993). The method makes it possible to understand the interactions among technologies and organizations as a continuous process of organizational change (March, 1981), in which it is possible to find brief periods of rapid change. However, even those periods of rapid change are conceptualized as the result of endogenous and continuous local adaptations (Hutchins, 1991), in which technology enables, rather than causes, change (Orlikowski, 2000). The chapter also presents the basic principles and tools of system dynamics and continues with an example of their application in the analysis of an IT project in the public sector. 
The chapter ends with a brief description of future trends in modeling and simulation, as well as a brief conclusion. Chapter XLVI Government Innovation Through Knowledge Management / Luis Felipe Luna-Reyes..................... 493 The purpose of this chapter is to discuss the process involved in managing knowledge, considering critical factors in the process. The chapter is organized in four different but conceptually interrelated sections. In the first, the author describes some of the main concepts of knowledge and knowledge management. The second section describes the knowledge management process as stated in the original question, and the third briefly discusses the impact of the four critical factors identified by Arthur Andersen and Company on the main stages of the process, as proposed in the initial question. The last sections of the chapter describe future trends and present conclusions. Chapter XLVII A Framework for Knowledge Management in E-Government / Kostas Metaxiotis ......................... 508 While most prior research has investigated possible applications of KM in the public sector, none has focused on the application of KM in e-government; that is the aim of this chapter. In
this chapter, the author, recognizing the importance of e-government and of KM to the public administration sector, continues his previous research on the application of KM in e-government (Metaxiotis & Psarras, 2005), discusses key issues, and presents a framework for the application of KM in e-government as a basis for future research. Chapter XLVIII Web Application Classification: A Maintenance/Evolution Perspective / Hsiang-Jui Kung and Hui-Lien Tung ............................................................................................................................ 520 This chapter examines the three layers of Web applications: conceptual, presentation, and navigation; and their two perspectives: designer and viewer. Software evolution is “the dynamic behavior of programming systems as they are maintained and enhanced over their life times” (Belady & Lehman, 1976). Web application evolution is of increasing importance as more Web systems are in production. Many companies use the Web to communicate with the external world as well as within their organizations and to carry out their business processes more effectively. Web technologies have also been adopted by organizations in the public sector, and many state agencies provide their services via the Web. This study investigates the management of e-government applications at a U.S. state technology agency (STA). Chapter XLIX Web Services and Service-Oriented Architectures / Bruce J. Neubauer .......................................... 531 A review of the development of information systems can help in understanding the potential significance of Web services and service-oriented architecture (SOA) in the public sector. SOA involves the convergent design of information systems and organizational workflows at the level of services.
The purpose of this chapter is to suggest a strategy for mapping the design of service-oriented architectures onto the complex patterns of governance, including combinations of federalism, regionalism, and the outsourcing of functions from government agencies to nonprofit organizations. This involves the modeling of workflows and the identification of opportunities for the sharing of services among agencies and nonprofits. Chapter L The Strategic Determinants of Shared Services / Anton Joha and Marijn Janssen ......................... 544 The goal of the research presented in this chapter is to analyze the strategic determinants influencing decision making on using and implementing shared services. The structure of this chapter is as follows. In the following section we discuss the historical and theoretical background of shared services. In the section thereafter we provide an overview of the strategic determinants influencing the shared-services decision. Next, future trends and future research directions are presented, and finally, in section six, conclusions are drawn. Chapter LI Data Mining in Public Administration / John Wang, Xiaohua Hu, and Dan Zhu ............................ 556 This chapter examines the application of data mining within public organizations. In general, data mining is a data analytical technique that assists businesses in learning and understanding their customers so that
decisions and strategies can be implemented most accurately and effectively to maximize profitability. Data mining is not general data analysis, but a comprehensive technique that requires analytical skills, information construction, and professional knowledge. Businesses now face global competition and are forced to deal with enormous amounts of data. These vast amounts of data, together with the increasing technological ability to store them, have also facilitated data mining. To gain a competitive advantage, businesses now commonly adopt data mining. Organizations use data mining as a tool to forecast customer behavior, reduce fraud and waste, and assist in medical research. Chapter LII Categorization of Data Clustering Techniques / Baoying Wang, Imad Rahal, and Richard Leipold................................................................................................................................. 568 This chapter examines data clustering, a discovery process that partitions a data set into groups (clusters) such that data points within the same group have high similarity while being very dissimilar to points in other groups (Han, 2001). The ultimate goal of data clustering is to discover “natural” groupings in a set of patterns, points, or objects, without prior knowledge of any class labels. In fact, in the machine learning literature, data clustering is typically regarded as a form of unsupervised learning, as opposed to supervised learning. In unsupervised learning or clustering, there is no training step as in supervised learning. There are many applications for data clustering including, but not limited to, pattern recognition, data analysis, data compression, image processing, understanding genomic data, and market-basket research. Chapter LIII Statistical Dissemination Systems and the Web / Sindoni Giuseppe and Tininini Leonardo ...........
578 This chapter reviews the main concepts underlying multidimensional (data warehouse) modeling and navigation. We also illustrate some peculiarities of statistical data that make the implementation of a statistical data warehouse (that is, a statistical dissemination system enabling the user to perform multidimensional navigation) a challenging issue in many respects. Finally, we analyze the main characteristics of some of the most important systems for the dissemination of statistical data on the Web, distinguishing two main approaches: the former based on free navigation of specific subcubes, the latter on constrained navigation of a single data cube. Chapter LIV Text Mining / Antonina Durfee ......................................................................................................... 592 Massive quantities of information continue to accumulate, at about 1.5 billion gigabytes per year, in numerous repositories held at news agencies, libraries, corporate intranets, PCs, and the Web. A large portion of all available information exists in the form of text. Researchers, analysts, editors, venture capitalists, lawyers, help desk specialists, and even students are faced with text analysis challenges. This chapter explores text mining tools, which aim to discover knowledge in textual databases by isolating key bits of information from large amounts of text and identifying relationships among documents. Text mining technology is used for plagiarism and authorship attribution, text summarization and retrieval,
and deception detection. Chapter LV Statistical Data and Metadata Quality Assessment / Maria Vardaki and Haralambos Papageorgiou ............................................................................................................... 604 This chapter aims to summarize some of the latest efforts in assessing the quality of statistical results in national public administrations and international organizations, in order to meet demands for comparable, high-quality, and reliable statistics for economic and policy-monitoring purposes. Topics covered include quality criteria proposed by national and international organizations; metadata requirements for quality reporting; and transformations that should be integrated into the workflow of public administrations’ information systems for automatic manipulation of both data and metadata, thus minimizing errors and assuring the quality of results. Chapter LVI Probability Association Approach in Automatic Image Annotation / Feng Xu and Yu-Jin Zhang ..................................................................................................................................... 615 Automatic image annotation is derived from manual annotation for content-based image retrieval (CBIR). Since the semantic gap degrades the results of image search, text descriptions are considered as a complement. Ideally, the text and the visual features cooperate to drive more effective search. The text labels, as high-level features, and the visual features, as low-level features, are complementary for image content description. Automatic image annotation has therefore become an important research issue in image retrieval. In this chapter, some approaches to automatic image annotation are reviewed, and one typical approach is described in detail. Keyword-based image retrieval is then introduced. The general applications of automatic image annotation are summarized and illustrated with figure examples.
Chapter LVII Online Analytical Processing and Data-Cube Technologies / Lixin Fu and Wenchen Hu................ 627 This chapter examines applications of online analytical processing and data-cube technologies in the public sector. Since the late 1980s and early 1990s, database technologies have evolved to a new level of application, online analytical processing (OLAP), in which executive management can make quick and effective strategic decisions based on knowledge derived from queries against large amounts of stored data. Some OLAP systems are also regarded as decision support systems or executive information systems. The traditional, well-established online transactional processing systems, such as relational database management systems, mainly deal with mission-critical daily transactions. Two cases are examined within this chapter: one related to data analysis for student retention, the other to the analysis of NSF grant awards. One may want to know the number of awards grouped by schools, by disciplines, by regions, by amounts, by dates, and so on, or grouped by any arbitrary combination of these dimensions.
Section V Project Management and IT Evaluation Chapter LVIII Managing People and Information in Complex Organizations / Kalu N. Kalu ................................ 638 Information technology affects organizations and society itself, as it redefines work content, reorganizes leadership styles and cultures, reshuffles power hierarchies, and spawns a series of both deliberately designed and spontaneous adaptations. Information technology often necessitates a new division of labor that creates policy problems and a loss of accountability. Organizational leadership, especially in the public sector, urgently requires a theoretical as well as a practical reevaluation to cope with the structural and functional changes within work and administrative organizations. This chapter seeks to elucidate three leadership models in the context of IT-induced changes in organizational forms and processes: the networked, organic, and gatekeeper leadership models. Chapter LIX Human-Factors Design for Public Information Technology / Vincent E. Lasnik ............................. 650 This chapter examines the realm of human-factors design for public information technology in the rapidly evolving post-modern “knowledge age” of the 21st century, with special focus on how new research and development into human cognition, perception, and performance capabilities is changing the design function for IT systems and products. Many “one size fits all” IT designs are neither adaptive nor adaptable, promulgating a top-down technological imperialism that penetrates every aspect of their use. The communication, collaboration, and interaction infrastructure of IT organizations thus remains acutely challenged by enduring problems of usability, learnability, accessibility, and adaptability.
As the function and form of products undergo increasingly rigorous scrutiny, one important design goal is emerging as a paramount priority: improving the usability of products, tools, and systems for all stakeholders across the enterprise. It is therefore important to briefly describe emerging human-factors design knowledge and practices applicable to organizations that invent, incubate, innovate, prototype, and drive the creation and application of public IT. The findings here suggest the most effective strategies for managing and augmenting user-centered design endeavors across a wide array of public IT products and organizations. Chapter LX An Overview of IT Outsourcing in Public Sector Agencies / Anne C. Rouse .................................. 662 This chapter examines the outsourcing of services by governments as a result of public sector reforms. Outsourcing has been argued to lead to cost savings, “improved discipline,” better services, access to scarce skills, and the capacity for managers to focus more time on the “core business” of their organizations (Domberger, 1998). Government outsourcing initiatives have encompassed a range of services, but given the large sums of money invested in IT assets, outsourcing of IT services (IT outsourcing, or ITO) has been a major initiative for many agencies. Case studies have reported ITO successes and failures (e.g., Currie & Willcocks, 1998; Rouse & Corbitt, 2003a; Willcocks & Kern, 1998; Willcocks & Lacity,
2001; Willcocks & Currie, 1997), but much of the evidence presented to public sector decision makers to justify this reform is anecdotal and unsystematic and, when investigated in depth, does not necessarily support widespread conclusions. Chapter LXI E-Health, Local Governance, and Public-Private Partnering in Ontario / Jeffrey Roy ..................... 672 The purpose of this chapter is to undertake a critical examination of the emergence of e-health in the Canadian province of Ontario. More than solely a technological challenge, the emergence and pursuit of e-health denote a complex governance transformation, both within the province’s public sector and in terms of public-private partnering. The Ontario challenge is complicated by the absence of formal regional mechanisms devoted to health care, a deficiency that has precipitated the creation of local health integration networks to foster e-health strategies on a sub-provincial basis, as well as by ongoing difficulties in managing public information technologies. With respect to public-private partnering, a greater regionalization of decision-making and spending authorities, within transparent and locally accountable governance forums, could provide incentives for the private sector to work more directly sub-provincially, enjoying greater degrees of freedom for collaboration via more manageable contracting arrangements. Chapter LXII Implementing a Sound Public Information Security Program / Stephen K. Aikins .......................... 689 This chapter sheds light on the policy guidelines and standards needed for safeguarding an agency’s information resources. The evolving nature of information security threats such as cybercrime, as well as the need to ensure the confidentiality and privacy of citizen information and to protect critical infrastructure, calls for effective information security management in the public sector.
E-government applications have made it easier for citizens to conduct business online with government agencies, although citizens’ trust in the ability of governments to keep that information private is low. Considering the amount of citizen information held by governments at all levels, and the steps needed to address potential homeland security and information technology (IT)-related threats to critical infrastructure, the need for effective means of safeguarding public agency data has become an issue of paramount importance. In addition, the need to ensure the integrity and availability of public information resources is crucial to many government operations. Chapter LXIII Evaluation of E-Government Web Sites / Michael Middleton.......................................................... 699 The intent of this chapter is to provide an overview of different approaches to Web site evaluation in order to suggest further application and development of evaluation instruments. In recent times, the popularity of the Internet has led to e-government practices being widely recognized as an important option for service to the general public. In response, various tiers of government, from national to local level, have sought opportunities to engage the public through Web sites. Many governments now provide some level of access to government through Web interfaces, for example, access to resources such as publications and government data. In some cases, services are provided that may be executed online; for example, users may provide personal information for licensing or make payments. There continues to be a diversity of implementation quality and levels for such services.
Chapter LXIV IT Evaluation Issues in Australian Public-Sector Organizations / Chad Lin .....................................711 The main objective of this chapter is to identify evaluation issues that are critical in the implementation of IT projects by public sector organizations. A key contribution of the chapter is to identify and examine evaluation issues and other key factors faced by public sector organizations undertaking IT projects. The key issues presented in this chapter are of interest to senior public sector executives concerned with making decisions about IT investments and realizing IT benefits. Chapter LXV Performance and Accountability in E-Budgeting Projects / Gabriel Puron-Cid and J. Ramon Gil-Garcia ......................................................................................................................... 722 Based on the analysis of three federal initiatives, this chapter argues that, because of how deeply ICT is embedded in government institutional and organizational environments, the tensions between performance and accountability are also reflected in the goals, features, and functionality of e-budgeting projects (see Terms and Definitions). Further, the prevalence of accountability for finance and fairness (accountability bias) already identified in the literature (Behn, 2001) is also reflected in the formal goals, general characteristics, and technical capabilities of e-budgeting systems. The cases thus support the general hypothesis that information technologies do not necessarily have the power to transform government radically, at least not in the case of e-budgeting initiatives. Chapter LXVI A Model for Reengineering IT Job Classes in State Government / Craig P. Orgeron .....................
735 The ubiquitous nature of information technology at all levels of government, and the core requirement to recruit and retain qualified technology professionals, call for an expansion in the body of research; such research can provide invaluable insight into success and failure in public sector information technology human resource practices. The intent of the research in this chapter is to use DeMers’ (2002) seven-pronged approach to critically examine Mississippi state government agencies, with the expected result of assessing the effectiveness and efficiency of the IT personnel classification system. This leading-edge and highly effective IT personnel classification system, designed specifically to improve IT recruitment and retention, was implemented by the state of Mississippi in partnership with the Hay Group, an internationally known human resource consultancy. Section VI Selected Readings Chapter LXVII Developing a Generic Framework for E-Government / Gerald Grant and Derek Chau ................. 748 Originally published in the Journal of Global Information Management, Vol. 13, No. 1, this article addresses the following key question: given the wide variety of visions, strategic agendas, and contexts
of applications, how may we assess, categorize, classify, compare, and discuss the e-government efforts of various government administrations? In answering this question, the authors propose a generic e-government framework that allows for the identification of e-government strategic agendas and key application initiatives that transcend country-specific requirements. In developing the framework, a number of requirements are first outlined. The framework is then proposed and described, and illustrated using brief case studies from three countries. Findings and limitations are discussed. Chapter LXVIII A Web Query System for Heterogeneous Government Data / Nancy Wiegand, Isabel F. Cruz, Naijun Zhou, and William Sunna ...................................................................................................... 775 Originally published in the International Journal of Electronic Government Research, Vol. 1, No. 2, this article describes a Web-based query system for semantically heterogeneous government-produced data. Geospatial Web-based information systems and portals are currently being developed by various levels of government along with the GIS community. Typically, these sites provide data discovery and download capabilities but do not include the ability to pose DBMS-type queries. The authors extend work in schema integration by focusing on resolving semantics at the value level in addition to the schema or attribute level. They illustrate their method using land use data, but the method can be used to query across other heterogeneous sets of values. Their work starts from an XML Web-based DBMS and adds functionality to accommodate heterogeneous data across jurisdictions. Their ontology and query-rewrite systems use mappings to enable querying across distributed heterogeneous data.
Chapter LXIX Digital Government Worldwide: An E-Government Assessment of Municipal Web Sites Throughout the World / James Melitski, Marc Holzer, Seang-Tae Kim, Chan-Gun Kim, and Seung-Yong Rho ................................................................................................................................ 790 Originally published in the International Journal of Electronic Government Research, Vol. 1, No. 1, this article evaluates the current practice of digital government in large municipalities worldwide. The study assesses 84 cities from around the world using a five-stage e-government framework. The authors’ research and methodology go beyond previous research by utilizing 92 measures that were translated into the native language of each city; the assessment of each municipal Web site was conducted by a native speaker of the municipality’s language between June and October of 2003. The authors review relevant e-government literature for evaluating Web sites in the United States and internationally, and discuss their sample selection, methodology, theoretical framework, findings, and recommendations. Their results indicate that Seoul, Hong Kong, Singapore, New York, and Shanghai are the top five large cities in providing digital government opportunities to citizens online. In addition, the authors’ research suggests a difference in digital government capabilities between the 30 developed nations belonging to the Organization for Economic Co-operation and Development (OECD) and lesser-developed (non-OECD) nations.
Chapter LXX User Help and Service Navigation Features in Government Web Sites / Genie N.L. Stowers ......... 805 Originally published in the International Journal of Electronic Government Research, Vol. 2, Issue 4, this article examines the user help and service navigation features in government Web sites and compares them across levels of government. These features are critical to ensuring that users unfamiliar with government are able to access e-government services and information successfully and easily. The research finds clear patterns in the use of similar help and navigation features across governments, leading to the conclusion that these features are diffusing in the public sector Web development field. The chapter concludes that Web developers should work to overcome a second digital divide, one reflecting a lack of knowledge of Web site organization and government structure, and that users need to be actively assisted by Web developers in finding information. Chapter LXXI An Empirical Study on the Migration to OpenOffice.org in a Public Administration / B. Rossi, M. Scotto, A. Sillitti, and G. Succi ..................................................................................... 818 Originally published in the International Journal of Information Technology and Web Engineering, Vol. 1, Issue 3, this article reports the results of a migration to open source software (OSS) in a public administration. The migration focused on the office automation field and, in particular, on the OpenOffice.org suite. The authors analyzed the transition to OSS using qualitative and quantitative data collected with the aid of different tools. All the data have been considered from the point of view of the different stakeholders involved: IT managers, IT technicians, and users. The results of the project have been largely satisfactory.
However, the results cannot be generalized, owing to some constraints, such as the environment considered and the parallel use of the old solution. Nevertheless, the authors believe that the data collected can be of valuable aid to managers wishing to evaluate a possible transition to OSS. Chapter LXXII Organisational Challenges of Implementing E-Business in the Public Services: The Case of Britain’s National Mapping Agency / Francesca Andreescu ............................................................ 833 Originally published in the International Journal of E-Business Research, Vol. 2, Issue 4, this article explores the processes of strategic and organizational transformation engendered by e-business implementation in a commercialized British public sector organization within the geographic information industry. Recognized as a leading participant in that industry, within which it is forging partnerships with key private sector companies, the organization has enthusiastically grasped e-business as an all-embracing phenomenon and implemented a new strategy that transformed the way it does business. The case analysis illustrates the challenges and constraints that the organization faces in implementing e-business strategies in practice.
Chapter LXXIII Public Administrators’ Acceptance of the Practice of Digital Democracy: A Model Explaining the Utilization of Online Policy Forums in South Korea / Chan-Gon Kim and Marc Holzer ............... 854 The Internet provides a new digital opportunity for realizing democracy in public administration, and this study raises a central question: what factors determine public officials’ acceptance of the practice of digital democracy on government Web sites? The authors focus on online policy forums among the many practices of digital democracy. To gauge public officials’ behavioral intentions to use online policy forums on government Web sites, they examined individual and organizational factors, as well as system characteristics. They administered a survey questionnaire to Korean public officials and analyzed a total of 895 responses. Path analysis indicates that three causal variables are important in predicting public officials’ intentions to use online policy forums: perceived usefulness, attitudes toward citizen participation, and information quality. In this article, originally published in the International Journal of Electronic Government Research, Vol. 2, Issue 2, the authors discuss the implications of this study for the practice and theory of digital democracy. Chapter LXXIV E-Mexico: Collaborative Structures in Mexican Public Administration / Luis F. Luna-Reyes, J. Ramon Gil-Garcia, and Cinthia Betiny Cruz ................................................................................ 873 After six years of challenges and learning in pushing forward the e-government agenda in Mexico, the presidential succession brought an opportunity for assessing current progress, recognizing the main unsolved problems, and planning the vision for the future of e-government in Mexico. This case, originally published in the International Journal of Cases on Electronic Commerce, Vol.
3, Issue 2, provides a rich description of the e-Mexico system, including its main objectives and goals, governance structures, IT infrastructure, collaboration processes, main results, and current challenges. Some background information about Mexico is also provided at the beginning of the case. Playing the role of a consultant working for the new Mexican CIO, the reader is asked to evaluate the current situation and help in the design of a work plan, including a proposal for organizing the ICT function, the main strategic objectives, and some specific lines of action for the next six years. Chapter LXXV The Impact of the Internet on Political Activism: Evidence from Europe / Pippa Norris ............... 889 The core issue for this study concerns less the social than the political consequences of the rise of knowledge societies, in particular the capacity of the Internet for strengthening democratic participation and civic engagement linking citizens and government. To consider these issues, this article, originally published in the International Journal of Electronic Government Research, Vol. 1, No. 1, is separated into four parts. Part I summarizes debates about the impact of the Internet on the public sphere. Part II summarizes the sources of survey data and the key measures of political activism used in this study, drawing upon the 19-nation European Social Survey, 2002. Part III examines the evidence for the relationship between use of the Internet and indicators of civic engagement. The conclusion in Part IV summarizes the results and considers the broader implications for governance and democracy.
Chapter LXXVI Adoption and Implementation of IT in Developing Nations: Experiences from Two Public Sector Enterprises in India / Monideepa Tarafdar and Sanjiv D. Vaidya ............................................................................................................................... 905 Originally published in the Journal of Cases on Information Technology, Vol. 7, No. 1, this case describes issues in IT adoption at two large public sector organizations in India. Along with illustrating the significance of top management drive and end-user buy-in, it particularly highlights the role of middle management in managing the IT adoption process at different levels in these large organizations.
Foreword
The breadth and speed of change in public sector information technology are breathtaking. Every day, new software and hardware technologies are developed. Each of these new technologies has significant consequences for society and governments. For example, as online databases grow in number, size, and accessibility over the Internet, the need to provide robust security becomes more important. Likewise, new ethical questions and dilemmas arise as a result of these developments, such as what balance should exist between rights of access and privacy. Different governments treat this balance in different ways. Thus, there is a close interconnection between technical and social developments in information technology. The Internet, especially with respect to governmental e-commerce, depends on the development of trust between citizens and governments. Without that trust, participation in governmental e-commerce would be low and its impact muted. Similarly, e-government technologies are aimed at providing governments with increased capacity for more effective service, but this increased effectiveness can be achieved only if organizations change their processes, and such changes depend upon the actions of public managers, not just the technology. Thus, there is a need to keep up with research concerning the technical, social, and managerial impacts of technologies. All governmental organizations have been affected by these new technologies, from small local governments to the largest nation-states. To what extent are the transformations due to technological change similar across organizations that differ in size and culture? These are questions that researchers are beginning to address. In short, the sheer size and speed of change in public information technology is so broad and complex that it is extremely difficult even for those who have specialized in this area to keep abreast of developments.
There is a need for works that bring together these diverse strands of research on digital government. This volume provides a service to researchers in the field. It covers a broad range of research including hardware, software, social, managerial, ethical, and political issues of public information technology. Its coverage is international in scope, including the United States, Europe, and emerging countries. The breadth of coverage ensures that the book contains material relevant to a wide variety of researchers. The diversity of research is striking. For example, it includes chapters on radio frequency identification technology, service-oriented architecture, the bridging of the digital divide in Africa, and blogging. To summarize, this book will provide readers with an excellent perspective concerning the state of research on digital government.

Bruce Rocheleau
Northern Illinois University, USA
Bruce Rocheleau is a professor of political science in the Department of Public Administration at Northern Illinois University in DeKalb, Illinois. In the area of information management, he is currently pursuing empirical research concerning the use of computers and related information technologies in the public sector. His research includes the study of information management, geographic information systems, impacts of networks, and organizational problems involved in sharing information and resources. In the area of policy analysis and evaluation, his research includes the study of welfare, mental health, and aging policies. His research has included studies of the implementation and impact of programs in these areas. He is the author of numerous publications about governmental information management. He recently published the book Public Management Information Systems (Idea Group Inc., 2006) and edited the book Case Studies in Digital Government (IGI Global, 2007).
Preface
This volume brings together a wide range of research on the past, present, and future of the international trend toward greater and greater use of information technology in the public sector. Rather than survey the content of these research contributions, which are adequately described in their respective abstracts, it seems better to devote this preface to considering the broad context of public sector information systems. Toward that end, what follows discusses three over-arching questions which arise time and again in this literature: (1) How and whether the vast international investments in e-government now occurring are justified in terms of the economic development and related advances for which e-government is purportedly a critical part of the infrastructure; (2) How and whether information technology will be a force for centralizing government, for decentralization, or for some synergistic new combination of trends affecting the powers that be; and (3) How and whether the unprecedented levels of participation potentially enabled by the Internet age will translate into political participation and social capital, energizing social development on a global scale.
E-govErnmEnt as invEstmEnt in Economic dEvElopmEnt
Investment in information technology in the private sector and in e-government in the public sector is often seen as the path to economic expansion. From the National Performance Review's promotion of federal e-government in the 1990s to the promotion of community wide-area networks at the local level in the 2000s, the e-government business model has been promoted as a critical economic development policy in the United States. The argument that economic development would follow has led a number of communities to find ways to establish wide-area networks for their downtowns or even for their entire jurisdictions, sometimes free to citizens. Cities with WiFi initiatives in 2006 included Anaheim, CA; Arlington, VA; Minneapolis, MN; Pasadena, CA; Philadelphia; Portland, OR; San Francisco; and Tempe, AZ. This was just one aspect of the worldwide movement toward “smart communities,” which integrate information technology throughout civic infrastructure. The same economic development logic has been promoted at the state level as well. In December 2006, Virginia Governor Timothy M. Kaine announced he would submit a $1.6 million budget amendment to extend broadband capacity on the state’s eastern shore, in addition to the $1.4 million already appropriated. Kaine said, “Providing access to reliable, high speed broadband is an essential public investment, attracting high tech industries and strengthening economic development.” In this context it is something of a contrast to note that in the first five years of the Bush administration’s e-government agenda, Congress funded only $13 million of the original $100 million goal. Over the same period, Congress became increasingly skeptical of the value of e-government funding. The Congressional appropriations committees instituted some of the most restrictive language to date in their FY 2007 appropriations.
As a result, almost all of the 25 showcase “Quicksilver” e-government initiatives announced by the Bush administration in 2001 have been delayed or affected by the low level of budgeting, forcing the FY 2008 budget to be devoted not to new initiatives but to finishing ones started long ago. While the Office of Management and Budget (OMB) looks at the problem as one of educating ignorant members of Congress to get them “on board,” Congress tends to see the problem as OMB’s seeming inability to document the value of e-government investments. The American e-government funding model of pass-the-hat agency self-funding combined with user fees is rationalized as one that creates federal departmental “ownership” of e-government projects, but while few if any departments speak against the OMB/Bush administration strategy, it is clearly a strategy which has not forged the
sort of strong political alliances among stakeholders that underpin Congressional funding of other, higher-priority programs. Congress has never fully bought into the e-government program, and support for it is weak on the Hill. In the FY 2006 appropriations bill for transportation, treasury, housing and urban development, judiciary, and other related agencies, Congress required that the OMB justify e-government expenditures and request renewal funds. As a result, in January 2006, the Office of Management and Budget submitted the mandated report to Congress, justifying the cost of the 25 e-government and five line-of-business consolidation projects—all of which would cost over $192 million in FY 2006—squeezed from existing agency budgets. In essence, in lean budgetary times, the OMB was forcing agencies to spend on non-priority items for which Congress had not specifically appropriated money, such as forcing the National Park Service to spend $1.5 million on e-government when the NPS was scrimping on basic park operations. OMB’s contrary view was that its e-government guidance was simply helping agencies be efficient in getting the biggest bang for the buck. All of this is to suggest one thing: public investment in information technology is controversial. It is at once a great hope, perhaps the great hope, for economic and governmental transformation, and it is an endeavor where cost over-runs, delayed implementation, and outright failure are commonplace. If the nature of public information systems were better understood, success would be more likely and political support more forthcoming. This handbook of research on the nature of public information systems seeks to make a small contribution to that much-needed understanding.
information tEchnology, cEntralization, and dEcEntralization
While advocates of the virtual state have often cited the advantages of networks over hierarchy in flexibly adapting to change, assembling and re-assembling to meet one or another ad hoc challenge, studies of actual organizational response to emergencies, such as the World Trade Center attacks of 2001, strongly suggest that network effectiveness depends on pre-existing social capital and trust expressed through pre-existing strong networks. Organizational hierarchies play a critical role both in building an organizational culture oriented toward the use of networks for coordinated response, and in building the networks themselves prior to, and in the absence of, emergency demands. That is, the hierarchy-versus-networks dichotomy is false; the two exist in synergy in effective organizations. Perhaps the leading example of centralization in recent years has been the push of the OMB to replace departmental IT systems with enterprise-wide lines-of-business systems in financial management, human resource management, and many other areas. For FY 2008, the OMB instructed departments that their budget proposals would have to demonstrate implementation of the administration’s lines-of-business consolidation initiative. At the state level, consolidation and centralization are approached with almost religious zeal. Major recent IT consolidation efforts have occurred in California, Michigan, and New York, for example. Some IT analysts and leaders have argued that centralization trends amount to over-centralization and urge a pendulum swing in the opposite direction. A chief executive of the Gartner Group, a leading IT consulting firm, for instance, recently argued that CIOs need to relinquish some control and responsibilities to end users, and that the centralized concentration of IT funding ironically leaves little for the original goal of business transformation or for needed investment in the human aspects of IT.
With recentralization, end users again face the frustration of dealing with rigid central IT departments and turn to alternatives such as new consumer Internet, communications, and database technologies outside CIO control.
An interesting illustration occurred when the U.S. military implemented MCS2 (electronically networked maneuver control systems) in the Gulf War. An expected result was the centralization of information in the hands of top-ranking officers. What was less expected was an increase in micromanagement, with many commanders unwilling to use the new information systems to unleash the abilities of lower-level staff to make decisions. The desire not to relinquish the decision-making power that centralization made available reinforced and enhanced existing bureaucratic structures, in spite of the vision of some that these systems would allow flexible, informed, decentralized decision making in the field. Control systems can be programmed to reflect the political priorities of those in power. As systems design is not normally covered in the press, this comes to light only occasionally, but the practice is routine. A recent example was the attempt of the Bush administration to embed anti-union features both in the proposed merit-based personnel system of the Defense Department’s National Security Personnel System (NSPS) and in a similar human resources system of the Department of Homeland Security. Anti-labor aspects of both were ruled illegal in federal court decisions in 2006 and 2005, respectively. The NSPS software system was found, for instance, to fail to ensure that employees could bargain collectively, to lack the congressionally required third-party review of labor relations decisions, and to lack a fair appeals process for employees. Normally, however, embedding political controls in ostensibly neutral software goes unchallenged and unnoticed by the public at large. Even in the NSPS case, the Department of Defense did not accept the court ruling, but continued development of the system even as it was contested in the courts.
thE hopE that information tEchnology will build social capital
There is evidence that Internet access does indeed improve the ability of citizens to interact with their government, though most such use is information seeking rather than actual participatory transactions. Empirical analysis of the 2000 presidential elections revealed that the Internet did show promise of bringing new individuals into the political process. Numerous writers have speculated that the age of the Internet would lead to a more participatory citizenry, whose experiences in electronic participation would build social capital and energize social, political, and economic development of all types. In traditional forms of political participation, community participation in politics has been found to correlate with socio-economic status, being older, and having lived in the community longer. Data do not show this to be the case for online political participants. Contrary to the predictions of social capital theory, recent findings show that engagement in non-political, social groups in the community is not correlated with online political participation. We may well ask whether online communities are destined to play a major political role in the future. Again, empirical case studies related to this question lead to the conclusion that although the public administration literature has cited the importance of online communities as a vehicle for the delivery of public goods, actual experience suggests that cybercommunities tend to have weak governance structures, undermining accountability and legitimacy. This two-volume set is separated into six sections: (1) e-government and e-commerce; (2) privacy, access, ethics, and theory; (3) security and protection; (4) system design and data processing; (5) project management and IT evaluation; and (6) selected readings.
Each chapter within these sections is separated into eight segments: (1) an introduction providing the historical perspective of the subject matter, (2)
a background providing discussions supporting the author’s view as well as the views of others, (3) a segment devoted to the primary information regarding the subject matter, (4) a future trends segment providing emerging trends in the field and insight into the future of the topic from the perspective of published research, as well as future research opportunities within the domain of the topic, (5) a conclusion providing discussion of the overall coverage of the topic, (6) a future research directions segment that acts as a supplement, discussing the managerial and more technical aspects of the subject matter, (7) a references and further reading section, and (8) a complete list of terms and definitions for readers to familiarize themselves with the subject matter’s terminology. The first section, titled “E-Government and E-Commerce,” covers the rise of e-government and its series of stages, from one-way information dissemination to two-way interactions to two-way transactions, culminating in cross-agency integration of e-services. The transition from the first to the second stage has not proved a difficult obstacle for most jurisdictions, which implement interactions such as feedback forms and e-mail. However, the transition to stage 3, the transaction stage, has proved more difficult. Although many examples of e-transactions exist (e.g., paying taxes online), mass adoption of e-transactions by the public has proved elusive. Even more difficult has been overcoming department-centric models of government business and replacing them with integrated cross-agency models. The chapters within this section examine these issues as well as others that exemplify the progressive movement toward electronic government, which, while problematic, is still in the process of fulfilling its potential.
The second section, titled “Privacy, Access, Ethics, and Theory,” covers the potential of Internet technology to bring about democratic transparency in the way government conducts its business. However, even a transparent government must support individual privacy rights. Privacy is a growing issue because people have good reason to believe that data collected on them for one purpose may be appropriated and used for altogether different purposes than the original ones about which they were informed. In theoretical terms, some seeking to understand these issues have turned to structuration theory, a variant of institutional theory growing out of the work of Anthony Giddens, who held that individual actions both shape and are constrained by social structures. In addition to structuration theory, the institutionalist perspective, specifically Fountain’s theory of technology enactment, is discussed throughout the section. The third section, titled “Security and Protection,” covers several security threats, including massive data theft, cyber-terrorism, and the use of malware. In the United States, information technology security rose to first place in budget priority after the 2001 attacks on the World Trade Center and has remained a top priority to the present day. Although there are currently over 217,000 known threats, including identity theft, imposter Web sites, and file sharing, this section covers some of the most prevalent in the public sector. The fourth section, titled “System Design and Data Processing,” delves into topics such as service-oriented architectures, enterprise resource planning systems, statistical data and statistical dissemination systems, and data cube technologies, amongst many others. Enterprise resource planning systems in particular have often been the result of systems architecture planning in the U.S. and worldwide.
After a checkered start in the private sector in the late 1980s and early 1990s, such systems were widely adopted in the public sector in the late 1990s, and by the 2000s the transition from agency-specific to enterprise-wide software was the primary reform thrust of the U.S. Office of Management and Budget as well as of many states and localities. Efforts were made to unify financial, human resources, payroll, procurement, and other departmental software systems into single jurisdiction-wide systems. The fifth section, titled “Project Management and IT Evaluation,” examines the relationship between information technology and project management. Information technology frequently succeeds or fails
on the strength or weakness of project management. The United States is seeing an increased emphasis on project management due to its prominence in the Bush administration’s FY 2007 budget. Project management is often tied to enforcing IT enterprise architecture, which reflects federal IT policies at the national level and the policies of state chief information officers at the state level. Evaluation, another topic covered within this section, is coequal in importance to project management. Project management may be more critical for short-term IT success, but in the long run the success of IT initiatives requires that they be proven to work in a cost-effective manner, hence the critical importance of evaluation. Concluding the Handbook of Research on Public Information Technology is a “Selected Readings” section of 10 refereed journal articles offering additional insight into the realm of information technology in the public sector. These articles come highly recommended and introduce innovative applications, trends, and technologies within this fast-growing area of information science and technology.
summary
A work much more famous than this volume could dream to be started with the phrase “it was the best of times, it was the worst of times,” and went on to contrast two cities, one cloaked in tradition and one in revolutionary upheaval. It is interesting to note that the vision of digital cities and global communities draws on such contrasts, comparing often-hierarchical traditional patterns of work and governance with the revolutionary potential of information technology as a liberating force, perhaps not for “liberty, equality, fraternity,” but at least for participation, transparency, and empowerment. In this preface your editor has tried to suggest that empirical research can throw much-needed light on traditional and revolutionary perspectives alike. There are so many important issues attached to this subject, many explored by scholarly contributions in this volume, dealing both with pragmatic implementation and with conceptual design, that they cannot be enumerated here. What is certain, however, is something close to the heart of, and bringing a smile to the lips of, every academic: more research is needed! Toward this end, this volume seeks to make a small contribution.

G. David Garson
North Carolina State University, USA
Volume II
Chapter XLIV
The Role of Data Mining in Intrusion Detection Technology Amalia Agathou University of the Aegean, Greece Theodoros Tzouramanis University of the Aegean, Greece
introduction
Over the past few years, the Internet has changed computing as we know it. The more possibilities and opportunities develop, the more systems are subject to attack by intruders. Thus, the big question is how to recognize and handle subversion attempts. One answer is to prevent subversion itself by building a completely secure system. However, the complete prevention of security breaches does not yet appear to be achievable. Therefore, intrusion attempts need to be detected as soon as possible (preferably in real time) so that appropriate action may be taken to repair the damage. This is what an intrusion detection system (IDS) does. IDSs monitor and analyze the events occurring in a computer system in order to detect signs of security problems. However, intrusion detection
technology has not yet reached perfection. This fact has provided data mining with the opportunity to make several important contributions and improvements to the field of IDS technology (Julisch, 2002).
background
IDSs are systems that aim at the detection of subversion and the prevention of similar attacks in the future (Sundaram, 1996). Therefore, an IDS identifies evidence of intrusions, either while they are in progress or after the fact. The most popular way to detect intrusions has been the use of the audit data generated by the operating system. An audit trail is a record of activities on a system that are logged to a file in chronologically sorted order. These data may be collected in many ways, but their sources are usually network activity and/or host-based logs.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Since almost all activities are logged on a system, a manual inspection of these logs could in principle detect intrusions. However, so much data are collected that manual analysis for intrusions is impractical. What IDSs achieve is the automation of the data analysis process. It thus becomes possible to establish the guilt of intruders and to detect unauthorized and subversive user activity. Postmortem analysis of the audit data is significant as it helps to determine the extent of the damage and to identify intruders so that steps may be taken to overcome system weaknesses. Kemmerer and Vigna (2002) describe an IDS as composed of several components: sensors for the capture of events and for their storage as audit data; an engine for the production of alarm signals upon detection of a potential intrusion, detected from the processing of the captured audit data; and a site security officer (SSO) for the reception of the alarms and for appropriate response.

Data Collection Issues

Achieving reliable and complete data collection about the target system’s activities is a complex issue. As Kemmerer and Vigna (2002) state, most operating systems offer some form of auditing that provides an operations log for different users. These data might be collected at the lowest possible level, resulting in the collection of rather large quantities of system activity information to be analyzed for intrusions. To alleviate this problem, the use of random sampling has been suggested; however, this could mean that certain types of attacks stay undetected. The problem is further complicated by the need to allow for differences in the data as a result of special circumstances such as holidays and other factors. Finally, depending on a specific IDS solution and its correlation engine, a storage period for current audit files should be appropriately set. For retrieval analysis purposes, archive files should be stored as copies.
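The automation of audit-log analysis that an IDS performs can be illustrated with a minimal sketch: scanning a chronologically sorted audit trail and flagging users whose failed logins cluster inside a short time window. The record format, event names, and threshold below are invented for illustration; real audit trails are far richer.

```python
from collections import defaultdict

# Hypothetical audit records: (timestamp_seconds, user, event) tuples,
# assumed to arrive in chronologically sorted order, as in an audit trail.
AUDIT_TRAIL = [
    (100, "alice", "login_ok"),
    (105, "mallory", "login_fail"),
    (106, "mallory", "login_fail"),
    (107, "mallory", "login_fail"),
    (109, "mallory", "login_fail"),
    (300, "bob", "login_ok"),
]

def flag_repeated_failures(records, threshold=3, window=60):
    """Flag users whose failed logins within a time window reach the threshold."""
    recent = defaultdict(list)   # user -> timestamps of recent failures
    flagged = set()
    for ts, user, event in records:
        if event != "login_fail":
            continue
        recent[user].append(ts)
        # Keep only the failures that fall inside the sliding window.
        recent[user] = [t for t in recent[user] if ts - t <= window]
        if len(recent[user]) >= threshold:
            flagged.add(user)
    return flagged

print(flag_repeated_failures(AUDIT_TRAIL))  # {'mallory'}
```

The same scan done by hand over megabytes of logs is exactly the manual analysis the text describes as impractical.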
detection techniques
Traditionally, there are two basic categories of intrusion detection techniques: anomaly detection and misuse detection. Most current intrusion detection systems use one or both of these approaches. According to Lee and Stolfo (1998), the anomaly detection model devises a set of statistical metrics that model the behavior of an entity, usually a user, a group of users, or a host computer, interpreting deviations from this behavior profile as a problem. The profile of a user entity, for instance, may include information such as the CPU (central processing unit) usage and the frequency of system commands during a user log-in session. The IDS monitors the operation of a computer system and constantly compares the profile of, say, a current user session with the one stored in its database. If it detects a high deviation from normal behavior, it signals an alarm to the system security officer. Deviation from a profile can be computed as the weighted sum of the deviations of the constituent statistical measures. Stored profiles are constantly updated so that shifts in normal behavior are accounted for. The misuse detection model, on the other hand, uses specifically known patterns of unauthorized behavior to predict and detect subsequent similar attempts. Such systems contain attack descriptions (or signatures; Lee et al., 2000) and match them against the audit data stream, looking for evidence of known attacks. Depending on the robustness and seriousness of a signature that is triggered, some form of alarm, either a response or a notification, is sent to the proper authorities. An obvious difficulty in this architecture is the need for the constant updating of the rule base as new attack methods become known (Julisch, 2002).
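The weighted-sum deviation score described above can be sketched as follows. The measures, profile statistics, weights, and alarm threshold are all illustrative assumptions for this example, not values drawn from any real IDS.

```python
# A stored profile holds the mean and standard deviation of each statistical
# measure (e.g., CPU usage, command frequency), plus a weight; the anomaly
# score of a session is the weighted sum of its per-measure deviations.
PROFILE = {
    # measure: (mean, std_dev, weight) -- illustrative numbers only
    "cpu_usage":        (0.20, 0.05, 1.0),
    "commands_per_min": (4.0,  1.5,  2.0),
}

def anomaly_score(session):
    """Weighted sum of per-measure deviations, in standard-deviation units."""
    score = 0.0
    for measure, (mean, std, weight) in PROFILE.items():
        deviation = abs(session[measure] - mean) / std
        score += weight * deviation
    return score

def check_session(session, threshold=6.0):
    """Signal an alarm when the session deviates too far from the profile."""
    return "ALARM" if anomaly_score(session) > threshold else "ok"

normal = {"cpu_usage": 0.22, "commands_per_min": 5.0}
odd    = {"cpu_usage": 0.90, "commands_per_min": 40.0}
print(check_session(normal), check_session(odd))  # ok ALARM
```

In a real anomaly detector the profile statistics would also be re-estimated over time, so that gradual shifts in normal behavior do not trigger alarms.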
public domain for intrusion detection
The fast development of information and communication technology over the past few years
has led to the interconnection of corporations, small to medium companies, organizations, government bodies, and private citizens. Networks have many applications in the public domain, such as e-government services that allow users to submit their income declarations, view their tax position, request certificates, conduct estate checks, and so forth. As stated in the European Commission (2002) article, the Internet offers enormous opportunities to any person who knows how to use a computer and can take part in social life by clicking with a mouse. E-Europe and its programs (e-learning, e-health, e-government, and e-business) aim at fully exploiting this potential in favor of social development. Official communications and medical and financial transactions require a higher level of authentication, integrity, and confidentiality. The public wants information systems to be secure as well as trustworthy. For instance, people need to trust these systems to ensure the privacy of their data, especially when it comes to sensitive information like medical or economic data. Federal agencies are using data mining for the analysis and detection of terrorist patterns of activity. The New York Times in 2003 reported that U.S. intelligence officials believed that a recent rise in electronic attacks against government and military computer networks in the United States might be the work of terrorist hackers and could signal a potential crisis in national security (Lichtblau, 2003). Also, in April 2002, the Los Angeles Times reported that U.S. intelligence officials feared similar attacks by China (Lichtblau, 2002). Common ways of protecting against unauthorized access are the use of passwords and/or a firewall. However, these methods alone cannot provide extensive protection and must be integrated with other security measures such as attack-recognition devices and application-level intrusion detection systems.
It is essential to reach a balance between the protection and availability of the data (Battelli et al., 2005).
The U.S. National Science Foundation (U.S. NSF, 2006) supports research concerning the integration of data mining and IDSs. Novel data-mining-based anomaly detection techniques developed under NSF support have been incorporated in the Minnesota Intrusion Detection System (MINDS; Ertoz et al., 2004), which can help cybersecurity analysts analyze vast amounts of network traffic and detect novel intrusions, insider abuse, and policy violations by evaluating the most anomalous connections identified by the system. MINDS is currently being used at the U.S. Army Research Laboratory’s Center for Intrusion Monitoring and Protection (ARL-CIMP) and at the University of Minnesota, where it monitors over 40,000 computers. Data mining for rare events becomes critical as new technologies allow more and more data to be collected. The underlying techniques used by the Minnesota team can be applied in many areas beyond detecting computer intrusions, such as financial and health-care fraud detection.
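Very loosely, the anomaly ranking that a system such as MINDS performs can be illustrated by scoring each connection record by its distance to its nearest neighbor, so that isolated records surface first for the analyst. The two features and all data below are invented for this sketch; the actual MINDS system uses a much richer feature set and a density-based outlier score.

```python
import math

# Illustrative connection records as (duration_s, bytes_sent) feature pairs.
# Each record is scored by its Euclidean distance to its nearest neighbor:
# isolated records score high and are surfaced to the analyst first.
CONNECTIONS = [
    (1.0, 200.0), (1.2, 210.0), (0.9, 190.0), (1.1, 205.0),
    (60.0, 9000.0),   # an unusual, isolated connection
]

def nearest_neighbor_scores(points):
    """Return (score, point) pairs, most anomalous first."""
    scores = []
    for i, p in enumerate(points):
        dists = [math.dist(p, q) for j, q in enumerate(points) if j != i]
        scores.append((min(dists), p))
    # Largest nearest-neighbor distance first.
    return sorted(scores, reverse=True)

ranked = nearest_neighbor_scores(CONNECTIONS)
print(ranked[0][1])  # the isolated connection ranks first: (60.0, 9000.0)
```

An analyst working down such a ranked list inspects the most anomalous connections first, which is how evaluation-by-anomaly-score scales to large traffic volumes.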
Data Mining Meets Intrusion Detection

Data mining is, at its core, the process of analyzing data from different perspectives and summarizing it into useful information (Mannila, Smyth, & Hand, 2001). Data mining software finds regularities in large data sets. It allows data miners to analyze data and summarize the relationships identified, and it can present the data in a useful format, such as a graph or a table. Technically, data mining is the process of finding correlations or patterns in large data sets. These tasks are accomplished with the following data mining techniques: classification, which enables the categorization of records into classes; clustering, which is a method used to discover the segmentation of the data into natural categories; and association rules for the definition
The Role of Data Mining in Intrusion Detection Technology
Table 1. The techniques used in data mining (Elite Analytics LLC, 2006) Classification:
Enables the categorization of records into two or more pre-defined classes by the use of classification algorithms and a set of pre-classified examples.
Clustering:
Exploratory method used to discover natural groupings within records or entities. Clustering approaches are commonly used for segmentation.
Association Rules:
Basic types of patterns or regularities that are found in transactional-type data.
Sequential Pattern Detection:
Sequential patterns involve mining frequently occurring patterns of activity over a period of time.
Change & Deviation Detection:
A method useful for identifying significant changes in a data set from previously measured or normative values.
of normal activity and, hence, the discovery of anomalies (Phung, 2000). There are also techniques for identifying significant changes in a data set from normative values, as well as for visualizing the data through graphs or tables. Table 1 briefly presents the methods used in data mining, whereas Table 2 lists the different tools used in the process. Traditional statistical methods are often used to complement the data mining methods (Brugger et al., 2001). The distinction between data mining and applied statistics is not always clear. The main difference is that data mining enables data exploration and analysis without any specific hypothesis in mind. Data mining also tends to be more application oriented and looks for useful patterns in a data set, whereas traditional statistical methods are more concerned with modeling. Furthermore, in contrast to statistical methods, data mining can handle massive data sets with many variables. The traditional statistical techniques include exploratory statistics, hypothesis testing, regression (linear, nonlinear, logistic), forecasting, and experimental design (Elite Analytics LLC, 2006).
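To make the techniques of Table 1 concrete, the sketch below applies classification and clustering to toy "connection records". The data, the nearest-centroid classifier, and the one-pass threshold clusterer are all invented for illustration; they are deliberately simple stand-ins, not any system described in the chapter.

```python
# Toy connection records: (duration_seconds, bytes_transferred).
# All values and labels here are hypothetical.

def centroid(points):
    """Mean point of a list of 2-D records."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Classification: categorize new records into pre-defined classes
# using a set of pre-classified examples (nearest-centroid rule).
training = {
    "normal": [(1.0, 300), (1.2, 280), (0.8, 350)],
    "attack": [(0.1, 9000), (0.2, 8500), (0.1, 9500)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(record):
    return min(centroids, key=lambda label: dist(record, centroids[label]))

# Clustering: discover natural groupings without labels
# (a one-pass distance-threshold grouping, the simplest possible scheme).
def cluster(records, threshold):
    clusters = []
    for r in records:
        for c in clusters:
            if dist(r, centroid(c)) < threshold:
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

label = classify((0.15, 9100))   # lands near the "attack" centroid
groups = cluster([(1, 300), (1.1, 310), (0.1, 9000), (0.2, 8800)], threshold=1000)
```

The same distance function serves both tasks; the difference is that classification uses pre-labeled examples while clustering lets groupings emerge from the data alone, mirroring the supervised/exploratory split in Table 1.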
How Data Mining Addresses the Shortfalls of Current IDSs

Phung (2000) argues that while the misuse detection model is a useful and viable approach, there are shortfalls to using only this approach. As stated previously, signatures are developed in response to known vulnerabilities or exploitations that have been posted or released. However, developing unique signatures is a difficult task, and often
Table 2. Data mining tools (Brugger, Kelley, Sumikawa, & Wakumoto, 2001; Thearling, 2006) Neural Networks:
Are non-linear predictive models that learn through training and resemble biological neural networks in structure.
Decision trees:
Are tree-shaped structures that represent sets of decisions. These decisions generate rules for the classification of a dataset. Specific decision tree methods include Classification and Regression Trees (CART) and Chi Square Automatic Interaction Detection (CHAID).
Statistical Methods:
There are five statistical measures: the operational model, the mean and standard deviation model, the multivariate model, the Markov process model and the time series model.
Genetic Algorithms:
Optimization techniques that use processes such as genetic combination, mutation, and natural selection in a design based on the concepts of evolution.
Nearest Neighbour Method:
Classifies each record in a dataset based on a combination of the classes of the k most similar records to it in a historical dataset (where k ≥ 1). It is also called the k-nearest neighbour technique.
Rule Induction:
It is related to the extraction of useful if-then rules from data based on statistical significance.
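The nearest-neighbour entry in Table 2 can be illustrated with a small, self-contained sketch. The historical records, their features (failed logins, bytes sent), and the labels are hypothetical, chosen only to show the k ≥ 1 majority-vote rule.

```python
# k-nearest-neighbour classification of a connection record from the
# classes of its k most similar historical records (invented data).
from collections import Counter

historical = [
    # (failed_logins, bytes_sent, label)
    (0, 300, "normal"),
    (1, 250, "normal"),
    (0, 400, "normal"),
    (9, 50, "intrusion"),   # e.g. a password-guessing pattern
    (8, 40, "intrusion"),
]

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def knn_classify(record, k=3):
    """Majority class among the k nearest historical records (k >= 1)."""
    neighbours = sorted(historical, key=lambda h: distance(record, h[:2]))[:k]
    votes = Counter(label for _, _, label in neighbours)
    return votes.most_common(1)[0][0]
```

With k = 1 the method reduces to copying the single closest record's label; larger k trades sensitivity for robustness to noisy historical records.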
the IDSs fail in the sense that they produce many false positives, that is, they alert too often rather than not often enough. Another difficult problem is the detection of exploitations for which there are no known signatures. This leads to the concept of false negatives, where an IDS does not generate an alert when an intrusion is actually taking place. Finally, another aspect that does not relate directly to intrusion detection but that is of vital importance is the amount of data that an analyst can effectively and efficiently analyze. Depending on the intrusion detection tools employed by a company and its size, logs may reach millions of records per day. Data mining methods, on the other hand, excel at processing large system logs of audit data. Therefore, data mining methods can contribute to an IDS by helping to integrate anomaly detection and misuse detection (Lippmann et al., 2000). A typical data mining task is finding association rules. Data mining may thus identify patterns of normal and abnormal activity and help the IDS generate an alert when an attack for which signatures have not been developed takes place (Bass, 2000). With data mining it is easy to correlate alarm data with mined audit data, thereby considerably reducing the rate of false alarms (Manganaris, Christensen, Zerkle, & Hermiz, 2000). For example, with regard to false positives, data mining can be used to identify valid network activity that can be filtered out by identifying false-alarm generators. Fan, Miller, Stolfo, Lee, and Chan (2001) propose one of the fundamental data mining techniques used in intrusion detection technology, which implements decision trees to detect network intrusions. Lee et al. (2000) present another data mining technique, based on segmentation, that allows pattern extraction from unknown attacks. This is done by matching patterns extracted from a simple audit set with those of stored, warehoused unknown attacks. Data mining can also provide a solution to the issue of data reduction. As the amount of data an analyst needs to look at grows rapidly, data mining algorithms can help identify the most relevant data and support a more efficient analysis.
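As a rough illustration of filtering out false-alarm generators, the sketch below counts alarms per (source, signature) pair and drops pairs whose frequency is far above the norm. The alarm data, threshold, and IP addresses are invented; this is not the method of any cited work, only a minimal frequency-based stand-in.

```python
# Reducing alarm volume by spotting "false-alarm generators":
# (source, signature) pairs that fire implausibly often (invented data).
from collections import Counter

alarms = (
    [("10.0.0.5", "SNMP_public")] * 500      # e.g. a misconfigured monitor
    + [("10.0.0.7", "port_scan")] * 3
    + [("10.0.0.9", "buffer_overflow")] * 1
)

def frequent_generators(alarm_log, threshold=100):
    """Return (source, signature) pairs raising more than `threshold` alarms."""
    counts = Counter(alarm_log)
    return {pair for pair, n in counts.items() if n > threshold}

def filtered(alarm_log, noisy):
    """Drop alarms from known false-alarm generators before analyst review."""
    return [a for a in alarm_log if a not in noisy]

noisy = frequent_generators(alarms)
queue = filtered(alarms, noisy)   # 504 alarms reduced to 4
```

Even this crude rule shrinks the analyst's queue by two orders of magnitude in the toy data; real systems would instead mine richer patterns from audit data before suppressing anything.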
Current Research Efforts

In the past few years, a growing number of research projects have begun to explore the use of data mining to address data overload in intrusion detection. In the following, a broad overview of the work that has been done at the intersection of intrusion detection and data mining is presented. Lee and Stolfo (1998) and Lee, Stolfo, and Mok (1999) propose MADAM ID (mining audit data for automated models for intrusion detection), a software tool that uses data mining techniques to build better models for intrusion detection by analyzing audit data using associations and frequent episodes. The original idea behind this pioneering and award-winning research activity1 was to generate classifiers using a rule-learning program on training data sets of system usage. The output from the classifier, a set of classification rules, is used to recognize anomalies and detect known intrusions. The ADAM (audit data analysis and mining) test bed was proposed by Barbará, Couto, Jajodia, Popyack, and Wu (2001). It is a tool that employs data mining techniques to detect intrusions. It uses a combination of association-rule mining and classification to provide a flexible representation of the network traffic pattern, uncover unknown patterns of attack that cannot be detected by other techniques, and accommodate the large amount of network audit data that keeps growing in size. In the first step, ADAM builds a repository of profiles for normal frequent item sets that hold during attack-free periods. It does so by mining data that is known to be free of attacks. In the second step, ADAM runs a sliding-window online algorithm that finds frequent item sets in the last N connections and compares them to a
normal profile, discarding those that are deemed normal. With the rest, ADAM uses a classifier that has been previously trained to classify the suspicious connections as a known type of attack, an unknown type, or a false alarm. The limitation of ADAM, as of most anomaly detection models, is that it cannot detect stealthy attacks because it raises an alarm only when the support of an unexpected rule, that is, an association of event attributes, exceeds a threshold. In other words, ADAM can detect an attack only when it involves a relatively large number of events during a short period of time. Abraham (2001) proposes IDDM (intrusion detection using data mining), which aims at the potential use of the data mining paradigm in near-real-time intrusion detection in order to extend intrusion detection capabilities. Research on IDDM focuses on the reuse and expansion of previous work as required, some of which is evaluated as part of the research. Rather than concentrating on the use of a particular technique in a certain application instance, IDDM explores multiple uses for any given data mining principle in a variety of ways by producing descriptions of network data and by using this information for deviation analysis. IDDM makes feasible the characterization of network data and the detection of variations in these characteristics over time. Therefore, IDDM, in combination with tools that recognize existing attack patterns or operate in a similar manner, facilitates the recognition of and potential reaction to unwanted violations of network operations. Ertoz et al. (2004) present MINDS, which uses a suite of data mining techniques to automatically detect unusual network behavior and emerging cyber threats. It has been highly successful in automatically detecting several novel intrusions that could not be identified using popular signature-based tools. It includes miscellaneous modules for gathering and analyzing large amounts of network traffic. Typical analyses include behavioral anomaly detection, summarization, and profiling. In addition, the system has a module based on association-pattern analysis that summarizes the highly anomalous network connections ranked by the anomaly detection module. Furthermore, given the huge number of connections observed per time unit, association-pattern-based summarization of novel attacks is quite valuable in allowing a security analyst to comprehend and distinguish emerging threats. According to Long (2004), MINDS is part of the Interrogator architecture at the ARL-CIMP, where network traffic from dozens of Department of Defense sites is analyzed. Because of the high cost of merging huge amounts of data and running the analysis at one site, bringing the data collected at these different sites to one place and then analyzing it is not feasible. Therefore, future work is focused on a distributed framework in which the different sites can independently analyze their data and then share high-level patterns and results. That way, issues concerning the individual sites' data privacy will not arise. The provision of functions like handling distributed data, addressing privacy issues, and using data mining tools by a middleware would make the implementation of such a system much easier. The University of Minnesota, the University of Florida, and the University of Illinois at Chicago are developing and implementing such a system as part of an NSF-funded collaborative project called Data Mining Middleware for the Grid (Kumar, 2005).
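ADAM's two-step idea — mine frequent item sets from attack-free traffic to build a normal profile, then flag item sets that become frequent in a recent window but are absent from that profile — can be sketched roughly as follows. The item-set miner is intentionally naive (sizes 1-2 only), and the connection records are invented; this is a paraphrase of the published idea, not ADAM's actual implementation.

```python
# Rough sketch of the two ADAM steps described above (invented data).
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """All item sets (as frozensets, sizes 1-2) appearing in at least
    `min_support` transactions -- a deliberately naive miner."""
    counts = Counter()
    for t in transactions:
        items = sorted(t)
        for size in (1, 2):
            for combo in combinations(items, size):
                counts[frozenset(combo)] += 1
    return {s for s, n in counts.items() if n >= min_support}

# Step 1: profile of normal frequent item sets, mined from attack-free data.
attack_free = [{"tcp", "port80"}, {"tcp", "port80"}, {"udp", "port53"}] * 10
profile = frequent_itemsets(attack_free, min_support=5)

# Step 2: a recent window dominated by an unusual pattern (e.g. a scan);
# item sets frequent here but absent from the profile are suspicious.
window = [{"tcp", "port23"}] * 8 + [{"tcp", "port80"}] * 2
suspicious = frequent_itemsets(window, min_support=5) - profile
```

Note how the sketch reproduces ADAM's stated limitation: a stealthy attack contributing only a few connections per window never reaches `min_support` and is never flagged.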
Future Trends

Although there have been successful applications of data mining techniques in the field of intrusion detection, it is clear that there is room for further improvement of this technology. One of the improvements that should be made is adjusting data mining algorithms and processes more precisely to fit intrusion detection. It is an
integral part of identifying and obtaining a better picture of the data in order to provide accurate and effective results. Another obstacle to developing an effective solution is the dramatically increasing amount of data to be analyzed, which makes data mining quite computationally expensive.
Conclusion

This chapter has overviewed research on the application of data mining techniques in intrusion detection technology, aimed at systems that achieve higher accuracy in recognizing the signatures of known forms of attack and greater adaptability to unknown forms of attack. Some of the best known research efforts in anomaly and misuse detection using data mining techniques were presented, and the main drawbacks and limitations of existing systems, which leave room for further progress in the area, were reported.
Future Research Directions

Most research in the area today focuses on the operational aspect of IDSs rather than on the nature of attacks and false positives. More attention also needs to be devoted to knowledge discovery in databases (KDD) as a whole, not to data mining alone. Further research should address the automatic (or semiautomatic) generation of high-quality labeled training data; the existence of such data should no longer simply be assumed. Future work should also explore novel applications of data mining that do not fall into the categories of feature selection and anomaly detection. Also, to deal with some of the general challenges in data mining, it might be best to develop special-purpose solutions that are tailored to intrusion detection. Up to now there has also been a lack of data visualization tools for computer network applications; the development of visualization tools providing multiple perspectives is thus an urgent task. Finally, a great concern of the research community is the lack of realistic data sets for experimentation purposes. Experimental results obtained on synthetic data sets may not carry over to real data sets; conversely, methods that are not efficient on synthetic data sets may turn out to be effective on real data. Therefore, there is an immediate need for tools and computer network infrastructures that collect real data sets for the research community while maintaining privacy policies and standards.
References

Abraham, T. (2001). IDDM: Intrusion detection using data mining techniques (Tech. Rep. No. DSTO-GD-0286). Australia: Department of Defence, Defence Science and Technology Organisation.

Barbará, D., Couto, J., Jajodia, S., Popyack, L., & Wu, N. (2001). ADAM: Detecting intrusions by data mining. Proceedings of the IEEE Workshop on Information Assurance and Security.

Barbará, D., & Jajodia, S. (Eds.). (2002). Applications of data mining in computer security. Kluwer Academic Publishers.

Bass, T. (2000). Intrusion detection systems and multisensor data fusion: Creating cyberspace situational awareness. Communications of the ACM, 43(1), 99-105.

Battelli, D., Bruschi, et al. (2005). Network security: From risk analysis to protection strategies. Istituto Superiore delle Comunicazioni e delle Tecnologie dell'Informazione. Retrieved
from http://www.comunicazioni.it/it/Files/4/33/pub_002_eng.pdf

Brugger, S. T., Kelley, M., Sumikawa, K., & Wakumoto, S. (2001). Data mining for security information: A survey (Tech. Rep. No. UCRL-JC-143464). Retrieved from http://www.osti.gov/bridge/servlets/purl/15005288-EzHIww/native/15005288.pdf

Elite Analytics LLC. (2006). Data mining & applied statistics. Retrieved from http://www.eliteanalytics.com

Ertoz, L., Eilertson, E., Lazarevic, A., Tan, P.-N., Kumar, V., Srivastava, J., et al. (2004). MINDS: Minnesota Intrusion Detection System. In H. Kargupta, A. Joshi, K. Sivakumar, & Y. Yesha (Eds.), Data mining: Next generation challenges and future directions (chap. 11). MIT/AAAI Press.

European Commission. (2002). Towards a knowledge-based Europe: The European Union and the information society. Directorate General Press and Communication Publications. Retrieved from http://ec.europa.eu/publications/booklets/move/36/en.pdf

Fan, W., Miller, M., Stolfo, S., Lee, W., & Chan, P. (2001). Using artificial anomalies to detect unknown and known network intrusions. Proceedings of the First IEEE International Conference on Data Mining, San Jose, CA.

Hu, Y., & Panda, B. (2004). A data mining approach for database intrusion detection. Proceedings of the ACM SAC, Cyprus.

Julisch, K. (2002). Data mining for intrusion detection: A critical review. In D. Barbará & S. Jajodia (Eds.), Applications of data mining in computer security (chap. 1). Boston: Kluwer Academic Publishers.

Kemmerer, R. A., & Vigna, G. (2002). Intrusion detection: A brief history and overview. Security and Privacy, pp. 27-30.
Kumar, V. (2005). Parallel and distributed computing for cybersecurity. IEEE Distributed Systems Online, 6(10).

Lee, W., Nimbalkar, R. A., Yee, K. K., Patil, S. B., Desai, P. H., Tran, T. T., et al. (2000). A data mining and CIDF based approach for detecting novel and distributed intrusions. In Lecture notes in computer science: Vol. 1907. Proceedings of the Third International Workshop on Recent Advances in Intrusion Detection (RAID 2000) (pp. 49-65). Springer.

Lee, W., & Stolfo, S. J. (1998). Data mining approaches for intrusion detection. Proceedings of the 7th USENIX Security Symposium, San Antonio, TX.

Lee, W., Stolfo, S. J., & Mok, K. W. (1999). Mining in a dataflow environment: Experience in network intrusion detection. Proceedings of the 5th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'99) (pp. 114-124).

Lee, W., Stolfo, S. J., & Mok, K. W. (2000). Adaptive intrusion detection: A data mining approach. Artificial Intelligence Review, 14(6), 533-567.

Lichtblau, E. (2002, April 24). CIA: China planning cyber-attacks on U.S., Taiwan. Los Angeles Times. Retrieved from http://www.siliconvalley.com/mld/siliconvalley/3132466.htm

Lichtblau, E. (2003, January 16). Warning on Iraqi hackers and US safety. New York Times. Retrieved from http://www.nytimes.com/2003/01/17/technology/17HACK.html

Lippmann, R. P., Fried, D. J., Graf, I., Haines, J. W., Kendall, K. R., McClung, D., et al. (2000). Evaluating intrusion detection systems: The 1998 DARPA off-line intrusion detection evaluation. Proceedings of the 2000 DARPA Information Survivability Conference and Exposition (pp. 12-26).
Long, K. (2004). Catching the cyber spy: ARL's interrogator. Proceedings of the 24th Army Science Conference.

Manganaris, S., Christensen, M., Zerkle, D., & Hermiz, K. (2000). A data mining analysis of RTID alarms. Computer Networks, 34, 571-577.

Mannila, H., Smyth, P., & Hand, D. J. (2001). Principles of data mining. MIT Press.

Phung, M. (2000). Intrusion detection FAQ (Tech. Rep.). SANS Technology Institute. Retrieved from http://www.sans.org/resources/idfaq/index.php

Sundaram, A. (1996). An introduction to intrusion detection (Tech. Rep.). Retrieved from http://www.acm.org/crossroads/xrds2-4/intrus.html

Thearling, K. (2006). Information about data mining and analytic technologies. Retrieved from http://www.thearling.com

U.S. National Science Foundation (NSF). (2006). National Science Foundation: Where the discoveries begin. Retrieved from http://www.nsf.gov

Further Reading

Network intrusion detection based on data mining techniques has been an ongoing research area for the last decade. The reader is directed to the references below for further study.

Axelsson, S. (2000). Intrusion detection systems: A survey and taxonomy (Tech. Rep.). Chalmers University.

Barbará, D., Couto, J., Jajodia, S., & Wu, N. (2001). ADAM: A testbed for exploring the use of data mining in intrusion detection. SIGMOD Record, 30, 15-24.

Campos, M. M., & Milenova, B. L. (2005). Creation and deployment of data mining-based intrusion detection systems in Oracle Database 10g. Proceedings of the Fourth International Conference on Machine Learning and Applications (ICMLA'05) (pp. 117-124).

Chan, P. K., Fan, W., Prodromidis, A., & Stolfo, S. J. (1999). Distributed data mining in credit card fraud detection. IEEE Intelligent Systems, 14(6), 67-74.

Chittur, A. (2001). Model generation for an intrusion detection system using genetic algorithms. Unpublished high school honors thesis, Ossining High School (in cooperation with Columbia University).

Clifton, C., & Gengo, G. (2000). Developing custom intrusion detection filters using data mining. Proceedings of the 2000 Military Communications International, Los Angeles, CA.

Dasgupta, S., & Gonzalez, F. A. (2001). An intelligent decision support system for intrusion detection and response. Proceedings of the International Workshop on Mathematical Methods, Models and Architectures for Computer Networks Security (MMM-ACNS).

Dickerson, J. E., & Dickerson, J. A. (2000). Fuzzy network profiling for intrusion detection. Proceedings of the NAFIPS 19th International Conference of the North American Fuzzy Information Processing Society (pp. 301-306).

Domingos, P., & Hulten, G. (2000). Mining high speed data streams. Proceedings of the Sixth ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 71-80).

Endler, D. (1998). Intrusion detection applying machine learning to Solaris audit data. Proceedings of the 1998 Annual Computer Security Applications Conference (ACSAC98) (pp. 268-279).

Florez, G., Bridges, S. M., & Vaughn, R. B. (2002). An improved algorithm for fuzzy data mining for intrusion detection. Annual Meeting of the North American Fuzzy Information Processing Society.
Gabrielson, B. (1999). Security using intelligent agents and data mining. Proceedings of the National Security Space Architect MIM Technology Forum, Chantilly, VA.
Lee, W., & Xiang, D. (2001). Information-theoretic measures for anomaly detection. Proceedings of the 2001 IEEE Symposium on Security and Privacy (pp. 130-143).
Han, H., Lu, X. L., Lu, J., Bo, C., & Yong, R. L. (2002). Data mining aided signature discovery in network-based intrusion detection system. ACM SIGOPS Operating Systems Review, 36(4), 7-13.
Leung, K., & Leckie, C. (2005). Unsupervised anomaly detection in network intrusion detection using clusters. Proceedings of Twenty-Eighth Australasian Computer Science Conference (ACSC2005) (pp. 333-342).
Helmer, G., Wong, J., Honavar, V., & Miller, L. (1999). Automated discovery of concise predictive rules for intrusion detection (Tech. Rep. No. 99-01). Ames, IA: Iowa State University.
Luo, J. (1999). Integrating fuzzy logic with data mining methods for intrusion detection. Unpublished master’s thesis, Mississippi State University, MS.
Julisch, K. (2003). Clustering intrusion detection alarms to support root cause analysis. ACM Transactions on Information and System Security, 6(4), 443-471.
Mé, L., & Michel, C. (2001). Intrusion detection: A bibliography (Tech. Rep. No. SSIR-2001-01). Rennes, France: Supelec.
Julisch, K., & Dacier, M. (2002). Mining intrusion detection alarms for actionable knowledge. Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 366-375). Kumar, S. (1995). Classification and detection of computer intrusion. Unpublished doctoral dissertation, Purdue University, West Lafayette, IN. Kuok, C. M., Fu, A. W.-C., & Wong, M. H. (1998). Mining fuzzy association rules in databases. SIGMOD Record, 27(1), 41-46. Lane, T. D. (2000). Machine learning techniques for the computer security domain of anomaly detection. Unpublished doctoral dissertation, Purdue University, West Lafayette, IN. Lee, W. (1999). A data mining framework for building intrusion detection models. IEEE Symposium on Security and Privacy (pp. 120-132). Lee, W., Stolfo, S. J., Chan, P. K., Eskin, E., Fan, W., Miller, M., et al. (2001). Real time data mining-based intrusion detection. Proceedings of DISCEX II.
Mohan, S. R., Park, E. K., & Han, Y. (2005). An adaptive intrusion detection system using a data mining approach. Proceedings of the IEEE ICDM 2005 Workshop on Computational Intelligence in Data Mining, Houston, TX.

Neri, F. (2000). Mining TCP/IP traffic for network intrusion detection. In Lecture notes in computer science: Vol. 1810. Proceedings of the 11th European Conference on Machine Learning (pp. 313-322). Barcelona, Spain: Springer.

Noel, S., Wijesekera, D., & Youman, C. (2002). Modern intrusion detection, data mining, and degrees of attack guilt. In Applications of data mining in computer security. Kluwer.

Petrovskiy, M. I. (2003). Outlier detection algorithms in data mining systems. Programming and Computing Software, 29(4), 228-237.

Portnoy, L., Eskin, E., & Stolfo, S. J. (2001). Intrusion detection with unlabeled data using clustering. Proceedings of the ACM-CSS Workshop on Data Mining Applied to Security (DMSA-2001).

Sequeira, K., & Zaki, M. (2002). ADMIT: Anomaly-based data mining for intrusions. Proceedings of
the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 386-395).

Skorupka, C., Tivel, J., Talbot, L., Debarr, D., Hill, W., Bloedorn, E., et al. (2001). Surf the flood: Reducing high-volume intrusion detection data by automated record aggregation. Proceedings of the SANS 2001 Technical Conference, Baltimore.

Thuraisingham, B. (2002). Data mining, national security, privacy and civil liberties. ACM SIGKDD Explorations Newsletter, 4(2), 1-5.

Wagner, D., & Soto, P. (2002). Mimicry attacks on host-based intrusion detection systems. Proceedings of the 9th ACM Conference on Computer and Communications Security (pp. 255-264).

Zhang, J., & Zulkernine, M. (2006). Anomaly based network intrusion detection with unsupervised outlier detection. Symposium on Network Security and Information Assurance, Proceedings of the IEEE International Conference on Communications (ICC).
Terms and Definitions

Anomaly Detection: It detects activity that deviates from normal activity. Profile-based anomaly detection depends on a statistical definition of what is normal and can be prone to a large number of false positives.

Auditing: Auditing is the gathering and analysis of information about assets to ensure such things as policy compliance and security from vulnerabilities.

Classification: It refers to the data mining problem of attempting to predict the category of data by building a model based on some predictor variables.
Data Mining: This is the process of automatically searching large volumes of data to uncover previously undetected relationships among data items. Data mining is also known as knowledge discovery in databases (KDD).

False Negative: This occurs when there is an attack and the product does not raise an alarm. This case is obviously problematic because the intruder's actions can go completely unnoticed.

False Positive: This occurs when there is no attack and the product raises an alarm. This case can be problematic because administrators, facing a false positive, might take unnecessary actions.

Intrusion Detection Systems: They detect inappropriate, incorrect, or anomalous activity. ID systems that operate on a host to detect malicious activity on that host are called host-based ID systems. ID systems that operate on network data flows are called network-based ID systems.

Misuse Detection: It detects a pattern that closely matches activity typical of a network intrusion. Misuse detection is also known as signature-based detection.

Signature: A signature is a distinct pattern in network traffic that can be attributed to a specific tool or exploit.
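A small worked example of the false positive and false negative terms defined above, using an invented ground truth and an invented detector's alert decisions:

```python
# Each event pairs the ground truth (was it an attack?) with the
# detector's decision (did it raise an alarm?). Data is hypothetical.
events = [
    (True, True),    # detected attack
    (True, False),   # false negative: attack went unnoticed
    (False, True),   # false positive: needless alarm
    (False, False),  # quiet and correct
    (False, True),   # another false positive
]

false_negatives = sum(1 for attack, alarm in events if attack and not alarm)
false_positives = sum(1 for attack, alarm in events if not attack and alarm)
detected = sum(1 for attack, alarm in events if attack and alarm)

# Detection rate is measured over actual attacks;
# the false-alarm rate is measured over benign events.
detection_rate = detected / sum(1 for attack, _ in events if attack)
false_alarm_rate = false_positives / sum(1 for attack, _ in events if not attack)
```

Note that the two error types have different denominators: a detector can look excellent on detection rate while still burying analysts in false alarms, which is why both must be reported.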
Endnote

1. The research produced one of the best-performing systems in the DARPA 1998 Off-Line Intrusion Detection Evaluation and was awarded Best Paper in Applied Research at the Knowledge Discovery and Data Mining Tools Competition, held in conjunction with the Fifth International Conference on Knowledge Discovery and Data Mining (KDD-99).
Section IV
System Design and Data Processing
Starting in 2000, U.S. federal-level agencies were required to prepare documentation of their conformity with federal enterprise architecture policy. The objective of the enterprise architecture initiative was to make interdepartmental information transfer more compatible and efficient, to encourage multiple reuse of IT data and application resources, and to improve security levels. It was also expected that IT under enterprise architecture policy would become more scalable and thus new systems might be integrated more economically. In the United States and worldwide, systems architecture planning has often led to promotion of enterprise resource planning (ERP) systems. After a checkered start in the private sector in the late 1980s and early 1990s, such systems were widely adopted in the public sector in the late 1990s, and by the 2000s transition from agency-specific to enterprise-wide software was the primary reform thrust of the U.S. Office of Management and Budget (OMB) as well as of many states and localities. Efforts were made to unify financial, human resources, payroll, procurement, and other departmental software systems into single jurisdiction-wide systems. Enterprise software offered three central promises:

1. Enterprise-wide software systems would be less expensive than multiple departmental systems.
2. The consolidation of databases would improve management decision making.
3. Reliance on large ERP software vendors would assure critical systems remained state of the art, never obsolete.
Where ERP was sometimes seen as a relative failure when first introduced, by the mid-2000s it had grown into a $100 billion industry with many public-sector jurisdictions at all levels reliant upon it. In March 2006, the U.S. Government Accountability Office (GAO) issued a report, "Financial Management Systems: Additional Efforts Needed to Address Key Causes of Modernization Failures." Evaluating ERP efforts in the Internal Revenue Service, the Department of Defense, and elsewhere, the GAO found continuing problems in ERP requirements management, testing, data conversion, system interfaces, and risk and project management. The report came after widespread publicity about over-budget, behind-schedule ERP projects from vendors such as CGI AMS, Oracle Corporation, and SAP America Inc.
Because of the high risk, frequent failures, and high price of ERP developed under agency control, the OMB moved to promote development of ERP systems through shared service providers, also called "centers of excellence," rather than through agencies on their own. The OMB became increasingly hostile toward customization of ERP software because customization was seen as a failure factor. Career-oriented agency CIOs (chief information officers), for their part, found that pleasing their constituents through customization was not worth the career-busting price of being tagged with an ERP failure. Just as, after the 1980s, the contradiction between centralized mainframe computing and stand-alone desktop computing was partially resolved by the new synthesis found in networked distributed computing, so by 2007 many believed the contradiction between centralized, one-size-fits-all ERP systems and decentralized, customized "stovepipe" systems at the departmental level might be at least partially resolved by the synthesis represented by service-oriented architecture (SOA). SOA is a model under which ERP or other system functions may be modularized and delivered as Web services. In the summer of 2006, the Organization for the Advancement of Structured Information Standards (OASIS) formally adopted the SOA Reference Model for organizing distributed capabilities in a way that different capabilities could be controlled by different ownership domains, transcending any one vendor. As with information technology in general, the effect of SOA will rest in human hands. It is perfectly possible to administer it in a centralized manner, with the CIO retaining all the resources for and control over the possible customization that SOA modularization might otherwise allow. Only if decentralized departments have the approval, budget resources, and technical staff to take advantage of SOA will the SOA model lead to the synthesis between centralization and decentralization for which it could be designed.
Without these supports, SOA becomes simply a modular approach to ERP software with most of its problems. G. David Garson, September 2007
Chapter XLV
System Dynamics to Understand Public Information Technology
Luis Felipe Luna-Reyes
Universidad de las Américas-Puebla, Mexico
Introduction

Information technology development and implementation have been recognized as forms of organizational change (Doherty & King, 2003; Orlikowski, 2000). Public-sector organizations are interested in this process of change because of the expected benefits of using IT, such as cost savings, improved service quality, increased accountability, and public participation (Gil-Garcia & Helbig, 2006). However, IT fails to deliver the anticipated payback in many projects (Jackson, 1997; Keil, Cule, Lyytinen, & Schmidt, 1998). Some of these failures result from our lack of understanding of the relationships among the IT components and organizational factors involved in the implementation process, producing mismatches or unintended consequences: "[t]he computer hardware may perform correctly, and the software may satisfy its specification; but the results are not what was intended, and may be disastrous" (Jackson, 1997, p. 5). This chapter presents system dynamics as a method for better understanding such mismatches in public information technologies. The method has already been used successfully in the planning and evaluation of both public and private IT applications (Abdel-Hamid & Madnick, 1991; Madachy & Tarbet, 2000; Wolstenholme, 2003; Wolstenholme, Henderson, & Gavine, 1993). The method allows us to understand the interactions among technologies and organizations as a continuous process of organizational change (March, 1981), in which it is possible to find brief periods of rapid change. However, even those periods of rapid change are conceptualized as the result of endogenous and continuous local adaptations (Hutchins, 1991), where technology enables, rather than causes, change (Orlikowski, 2000). The chapter presents the basic principles and tools of system dynamics and continues with an
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
example of its application in the analysis of an IT project in the public sector. The chapter ends with a brief description of future trends in modeling and simulation as well as a brief conclusion.
Background

System dynamics is a method for studying and managing complex feedback systems (Forrester, 1961; Richardson & Pugh, 1981; Roberts, Andersen, Deal, Grant, & Shaffer, 1981; Sterman, 2000). One of its basic principles is that a system's performance over time is closely linked to an underlying structure of endogenous feedback processes. That is to say, patterns of behavior in the system are explained mainly by endogenous processes, not by exogenous factors. The processes of modeling and simulation are intended mainly to help us learn about how the world works, helping policy makers improve their thinking (Senge, 1990). A computer model is usually needed because humans are limited in their ability to predict and manage the behavior of these complex structures (Forrester, 1971). In this way, the modeling process becomes a formal way of developing and testing hypotheses about the impact of feedback processes on specific problematic behaviors in a system.
The Modeling Process

System-dynamics practitioners have described the modeling process as a series of steps going from problem understanding to model validation and use (Randers, 1980; Richardson & Pugh, 1981; Roberts et al., 1981; Sterman, 2000). The modeling process involves the analysis of problem dynamics (expressed in graphs of behavior over time) and problem structure (expressed graphically in causal-loop diagrams or stock-and-flow diagrams, and mathematically in systems of differential equations). In this way, a system-dynamics computer model is the result of an iterative process of comparing and contrasting a set of assumptions about the system's structure with its known behaviors. Thus, defining problems in terms of behavior over time (graphs over time) and developing feedback-rich diagrams (causal-loop and stock-and-flow diagrams) are two basic skills for modeling problems.1
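As a minimal, hypothetical sketch of these ideas (the variable names and parameter values are illustrative, not drawn from the chapter), a single stock can be numerically integrated with its net flow depending on the stock itself, so that the feedback structure alone determines the behavior over time:

```python
def simulate(stock, net_flow, dt=0.25, steps=200):
    """Euler-integrate one stock: stock(t + dt) = stock(t) + net_flow(stock(t)) * dt."""
    values = [stock]
    for _ in range(steps):
        stock += net_flow(stock) * dt
        values.append(stock)
    return values

# Balancing (negative) loop: the gap between a goal and the stock drives
# a corrective flow, producing goal-seeking behavior.
goal, adjustment_time = 100.0, 5.0
goal_seeking = simulate(10.0, lambda s: (goal - s) / adjustment_time)

# A reinforcing loop limited by a balancing loop (logistic structure)
# produces the S-shaped growth pattern.
capacity, growth_rate = 100.0, 0.5
s_shaped = simulate(1.0, lambda s: growth_rate * s * (1.0 - s / capacity))
```

Plotting `goal_seeking` and `s_shaped` against time reproduces two common reference-mode patterns; changing only the feedback structure, with no external input, changes the pattern, which illustrates the method's endogenous point of view.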
Defining Problems in Terms of Behavior Over Time

In system dynamics, defining a problem means expressing it in terms of its dynamic behavior (Richardson & Pugh, 1981), moving from static pictures of the symptoms of a problem to a dynamic definition of it (Mashayekhi, 1992; Senge, 1990). The term reference mode refers to patterns of behavior over time. These patterns represent the dynamic behavior of important problem-related variables. The reference mode is an abstract concept that guides the modeling process and is built from multiple sources of historical evidence, such as verbal descriptions, time series, or key events (Saeed, 1998). In terms of the data available, a modeler can draw graphs of either historical data series (information available from records in the organization) or idealized reference modes (information available from actors' judgments and mental models). In terms of the project scope, modelers create sketches of historical data, forecasted dynamics (what we can expect to happen), or preferred dynamics (what we desire to happen). Historical data series are complex patterns that represent the effects on system behavior of multiple problem components (Saeed, 1992). Therefore, historical data series can be used as the starting point to create a reference mode, but using such a pattern as the reference mode itself can mislead the modeling effort. Although historical data series can show many different behaviors, most of those rich behaviors
are combinations or instances of a limited set of behavioral patterns like the ones shown in Figure 1.

Figure 1. Common patterns of problem behavior: rising increase, falling decrease, oscillation, goal seeking, boom and bust, and S-shaped (each plotted against time).

Moreover, it is common to find in public-sector applications that relevant variables are not recorded in any information system. Thus, many reference modes are based upon actors' judgments and mental models. In fact, system dynamics is best suited for problems that show dynamic behaviors like the ones presented in Figure 1, particularly when the
Table 1. Definition of causal links

  Graphic Representation: X --(+)--> Y
  Conceptual Definition:  X adds to Y, or changes in X lead to changes in Y in the same direction
  Math Definition:        Y(t) = ∫[t0,t] (X + ...) ds + Y(t0), or ∂Y/∂X > 0

  Graphic Representation: X --(-)--> Y
  Conceptual Definition:  X subtracts from Y, or changes in X lead to changes in Y in the opposite direction
  Math Definition:        Y(t) = ∫[t0,t] (-X + ...) ds + Y(t0), or ∂Y/∂X < 0
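The two link types in Table 1 can be checked numerically. As a sketch (the variable names and values are illustrative), a positive X accumulated through a positive link raises Y, while the same X accumulated through a negative link lowers it:

```python
def integrate_link(x_series, sign, y0=0.0, dt=1.0):
    """Accumulate Y(t) = Y(t0) + integral of (sign * X) ds using a simple Euler sum."""
    y = y0
    values = []
    for x in x_series:
        y += sign * x * dt
        values.append(y)
    return values

x = [2.0, 2.0, 2.0]
positive_link = integrate_link(x, +1)  # X adds to Y: Y changes in the same direction
negative_link = integrate_link(x, -1)  # X subtracts from Y: Y changes in the opposite direction
```

In both cases the partial-derivative condition of Table 1 holds: raising X raises the values in `positive_link` (∂Y/∂X > 0) and lowers the values in `negative_link` (∂Y/∂X < 0).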
Table 1. Static Web application maintenance requirement

  Functional Requirement     Resource Cluster               # FTEs
  Traffic                 => Project Manager I (PM I)        0.25
  Graphic Artist          => Creative Services (CRE)         0.25
  Technical Designer      => Prog/Coding/Dev I (PCD I)       1.00
  Baseline                                                   1.50
government applications that began as relatively simple, static HTML documents have progressed through a series of phases to become much more complex. STA classifies Web application systems within the state Web portal into four categories: static, Web-enabled forms, custom, and enterprise application integration (EAI). The units involved in Web application maintenance are Traffic, which records and monitors user requests; Creative Services, which delivers graphical design work to accommodate the look and feel of Web applications; and E-Development, which is responsible for state Web portal development and maintenance. STA uses full-time equivalents (FTEs) to determine the number of full-time positions needed to maintain the state Web portal.
Static Web Applications

Static Web sites, sometimes called "brochureware" sites, provide a snapshot of information about an organization or activity. Such sites consist primarily of content files that require no run-time compilation, interpretation, or execution. Content in these sites is generally limited to HTML, images, PDF files, and Office files. This content is statically presented to users when requested and changes only when files are manually replaced or updated. Dynamic content and interactive forms on static Web sites are limited to common, across-the-board simplistic functionality (e.g., an online e-mail form). The static Web application maintenance requirement is listed in Table 1. The Traffic unit provides initial triage to static Web site change requests. Technical designers do the majority of maintenance performed on static Web sites. Graphic artists may be required to update images. For a negligible percentage of the time, developers may be required to provide minor updates to the limited number of interactive forms.

Web-Enabled Forms Application

Web-enabled forms provide limited interactivity by automating simple paper-based business processes. An example of a Web-enabled form would be a Web-based survey that generates either e-mail or a flat-file extract to the business owner.
Table 2. Web-enabled form application maintenance requirement

  Functional Requirement     Resource Cluster               # FTEs
  Traffic                 => Project Manager I (PM I)        0.25
  Graphic Artist          => Creative Services (CRE)         0.25
  Technical Designer      => Prog/Coding/Dev I (PCD I)       0.50
  Developer               => Prog/Coding/Dev I (PCD I)       1.25
  Baseline                                                   2.25
Web Application Classification
Table 3. Custom Web application maintenance requirement

  Functional Requirement                                                      Resource Cluster                       # FTEs
  Traffic                                                                  => Project Manager I (PM I)                0.50
  Project Manager (or Supervision/Mgt)                                     => Project Manager II (PM II)              0.75
  Requirements Engineer (or Senior Developer or Supervision/Mgt)           => Requirement & Testing Engineer (RTE)    0.75
  Graphic Artist                                                           => Creative Services (CRE)                 0.25
  Technical Designer                                                       => Prog/Coding/Dev I (PCD I)               0.75
  Developer                                                                => Prog/Coding/Dev I (PCD I)               1.25
  Test Engineer (or Senior Developer or Project Manager or Supervision/Mgt) => Req. & Testing Engineer (RTE)          0.50
  Baseline                                                                                                            4.75

Common modules provide limited dynamic Web content by reusing (copy and paste) existing software modules. An example of a common module is the press-release application. Both static content and a modest amount of interactive and dynamic content (which requires run-time compilation, interpretation, or execution) typically comprise development at this level. Dynamic content or interactive forms at this level usually contain minor customizations.
The Web-enabled forms application maintenance requirement is listed in Table 2. Traffic provides initial triage for maintenance requests for Web-enabled forms and common modules. Technical designers may be able to perform minor modifications. Graphic artists may be required to update images. For a large percentage of the time, developers will need to make the requested modifications.
Table 4. EAI application maintenance requirement

  Functional Requirement                                                      Resource Cluster                       # FTEs
  Traffic                                                                  => Project Manager I (PM I)                0.50
  Project Manager (or Supervision/Mgt)                                     => Project Manager II (PM II)              0.75
  Requirements Engineer (or Senior Developer or Supervision/Mgt)           => Req. & Testing Engineer (RTE)           0.75
  Graphic Artist                                                           => Creative Services (CRE)                 0.25
  Technical Designer                                                       => Prog/Coding/Dev I (PCD I)               0.75
  Developer                                                                => Prog/Coding/Dev I (PCD I)               1.25
  EAI Developer                                                            => Prog/Coding/Dev I (PCD I)               1.25
  Test Engineer (or Senior Developer or Project Manager or Supervision/Mgt) => Req. & Testing Engineer (RTE)          0.50
  Baseline                                                                                                            6.00
Custom Web Applications

Custom Web applications generally provide both static and dynamic or interactive content. Such applications may augment a static Web site or may constitute a Web site on their own. Custom Web applications may include simple Web-enabled forms or common software modules, but they provide additional business-level processing. Examples of such additional business-level processing include automated workflows such as scheduling and calendaring, custom search applications, and online reporting. Dynamic content and interactive forms at this level usually contain significant customizations. The custom Web application maintenance requirement is listed in Table 3. Traffic provides initial triage and requirements discovery for maintenance requests on custom Web applications. Changes in the underlying business process may require formal project management by a project manager or team supervisor, as well as detailed requirements gathering by a requirements engineer, developer, or team supervisor. Graphic artists may be required to update images. Technical designers may resolve user interface (UI) design issues. Developers must make program changes to accommodate new business requirements. A test engineer may be required for user acceptance testing.
EAI Web Applications

EAI Web applications exhibit the same characteristics as custom Web applications but add an additional level of complexity because they communicate, either synchronously or asynchronously, with various enterprise application environments that are autonomous of STA's Web application environment. Examples of existing enterprise application environments include mainframe-based systems (such as GRATIS and CJIS), COTS (commercial off-the-shelf) products (such as LicenseEase
and Systems Automation), or agency data stores (such as SoS SQLServer and OSR SQLServer). Dynamic content and interactive forms at this level usually contain significant customizations. The EAI application maintenance requirement is listed in Table 4. Traffic provides initial triage and requirements discovery for maintenance requests on EAI Web applications. Changes in the underlying business process may require formal project management led by a project manager or team supervisor, as well as detailed requirements gathering by a requirements engineer, developer, or team supervisor. Graphic artists may be required to update images. Technical designers may resolve UI design issues. Developers must make program changes to accommodate new business requirements. A test engineer may be required for user acceptance testing.

STA has been classifying the Web application portfolio on the state portal since 2001. In 2005, the portfolio comprised 26 static applications, 10 Web-enabled forms, 57 custom Web applications, and 55 EAI Web applications. STA indicates that application portfolio management is an ongoing project: Web applications may change or evolve from one type to another over time, and the number of applications of each type varies. In 2001, there were more static Web applications than in 2005. STA has a dedicated e-development staff managing the outsourced application portfolio management project.
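The baseline in each of Tables 1 through 4 is the column sum of the per-role FTEs for that category. A small sketch encoding the four categories (cluster abbreviations as in the tables; the code itself is an illustration, not STA's actual staffing tool):

```python
# FTE rows from Tables 1-4: (resource cluster, FTEs) per application category.
MAINTENANCE_FTES = {
    "static":           [("PM I", 0.25), ("CRE", 0.25), ("PCD I", 1.00)],
    "web_enabled_form": [("PM I", 0.25), ("CRE", 0.25), ("PCD I", 0.50), ("PCD I", 1.25)],
    "custom":           [("PM I", 0.50), ("PM II", 0.75), ("RTE", 0.75), ("CRE", 0.25),
                         ("PCD I", 0.75), ("PCD I", 1.25), ("RTE", 0.50)],
    "eai":              [("PM I", 0.50), ("PM II", 0.75), ("RTE", 0.75), ("CRE", 0.25),
                         ("PCD I", 0.75), ("PCD I", 1.25), ("PCD I", 1.25), ("RTE", 0.50)],
}

def baseline(category):
    """Baseline FTE requirement for one application category (column sum of its table)."""
    return sum(fte for _, fte in MAINTENANCE_FTES[category])
```

`baseline("static")` returns 1.50 and `baseline("eai")` returns 6.00, matching the table baselines; the increasing totals quantify the added maintenance burden of each more complex category.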
Future Trends

Web application evolution is the result of important dynamics: Web applications interact with actions in organizational settings, technology, and the external environment. E-government application evolution is a never-ending process that delivers services and enhances or improves products (the Web applications
themselves). The application classification can be seen as a starting point, but not as an end, to evolution management. STA manages Web application evolution from three perspectives: establishing processes, delivering services, and enhancing products.

Web-based application development becomes more complicated and thus increasingly risky; the state must take steps to mitigate this risk. STA believes that standardizing common application development platforms for the creation of new Web-based applications will have the effect of creating an overall expertise in the chosen application development environment and related technologies. This allows professionals throughout the state to support and advise each other in the course of developing Web-based applications.

With limited funds and staffing and growing responsibilities, agencies must prioritize IT projects and practice proper project management techniques to ensure that projects are completed on time and under budget. According to a 10-year study of major industries conducted by the Standish Group (2001), only 28% of application software development projects are successful, meaning they are completed within budget and on time, and they meet business needs. Ensuring that IT dollars yield business results requires increased competency in project management. The state's critical project review process links IT expenditure requests with budget and program priorities to enhance the success of projects of strategic interest to state government. Having solid requirements also puts the state on firmer footing to assess vendor performance.

STA and other state agencies sign the InterAgency Agreement (IAA) to define, measure, monitor, and deliver services. The IAA documents the characteristics of services that are required by agencies as they are mutually understood and agreed to by the agencies and STA.
The IAA ensures that the proper elements and commitment are in place to provide optimal support of content and applications being developed and managed by the agencies for inclusion on the state portal that
is managed by STA. The IAA uses ServiceCenter (SC) to classify and monitor all maintenance requests for the four Web application categories. The ServiceCenter also provides historical data to fine-tune the maintenance requirements for the four Web application categories.
Conclusion

Much of the existing literature on Web applications attempts to describe and understand Web application development processes. Web application maintenance and evolution attract attention as more Web applications move into production. Insight into the properties and maintenance requirements of Web applications will make the planning, control, and execution of maintenance and evolution process improvements more systematic and effective. The Web application category will indicate the types of activities, methods, and tools required; when and how they should be used; and how they relate to one another.
Future Research Directions

Three directions for future research have emerged from this study. They include (a) a survey of Web application classifications, (b) a longitudinal study of Web application evolution, and (c) the development of an effort estimation model.
Survey of Web Application Classification

In the current study, we investigate the STA Web application classification process. This study has shown STA's classification method, but other organizations may use different classification schemes to categorize their Web applications. An organization's e-government evolution stage is correlated with its Web applications. It is natural to extend the study
by using surveys to investigate multiple organizations. The survey will answer the following questions. Do organizations classify their Web applications? How do organizations classify their Web applications? What are the benefits of their classification schemes? Which evolution stage is the organization in? What is the organization's Web application portfolio, and what is the relationship between the Web application portfolio and the evolution stage?
Longitudinal Study of Web Application Evolution

In this study, we took a snapshot of the Web application portfolio at STA; we did not investigate the evolution of Web applications over time. Since Web applications evolve from simple brochureware to more complicated applications, it is natural to extend the study by taking multiple snapshots at different times and comparing the changes in the Web application portfolio. The future study will collect Web application classification data every 6 months for 2 to 3 years and will track a few Web applications that change from one category to another. The longitudinal study will answer the following questions. How many Web applications evolve over the study period? What is the frequency of Web application evolution? What is the intensity of the evolution, and what factors enable or trigger it?
Development of Effort Estimation Model

The current study shows the resources required for different types of Web applications in terms of FTEs throughout the applications' life cycles. The FTE is only a rough estimate. In the next phase of the study, estimation models will be developed to predict the effort in terms of man-months (or man-hours) for the evolution of Web applications from one category to the next.

References

Belady, L. A., & Lehman, M. M. (1976). A model of large program development. IBM Systems Journal, 15(3), 225-252.

Bennett, K. H., & Rajlich, V. T. (2000). Software maintenance and evolution: A roadmap. Proceedings of the International Conference on the Future of Software Engineering (pp. 73-87).

Digital State Survey Sustained Leadership Award. (1997-2002). Retrieved May 30, 2006, from http://www.centerdigitalgov.com/center/02sustained.php

García-Cabrera, L., Rodríguez-Fórtiz, M. J., & Parets-Llorca, J. (2002). Evolving hypermedia systems: A layered software architecture. Journal of Software Maintenance and Evolution: Research and Practice, 14(5), 389-405.

Hiller, J. S., & Bélanger, F. (2001). Privacy strategies for electronic government. In E-Government 2001. Lanham, MD: Rowman & Littlefield Publishers.

Ho, A. (2002). Reinventing local governments and the e-government initiative. Public Administration Review, 62(4), 434-444.

Holden, S. H., Norris, D. F., & Fletcher, P. D. (2003). Electronic government at the local level: Progress to date and future issues. Public Performance and Management Review, 26(3), 1-20.

Jazayeri, M. (2005). Species evolve, individuals age. International Workshop on Principles of Software Evolution (pp. 3-9).

Layne, K., & Lee, J. (2001). Developing fully functional e-government: A four stage model. Government Information Quarterly, 18(2), 122-136.

Melitski, J. (2003). Capacity and e-government performance: An analysis based on early adopters of Internet technologies in New Jersey. Public Performance and Management Review, 26(4), 376-390.

Moon, M. J. (2002). The evolution of e-government among municipalities: Rhetoric or reality? Public Administration Review, 62(4), 424-433.

Sandoval, R., & Gil-García, J. R. (2005). Assessing e-government evolution in Mexico: A preliminary analysis of the state portals. Information Resources Management Association International Conference Proceedings.

Schelin, S. H. (2003). E-government: An overview. In Public information technology: Policy and management issues. Hershey, PA: Idea Group Publishing.

Standish Group. (2001). Extreme chaos. Retrieved May 30, 2006, from http://www.standishgroup.com/sample_research/PDFpages/extreme_chaos.pdf

Taylor, M., McWilliam, J., Sheehan, J., & Mulhaney, A. (2002). Maintenance issues in the Web site development process. Journal of Software Maintenance and Evolution: Research and Practice, 14(2), 109-122.

United Nations (UN) & ASPA. (2002). Benchmarking e-government: A global perspective. Retrieved May 30, 2006, from http://unpan1.un.org/intradoc/groups/public/documents/UN/UNPAN021547.pdf

West, D. M. (2004). E-government and the transformation of service delivery and citizen attitudes. Public Administration Review, 64(1), 15-27.
Further Reading

Anderson, K. V., & Henriksen, H. Z. (2005). The first leg of e-government research: Domains and application areas 1998-2003. International Journal of Electronic Government Research, 1(4), 26-44.
Bruno, V., Tam, A., & Thom, J. (2005). Characteristics of Web applications that affect usability: A review. Proceedings of the 19th Conference of the Computer-Human Interaction Special Interest Group (CHISIG).

Cardellini, V., Casalicchio, E., Colajanni, M., & Yu, P. S. (2002). The state of the art in locally distributed Web-server systems. ACM Computing Surveys, 34(2), 263-311.

Chen, Y. N., Chen, H. M., Huang, W., & Ching, R. K. H. (2006). E-government strategies in developed and developing countries: An implementation framework and case study. Journal of Global Information Management, 14(1), 23-46.

Costagliola, G., Di Martino, S., Ferrucci, F., Gravino, C., Tortora, G., & Vitiello, G. (2006). Effort estimation modeling techniques: A case study for Web applications. Proceedings of the Sixth International Conference on Web Engineering.

Eirinaki, M., Vazirgiannis, M., & Varlamis, I. (2003). SEWeP: Using site semantics and a taxonomy to enhance the Web personalization process. Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.

Elbaum, S., Chilakamarri, K., Fisher, M., & Rothermel, G. (2006). Web application characterization through directed requests. Proceedings of the International Workshop on Dynamic Systems Analysis.

Gates, S. C., Teiken, W., & Cheng, K. F. (2005). Taxonomies by the numbers: Building high-performance taxonomies. Proceedings of the 14th ACM International Conference on Information and Knowledge Management.

Grant, G., & Chau, D. (2005). Developing a generic framework for e-government. Journal of Global Information Management, 13(1), 1-30.

Heller, R. S., Martin, C. D., Haneef, N., & Gievska-Krliu, S. (2001). Using a theoretical multimedia
taxonomy framework. Journal on Educational Resources in Computing, 1(1), 1-22.

Ho, K. K. W. (2007). The e-government development, IT strategies, and portals of the Hong Kong SAR government. International Journal of Cases on Electronic Commerce, 3(2), 71-89.

Kraemer, K., & King, J. L. (2006). Information technology and administrative reform: Will e-government be different? International Journal of Electronic Government Research, 2(1), 1-20.

Kung, H. J., Tung, H. L., & Case, T. (2007). Managing e-government application evolution: A state government case. International Journal of Cases on Electronic Commerce, 3(2), 36-53.

Lee, J., & Kim, J. (2007). Grounded theory analysis of e-government initiatives: Exploring perceptions of government authorities. Government Information Quarterly, 24(1), 135-147.

Lee, J. K., Kang, H., Lee, H. K., & Lee, H. S. (2002). Evolutionary stages of e-tailors and retailers: Firm value determinants model. Journal of Global Information Management, 10(3), 15-35.

Lee-Kelley, L., & James, T. (2003). E-government and social exclusion: An empirical study. Journal of Electronic Commerce in Organizations, 1(4), 16-32.

Mahler, J., & Regan, P. M. (2006). The evolution of Web governance in the federal government. International Journal of Electronic Government Research, 2(1), 21-35.
Melitski, J., Holzer, M., Kim, S. T., Kim, C. G., & Rho, S. Y. (2005). Digital government worldwide: An e-government assessment of municipal Web sites. International Journal of Electronic Government Research, 1(1), 1-18.

Nahrstedt, K., & Balke, W. (2004). A taxonomy for multimedia service composition. Proceedings of the 12th Annual ACM International Conference on Multimedia.

Norris, D. F., & Lloyd, B. A. (2006). The scholarly literature on e-government: Characterizing a nascent field. International Journal of Electronic Government Research, 2(4), 40-56.

Scholl, H. J. (2005). E-government-induced business process change (BPC): An empirical study of current practices. International Journal of Electronic Government Research, 1(2), 27-49.

Sharma, S. K., & Gupta, J. N. D. (2003). Building blocks of an e-government: A framework. Journal of Electronic Commerce in Organizations, 1(4), 1-15.

Tan, C. W., Pan, S. L., & Lim, E. T. K. (2005). Managing stakeholder interests in e-government implementation: Lessons learned from a Singapore e-government project. Journal of Global Information Management, 13(1), 31-53.

Titah, R., & Barki, H. (2006). E-government adoption and acceptance: A literature review. International Journal of Electronic Government Research, 2(3), 23-57.

Weerakkody, V., Janssen, M., & Hjort-Madsen, K. (2007). Integration and enterprise architecture challenges in e-government: A European perspective. International Journal of Cases on Electronic Commerce, 3(2), 13-35.

Terms and Definitions

Application Evolution: This concerns any change made to the entire set of programs, procedures, and related documentation associated with a computer system that makes up an application software system. Application evolution has two parts: the evolution processes and the evolution of their products.

Brochureware: This refers to Web sites or pages that are produced by taking an organization's printed brochure and translating it directly to the Web without regard for the possibilities of
the new medium. The result will almost always be static.

E-Government: This is the use of information and communication technology in general to provide citizens and organizations with more convenient access to government information and services. It is an efficient way of conducting business transactions with citizens and businesses and within governments themselves.

Enterprise Application Integration (EAI): This is a process that helps to integrate applications inside an organization (e.g., an ordering system with an inventory on hand) or applications of different organizations in a seamless fashion. It is done by EAI vendors with special software tools.
Organization/Government Portal: This is a personalized, single point of access through a Web browser to critical business information located inside and outside of an organization. Portals provide gateways to organization data, information, and knowledge.

Stage of E-Government Development: The stages are a method for quantifying progress. They are representative of the government's level of development based primarily on the content and deliverable services available through official Web sites.

Web Application System: This is computer software using a Web browser as the interface to support organizational functions or processes. The system is the integration of computer hardware, software, databases, networks, procedures, and people to collect, process, store, and distribute information for specific business purposes.
Chapter XLIX
Web Services and Service-Oriented Architectures
Bruce J. Neubauer
University of South Florida, USA
INTRODUCTION

A review of the development of information systems can help in understanding the potential significance of Web services and service-oriented architecture (SOA) in the public sector. SOA involves the convergent design of information systems and organizational workflows at the level of services. The purpose of this chapter is to suggest a strategy for mapping the design of service-oriented architectures onto the complex patterns of governance, including combinations of federalism, regionalism, and the outsourcing of functions from government agencies to nonprofit organizations. This involves the modeling of workflows and the identification of opportunities for the sharing of services among agencies and nonprofits. The structures of government agencies reflect political jurisdictions, legislative committee structures, areas of public policy, and geographical locations. Federalism creates situations in which multiple agencies (often at different levels of government) have similar responsibilities in the same geographic areas. Metropolitan areas
are complex mosaics of local governments and special districts. In addition, nonprofit organizations are also involved in strategic alliances with government agencies to provide services to citizens. The coordination of efforts among multiple organizations has been one of the major functions of public administrators acting through formal or informal networks of relationships within and across organizational boundaries. Web services and SOA can be used to help integrate the often costly and fragmented delivery of government services.
BACKGROUND

Information systems were historically centralized. It is still common for departments within an organization to each have their own computer applications used to support accounting, payroll, or other specific responsibilities. Such applications and systems are sometimes referred to as "stovepipe" applications, suggesting the fact that they stand alone. While such applications may well optimize specific functions, they may not be
Web Services and Service-Oriented Architectures
designed to support business process workflows that cut across departments. As a consequence, common business processes often require manually entering data multiple times and making adjustments for the different ways the same data are stored in multiple databases. Modern systems of applications are sometimes called enterprise systems or ERPs; the acronym ERP stands for enterprise resource planning. Examples of ERP solutions include SAP and Oracle applications. Such systems are designed at the scale of an entire organization. ERPs are modular in nature: An organization can purchase the modules it needs from the same vendor and be assured that they are compatible with one another. The benefit of systems of computer applications designed at the enterprise level is an improved ability of the applications to support entire business processes. Many enterprise systems share one enterprise database, so it should not be necessary to enter data into multiple stovepipe applications. The underlying database should provide top managers with one version of the truth rather than a collection of summaries gleaned from multiple databases within the organization. Electronic data interchange (EDI) is a related technology for creating data communications between organizations. EDI solutions tend to be point-to-point connections between specific organizations for specific purposes. Government agencies can more easily coordinate their efforts and outsource responsibilities when they have modern information systems. ERP and EDI technologies are valuable and widely used. However, ERP solutions tend to be inflexible, and EDI solutions may not support agile relationships among multiple organizations in networks of strategic alliances. Web services are a modern programming technology that facilitates linking disparate legacy computer resources together.
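The contrast between a point-to-point EDI exchange and a self-describing Web-service message can be illustrated with a small sketch; the field widths and element names here are invented for illustration only:

```python
# Sketch contrasting a fixed-format, point-to-point EDI-style record with a
# self-describing XML message of the kind exchanged by Web services.
# The field widths and element names are invented for illustration.
import xml.etree.ElementTree as ET

payment = {"payer": "CITY01", "payee": "STATE7", "amount": "1500.00"}

# EDI style: both partners must agree in advance on exact field positions,
# so each new partner requires a new point-to-point agreement.
edi_record = f"{payment['payer']:<10}{payment['payee']:<10}{payment['amount']:>12}"

# XML style: the message describes its own structure, so any partner that
# can parse XML can interpret it without a prearranged fixed layout.
root = ET.Element("payment")
for field, value in payment.items():
    ET.SubElement(root, field).text = value
xml_message = ET.tostring(root, encoding="unicode")

print(edi_record)   # fixed columns, meaningless without the layout document
print(xml_message)  # self-describing
```

The XML message is more verbose, which is part of the performance cost of Web services discussed later, but it frees partners from maintaining pairwise layout agreements.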
SOA is an application of design principles to both computer resources and organizational units that can facilitate the creation of agile relationships between depart-
ments within large organizations and among organizations working together. Early computer programmers worked in procedural programming languages such as FORTRAN and COBOL. The entire program usually ran on one machine and data were often managed by the software application itself rather than by the use of a separate database management system. Later, when programmers wrote procedural applications to run on networked computers, they sometimes used remote procedure calls to make use of subprocedures of code physically residing on another machine. The application would send a request across the network and wait for the subprocedure to run on the other machine and send data back to the application. This way of thinking evolved into object-oriented programming. A major benefit of this approach is reuse of the work required to create the subprocedure. Another benefit relates to loose coupling, meaning that the task of building a very large and complex application can be assigned to many individuals who can each build his or her own part of it without a detailed knowledge of what other programmers are doing (Kaye, 2003). As long as the interface between the part of the system and the rest of the system is agreed upon, how the part does what it does is not especially important to other programmers. To an object-oriented programmer, a use of a Web service is the invocation (calling) of structured code residing across a network on a Web server. To a business analyst, a Web service is an opportunity to reuse not just code but entire services provided by organizations. Service-oriented architecture involves the orchestration or choreography of multiple services. An orchestration involves one focal application calling one or more services as needed within the same organization. A choreography is a coordination of services between or among organizations. 
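The loose coupling described above can be sketched in a few lines: as long as callers depend only on an agreed-upon interface, how each provider does its work can change, or be reassigned to another provider, without affecting them. The service names below are hypothetical examples, not part of any real system:

```python
# Sketch of loose coupling: the orchestrating application depends only on an
# agreed interface, not on how any particular provider implements it.
# TaxCheckService and its implementations are hypothetical examples.
from abc import ABC, abstractmethod

class TaxCheckService(ABC):
    """The agreed-upon interface: what the service does, not how."""
    @abstractmethod
    def taxes_paid(self, parcel_id: str) -> bool: ...

class CountyTaxService(TaxCheckService):
    def taxes_paid(self, parcel_id: str) -> bool:
        return parcel_id != "PARCEL-13"  # stand-in for a real database lookup

class StateTaxService(TaxCheckService):
    def taxes_paid(self, parcel_id: str) -> bool:
        return True  # a different provider honoring the same contract

def renew_license(service: TaxCheckService, parcel_id: str) -> str:
    # The caller never needs to know which provider it was handed.
    return "renew" if service.taxes_paid(parcel_id) else "refer for review"

print(renew_license(CountyTaxService(), "PARCEL-13"))  # refer for review
print(renew_license(StateTaxService(), "PARCEL-13"))   # renew
```

Each implementation can be built, maintained, or replaced by a different team, which is exactly the benefit Kaye (2003) attributes to loose coupling.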
Having been asked to make the transition from procedural programming to object-oriented programming, computer programmers are now facing the challenge of working at the level of the design of
services. The design of Web-based information systems can be very challenging (Andrew et al., 2006; Terrasse, Becker, & Savonnet, 2003). Modeling such systems in terms of services can help address that complexity while helping to bridge the traditional divide between organization design and the design of business processes. Some books on Web services and SOA are intended primarily for managers and executives (Barry, 2003; Bloomfield, Coombs, Knights, & Littler, 2000; Linthicum, 2004; Manes, 2003). Other sources not specifically written for managers are certainly approachable by people without technical backgrounds (Erl, 2004, 2005). Ties between workflow design and service-oriented architecture are addressed by Marks and Bell (2006) and Allen (2006). O’Toole (1997) has written that governments often seek to execute their efforts via structures of interagency collaboration, that the role of not-for-profit organizations is large and growing, and that the frequency and variety of links with for-profit firms is impressive. There are many situations that present possibilities for managers to add or remove actors from the array of their activities involving networks with others (Brudney, O’Toole, & Rainey, 2000). With human oversight, Web services can be used to facilitate the assembly and reassembly of agile strategic alliances. Strategic alliances are held together by networks of human trust rather than by the compatibility of computer network protocols.
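The choreography of services across organizational boundaries, introduced above, can be sketched as one office's exposed service consuming another office's exposed service to do its work. The offices, services, and figures below are hypothetical:

```python
# Sketch of a choreography across organizational boundaries: one office's
# exposed service consumes another office's exposed service to do its work.
# The offices, service names, and figures are hypothetical.

class AssessorOffice:
    """Exposes a property-valuation service."""
    def __init__(self):
        self._valuations = {"PARCEL-1": 250_000}

    def valuation(self, parcel_id: str) -> int:  # exposed service
        return self._valuations[parcel_id]

class TreasurerOffice:
    """Exposes a tax-balance service; consumes the assessor's service."""
    def __init__(self, assessor: AssessorOffice):
        self._assessor = assessor
        self._payments = {"PARCEL-1": 2_000}

    def balance_due(self, parcel_id: str) -> int:  # exposed service
        tax = self._assessor.valuation(parcel_id) // 100  # consume partner's service
        return tax - self._payments[parcel_id]

assessor = AssessorOffice()
treasurer = TreasurerOffice(assessor)
print(treasurer.balance_due("PARCEL-1"))  # 2,500 assessed tax - 2,000 paid = 500
```

In a real choreography the call between offices would cross the network as a Web-service request governed by a service-level agreement, rather than an in-process method call.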
IMPLEMENTING GOVERNMENT WEB SERVICES AND SERVICE-ORIENTED ARCHITECTURES

Some Web services are relatively small units of functionality that can be accessed by software applications of various kinds. For example, currency exchange rates change frequently. It is inefficient to expect employees to find current rates of exchange
each time the need for this information arises. This kind of activity can be automated and then either consumed or made available to others via a Web service (see Figure 1, where Activity B can be to apply a currency conversion to an amount of money). By using Web services, it is entirely feasible for one organization (almost anywhere in the world) to continually update a database with currency exchange rates and expose that information as a service to the computer applications of other organizations using the Internet. The organization maintaining the data can expose a Web service by providing the necessary WSDL (Web Services Description Language) file at a Web address, and by publishing the availability of the service using UDDI (Universal Description, Discovery and Integration). In computer jargon, such a relationship between two entities is called client-server, meaning that the client entity always asks for something, and the server entity always acts in the role of a provider (please see “Key Terms” below for brief explanations of WSDL, UDDI, and other technical acronyms). What makes a service a Web service is its implementation across the Internet using XML (Extensible Markup Language) and SOAP (Simple Object Access Protocol) standards. Humans may or may not be directly involved in the maintenance of the database containing exchange rates in the example above. When the request from a computer application comes in, the service queries the database and returns the needed information to the application across the Internet. The arrival of the request is not synchronized with the work required to keep the database updated, and the work required does not depend on the number of requests received. Although it may be costly to continually update the database, there is little or no cost associated with making the information available to additional organizations.
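As a rough sketch of what crosses the wire in the currency example, the request is a SOAP envelope carrying XML. The namespace and element names below are invented, and a real client would POST the envelope over HTTP to the address given in the service's WSDL file rather than parse a canned reply:

```python
# Sketch of the XML messages behind a hypothetical currency-rate Web service.
# The namespace and element names are invented; a real client would send the
# request over HTTP to the address published in the service's WSDL file.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.org/currency"  # hypothetical service namespace

# Build the SOAP request envelope.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
request = ET.SubElement(body, f"{{{SVC_NS}}}GetRate")
ET.SubElement(request, f"{{{SVC_NS}}}From").text = "USD"
ET.SubElement(request, f"{{{SVC_NS}}}To").text = "EUR"
request_xml = ET.tostring(envelope, encoding="unicode")

# A canned response of the kind the service might send back.
response_xml = f"""
<soap:Envelope xmlns:soap="{SOAP_NS}">
  <soap:Body>
    <GetRateResponse xmlns="{SVC_NS}"><Rate>0.79</Rate></GetRateResponse>
  </soap:Body>
</soap:Envelope>"""

rate = float(ET.fromstring(response_xml).find(f".//{{{SVC_NS}}}Rate").text)
print(rate)  # 0.79
```

Because both sides speak standard XML over standard protocols, neither party needs to know what hardware, operating system, or programming language the other uses.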
The major constraints on scaling up to serve large numbers of requests are bandwidth to and from the server providing the Web service, and the speed of the processor(s) on the server. Of course, if the Web
Figure 1. Simple business process invoking a Web service (Activity A, then Activity B, which calls the service, then Activity C)
server or the database fails or is taken offline for maintenance, the business processes of many organizations may be affected. One idea behind UDDI is that the failure of a Web service could trigger an automated substitution of another Web service to assure the continuity of dependent processes. As a slightly more complex example, consider the business process of a city government responding to a request to renew a business license. Several activities may be required to make the decision regarding whether or not to renew the license. It may be necessary to verify the physical integrity of the building in which the business is housed. It may also be necessary to verify that property taxes have been paid and that there is no evidence that the business is being used for criminal purposes. In the past, each of these three activities may have been performed manually and in sequential order. As a result, it might have taken a month or more to issue each renewal. There is often no reason why multiple related activities cannot be initiated in parallel. For example, checking for taxes and crime reports can probably be automated and performed almost instantly. Within the organization, automated activities can be
implemented using Web services or other means. If the city government has an intranet, which uses the same technologies as the Internet, it makes sense to automate selected activities using Web services. As previously indicated, the design of Web services exposed only within one organization is called orchestration. The orchestration of Web services is a step toward the choreography of Web services involved in the business processes of multiple organizations. Figure 2 is a model of a business process that includes the parallel performance of activities and the outsourcing of two activities via Web services. Web services can be simple or very complex. An entire business process involving many activities and business rules can be nested inside a Web service and made available to others as a subprocess in a larger context. If real-time human intervention is required for the process triggered by the call to the Web service to be completed, the calling process may have to wait until the secondary process is completed before going forward. Leymann, Roller, and Schmidt (2002) make a distinction between hierarchical patterns and peer-to-peer structures, noting that in many cases a
Figure 2. More complex business process invoking multiple Web services (Activities A through E performed partly in parallel, with two activities calling Web services)
mixture of hierarchical and peer-to-peer structures is used to realize complex multipartner business processes. In a hierarchical structure, the ultimate process transcends any of the organizations supporting its activities. The supporting organizations are each likely to have internal workflows independent of one another and choreographed at the level of the transcendent process. This arrangement can allow each organization to keep its secrets regarding how it performs its work internally and to protect contents of databases from unintended uses by others. In a peer-to-peer relationship between organizations (or between departments in one organization), each partner is both providing services and consuming services provided by others (see Figure 3). Each partner
Figure 3. Peer-to-peer relationship between organizations or departments exposing and consuming each other’s Web services
outsources what it is less well equipped to do and provides to its partners services it performs well. In a hierarchical architecture, each Web service supports a transcendent process, as represented in Figure 4. A hierarchical architecture raises the question of who or what is responsible for the transcendent process. In the private sector, a transcendent process may be a supply chain that includes multiple companies such as a manufacturer, warehouse, shipper, and retailer. No one company owns the entire supply chain and yet all participate in it. Examples of transcendent processes in the public sector are not as obvious. Federalism tends to ensure that large processes are managed by state or national governments. Regional governance (as in a metropolitan area with many overlapping jurisdictions) provides opportunities for peer-to-peer service architectures and for hybrid architectures. State agencies can manage large processes, and city and county agencies (and special districts) may provide Web services to each other and in support of the larger processes managed by state governments. The same pattern exists at the next larger scale, meaning that there may be opportunities for state governments to share services as peers while also supporting larger processes managed at the na-
tional level. The notion of transcendent processes may suggest an interpretation of the virtual state (Fountain, 2001). Relationships among organizations via Web services are negotiated by humans and should include service-level agreements that spell out the responsibilities of partners to one another. Agility does not imply that working relationships among agencies and nonprofits should be brief encounters. Technical agility helps assure that technology will not impede substitution of strategic partners or make changes in alliances difficult and costly. In a time of crisis, such as a natural disaster or terrorist event, the ability to quickly reassemble sets of strategic relationships among agencies may be very important (Harrald, 2006). In such circumstances, the possible automated substitution of Web services supported by UDDI may become valuable. The following is a list of suggestions and guidelines intended to help generalist managers steer their organizations toward acceptance and adoption of information technologies that include Web services and SOA. •
Encourage employees to think about the organization in terms of sequences of activities that cut across functional areas (departments).
Figure 4. Hierarchical relationship between a transcendent process and Web services provided by organizations
• Identify major business processes that the organization performs frequently, either to create value for citizens or to meet the organization’s own internal needs.
• Encourage IT professionals and other employees to begin thinking of activities, and the computer software that supports them, as services.
• Encourage programmers and other IT employees to better understand what the organization does and the business processes it performs, including the business rules that affect the flow of work through activities within those processes. Help them realize that this is similar to the way they manage the flow of execution within computer applications.
• Try to create an organizational culture in which nontechnical employees do not blame IT professionals for their technical problems and in which IT professionals recognize and appreciate the knowledge and skills of end users.
• Create incentive systems that encourage employees to optimize processes rather than maximize the interests of their own departments.
• Encourage programmers and other IT professionals to continue their education by attending training in object-oriented analysis and design, modern programming languages, and emerging Web-based technologies.
• Accept the fact that there are limits to the ability of programmers to jump paradigms. Most large organizations need some programmers with legacy skills and others with more cutting-edge skills. Legacy specialists can help “wrap” older applications for inclusion in modern systems; programmers who are experts in object-oriented programming are likely to be unwilling or unable to maintain legacy applications written in COBOL, for example.
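The "wrapping" of legacy applications mentioned above can be as simple as placing a modern, service-style facade in front of an old routine. In the sketch below, the legacy function, its record layout, and the service name are all hypothetical stand-ins for, say, a COBOL program reached through a gateway:

```python
# Sketch of "wrapping" a legacy routine behind a modern, service-style facade.
# legacy_lookup stands in for an old program (e.g., COBOL reached through a
# gateway) that expects a rigid, fixed-column record format.

def legacy_lookup(record: str) -> str:
    """Legacy routine: expects columns 1-9 = case ID, columns 10-29 = name."""
    case_id, name = record[:9], record[9:29].rstrip()
    return f"OK|{case_id}|{name}"

def eligibility_service(case_id: str, name: str) -> dict:
    """Modern facade: callers pass named fields and get structured data back."""
    record = f"{case_id:<9}{name:<20}"  # marshal into the legacy fixed format
    status, id_out, name_out = legacy_lookup(record).split("|")
    return {"status": status, "case_id": id_out, "name": name_out}

print(eligibility_service("123456789", "JANE CITIZEN"))
```

The facade hides the fixed-column marshaling from modern callers, so the old routine can later be replaced without changing any consumer of the service.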
The most difficult aspects of implementing government Web services and service-oriented architectures are likely to be political and administrative issues. There are close relationships between the possession of information, political power, and administrative discretion. Employees are likely to resist change even if change does not threaten their employment. For budgetary reasons, political and administrative resistance to the introduction of Web services is likely to become greater at the level of the extension of workflows across organizational boundaries. Agencies may become smaller and more specialized as redundant activities are identified in multiple agencies. Government agencies have legal and ethical responsibilities for the protection of data about citizens. Administrators may rightly feel that their ability to assure proper handling of confidential data is compromised when data become accessible to other organizations via Web services. Other major issues involve relationships among agencies regarding federalism, relationships among agencies regarding regional governance, and relationships between sectors involving privatization and outsourcing. Almost any major e-government initiative today is likely to be implemented using Web services. Citizens and employees as users are not likely to see or realize that the resources of multiple departments and/or organizations are involved in making the system functional. The following are brief descriptions of several government projects in Europe and the United States, involving BEA and its partners, available on the Web in the fall of 2006. What is common to each of the following examples is the need to tap into multiple disparate IT systems in order to support either a business process or a complex visualization needed for monitoring or decision-making purposes. 
The National Institute for Agricultural Research (INRA) is a French public research body responsible for producing and distributing innovations relevant to agriculture, food, and the environ-
ment. INRA needed to transform its purchasing management system, rationalize its global ordering process, and analyze its expenses. INRA deployed an intranet purchasing program. As a result, INRA is able to more effectively manage its €100 million of annual purchases. Process automation has streamlined and enhanced many of INRA’s day-to-day processes. INRA has also benefited from a significant increase in productivity, largely due to the automation of complex tasks. Its managers now have a unified view of expenditures and purchasing decisions. Oslo is the capital of Norway and has a parliamentary model of government. Each city district has its own district administration, which administers multiple social and health services. In addition, there are approximately 40 departments and agencies with specific areas of responsibility. In total, up to 55 operations offer more than 250 different services to the citizens of Oslo. The local government’s vision and strategic plan is to provide all relevant services online. More than 200 services were made available interactively in less than 1 year. Citizens can now apply online for a wide range of public-sector services, including building-construction permits, child care, and the payment of parking tickets. A United States Department of Agriculture (USDA) mainframe-based system known as STARS (Store Tracking and Redemption System) was the primary tool used by the agency to manage the Food Stamp Program. Technicians were unable to automate many key processes and the legacy system had become difficult to maintain. Systems integrator Ventera Corporation designed a new system named STARS II involving Web services and SOA. As indicated by the USDA’s Privacy Impact Assessment Form available on the Web at the USDA site, the new system draws upon data from the Treasury Department and the Social Security Administration, and also from state agencies and third-party providers.
The City of Chicago Public Housing Commission needed to link multiple legacy systems and
provide a consistent view into project information for city executives, project managers, contractors, architects, and other personnel. The goal was to improve planning, expedite problem resolution, and reduce the expenditures triggered by contract provisions regarding missed project deadlines. A company named Enterpulse designed and implemented a project management portal built on BEA’s portal platform. The resulting application enables improved project tracking and substantial cost savings by visually displaying data that enable administrators to prioritize project work efforts and avoid contract penalties.
FUTURE TRENDS

Interest in the use of Web services in the public sector will increase as they become more widely used in corporations. The focus of governmental budgetary deliberations may shift toward business processes in continuing efforts to constrain costs and increase performance. Because Web services do not expose actual databases and other resources on legacy systems, it is possible to share services while also addressing the concerns of agency administrators regarding the responsibilities of agencies for information security and the privacy of citizens and others. Over time, service orientation may gradually change our primary way of perceiving organizations. Information systems then become the mediator of what employees do, and Web services become a means by which activities and entire processes can be shared among organizations. Because different kinds of organizations have similar processes, distinctions between organizational sectors (government, nonprofit, and for profit) may become less apparent. Organizations may tend to become smaller and more specialized.
CONCLUSION

The essential questions in designing processes based upon services are the following: How should activities be defined? When should an activity be made available to others, and when should an activity be outsourced to others? Organizations (and departments within organizations) that are closest to the data should be responsible for capturing and managing those data. Organizations (and departments within organizations) whose employees have specialized expertise should make that expertise available to others. These guidelines still leave a number of high-level design decisions open for discussion. The essential assertion of this chapter is that hierarchical SOA patterns can be used to support relationships among agencies at different levels of government (federalism) and that peer-to-peer SOA patterns can be used to support relationships based upon regional governance and the outsourcing of functions from agencies to nonprofit organizations. In terms of hierarchical relationships, parent entities (i.e., states vis-à-vis local governments) may be more likely to share business processes with child entities, and child entities may be more likely to share access to data with parent entities using Web services. In other words, the states might share business processes with their political subdivisions as Web services. The federal government might share business services with state governments as Web services. Local governments might make data available to both state agencies and federal agencies using Web services. Also, the states might make data available to the federal government using the same technology. The sharing of Web services among local governments and special districts could become very complex given the variety of relationships involving overlapping jurisdictions and different policies. This chapter has proposed the application of modeling and design principles to the challenge of using service orientation to help government agen-
cies and nonprofit organizations become more cost effective. Reports of implementation experiences with actual details of design patterns, including their administrative, political, and technological implications, are needed. In the design sciences, one sign of maturity within a domain is the emergence of commonly observed patterns that become the building blocks for complex systems. SOA is a young and promising design science with the potential to significantly reshape the delivery of real services by government agencies and nonprofit organizations.
FUTURE RESEARCH DIRECTIONS

Opportunities for future research regarding SOA can be categorized into technical questions, political questions, and administrative questions. Regarding technical questions, because they depend upon Internet protocols, calls to Web services are relatively slow when compared to dedicated connectivity such as EDI. Attempts to address other technical challenges tend to lead to solutions that add additional overhead to calls to Web services. Application performance may become an issue because the communications are based on SOAP, XML, and other Internet standards. Assuring that calls to Web services are coarse grained (“chunky”) helps minimize the number of calls across networks. Nevertheless, Web applications may not scale well (i.e., serve increasing numbers of simultaneous users) because Web protocols and standards tend to be slow. Security, of course, is always important and a direction for future research. The encryption required to secure XML-based communications adds to the technical overhead that degrades the performance of Web services. Web services are usually stateless, meaning that the computer providing the service to other applications does not remember previous communications that may be part of the same transaction. That being the case, additional information regarding past events must often be included in
subsequent calls to the same Web service. The management of transactions involving multiple Web services is especially problematic. Two-phase and three-phase commit protocols that can be implemented when one organization owns or controls all parts of a distributed application cannot easily be implemented using Web services that are usually stateless. Web services are not likely to be widely reused unless they can easily be found. Automated discovery of available Web services may not satisfy the concerns of managers regarding security, privacy, and service-level agreements. The technical vision of distributed applications that can swap out Web services on the fly (perhaps to achieve load balancing) will not be fully realized until managers of organizations can feel confident that they can safely delegate to software agents a variety of human concerns regarding who their partners are. Although these are essentially political and/or administrative concerns, any actual implementation of solutions will raise technical concerns that may well call for technical research. Regarding political implications and the need for future research, Web services have the potential to substantially change the ways that organizations perform their work. The long-term implications involve the very design of organizations, including government agencies and nonprofits. While Web services are usually cast at the grain of activities rather than at the grain of the tasks of entire departments or the missions of entire agencies, it is possible that Web services could lead to major changes in the ways agencies and nonprofits perform their work. The budgetary processes of governments often involve incremental adjustments from the previous fiscal year without regard for the possibility that technology may facilitate major changes in workflows that might dramatically reduce costs.
Even administrators who have positive attitudes toward the adoption of new information technologies may have political reasons to resist the adoption of distributed information systems that may make possible substantial reductions in
necessary funding. SOA may also tend to compromise the ability of agency administrators to control information available to others. Research to better understand how agency and nonprofit administrators perceive Web services and SOA (as both threats and opportunities) might help assure that their organizations will not be unnecessarily penalized by the adoption of these technologies in budgetary and other political processes. It follows that a future research direction regarding administration is to better understand how Web services and SOA tend to affect existing business processes and relationships among organizations. There are opportunities to model and simulate knowledge-based workflows involving combinations of computer networks and human operators. Software like Arena and TechSmith’s Morae could be used to study not only the usability of computer-facilitated workflows, but also the identification of bottlenecks in selected workflow designs. As activities are broken out of the context of specific workflows and reused in multiple workflows, humans may lose their sense of understanding the meaning of their efforts in a larger context. This may tend to make work less satisfying. It may also lead to errors in judgment because the programmers who design distributed applications (including services provided by humans using technology) may not anticipate humans’ need to exercise judgment within meaningful contexts. A Web service involving human participation that one moment is used to select a good location for a new gas station and 10 minutes later is used to target a warhead may fail to provide human participants with the contexts required for the true exercise of judgment. Web services and SOA might be the next big step toward the alienation of employees (life in the cubicles) and, at worst, a cruel exploitation of human minds harnessed to exercise judgments in the absence of knowing the consequences of related decisions.
In light of globalization, Web services and SOA could be for knowledge work what outsourcing of production
has been to industrial employment. The prospect that the mind might be reduced to a commodity is not attractive. Research regarding how employees regard their work experiences and their need for the context that provides meaning to their employment could inform the humanistic implementation of systems involving SOA. Research into the administrative implications of SOA can help assure that organizations and societies reap the benefits of this new technology while avoiding its use in ways that alienate employees and make inappropriate delegations of discretion to software systems.
REFERENCES

Allen, P. (2006). Service orientation: Winning strategies and best practices. Cambridge: Cambridge University Press.

Andrew, P., Conard, J., Woodgate, S., Flanders, J., Hatoun, G., Hilerio, I., et al. (2006). Presenting Windows Workflow Foundation, beta edition. Indianapolis, IN: SAMS.

Barry, D. K. (2003). Web services and service-oriented architectures. San Francisco: Morgan Kaufmann Publishers.

Bloomfield, B. P., Coombs, R., Knights, D., & Littler, D. (Eds.). (2000). Information technology and organizations: Strategies, networks, and integration. New York: Oxford University Press.

Brudney, J. L., O’Toole, L. J., Jr., & Rainey, H. G. (2000). Advancing public management: New developments in theory, methods, and practice. Washington, DC: Georgetown University Press.

Erl, T. (2004). Service-oriented architecture: A field guide to integrating XML and Web services. Upper Saddle River, NJ: Prentice Hall.

Erl, T. (2005). Service-oriented architecture (SOA): Concepts, technology, and design. Upper Saddle River, NJ: Prentice Hall.

Fountain, J. E. (2001). Building the virtual state: Information technology and institutional change. Washington, DC: Brookings Institution Press.

Harrald, J. R. (2006). Agility and discipline: Critical success factors for disaster response. The Annals of the American Academy of Political and Social Science, 614(1), 256-272.

Kaye, D. (2003). Loosely coupled: The missing pieces of Web services. Marin County, CA: RDS Press.

Leymann, F., Roller, D., & Schmidt, M.-T. (2002). Web services and business process management. IBM Systems Journal, 41(2), 198-211.

Linthicum, D. S. (2004). Next generation application integration: From simple information to Web services. Boston: Addison-Wesley.

Manes, A. T. (2003). Web services: A manager’s guide. Boston: Addison-Wesley.

Marks, E. A., & Bell, M. (2006). Service-oriented architecture (SOA): A planning and implementation guide for business and technology. Hoboken, NJ: John Wiley & Sons.

O’Toole, L. J., Jr. (1997). Treating networks seriously: Practical and research-based agendas in public administration. Public Administration Review, 57(1), 45-52.

Terrasse, M.-N., Becker, G., & Savonnet, M. (2003). Metamodeling architectures and interoperability of Web-enabled information systems. In A. Dahanayake & W. Gerhardt (Eds.), Web-enabled systems integration: Practices and challenges (pp. 1-18). Hershey, PA: Idea Group Publishing.
FURTHER READING

Agrawal, R., Bayardo, R., Jr., Gruhl, D., & Papadimitriou, S. (2002). Vinci: A service-oriented architecture for rapid development of Web applications. Computer Networks, 39(5), 523-539.
Bieberstein, N., Bose, S., Fiammante, M., Jones, K., & Shah, R. (2006). Service-oriented architecture compass: Business value, planning and enterprise roadmap. Upper Saddle River, NJ: Pearson.

Buschmann, F., Meunier, R., Rohnert, H., Sommerlad, P., & Stal, M. (1996). Pattern-oriented software architecture: A system of patterns. New York: John Wiley & Sons.

Cox, D. E., & Kreger, H. (2005). Management of the service-oriented-architecture life cycle. IBM Systems Journal, 44(4), 709-726.

Datz, T. (2004, January 15). What you need to know about service-oriented architecture. CIO Magazine. Retrieved February 23, 2007, from http://www.cio.com/archive/011504/soa.html

Debevoise, T. (2005). Business process management with a business rules approach: Implementing the service oriented architecture. New York: Business Knowledge Architects.

Ferguson, D. F., & Stockton, M. L. (2005). Service-oriented architecture: Programming model and product architecture. IBM Systems Journal, 44(4), 753-780.

Foster, I. (2005). Service-oriented science. Science, 308(5723), 814-817.

He, H. (2003, September 30). What is service-oriented architecture? O'Reilly XML.com. Retrieved February 23, 2007, from http://webservices.xml.com/pub/a/ws/2003/09/30/soa.html

IBM Business Consulting Services. (2004). IBM service-oriented architecture and modeling. Retrieved February 23, 2007, from http://www-935.ibm.com/services/us/gbs/bus/pdf/g510-5060ibm-service-oriented-modeling-arch.pdf

IBM Redbooks. (2004). Patterns: Service-oriented architectures and Web services. White Plains, NY: IBM Corporation.

Krafzig, D., Banke, K., & Slama, D. (2004). Enterprise SOA: Service-oriented architecture best practices. Upper Saddle River, NJ: Prentice Hall.

Marks, E. A., & Bell, M. (2006). Service-oriented architecture (SOA): A planning and implementation guide for business and technology. Hoboken, NJ: Wiley.

Marks, E. A., & Werrell, M. J. (2003). Executive's guide to Web services (SOA, service-oriented architecture). Hoboken, NJ: Wiley.

McGovern, J., Sims, O., & Jain, A. (2006). Enterprise service oriented architectures: Concepts, challenges, recommendations. New York: Springer.

McKendrick, J. (2006, January 3). Ten examples of SOA at work, right now. ZDNet. Retrieved February 23, 2007, from http://blogs.zdnet.com/service-oriented/?p=508

Nagaratnam, N., Nadalin, A., Hondo, M., McIntosh, M., & Austel, P. (2005). Business-driven application security: From modeling to managing secure applications. IBM Systems Journal, 44(4), 847-868.

Newcomer, E., & Lomow, G. (2004). Understanding SOA with Web services. Reading, MA: Addison-Wesley Professional.

Papazoglou, M. P., & Georgakopoulos, D. (2003). Service-oriented computing. Communications of the ACM, 46(10), 25-28.

Pulier, E., & Taylor, H. (2005). Understanding enterprise SOA. Greenwich, CT: Manning Publications.

Schmidt, M. T., Hutchison, B., Lambros, P., & Phippen, R. (2005). The enterprise service bus: Making service-oriented architecture real. IBM Systems Journal, 44(4), 781-798.

Walend, D. (2006, April 4). Understanding service oriented architecture. Retrieved February
23, 2007, from http://today.java.net/pub/a/today/2006/04/04/understanding-service-orientedarchitecture.html

Woods, D., & Mattern, T. (2006). Enterprise SOA: Designing IT for business innovation. Cambridge, MA: O'Reilly Media.

Zimmermann, O., Krogdahl, P., & Gee, C. (2004). Elements of service-oriented analysis and design: An interdisciplinary modeling approach for SOA projects. Retrieved February 23, 2007, from http://www-128.ibm.com/developerworks/webservices/library/ws-soad1/
TERMS AND DEFINITIONS

Business Process: A business process is a sequence of purposeful activities frequently performed within an organization or by multiple organizations in a coordinated way. It creates value for citizens or customers, or fulfills an internal need of an organization. Workflow designs are models of business processes.

Electronic Data Interchange (EDI): EDI is a commercial way to create and use an electronic connection between two computers or networks that are geographically distant from one another and often owned by different organizations. These connections can be secure, but may be relatively costly and inflexible compared to solutions based upon Web services.

Extensible Markup Language (XML): XML is a way to mark up text-based content into a structure that can be interpreted by a computer application that has the schema necessary for interpretation.

Object-Oriented Programming: Most modern programming languages are object oriented, meaning that conceptually the code consists of units (objects) that combine a data structure with behavior. Some kinds of objects correspond to things that exist in the real world, such as citizens. Web services extend this modular way of structuring code to programs that are distributed across computer networks.

Service-Oriented Architecture (SOA): An SOA is a design by which multiple services are called in sequence or in parallel so as to implement the activities that compose a business process.

Simple Object Access Protocol (SOAP): SOAP is an XML protocol used to provide a container in code for communications with Web services. The expression "SOAP envelope" is often used to describe the function of this technology.

Strategic Alliance: A strategic alliance is a working association between two or more organizations, such as between a government agency and a nonprofit organization.

Universal Description, Discovery and Integration (UDDI): A UDDI registry is a Web-based directory in which organizations publish the Web services they make available for use by other organizations' software applications. UDDI is often compared to a telephone book or yellow pages.

Web Service: A Web service is a service made available within an organization or between organizations, at the level of computers connected by an intranet or across the Internet, using specialized standards including WSDL, SOAP, and UDDI.

Web Services Description Language (WSDL): A WSDL file is a file on a Web server, associated with a Web service, that describes how a software application can talk to the Web service and what services the Web service is prepared to provide. WSDL files are intended to be read by computers; because they are text files, they can be opened with Microsoft Notepad or any other ASCII editor.
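The "SOAP envelope" described above can be illustrated with a short sketch. The following Python fragment (standard library only) builds a minimal SOAP 1.1 envelope around a hypothetical operation; the operation and parameter names are invented for illustration and do not refer to any real government service.

```python
import xml.etree.ElementTree as ET

# Official SOAP 1.1 envelope namespace.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"


def build_soap_envelope(operation: str, params: dict) -> str:
    """Build a minimal SOAP 1.1 envelope wrapping one operation call."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, operation)  # the operation element carried in the Body
    for name, value in params.items():
        child = ET.SubElement(op, name)
        child.text = str(value)
    return ET.tostring(envelope, encoding="unicode")


# Hypothetical operation and parameter, for illustration only.
xml_text = build_soap_envelope("GetCitizenRecord", {"citizenId": "12345"})
```

In practice, such an envelope is posted over HTTP to the endpoint described in the service's WSDL file; here it simply shows the Envelope/Body container structure that the definitions above describe.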
Chapter L
The Strategic Determinants of Shared Services Anton Joha Delft University of Technology, The Netherlands Marijn Janssen Delft University of Technology, The Netherlands
INTRODUCTION

Stimulated by recent advances in information and communication technology, and in their continuous pursuit of ways to reduce costs while at the same time improving customer orientation, public agencies have started to share services. By unbundling services of multiple public agencies, standardizing these services, and concentrating them in a separate organization, the basic premise of shared services is that services provided by one local department or agency can be provided to others with relatively little effort, potentially resulting in both cost savings and service quality improvements (Bergeron, 2003). Public agencies share their role as service provider by joint development, operation, control, and governance of services. As such, the sharing of services can be viewed as a particular type of sourcing arrangement. Services remain within
public administration, and public agencies control and govern the shared-service arrangement. In many countries, the promising benefits of sharing services have resulted in the establishment of a new type of electronic intermediary (e-intermediary) named shared-service centers (SSCs). E-intermediaries are autonomous organizations that support other organizations in improving the coordination of their activities. The basic premise of SSCs is that services operated by the intermediary organization can be used by the other organizations with relatively little effort. The SSC provides services to many users. A number of U.S., Canadian, and Australian states, as well as Ireland and the Netherlands, have adopted shared services to support and hasten e-government developments (e.g., Accenture, 2005). The choice to share services is a major decision with a long-term, strategic impact, one that often competes with outsourcing arrangements (Janssen & Joha, 2006b).
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Just like in outsourcing arrangements (Baldwin, Irani, & Love, 2001; Fowler & Jeffs, 1998), the intended benefits are often not met and there are many strategic determinants affecting decision making concerning the use and implementation of shared services. The goal of the research presented in this chapter is to analyze the strategic determinants influencing the decision making for using and implementing shared services. The structure of the chapter is as follows. In the following section we discuss the historical and theoretical background of shared services. In the section thereafter, we provide an overview of the strategic determinants influencing the shared-services decision. Next, future trends are presented, and finally, conclusions are drawn.
BACKGROUND

History shows various cycles of the centralization and decentralization of computing power, functionality, and organization structure. The mainframe era resulted in the centralization of large systems in so-called data centers, whereas the introduction of personal computers resulted in a major decentralization of computing power, functionality, and control. With the advent of the Internet, it has become possible to combine aspects of centralized and decentralized models. These kinds of mixed models are often aimed at capturing the best elements of both. An SSC is an example of such a mixed model. Selected government services supporting the business processes of local, decentralized agencies are unbundled and concentrated into a shared-service center, whereas service provisioning remains the responsibility of the local agencies. SSCs are a particular type of sourcing arrangement, where resources and services are retained in-house. An SSC differs from outsourcing arrangements as the SSC is a complete internal
arrangement that is founded and governed by the initiators, who are often also the users; outsourcing is externally oriented as it is about the relationship with external parties. In an SSC arrangement, there are relationships between many users and one provider, whereas outsourcing generally addresses the relationship between one user and one or more providers. Janssen and Joha (2006b) pointed out that these specific differences also have consequences for the decision-making processes and risks involved. There are many theories underpinning sourcing and its decision-making process. Lee, Huynh, Kwok, and Pi (2003) and Jayatilaka, Schwarz, and Hirschheim (2003) provide an overview of sourcing theories and the strategic determinants influencing these decisions. We adopt their process-driven decision approach and use four major decision categories for implementing shared services: (a) make-or-buy decision, (b) scope and type of shared-service arrangement, (c) cost benefits and risks assessment, and (d) implementation choice and change management strategy. These categories are shown in the left column of Table 1. In general, the process starts when politicians and/or public managers decide whether to develop services in-house, share them with others, or buy them on the market. Next, the scope and type of the shared-service arrangement should be decided on, which will result in the identification of various options available to share services. Thereafter, the cost benefits and risks of the identified options should be assessed and the decision to implement one arrangement should be taken. Finally, the shared-service arrangement should be implemented using a change management strategy. In the following paragraphs, we discuss these four categories in detail.
The make-or-buy decision for sourcing is largely determined by resource-based theory and core competency theory, which explain that organizations should retain core capabilities in-house and that noncore capabilities do not have to be owned.
Table 1. Decision categories for shared services and their driving theories and motto

1. Make-or-buy decision
   Driving theories: Core competencies theory; resource-based theory
   Driving motto: Performing activities either in-house or by external suppliers

2. Scope and type of shared-service arrangement
   Driving theories: Coordination theory; resource dependency theory; transaction cost theory
   Driving motto: Determining the potential scope and options for sharing services, including the objectives that should be met

3. Cost benefits and risks assessment
   Driving theories: Power-political theory; principal-agent theory; transaction cost theory
   Driving motto: Assessing the financial feasibility of the shared-service options, also taking into account the risks involved

4. Implementation choice and change management strategy
   Driving theories: Power-political theory; organizational theories
   Driving motto: Defining the optimal way to implement the shared-services concept within the organization
Choices about the scope of services and the type of shared-service arrangement are driven by coordination, transaction costs, and resource dependency theory. Coordination theory is about the management of interdependencies between organizational business processes (Malone & Crowston, 1994). The basic view in resource dependency theory is that organizations are dependent on external resources to function (Pfeffer & Salancik, 2003). The problem that resource dependency theory emphasizes is that the environment changes and resources become more or less scarce, resulting in power differences between organizations and providing an explanation of how independent organizations come to depend on and dominate each other. Cost benefits and risks assessment of the selected sourcing options are often based on the power-political, principal-agent, and transaction cost theories. The basic idea of outsourcing is based on the transaction cost theory (Coase, 1937). Transaction costs result from the transfer of property rights between parties and exist because of friction in economic systems. An organization will tend to expand until the cost of organizing an extra transaction internally becomes equal to the costs of carrying out the same transaction on the open market. The use of a communication network and integration technology will decrease
the transaction costs, enabling organizations to source functions and to focus on their core competencies. Principal-agent theory deals with the relationship between the principal and agent based on the division of labor, information asymmetry and environment, and partner behavior (Jensen & Meckling, 1976). Both the transaction cost and principal-agent theories are based on rationality, an efficiency criterion used for explaining outsourcing structures. Political organizational theories are used for explaining organizational arrangements and include social, coordination, risk, and strategic management theories. These view actors as political entities having different degrees of power. The way shared services will be implemented and how changes are managed depend on power-political and organizational theories, which are interdisciplinary and based on knowledge from the fields of psychology, political science, economics, anthropology, and sociology. They seek to explain behavior and dynamics in both individual and group contexts. Now that we have discussed the historical and theoretical foundation, we will further explore the strategic determinants of shared services in the following section. We will do this using the four decision categories that were presented in this section.
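Coase's expansion rule quoted above — organize internally until the internal cost of an extra transaction equals the cost of the same transaction on the open market — can be made concrete with a toy calculation. The Python sketch below compares an internal cost against market price plus transaction (friction) costs; all numbers are purely illustrative.

```python
def sourcing_preference(internal_cost: float, market_price: float,
                        transaction_cost: float) -> str:
    """Compare doing an activity in-house with buying it on the market.

    Total market cost = price paid + transaction costs of searching,
    contracting, and monitoring the external party.
    """
    market_total = market_price + transaction_cost
    if internal_cost < market_total:
        return "make"
    if internal_cost > market_total:
        return "buy"
    return "indifferent"


# Networks and integration technology lower transaction costs,
# which can tip the balance from "make" to "buy":
before = sourcing_preference(internal_cost=100, market_price=80, transaction_cost=30)
after = sourcing_preference(internal_cost=100, market_price=80, transaction_cost=10)
```

With the invented figures, the activity is kept in-house while friction costs are high (`before` is "make") and sourced externally once they fall (`after` is "buy"), which is exactly the mechanism the theory attributes to cheaper communication networks.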
DETERMINANTS OF SHARED-SERVICES DECISIONS

Make-or-Buy Decision

The make-or-buy decision requires the creation of an overview of services that might be sourced and their potential sourcing options. Quinn and Hilmer (1994) argue that profit organizations ideally concentrate on those activities where they can add value to customers and strategically outsource other services for which the organization has neither a critical strategic need nor special capabilities. When translating this to public organizations, services can be categorized along two dimensions: (a) the degree of public interest in the service and (b) the potential vulnerability that could arise from market failure if the service is outsourced. Figure 1 shows schematically the two axes and the relationship with sourcing arrangements. When there is a high degree of public interest and vulnerability, the agency would generally like to have a high degree of control, usually selecting
an insourcing option, which could be a shared-service center. When the degree of public interest in a service is low and there is a low degree of vulnerability, activities can be outsourced. In between, there is a continuous range of activities requiring different degrees of control. Assessing the make-or-buy decision will result in a high-level categorization of core activities that need to be executed internally and noncore activities that can potentially be outsourced to an external supplier.
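The two-dimensional categorization above can be sketched as a small classification function. The 0-to-1 scoring scale and the 0.5 cut-off between "low" and "high" are assumptions of this illustration, not part of Quinn and Hilmer's framework.

```python
def sourcing_option(public_interest: float, vulnerability: float,
                    threshold: float = 0.5) -> str:
    """Map a service onto the two-axis make-or-buy grid of Figure 1.

    Both dimensions are scored 0..1; scores at or above the threshold
    count as 'high'.
    """
    high_interest = public_interest >= threshold
    high_vulnerability = vulnerability >= threshold
    if high_interest and high_vulnerability:
        return "insource (e.g., shared-service center)"
    if not high_interest and not high_vulnerability:
        return "outsource"
    # Mixed scores fall in the continuous in-between range.
    return "intermediate control arrangement"
```

For example, a service scored high on both dimensions lands in the strategic-control (insource) corner, while one scored low on both can be outsourced.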
Scope and Type of Shared-Service Arrangement

Those services that are identified as potentially sharable need further analysis to determine the scope and type of arrangement. There are two key ways in which the scope can be expanded, namely, through functional integration of corporate services within one agency or through sharing along functional lines with other government departments or agencies. Figure 2 highlights the scope choice by showing four quadrants of
Figure 1. Make-or-buy decision for public organizations (based on Quinn & Hilmer, 1994)
[Figure: a grid with degree of public interest on one axis and degree of vulnerability on the other; low interest and low vulnerability call for low control (outsource), while high interest and high vulnerability call for strategic control (insource).]
Figure 2. Determining the potential scope of shared services
[Figure: a 2x2 grid with scale (e.g., interdepartmental; limited to maximum) on the vertical axis and functional integration (e.g., HR, finance, procurement; few to many functions) on the horizontal axis. The quadrants are: efficient but limited scope (maximum scale, few functions); efficient and extensive scope (maximum scale, many functions); inefficient and limited scope (limited scale, few functions); inefficient but extensive scope (limited scale, many functions).]
sharing options. The top-right quadrant, efficient and extensive scope, might be viewed as the total shared-service arrangement option; however, a long-lasting implementation process involving many kinds of risks might be necessary to accomplish this. Janssen and Joha (2006b) investigated a case study where the establishment of a mature shared-service center took almost two decades. Various types of arrangements can be chosen, including joint ventures with other public agencies, the creation of a new semiautonomous agency, the concentration of existing services in one of the existing agencies, or other models. The type of arrangement influences the way the SSC will need to be operationalized and designed in terms of its service and financial model, organizational structure, and IT governance mechanisms. IT governance mechanisms determine how communication, responsibilities, and decision-making structures are formalized (Weill & Ross, 2005). The choice for the type of arrangement and its organizational design are affected by the objectives that the shared-service arrangement should meet and vice versa. Sharing might be risky or even impossible if agencies have opposing objectives; for example, one might focus on improving the cost efficiency of existing services, whereas another agency might want to share resources to gain access to services that would be out of reach if it had to invest in these services individually. Moreover, each service might have (slightly) different objectives. Janssen and Joha (2006b) provided an overview of the objectives of shared-service centers by categorizing them into four categories, namely, (a) strategic and organizational, (b) political, (c) technical, and (d) economic objectives. This phase should result in a detailed overview of the available options to share services, thereby also identifying the accompanying objectives for each type of service.
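As a minimal sketch, the four scope quadrants of Figure 2 can be written as a lookup table; the "limited"/"maximum" and "few"/"many" labels follow the figure's two axes.

```python
def scope_quadrant(scale: str, functions: str) -> str:
    """Label the four sharing-scope quadrants of Figure 2.

    scale: 'limited' or 'maximum' (vertical axis, e.g., interdepartmental reach);
    functions: 'few' or 'many' (horizontal axis, functional integration).
    """
    table = {
        ("maximum", "few"): "efficient, but limited scope",
        ("maximum", "many"): "efficient, and extensive scope",
        ("limited", "few"): "inefficient and limited scope",
        ("limited", "many"): "inefficient, but extensive scope",
    }
    return table[(scale, functions)]
```

An arrangement that shares many corporate functions across many agencies falls in the top-right, total shared-service quadrant; the long implementation trajectory noted above is the price of reaching it.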
Cost Benefits and Risks Assessment

The different combinations of options identified in the former step need to be translated into a number of preferred scenarios, which are then assessed on their merits, disadvantages, and risks. This step requires gathering accurate details of the current IT costs, people costs, and head count for the individual processes in order to accurately
model the potential savings. The specific cost benefits of the sharing of services should be assessed taking into account the potential for cost reduction, service improvement, better control, gaining access to expertise or services out of reach for the organization, and so on. Also, the risks involved with the defined scenarios need to be addressed. Following Charette (1991) and Willcocks and Margetts (1994), risk can be defined as an undesired outcome that has a known or estimated probability of occurrence and impact based on experience or some theory. In Table 2, an overview of risks is presented based on Earl (1996), ISPL (1999), and Willcocks, Lacity, and Kern (1999). This table can help decision makers to assess the probability and impact of the potential risks, resulting in appropriate risk measures necessary for risk mitigation.
Table 2. Potential risks of shared services (risks based on Earl, 1996; ISPL, 1999; Willcocks et al., 1999; risk definitions based on ISPL, 1999)

Unclear service requirements: The requirements of the services are not clear or not available.
Unstable service requirements: The requirements of the services are not stable; they are changing and evolving.
Uncertain interfaces: Interfaces may be with other services, processes, or systems within the internal organization or with other (local) agencies.
Lack of stakeholder participation: The involved stakeholders (e.g., users) have insufficient participation in the service development or execution.
Shortfalls in subcontracted tasks: Tasks that are subcontracted are not performed to the required quality.
Loss of control of the services: The management loses control of the services.
Delays in the deliveries: Some deliverables, being either intermediate deliverables in projects or management deliverables for services in general, are delivered late.
Poor quality of deliverables: Some deliverables, being either intermediate deliverables in projects or management deliverables for services in general, are not delivered to the required quality.
Increased service costs: The costs of the services increase.
Discouragement of the service actors: Some actors responsible for delivering the services lose their motivation and commitment for various reasons.
Poor quality of the services: The delivered services do not have the required quality; this may mean that a service is not working, not adapted to the needs and requirements, or that there are shortfalls in some quality properties (functionality, efficiency, reliability, etc.).
Services not accepted by (local) agencies: The services are not accepted by the (local) agencies involved; the reason may be poor quality but may also encompass sociological or human issues (e.g., due to unrealistic expectations).
Unpredictable costs for the (local) agencies: The delivered services generate unpredictable costs for the (local) agencies; these costs may relate to usage, operation, maintenance, and so forth, and be due to bad quality or other reasons.
No attainment of the objectives: The delivered services do not contribute to the predefined objectives. The reasons may be bad quality, unpredictable costs, or some complex combinations of factors.
No formal governance mechanisms: Communication and decision-making structures are not (effectively) implemented and/or roles and responsibilities are not formalized.
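A common way to operationalize the probability-and-impact assessment that Table 2 supports is a simple exposure score (probability times impact). The sketch below is illustrative only; the scores assigned to the three example risks are invented, not taken from the sources cited in the table.

```python
def prioritize_risks(risks):
    """Rank risks by exposure = probability * impact (both scored 0..1).

    risks: dict mapping risk name -> (probability, impact).
    Returns (name, exposure) pairs, highest exposure first.
    """
    exposure = {name: p * i for name, (p, i) in risks.items()}
    return sorted(exposure.items(), key=lambda kv: kv[1], reverse=True)


# Invented scores for three of the Table 2 risks:
ranked = prioritize_risks({
    "Unstable service requirements": (0.7, 0.8),
    "Loss of control of the services": (0.3, 0.9),
    "Delays in the deliveries": (0.6, 0.4),
})
```

The ranking tells decision makers where mitigation measures (user involvement, gating, governance structures, and so on) should be concentrated first.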
Unstable and unclear requirements form a major risk in the development of an SSC, particularly in defining the future service delivery. Risk mitigation can be achieved by involving users and organizational representatives in the design process, with sign-off at all stages, and by executing strong change and requirements management. Also, dividing the project into manageable phases, with a gating process at the end of each phase before proceeding to the next, might well reduce these risks. A lack of upfront commitment and participation from the majority of the involved stakeholders will restrict the potential range of solutions and potentially its feasibility. Related to this, initial resistance to change and poor planning will cause procrastination and stretch the project implementation timescales. Risk measures include developing and implementing a communication program that focuses on the reasons for change, involving staff in the process of redesign and training, and mentoring managers in working in the new environment. Not attaining the objectives might be the result of not clearly defining, monitoring, and tracking the potential benefits. A project board will need to agree on benefits milestones, the measurement architecture, and the monitoring process used. Other important risks are loss of control of the service and poor service quality due to an undefined or unclear IT governance structure. Clear roles and responsibilities under the new structure need to be defined. Based on the results of this and the former steps, all predefined scenarios can be compared and a choice will have to be made from the potential sourcing scenarios. The scenarios can be evaluated from three key perspectives:

1. Functionality and quality, which is about the ability of the service to support the agencies in meeting future challenges and changes
2. Risks, taking into account the factors that could cause an option to fail to deliver all of the required outcomes and/or benefits
3. Business case, which deals with the financial investment that is required, also including a comparison with the market best practice in case the service would be outsourced
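The three-perspective evaluation above can be operationalized as a simple weighted score. Everything in this sketch — the weights, the 0-to-10 scale, and the two candidate scenarios — is an invented illustration, not a prescription from the chapter.

```python
def score_scenario(functionality: float, risk: float, business_case: float,
                   weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted score across the three evaluation perspectives (0..10 each).

    Risk is entered as a 'risk acceptability' score, so higher is better
    on every dimension.
    """
    w_f, w_r, w_b = weights
    return w_f * functionality + w_r * risk + w_b * business_case


# Two hypothetical sourcing scenarios:
scenarios = {
    "joint venture": score_scenario(functionality=8, risk=5, business_case=7),
    "semiautonomous SSC": score_scenario(functionality=7, risk=7, business_case=6),
}
best = max(scenarios, key=scenarios.get)
```

With these invented scores, the joint venture edges out the semiautonomous SSC (6.8 versus 6.7); in practice the weights themselves are a political choice that the involved agencies must agree on.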
Implementation Choice and Change Management Strategy

When a decision is made about which services will be shared and the architecture has been determined, an implementation strategy can be developed. A big-bang strategy, an incremental strategy, or a combination of these strategies can be used (Bruijn, Heuvelhof, & Veld, 2002). In essence, the issues around implementation are whether to concentrate all shared services for all involved agencies in one organization at once or to gradually implement processes and technology solutions among all involved agencies before trying to move to a shared-services concept. The choice of a change management strategy might be determined by political and technical issues. Political issues include the initial resistance that might be present or expected within local departments not willing to give up some of their autonomy, and individuals who are afraid that their career options might be affected. Change management strategies can try to deal with political issues by, for example, defining such extremely high performance levels that local departments need to cooperate to accomplish these levels. Another example is that a service is so expensive that the local departments are very much willing to share the costs with others. Technical issues are related to the technical constraints and limitations of an organization's ICT infrastructure, as this is a necessary condition for establishing shared services. This not only includes the technical platform, but also software and interfaces. Shared services should also be able to technically cope with changes and therefore require an adaptive architecture. Failure of service provisioning can result in a shutdown of critical operations and result in a major loss
of productivity and complaints from constituents. The management of the SSC should mitigate these risks by having duplicate versions of systems and an emergency communication network to avoid single-point-of-failure problems. The design of the necessary ICT infrastructure is therefore an essential factor in the implementation strategy.
FUTURE TRENDS

The sharing of services is a strategic decision having a long-term impact. We expect that sharing services will become less and less focused on technology aspects and more and more on organizational matters. In particular, the design of effective IT governance mechanisms and the introduction of risk mitigation mechanisms are emerging trends. The IT governance of shared services is a complicated endeavor as it often involves multiple agencies having different objectives and resources; also, the number of shared services used varies among agencies, technology sophistication differs, and unshared resources of the public agencies are interwoven with shared services (Janssen & Joha, 2006a). The use of shared services has initially been limited to easy-to-standardize services. In the future, the scope might be expanded and public agencies might share complete back-office functions and processes. Even seemingly different services might be shared, such as finance and accounting and parts of procurement and human resources services. Moreover, a trend is that these services are customized to the local situation of public agencies. In this way, the advantages of economies of scale, by means of sharing services, and the ability to customize services to the local situation are combined.
CONCLUSION

There are many strategic determinants affecting decision making concerning the implementation and operation of an e-intermediary like a shared-service center. To obtain the promised benefits of shared services and avoid the risks, the right design trade-offs and decisions have to be made, as these influence the performance and success of the shared-service arrangements. As a result, the sharing of services requires a consciously designed decision-making process influenced by a variety of strategic determinants. In this chapter, we presented four major decision categories: (a) make-or-buy decision, (b) scope and type of shared-service arrangement, (c) cost benefits and risks assessment, and (d) implementation choice and change management strategy. We gave an overview of the decision categories, and discussed the strategic determinants associated with the decision-making process and implementation of shared services in public administration. A decision-making process regarding shared services starts with identifying viable sourcing options; assessing these on functionality, quality, risks, and financial implications; and developing a carefully crafted change management strategy. An important strategic determinant is risk, not only as a necessary input variable for the decision process, but also as a means to mitigate risks once the decision for implementing a shared-service arrangement has been made. Furthermore, there are a variety of strategic determinants that affect the decision-making processes of public agencies and might result in different arrangements to share services. Therefore, it is of prime importance that the strategic determinants are taken into account in a systematic and structured way when deciding whether to make use of shared services.
The Strategic Determinants of Shared Services
FUTURE RESEARCH DIRECTIONS
There are many potential research directions in this field; they are clustered here according to our earlier categorization.
Make-or-Buy Decision. An important question is whether activities can be outsourced or need to be kept in-house. There is still a need for a framework covering all the dimensions that must be taken into account to determine which services are suitable for outsourcing and which are not. Besides the degree of vulnerability and the public interest, other factors might well play a role, including the organization's competency in executing the service and in managing external suppliers, the complexity of the service, and the maturity of the market.
Scope and Type of Shared-Service Arrangement. The way the SSC supports an organization has major consequences for the way it should be embedded in the organization. The specific objective of the SSC has to be determined, such as service improvement, cost reduction, or innovation. It is still not evident whether all advantages can be combined and achieved, and which factors are important for meeting a specific objective (Janssen & Joha, 2006b). The configuration of the right SSC business model to ensure good and effective service delivery is an important research direction (Janssen & Joha, 2007). There is no theory addressing which business model is the best choice under given circumstances. The specific model that is chosen has consequences for policy making and accountability, as it determines to whom the SSC reports and what its specific responsibilities are. The way the organization needs to be structured in terms of roles and responsibilities is also a topic for further research. Moreover, it is not yet clear which pricing structures can be adopted and which governance model fits best for the different business models.
Cost Benefits and Risks Assessment. From a macro organizational economic perspective, it
has to be determined whether the SSC is feasible. Such a feasibility study should include several analyses, for example, of the costs of technology adaptations, the costs of changing or modifying facilities, and customer satisfaction with current or potential shared services. Yet the major cost factors are not well understood. From an economic perspective, it is of interest to analyze whether the nature of an SSC changes over time in terms of economies of scale, efficiency, and governance, and in what way it evolves. Finally, the difference in efficiency and effectiveness between SSC and outsourcing arrangements is a fruitful research direction.
Implementation Choice and Change Management Strategy. The dependency on internal resources and people within SSC arrangements is extremely high. Research is necessary to identify the different strategies for coping with initial resistance and to analyze which strategies are most appropriate and effective under specific circumstances (Janssen & Joha, 2007). A well-functioning ICT infrastructure is a necessary condition for establishing an SSC. This includes not only the technical platform, but also software and interfaces. The SSC should also be able to cope technically with contingencies, such as mergers and acquisitions, and therefore requires an adaptive architecture. Research issues include the following questions: What ICT is minimally necessary for an SSC? What technology is suitable for sharing services? The design of the ICT infrastructure determines not only whether an SSC is possible and viable, but also how it should be implemented.
REFERENCES
Accenture. (2005). Shared services: Government shared services model for high performance. Retrieved June 23, 2006, from http://www.accenture.com/Global/Services/By_Industry/Government/R_and_I/DrivingServices.htm
Baldwin, L. P., Irani, Z., & Love, P. E. D. (2001). Outsourcing information systems: Drawing lessons from a banking case study. European Journal of Information Systems, 10(1), 15-24.
Bergeron, B. (2003). Essentials of shared services. John Wiley & Sons.
Bruijn, H. de, Heuvelhof, E. ten, & Veld, R. in 't. (2002). Process management: Why project management fails in complex decision making processes. Dordrecht, the Netherlands: Kluwer Academic Publishers.
Charette, R. (1991). Application strategies for risk analysis. New York: McGraw-Hill.
Coase, R. (1937). The nature of the firm. Economica, 4, 386-405.
Earl, M. J. (1996). The risks of outsourcing IT. Sloan Management Review, 37(3), 26-32.
Fowler, A., & Jeffs, B. (1998). Examining information systems outsourcing: A case study from the United Kingdom. Journal of Information Technology, 13, 111-126.
ISPL. (1999). Introduction to ISPL. ISPL Consortium.
Janssen, M., & Joha, A. (2006a). Governance of shared services centers in public administration. Paper presented at the 12th Americas Conference on Information Systems, Acapulco, Mexico.
Janssen, M., & Joha, A. (2006b). Motives for establishing shared service centers in public administrations. International Journal of Information Management, 26(2), 102-116.
Janssen, M., & Joha, A. (2007). Governance and design issues of shared service centers. In A. Anttiroiko & M. Mälkiä (Eds.), Encyclopedia of digital government (Vol. II).
Jayatilaka, B., Schwarz, A., & Hirschheim, R. (2003). Determinants of ASP choice: An integrated perspective. European Journal of Information Systems, 12(3), 210-224.
Jensen, M., & Meckling, W. (1976). Theory of the firm: Managerial behavior, agency costs, and ownership structure. Journal of Financial Economics, 3(4), 305-360.
Lee, J. N., Huynh, M. Q., Kwok, R. C. W., & Pi, S. M. (2003). IT outsourcing evolution: Past, present and future. Communications of the ACM, 46(5), 84-89.
Malone, T. W., & Crowston, K. (1994). The interdisciplinary study of coordination. ACM Computing Surveys, 26, 87-119.
Pfeffer, J., & Salancik, G. R. (2003). The external control of organizations. Stanford, CA: Stanford Business Classics.
Quinn, J. B., & Hilmer, F. G. (1994). Make versus buy: Strategic outsourcing. Sloan Management Review, 35(4), 43-55.
Weill, P., & Ross, J. (2005). A matrixed approach to designing IT governance. MIT Sloan Management Review, 46(2), 26-34.
Willcocks, L., & Margetts, H. (1994). Risk assessment in information systems. European Journal of Information Systems, 4(1), 1-12.
Willcocks, L. P., Lacity, M. C., & Kern, T. (1999). Risk mitigation in IT outsourcing strategy revisited: Longitudinal case research at LISA. Journal of Strategic Information Systems, 8(3), 285-314.
FURTHER READING
Beynon-Davies, P., & Williams, M. D. (2003). Evaluating electronic local government in the UK. Journal of Information Technology, 18(2), 137-149.
Duivenboden & Thaens, M. (2006). Information and communication technology and public innovation. In Assessing the ICT-driven modernization of public administration (pp. 141-158). Amsterdam: IOS Press.
Eisenhardt, K., & Martin, J. A. (2000). Dynamic capabilities: What are they? Strategic Management Journal, 21(10-11), 1105-1121.
Grant, G., McKnight, S., Uruthirapathy, & Brown, A. (in press). Designing governance for shared services organizations in the public service. Government Information Quarterly.
Grant, R. M. (1991). The resource-based theory of competitive advantage. California Management Review, 33(3), 114-135.
Grembergen, W. van, & Haes, S. de. (2004). IT governance and its mechanisms. Information Systems Control Journal, 6, 32-35.
Hirschheim, R., & Lacity, M. (2000). The myths and realities of information technology insourcing. Communications of the ACM, 43(2), 99-107.
Hodgkinson, S. L. (1996). The role of the corporate IT function in the federal IT organization. In M. J. Earl (Ed.), Information management: The organizational dimension (pp. 247-269). Oxford: Oxford University Press.
Janssen, M., & Joha, A. (2004). Issues in relationship management for obtaining the benefits of a shared service center. In Proceedings of the Sixth International Conference on Electronic Commerce (pp. 219-228).
Janssen, M., & Joha, A. (2007). Understanding IT governance for the operation of shared services in public service networks. International Journal of Networking and Virtual Organizations, 4(1), 20-34.
Janssen, M., & Wagenaar, R. W. (2004). An analysis of a shared services center in e-government. Paper presented at the Hawaii International Conference on System Sciences, HI.
Kern, T., & Willcocks, L. (2000). Exploring information technology outsourcing relationships: Theory and practice. Journal of Strategic Information Systems, 9(4), 321-350.
Kern, T., & Willcocks, L. P. (2001). The relationship advantage: Information technologies, management and sourcing. Oxford: Oxford University Press.
Kern, T., & Willcocks, L. (2002). Exploring relationships in information technology outsourcing: The interaction approach. European Journal of Information Systems, 11(1), 3-19.
Lacity, M. C., & Hirschheim, R. (1995). Information systems outsourcing: Myths, metaphors and realities. Chichester, United Kingdom: Wiley.
Lacity, M. C., & Willcocks, L. P. (1998). An empirical investigation of information technology sourcing practices: Lessons from experience. MIS Quarterly, 22(3), 363-408.
Lacity, M. C., & Willcocks, L. P. (2002). Global IT outsourcing: In search of business advantages. Chichester, United Kingdom: Wiley.
Ray, G., Muhanna, W. A., & Barney, J. B. (2005). Information technology and the performance of the customer service process: A resource-based analysis. MIS Quarterly, 29(4), 625-652.
Reich, B. H., & Benbasat, I. (2000). Factors that influence the social dimension of alignment between business and information technology objectives. MIS Quarterly, 24(1), 81-111.
Roy, V., & Aubert, B. A. (2002). Research contributions: A resource-based analysis of IT sourcing. ACM SIGMIS Database, 33(2), 29-40.
Strikwerda, J. (2005). Shared service centers: Van kostenbesparing naar waardecreatie. Assen, the Netherlands: Van Gorcum.
Teece, D., Pisano, G., & Shuen, A. (1997). Dynamic capabilities and strategic management. Strategic Management Journal, 18(7), 509-533.
Ulbrich, F. (2005). Improving shared service implementation: Adopting lessons from the BPR movement. Business Process Management Journal, 12(2), 191-205.
Wagenaar, R. W., Bruijn, J. A. de, Voort, H. van der, & Wendel de Joode, R. van. (2006). Implementation of shared service centers in public administration: Dilemmas and trade-offs. Amsterdam, the Netherlands: IOS Press.
Weill, P. (2004). Don't just lead, govern: How best performing organisations govern IT. MIS Quarterly Executive, 3(1), 1-17.
Willcocks, L. P., & Kern, T. (1998). IT outsourcing as strategic partnering: The case of the UK Inland Revenue. European Journal of Information Systems, 7(1), 29-45.
Yang, C., & Huang, F. B. (2000). A decision model for IS outsourcing. International Journal of Information Management, 20(3), 225-239.
TERMS AND DEFINITIONS
Business Process: A business process is a sequence of tasks initiated by an event and aimed at providing products or services.
Change Management Strategy: A change management strategy is the plan for the development of a program and procedures for ensuring the fulfillment of intended functions or services.
E-Intermediary: An electronic intermediary is a (semi)autonomous organization that supports other organizations in coordinating their activities.
IT Governance: IT governance is the system and structure for defining policy, monitoring and controlling its implementation, and managing and coordinating the procedures and resources aimed at ensuring the efficient and effective execution of services.
Risk: Risk is an undesired outcome that has a known or estimated probability of occurrence and impact, based on experience or some theory.
Shared Service: Shared services are services that are shared by multiple users and provided by one service provider.
Shared-Service Center (SSC): An SSC is a type of business model in which selected services are unbundled and concentrated in an accountable, semiautonomous unit or organization that provides the predefined services to all local, decentralized agencies on the basis of agreed conditions.
Sourcing: Sourcing is the decision process of identifying and selecting potential internal or external suppliers of specified services.
Strategic Determinant: A strategic determinant is an influencing or determining element or factor in a decision choice.
Chapter LI
Data Mining in Public Administration John Wang Montclair State University, USA Xiaohua Hu Drexel University, USA Dan Zhu Iowa State University, USA
INTRODUCTION
Data mining involves searching through databases for potentially useful information such as knowledge rules, patterns, regularities, and other trends hidden in the data. To complete these tasks, contemporary data mining packages offer techniques such as neural networks, inductive learning, decision trees, cluster analysis, link analysis, genetic algorithms, visualization, and so forth (Hand, Mannila, & Smyth, 2001; Wang, 2006). In general, data mining is a data analysis technique that helps businesses learn about and understand their customers so that decisions and strategies can be implemented accurately and effectively to maximize profitability. Data mining is not general data analysis, but a comprehensive technique that requires analytical skills, information construction, and professional knowledge.
Businesses now face globalized competition and must deal with enormous amounts of data. These vast amounts of data, together with the increasing technological ability to store them, have also facilitated data mining. To gain competitive advantage, businesses now commonly adopt data mining. Nowadays, data mining is more widely used than ever before, not only by businesses seeking profits, but also by nonprofit organizations, government agencies, private groups, and other institutions in the public sector. Organizations use data mining as a tool to forecast customer behavior, reduce fraud and waste, and assist in medical research.
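As a small illustration of one of the techniques named above (cluster analysis), the following sketch implements a toy one-dimensional k-means in plain Python. Real data mining packages provide far more capable implementations; the data here are invented.

```python
# Toy 1-D k-means: group numeric values into k clusters by iteratively
# assigning each value to its nearest centroid and recomputing centroids.
def kmeans_1d(values, k, iterations=20):
    """Return (centroids, clusters) for k clusters of the given values."""
    # Seed centroids with evenly spaced values from the sorted data.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Recompute each centroid as the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Invented example: case-processing times (minutes) with two natural groups.
times = [4, 5, 6, 5, 40, 42, 39, 41]
centroids, clusters = kmeans_1d(times, k=2)
print(centroids)  # → [5.0, 40.5]
```

The same iterative idea underlies the cluster analysis offered by commercial packages, which handle many dimensions and much larger data sets.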
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
BACKGROUND
Data mining uses statistical analysis, artificial intelligence, and machine-learning technologies to identify patterns that cannot be found by manual analysis alone. The power of data mining has amazed many people, and it is now considered one of the most critical factors in a business's success. However, data mining did not appear all of a sudden. Its earliest use can be traced back to the World War II years, when data analysis methods such as model prediction, database segmentation, link analysis, and deviation detection were used for military and demographic purposes by the U.S. government; data mining, however, was not seriously promoted until the 1990s.
Gramatikov (2006) compares statistical methods to data mining, differentiating them by their ultimate focus. Statistical methods use data that are collected with a predefined set of questions; statisticians either describe the parameters of the data or make inferences through statistics within intervals. With data mining, knowledge is generated from hidden relations, rules, trends, and patterns that emerge as the data are mined.
The reason data mining has again developed enormously in the last few years is that modern enterprises, driven by globalization, demand a huge amount of information. Important information regarding markets, customers, competitors, and future opportunities is collected as data in databases, and businesses need data mining to unearth useful information and knowledge from them. Otherwise, a huge, overloaded, and unstructured database makes it very difficult for companies to use the data and may in turn mislead the database users.
Public administration is, broadly speaking, the study and implementation of policy. The term may apply to government, private-sector organizations and groups, and individuals. The adjective public often denotes government at the federal, state, and local levels, though it increasingly encompasses nonprofit organizations, such as those of civil society, or any organization not specifically acting in self-interest. A long list follows, including colleges and universities, health care organizations, and charities, as well as post offices, libraries, prisons, and so forth.
In the public sector, data mining was initially used as a means to detect fraud and waste, but it has since grown into a tool for purposes such as measuring and improving program performance. Data mining has been increasingly cited as an important tool for homeland security efforts, crime prevention, and medical and educational applications that increase efficiency, reduce costs, and enhance research.
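The "hidden relations" point above can be illustrated with a toy example: counting which items co-occur across transaction records surfaces simple candidate association rules that no predefined statistical question would have asked for. The transaction data below are invented.

```python
# Toy association mining: count pair co-occurrences across transactions
# to surface the strongest candidate association rule. Data are invented.
from itertools import combinations
from collections import Counter

transactions = [
    {"permit", "inspection"},
    {"permit", "inspection", "license"},
    {"license"},
    {"permit", "inspection"},
]

pair_counts = Counter()
for t in transactions:
    for pair in combinations(sorted(t), 2):
        pair_counts[pair] += 1

# The most frequent pair is the best-supported candidate rule.
top_pair, support = pair_counts.most_common(1)[0]
print(top_pair, support)  # → ('inspection', 'permit') 3
```

Here the pattern "permit requests co-occur with inspections" emerges from the data itself, which is the essence of the contrast Gramatikov draws with question-driven statistics.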
BENEFITS OF DATA MINING IN PUBLIC ADMINISTRATION
Data mining techniques offer the public sector opportunities to optimize decisions based on general trends extracted from historical data. With the knowledge that can be extracted from the data, public organizations can improve their knowledge accumulation strategies and steps. The knowledge derived through data mining can serve, first, as a tool for better governance and, second, as a means of sustaining organizational knowledge. Data mining technology is applied in different areas of public administration, such as health care, immigration, and law enforcement, to solve specific business or research problems. Examples of application areas follow.
Improving Service or Performance
The purpose of the SBA's (Small Business Administration) lender and loan monitoring system is to improve service and performance. The system was developed by Dun and Bradstreet. The SBA uses the system to identify, measure, and manage risk
in its business loan programs. Its outputs include reports that identify the total amount of loans outstanding for a particular lender and that estimate the likelihood of loans becoming delinquent in the future, based on predefined patterns (U.S. Government Accounting Office [GAO], 2005).
Hospitals are currently using data mining to save money in the long run by reducing medication errors and the cost of transcribing doctors' dictation. For example, when a family practitioner at UW Health's Meadowood clinic uses the Epic system to order a drug, the computer automatically checks a database of potential cross-reactions with other medications the patient is currently taking. This system makes it easier for doctors to do their jobs well and, at the same time, makes it easier for patients to see their information and interact with doctors. Also, with the growing rate of hospital infections in the United States, some hospitals are adopting data mining techniques to alert doctors to problems they might miss. MedMined of Birmingham is a company that sells data analysis services to hospitals to help detect infections at an early stage. Hospitals transmit encrypted data from patients' records to MedMined, which then uses its data mining algorithms to detect unusual patterns and correlations. At first only a few hospitals used this system, but it is now becoming a necessity in hospitals (Lok, 2004).
Cahlink (2000) reported that data mining techniques are used by health organizations and hospitals to improve their work processes. The Centers for Disease Control and Prevention's National Immunization Program in Atlanta implemented data mining software to allow better tracking of reactions to vaccines. The program has a huge database of adverse reactions to vaccines reported by physicians, clinics, hospitals, patients, and pharmaceutical companies across the nation.
Statisticians and federal researchers monitor the data regularly to find problems caused by a single vaccine or vaccine combinations.
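The kind of automatic cross-reaction check described in the Epic example above can be sketched as a lookup against an interaction table. This is a hypothetical illustration only; the interaction table below is invented and is in no way a clinical reference.

```python
# Hypothetical sketch of an order-time drug cross-reaction check.
# The interaction table is invented for illustration, not clinical use.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "muscle toxicity risk",
}

def check_new_order(new_drug, current_medications):
    """Return warnings for known interactions between a new order and current meds."""
    warnings = []
    for med in current_medications:
        problem = INTERACTIONS.get(frozenset({new_drug, med}))
        if problem:
            warnings.append(f"{new_drug} + {med}: {problem}")
    return warnings

print(check_new_order("aspirin", ["warfarin", "metformin"]))
# → ['aspirin + warfarin: increased bleeding risk']
```

A production system would draw the table from a curated pharmacological database and check many more attributes (dosage, allergies, patient history) than this sketch does.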
Through a cooperative agreement, the FDA's (Food and Drug Administration) Division of Drug Risk Evaluation in the Office of Drug Safety has been working for almost two years to implement a desktop data mining software system. This data mining tool will help to evaluate the hundreds of thousands of reports submitted annually to the Adverse Events Reporting System (AERS), a system that has become more widely available to the public (Anonymous, 2005).
Delavari, Shirazi, and Beikzadeh (2004) showed how higher education institutions use data mining models to identify which of their processes can be improved by data mining technology and how they can achieve their goals. Data mining is used in the educational system to allocate resources and staff more efficiently, manage student relations, and enhance the performance of the institution and its students and faculty. Databases include information on students and teachers, course schedules, academic performance, test scores, extracurricular activities, postgraduation activities, and so forth, all of which can guide an institution on how to improve.
Helping Customer Relations Management
Studying consumer behavior is a primary application of data mining. Data mining has enabled businesses to provide better customer service through the use of CRM (customer relations management) technology. Most federal agencies have many kinds of customers, including citizens, businesses, other government agencies, and even offices within agencies, and each customer interaction provides extensive data that are used to develop new channels of service. CRM is a conglomeration of technologies and management strategies that organizations use to control the operational side of their businesses. CRM is implemented in every department that deals with customers, be it sales, technical support,
customer service, or marketing. CRM combines all customer data derived from these departments in one place so that the information can be accessed anywhere, enabling the organization to see a snapshot of a customer's history whenever that customer is contacted.
According to Dean (2001), the government's use of CRM technologies differs markedly from the private sector's. Businesses use CRM to weed out customers that are costly to serve, while the government uses CRM as a tool to help acquire customers. All the same, agencies use CRM to learn about customer habits in order to create efficiency and cost-saving solutions. The Internal Revenue Service (IRS) uses CRM in its federal tax payment system, which facilitates the collection of tax payments from corporations. This system is also incorporated in the IRS's call center, where it gives customer service representatives access to taxpayers' data so they can help resolve any issues with tax payments. The system is said to have reduced the time it takes taxpayers to phone in payments by more than 40%, and the number of taxpayer requests has dropped by 90%. The average payment now takes just 2 minutes and 20 seconds, and it happens 100,000 times a day. CRM has helped governments and businesses take an inventory of their customers, identify the products and services provided to customers, identify the methods of providing those products and services, and measure the effectiveness of communications with customers through service channels.
Analyzing Scientific and Research Information
The increasing amount of data accumulated in the health care industry creates databases that can serve as the basis for data mining. As the health care industry continues its work to enhance the quality of care, promote services, and reduce costs, undiscovered patterns of care will become increasingly transparent, first for physicians,
nurses, and other clinicians, and ultimately for all consumers of health care. Providers are now beginning to recognize the value of data mining as a tool for analyzing patient care and clinical outcomes. As providers deploy advanced clinical data systems, more granular primary data are becoming available for analysis.
In health care, increased access to data and information has facilitated the development of new drugs, which now seem to be produced at ever-increasing speed, as Boire (2005) claims. By making it possible to quickly analyze volumes of data from many different tests and time periods, data mining represents a critical cog in the drug development process. What was considered nearly impossible only a few years ago has entered the realm of the possible, due in large part to data mining. According to scientists, the breakthrough of mapping the genome offers limitless possibilities for the development of new drugs now that specific genes can be isolated. Data mining offers an opportunity to analyze the actual impact of one variable on another, and the analysis of data mining results enables the health care industry to discover new approaches to care delivery that consider a multitude of data points.
A big step forward was achieved in medical research with the help of data mining. Children's Memorial Research Center, a leading U.S. pediatric hospital and research institute, has gained unique insights into tumor classification and treatment strategies with the help of SPSS predictive analysis. The automated extraction of information from biomedical literature promises to play an increasingly important role in text-based knowledge discovery processes ("SPSS Predictive Analytics Accelerating Cancer Research at Children's Memorial Research Center," 2005).
Managing Human Resources
According to Ashbaugh and Miranda (2002), the human resource management system (HRMS) is an integral part of the digital government that
streamlines government processes in accounting, payroll, and personnel administration. The underlying architecture for digital government is the Internet together with integrated administrative systems commonly known as enterprise resource planning (ERP) systems. ERP systems are built on software that integrates information from different applications into a common database. The ERP and HRM systems link financial and human resources applications through a single database in a software application that is both rigid and flexible. The rigidity comes from the need to standardize processes and deter customers from modifying the underlying software source code; the flexibility refers to the customer's ability to configure the software to collect specific data and to support other business goals.
Business intelligence is a new concept in the public sector that uses advanced analytical tools such as data mining to provide insight into organizational trends and patterns and to help organizations improve their decision making. HRMS and business intelligence can be used to support personnel management decisions, including turnover analysis, recruitment, training analysis, and salary and workforce planning.
The U.S. Air Force uses data mining to manage its human resources. It has signed an $88.5 million multiyear contract with Oracle Corp., which includes a closely watched deal to build a new logistics system for the organization. The Air Force's Expeditionary Combat Support System (ECSS) is intended to replace more than 500 legacy IT systems with one integrated commercial supply chain management system; Oracle was competing against other enterprise resource planning vendors for the ECSS contract. Oracle uses data mining to provide information on promotions, pay grades, clearances, and other information relevant to human resources planning (Cowley, 2005).
Detecting Fraud, Waste, and Abuse
In government, data mining was initially used to detect financial fraud, waste, and abuse.
The Department of Agriculture's Risk Management Agency (RMA) uses data mining methods to identify potential abusers, improve program policies and guidance, and improve program performance and data quality. RMA uses information collected from insurance applicants as well as from insurance agents and claims adjusters. The department produces several types of output, including lists of names of individuals whose behavior matches patterns of anomalous behavior, which are provided to program investigators and sometimes to insurance agencies. In addition, it produces programmatic information, such as how a procedural change in the federal crop insurance program's policy manual would affect the overall effectiveness of the program, as well as information on data quality and program performance, both of which are used by program managers (U.S. GAO, 2005).
Federal agencies also use data mining to control fraud in health care. An agency can compare the costs charged for medical services and find health care providers who are overcharging their patients. Data mining is used to compare treatments for different medical conditions to determine whether a patient is receiving inadequate or excessive care. Researchers are currently using data mining to review infant records stored in a database to compare the effectiveness and safety of the narcotics morphine and fentanyl in easing pain; the need to mine existing data is very important in newborn intensive care units, where different treatments cannot always be studied in large-scale randomized clinical trials (Landro, 2006).
Instead of relying on the medical expertise of physicians and other trained clinicians to manually review insurance claims for health care fraud and abuse, insurance companies can now use a data mining framework that assesses clinical pathways to construct an adaptable and extensible detection model.
Clinical pathways are driven by physician orders as well as by industry and local standards of clinical care. Pathways provide the medical community with algorithms for the decisions to be made and the care to be provided to a particular patient population. The use of clinical pathways to detect insurance fraud and abuse by service providers shows significant promise. A care activity is highly likely to be fraudulent if it is ordered suspiciously. For example, a typical pattern among physicians is to order noninvasive tests before ordering more invasive ones; therefore, there is a high probability that the same set of medical activities ordered in a different sequence is fraudulent or abusive (Yang & Hwang, 2006).
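The ordering heuristic described by Yang and Hwang can be sketched as a check that flags an invasive procedure billed before its expected noninvasive precursor. The procedure names and the pathway table below are hypothetical illustrations, not actual clinical pathways.

```python
# Hypothetical pathway-ordering check: flag claims where an invasive
# procedure appears before its expected noninvasive precursor.
# The table maps invasive procedure -> expected prior noninvasive test.
NONINVASIVE_FIRST = {
    "cardiac catheterization": "stress test",
    "surgical biopsy": "ultrasound",
}

def flag_suspicious_ordering(claim_activities):
    """Return invasive procedures that appear before their expected noninvasive test."""
    flags = []
    for invasive, noninvasive in NONINVASIVE_FIRST.items():
        if invasive in claim_activities:
            inv_pos = claim_activities.index(invasive)
            # Suspicious if the noninvasive test did not precede the invasive one.
            if noninvasive not in claim_activities[:inv_pos]:
                flags.append(invasive)
    return flags

print(flag_suspicious_ordering(["cardiac catheterization", "stress test"]))
# → ['cardiac catheterization']
print(flag_suspicious_ordering(["stress test", "cardiac catheterization"]))
# → []
```

A real detection model would learn typical orderings from historical claims rather than use a fixed table, but the flagging logic is of this shape.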
Detecting Criminal Activities
Data mining is also used by federal and state agencies to identify criminal activities, fraudulent misuse of government credit cards, and Medicaid and Medicare abuse. SPSS predictive analytics software is used for crime prevention; its ability to detect unusual activity patterns has aided in the detection of credit card and Medicare fraud. In Richmond, Virginia, police use data mining to help them predict where to place patrols for crime prevention. An arm of the U.S. Army's homeland security effort uses the SPSS software to fight cybercrime; the Army aims to protect the databases of utility companies from hackers bent on shutting down those systems (Van, 2005). The IRS uses its system to identify financial crime, including individual and corporate tax fraud. The outputs include reports containing names, social security numbers, addresses, and other personal information of individuals suspected of financial crime. These reports are shared with IRS field office personnel, who conduct investigations based on the results (U.S. GAO, 2005).
In line with Boire (2005), the murder rate in New York City was reduced over time, from 2,200 murders a year in the late 1970s to between 600 and 700 a year under Mayor Giuliani's leadership. Other factors contributed, such as Bill Clinton's bill to increase the number of police officers and the zero-tolerance policy initiated by Giuliani, which resulted in the full prosecution of even minor crimes such as the illegal placement of graffiti on public property. Giuliani used technology and data mining to analyze data on crimes in all sectors of his city. Data mining made it possible to analyze massive amounts of data, uncovering trends and patterns in future crime behavior within high-risk areas.
Detecting Terrorist Activities
Following the terrorist attacks of September 11, 2001, data mining has been used increasingly as a tool to help detect terrorist threats through the collection and analysis of public- and private-sector data. Data mining has become one of the key features of many homeland security initiatives, and its use has also expanded to other purposes. In the context of homeland security, data mining can be a potential means of identifying terrorist activities, such as money transfers and communications, and of identifying and tracking individual terrorists themselves, for example, through travel and immigration records. Some homeland security data mining applications represent a significant expansion in the quantity and scope of the data to be analyzed. Efforts that have attracted a high level of congressional interest include the Terrorism Information Awareness (TIA) project (now discontinued) and the Computer-Assisted Passenger Prescreening System II (CAPPS II) project (now canceled and replaced by Secure Flight; Seifert, 2006). Other initiatives that have been the subject of recent congressional interest include the Multi-State Anti-Terrorism Information Exchange (MATRIX), the Able Danger program, and data collection and analysis projects being conducted by the National Security Agency (NSA; "MATRIX Pilot Project Concludes," 2005).
Other government data mining projects include Talon, a program run by the Pentagon's Counterintelligence Field Activity, which collects reports on demonstrators outside U.S. military bases. Thousands of such reports are stored in a database called Cornerstone and are shared with other intelligence agencies. The Pentagon's Advanced Research and Development Activity, based at Fort Meade, Maryland, runs a research program whose goal is to develop better ways to mine huge databases to help the nation avoid strategic surprises such as those of September 11, 2001 (Boyd, 2006).
Barriers of Data Mining in Public Administration

Two precursors are necessary for a successful data mining expedition: a clear formulation of the problem to be solved, and access to relevant data. In the public sector, formulating the problems to be solved is complicated by constraints raised by political opposition, by privacy considerations, and by concerns arising from the inherent limitations of the technology itself and from the competency and hidden agendas of those who would implement the data mining projects and interpret their outputs. The most potent threat to privacy interests created by data mining technology arises from efforts to prevent terrorism in this country and overseas. As a result of anguished inquiries into the failure of the United States military and intelligence branches to detect the risk, and to intercept at least one previously identified terrorist before he boarded the airplane at Newark Airport and participated in the attacks, it became more politically acceptable for politicians and public servants to procure and develop more effective and potentially intrusive data mining techniques in the interest of public safety. Building on the U.S. Patriot Act was the Total Information Awareness (TIA) project developed by the Department of Defense. TIA was designed to collect information on individuals' financial transactions, travel records, medical records, and other activities from a wide variety of public and private databases. The data mined from these
sources were to be used to prevent terrorism. Unlike the Patriot Act, which passed through Congress with little opposition, TIA met resistance: two bills were introduced in Congress to prevent it from moving forward. TIA was much broader in scope than the Patriot Act and focused on collecting information about ordinary Americans rather than terror suspects. Because TIA was seen as a greater threat to privacy with less return in security, Congress was widely against it. The TIA project has since been scrapped (Bagner, Evansburg, Watson, & Welch, 2003).
Future Trends

The world is fast becoming a global village, and with this comes the urgent need for a new generation of computational tools to assist humans in extracting useful information from the rapidly growing volumes of digital data. The future of data mining lies in predictive analysis, or one-click data mining, accomplished through the simplification and automation of the entire data mining process. Despite its limitations, data mining will have a tremendous impact on how business is done in the future. As a technology, data mining will become more embedded in a growing number of business applications, making it more readily available to a wider market segment. The developers of data mining applications therefore need to develop it into something that most users can work with and that can add value to our everyday lives. Easier interfaces will allow end-user analysts with limited technical skills to achieve good results.
Conclusion

Data mining has become an indispensable technology for businesses and researchers in many fields, including public administration. Since
public-sector organizations and decision makers are dealing with a large amount of information from the public, a systematic way of collecting and reading data is necessary. Such a solution would not only find the similarities between cases but would also identify unique cases and extreme values. Decision makers, service providers, and researchers would then be able to launch the next action based on the knowledge discovered from the database, and this increases their chances of being right. When the target market is the entire human population instead of a specific business market, data mining means more than making money for oneself. Exercising data mining in public administration is done to help the public, improve people's lives, and hence benefit the public as a whole.
Future Research Directions

Predictive Analysis

Augusta (2004) suggested that predictive analysis is one of the major future trends for data mining. Rather than being just about mining large amounts of data, predictive analytics seeks to actually understand the data content and to forecast based on it. This, however, requires complex programming and a great amount of business acumen. Predictive analytics does more than simply archive data, which is what data mining is currently known for. Organizations aim not just to process data, but to understand them more clearly, which will in turn allow them to make better predictions about future behavior. With predictive analytics, the program scours the data and tries to form, or help form, a new hypothesis. This approach shows great promise and would be a boon for public administration everywhere.
Diversity of Application Domains

In data mining and the "X" phenomenon, as Tuzhilin (2005) coined it, X constitutes a broad range of fields in which data mining is used for analyzing the data. This has resulted in a process of the cross-fertilization of ideas generated within this diverse population of researchers interacting across the traditional boundaries of their disciplines. The next generation of data mining applications covers a large number of different fields, from traditional businesses to advanced scientific research. Kantardzic and Zurada (2005) observed that, with new tools, methodologies, and infrastructure, this trend of diversification will continue each year.

References

Anonymous. (2005). FDA drug safety reviewers to use data mining tool. Washington Drug Letter, 37(5).

Ashbaugh, S., & Miranda, R. (2002). Technology for human resources management: Seven questions and answers. Public Personnel Management, 31(1), 7-19.

Augusta, L. (2004). The future of data mining: Predictive analytics. DM Review, 14(8), 16-20, 37.

Bagner, J., Evansburg, A., Watson, V. K., & Welch, J. B. (2003). Senators seek limits on DoD mining of personal data. Intellectual Property & Technology Law Journal, 15(5), 19-20.

Boire, R. (2005, October). Future of data mining in marketing: Part 1. Direct Marketing News.

Boyd, R. S. (2006, February 2). Data mining tells government and business a lot about you. Knight Ridder Washington Bureau.

Cahlink, G. (2000). Data mining taps trends. Government Executive, 32(12), 85-87.

Cowley, S. (2005, October 24). Oracle wins $88.5 million Air Force contract. IDG News Service.

Dean, J. (2001). Better business through customers. Government Executive, 33(1), 58-60.

Delavari, N., Shirazi, M. R. A., & Beikzadeh, M. R. (2004). A new model for using data mining technology in higher educational systems. In Proceedings of the 5th International Conference on Information Technology Based Higher Education and Training (pp. 319-324).

Gramatikov, M. (2006). Data mining techniques and the decision making process in the Bulgarian public administration. Retrieved March 8, 2007, from http://unpan1.un.org/intradoc/groups/public/documents/nispacee/unpan009209.pdf

Hand, D. J., Mannila, H., & Smyth, P. (2001). Principles of data mining. Cambridge, MA: MIT Press.

Kantardzic, M. M., & Zurada, J. (Eds.). (2005). Next generation of data-mining applications. IEEE Press, Wiley-Interscience.

Landro, L. (2006, January 26). The informed patient: Infant monitors yield new clues. Studies of digital records are used to identify problems with medications: Practices. Wall Street Journal, p. D5.

Lok, C. (2004, October). Fighting infections with data. Technology Review, p. 24.

MATRIX pilot project concludes. (2005). Retrieved March 8, 2007, from http://www.fdle.state.fl.us/press_releases/expired/2005/20050415_matrix_project.html

SPSS predictive analytics accelerating cancer research at Children's Memorial Research Center, Chicago. (2005, March 7). Business Wire.

Tuzhilin, A. (2005). Foreword. In J. Wang (Ed.), Encyclopedia of data warehousing and mining (1st ed.). Hershey, PA: Idea Group Reference.

U.S. Government Accounting Office (GAO). (2005). Data mining: Agencies have taken key steps to protect privacy in selected efforts, but significant compliance issues remain (GAO-05-866). Author.

Van, J. (2005, October 17). Cybercrime being fought in new ways. Knight Ridder Tribune Business News, p. 1.

Wang, J. (Ed.). (2006). Encyclopedia of data warehousing and mining (1st ed.). Hershey, PA: Idea Group Reference.

Yang, W. S., & Hwang, S. Y. (2006). A process-mining framework for the detection of healthcare fraud and abuse. Expert Systems with Applications, 31(1), 56-68.

Further Reading

Agard, B., & Kusiak, A. (2004). Data-mining-based methodology for the design of product families. International Journal of Production Research, 42(15), 2955-2969.

Álvarez-Macías, J. L., Mata-Vázquez, J., & Riquelme-Santos, J. C. (2004). Data mining for the management of software development process. International Journal of Software Engineering & Knowledge Engineering, 14(6), 665-695.

Berberidis, C., & Vlahavas, I. (2005). Mining for weak periodic signals in time series databases. Intelligent Data Analysis, 9(1), 29-42.

Besson, J., Robardet, C., Boulicaut, J. F., & Rome, S. (2005). Constraint-based concept mining and its application to microarray data analysis. Intelligent Data Analysis, 9(1), 59-82.

Brito, P., & Malerba, D. (2003). Mining official data. Intelligent Data Analysis, 7(6), 497-500.

Calders, T., Lakshmanan, L. V. S., Ng, R. T., & Paredaens, J. (2006). Expressive power of an algebra for data mining. ACM Transactions on Database Systems, 31(4), 1169-1214.

Ceglar, A., & Roddick, J. F. (2006). Association mining. ACM Computing Surveys, 38(2), 1-42.

Cheng, B. W., Chang, C. L., & Liu, I. S. (2005). Enhancing care services quality of nursing homes
using data mining. Total Quality Management & Business Excellence, 16(5), 575-596.

Cheng, D., Kannan, R., Vempala, S., & Wang, G. (2006). A divide-and-merge methodology for clustering. ACM Transactions on Database Systems, 31(4), 1499-1525.

Cho, S. B., & Won, H. H. (2003). Data mining for gene expression profiles from DNA microarray. International Journal of Software Engineering & Knowledge Engineering, 13(6), 593-608.

Churilov, L., Bagirov, A., Schwartz, D., Smith, K., & Dally, M. (2005). Data mining with combined use of optimization techniques and self-organizing maps for improving risk grouping rules: Application to prostate cancer patients. Journal of Management Information Systems, 21(4), 85-100.

Chye, K. H., Chin, T. W., & Peng, G. C. (2004). Credit scoring using data mining techniques. Singapore Management Review, 26(2), 25-47.

Czerwinski, M., Gage, D. W., Gemmell, J., Marshall, C. C., Pérez-Quiñones, M. A., Skeels, M. M., et al. (2006). Digital memories in an era of ubiquitous computing and abundant storage. Communications of the ACM, 49(1), 44-50.

Dumouchel, B., & Demaine, J. (2006). Knowledge discovery in the digital library: Access tools for mining science. Information Services & Use, 26(1), 39-44.

Fan, W., Wallace, L., Rich, S., & Zhang, Z. (2006). Tapping the power of text mining. Communications of the ACM, 49(9), 77-82.

Fatemieh, S. O. (2006). A review of "Discovering knowledge in data: An introduction to data mining." Journal of Biopharmaceutical Statistics, 16(1), 127-130.

Feng, C. X. J., & Wang, X. F. (2004). Data mining techniques applied to predictive modeling of the knurling process. IIE Transactions, 36(3), 253-263.

Firestone, J. M. (2005). Mining for information gold. Information Management Journal, 39(5), 47-52.

Ford, J. M. (2005). Data and text mining: A business applications approach. Personnel Psychology, 58(1), 267-271.

Gama, J., Fernandes, R., & Rocha, R. (2006). Decision trees for mining data streams. Intelligent Data Analysis, 10(1), 23-45.

Garatti, S., Savaresi, S. M., Bittanti, S., & Brocca, L. L. (2004). On the relationships between user profiles and navigation sessions in virtual communities: A data-mining approach. Intelligent Data Analysis, 8(6), 579-600.

Gibert, K., Sànchez-Marrè, M., & Flores, X. (2005). Cluster discovery in environmental databases using GESCONDA: The added value of comparisons. AI Communications, 18(4), 319-331.

Guo, G., & Neagu, D. (2005). Fuzzy knnmodel applied to predictive toxicology data mining. International Journal of Computational Intelligence & Applications, 5(3), 321-333.

He, B., & Chang, K. C. C. (2006). Automatic complex schema matching across Web query interfaces: A correlation mining approach. ACM Transactions on Database Systems, 31(1), 346-395.

Holzman, L. E., Fisher, T. A., Galitsky, L. M., Kontostathis, A., & Pottenger, W. M. (2004). A software infrastructure for research in textual data mining. International Journal on Artificial Intelligence Tools, 13(4), 829-849.

Hormozi, A. M., & Giles, S. (2004). Data mining: A competitive weapon for banking and retail industries. Information Systems Management, 21(2), 62-71.

Hou, J. L., Yu, F. J., & Lin, R. K. (2006). A knowledge component analysis model based on term frequency and correlation analysis. Journal of Computer Information Systems, 46(4), 64-77.

Hu, Y. C. (2005). Finding simplified fuzzy if-then rules for function approximation problems using a fuzzy data mining approach. Applied Artificial Intelligence, 19(6), 601-619.

Jones, K. (2006). Knowledge management as a foundation for decision support systems. Journal of Computer Information Systems, 46(4), 116-124.

Kajko-Mattsson, M., & Chapin, N. (2004). Data mining for validation in software engineering: An example. International Journal of Software Engineering & Knowledge Engineering, 14(4), 407-427.

Kersten, G. E., & Zhang, G. (2003). Mining inspire data for the determinants of successful Internet negotiations. Central European Journal of Operations Research, 11(3), 297-316.

Last, M., Friedman, M., & Kandel, A. (2004). Using data mining for automated software testing. International Journal of Software Engineering & Knowledge Engineering, 14(4), 369-393.

Macdonell, C. (2005). Easy data mining for school libraries. Library Media Connection, 24(1), 38-39.

Morbitzer, C., Strachan, P., & Simpson, C. (2004). Data mining analysis of building simulation performance data. Building Services Engineering Research & Technology, 25(3), 253-267.

Nayak, R., & Qiu, T. (2005). A data mining application: Analysis of problems occurring during a software project development process. International Journal of Software Engineering & Knowledge Engineering, 15(4), 647-663.

Neaga, E. I., & Harding, J. A. (2005). An enterprise modeling and integration framework based on knowledge discovery and data mining. International Journal of Production Research, 43(6), 1089-1108.

Padmanabhan, B., & Tuzhilin, A. (2003). On the use of optimization for data mining: Theoretical interactions and eCRM opportunities. Management Science, 49(10), 1327-1343.

Perng, Y. H., & Chang, C. L. (2004). Data mining for government construction procurement. Building Research & Information, 32(4), 329-338.

Qian, G., Zhu, Q., Xue, Q., & Pramanik, S. (2006). Dynamic indexing for multidimensional non-ordered discrete data spaces using a data-partitioning approach. ACM Transactions on Database Systems, 31(2), 439-484.

Ren, Y., Ding, Y., & Zhou, S. (2006). A data mining approach to study the significance of nonlinearity in multistation assembly processes. IIE Transactions, 38(12), 1069-1083.

Riquelme, J. C., Polo, M., Aguilar-Ruiz, J. S., Piattini, M., Ferrer-Troyano, F. J., & Ruiz, F. (2006). A comparison of effort estimation methods for 4GL programs: Experiences with statistics and data mining. International Journal of Software Engineering & Knowledge Engineering, 16(1), 127-140.

Šef, T., & Gams, M. (2004). Data mining for creating accentuation rules. Applied Artificial Intelligence, 18(5), 395-410.

Sinha, A. P., & May, J. H. (2005). Evaluating and timing predictive data mining models using receiver operating characteristic curves. Journal of Management Information Systems, 21(3), 249-280.

Spangler, W. E., Gal-Or, M., & May, J. H. (2003). Using data mining to profile TV viewers. Communications of the ACM, 46(12), 66-72.

Spangler, W. S., Kreulen, J. T., & Newswanger, J. F. (2006). Machines in the conversation: Detecting themes and trends in informal communication streams. IBM Systems Journal, 45(4), 785-799.

Subramanyam, R. B. V., & Goswami, A. (2005). A fuzzy data mining algorithm for incremental mining of quantitative sequential patterns. International Journal of Uncertainty, 13(6), 633-652.

Toussaint, G. (2005). Geometric proximity graphs for improving nearest neighbor methods in instance-based learning and data mining. International Journal of Computational Geometry & Applications, 15(2), 101-150.

Vason, B. J. (2004). Mine data to discover infection control trends. Nursing Management, 35(6), 46-47.

Wang, S., Chung, K. F. L., & Shen, H. (2005). Fuzzy taxonomy, quantitative database and mining generalized association rules. Intelligent Data Analysis, 9(2), 207-217.

Wood, C. A., & Ow, T. T. (2005). Corporate data to data derived from the Web. Communications of the ACM, 48(9), 99-104.

Wu, R. C., Chen, R. S., & Chian, S. S. (2006). Design of a product quality control system based on the use of data mining techniques. IIE Transactions, 38(1), 39-51.

Xing, Z., & Stroulia, E. (2006). Understanding the evolution and co-evolution of classes in object-oriented systems. International Journal of Software Engineering & Knowledge Engineering, 16(1), 23-51.

Zhong, N., Hu, J., Motomura, S., Wu, J. L., & Liu, C. (2005). Building a data-mining grid for multiple human brain data analysis. Computational Intelligence, 21(2), 177-196.

Zurada, J., Karwowski, W., & Marras, W. (2004). Classification of jobs with risk of low back disorders by applying data mining techniques. Occupational Ergonomics, 4(4), 291-305.

Terms and Definitions

Business Intelligence: A new concept in the public sector that uses advanced analytical tools such as data mining to provide insights into organizational trends and patterns and to support decision making.

Customer Relations Management (CRM): A conglomeration of technologies and management strategies used by organizations to control the operational side of the business; this is the domain for human resources and financial systems.

Data Mining: The process of sifting through the mass of organizational data to identify patterns critical for decision support.

Data Quality: The accuracy and completeness of data.

Enterprise Resource Planning Systems (ERPS): A system built on software that integrates information from different applications into a common database.

Interoperability: The ability of a computer system and/or data to work with other systems using common standards.

Public Administration: The term may apply to government, private-sector organizations and groups, or individuals.
Chapter LII
Categorization of Data Clustering Techniques

Baoying Wang, Waynesburg University, USA
Imad Rahal, College of Saint Benedict, Saint John's University, USA
Richard Leipold, Waynesburg University, USA
Introduction

Data clustering is a discovery process that partitions a data set into groups (clusters) such that data points within the same group have high similarity while being very dissimilar to points in other groups (Han & Kamber, 2001). The ultimate goal of data clustering is to discover natural groupings in a set of patterns, points, or objects without prior knowledge of any class labels. In fact, in the machine-learning literature, data clustering is typically regarded as a form of unsupervised learning, as opposed to supervised learning. In unsupervised learning or clustering, there is no training on labeled examples as in supervised learning. There are many applications for data clustering including, but not limited to, pattern recognition, data analysis, data compression, image processing, understanding genomic data, and market-basket research.
Background

Data clustering is an important human activity. As humans, we can easily perform mental tasks such as distinguishing between cats and dogs, or between animals and plants. A more concrete example of clustering is given in Figure 1, which demonstrates the clustering of padlocks. In this example, there are 10 padlocks with different colors and shapes that we would like to cluster into three different groups.
Figure 1. A simple example of clustering: (a) the padlocks before clustering; (b) the padlocks after clustering
Clustering has its roots in a number of fields including data mining, statistics, biology, and machine learning. The importance and interdisciplinary nature of clustering is evident in its rich and diverse literature. Besides the phrase data clustering, a number of other terms and phrases have been coined to describe the same process, namely, cluster analysis, automatic classification, numerical taxonomy, botryology, and typological analysis (Jain & Dubes, 1988).

Representing data by fewer clusters necessarily introduces a loss of certain fine details, such as specific properties pertaining to individual data objects, but achieves the more important goal of simplification: a highly desirable characteristic in an age of inexorable abundance of data. Clustering represents many data objects using a few clusters; hence, it models the data by these clusters. From a machine-learning perspective, the clusters correspond to hidden patterns, where the search for clusters can be viewed as a form of unsupervised learning and the resulting system as a representation of a data concept. Consequently, clustering is the unsupervised learning of a hidden data concept.

Data mining focuses primarily on large databases that impose additional severe computational requirements on data clustering as a process. These challenges led to the emergence of numerous powerful and broadly applicable clustering approaches (Berkhin, 2002).
Data Clustering Techniques

Categorization of Data Clustering Techniques

Generally speaking, data clustering techniques can be categorized in a number of distinct ways (Berkhin, 2002; Han & Kamber, 2001; Jain & Dubes, 1988), one of which, based on the structure of the produced clusters, is shown in Figure 2. As depicted, clustering can be subdivided into partitioning clustering, hierarchical clustering, and hybrid clustering. Hierarchical clustering produces a nested sequence of partitions, whereas partitioning clustering results in a single partition. Hierarchical clustering approaches can be further categorized into agglomerative (bottom-up) and divisive (top-down) hierarchical clustering, depending on whether the hierarchical decomposition is carried out in a bottom-up or a top-down fashion. Partitioning clustering consists of two subcategories, distance based and density based, depending on the similarity measure utilized by the clustering process. A recent trend has been to combine features of hierarchical and partitioning clustering in an attempt to reap benefits from both categories; approaches that follow this trend are usually identified as hybrid clustering.

Figure 2. Categorization of data clustering approaches: data clustering subdivides into hierarchical (agglomerative, bottom-up, vs. divisive, top-down), partitioning (distance based vs. density based), and hybrid approaches
Partitioning Methods

Partitioning clustering methods generate a partition of a data set in order to recover any natural groupings present in the data. Figure 3 shows a data set partitioned into three clusters. As aforementioned, partitioning clustering can be further subdivided into distance-based partitioning and density-based partitioning.

Figure 3. Data set partitioned into three clusters

A distance-based partitioning approach breaks a data set into k subsets or clusters such that data points in the same cluster are closer to each other than to data points in other clusters. By and large, the most classical distance-based partitioning methods are k-means (Hartigan & Wong, 1979) and k-medoids, where each cluster is organized around a gravity center, either the mean or the medoid of the points in the cluster, respectively. In general, distance-based partitioning approaches suffer from several problems: (a) they require an input parameter k, the number of resulting clusters, which needs to be predetermined by the user; (b) they are only suitable for clusters with isotropic shapes; (c) they do not handle clusters of different sizes well; and (d) they are sensitive to the choice of the initial cluster centers and may converge to local minima of the criterion function if the initial partitions are not properly chosen.

Figure 4. K-means is sensitive to the initial partition

Figure 4 shows two different partitions produced by the same k-means algorithm with different initial partitions (Jain, Murty, & Flynn, 1999). When k-means starts with patterns A, B, and C as the initial cluster means, it ends up with the partition {{A}, {B, C}, {D, E, F, G}}, shown by ellipses. However, a better three-cluster partition can be obtained by choosing A, D, and F as the initial cluster means, as depicted in the rectangular partition {{A, B, C}, {D, E}, {F, G}}. The rectangular partition is better because data points in the same cluster are closer to each other than to data points in other clusters.

In density-based clustering, clusters are dense areas of points in the data space separated by areas of low density (usually referred to as noise). A cluster is regarded as a connected dense area of data points that can grow in any direction to which the density leads. Density-based clustering can usually discover clusters with arbitrary shapes without predetermining the number of clusters. However, like distance-based partitioning, density-based clustering is also very sensitive to
input parameters such as the density threshold, as illustrated in Figure 5 (Hinneburg & Keim, 1998). Usually, higher density thresholds result in fewer points falling inside clusters and more points labeled as noise or outliers. One of the most typical density-based clustering algorithms is DBSCAN (Ester, Kriegel, Sander, & Xu, 1996). DBSCAN views each cluster as a maximal set of density-connected points. Points are connected if they are density-reachable, meaning that they fall within the core neighborhood of each other or of a chain of intermediate core points. The core neighborhood is determined by two input parameters: the neighborhood radius and the minimum number of neighbors.
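The initialization sensitivity discussed for Figure 4 is easy to reproduce. The sketch below is a minimal Lloyd-style k-means over hypothetical coordinates chosen to mimic the A-G layout of the figure; it is an illustration only, not the implementation used in any cited study.

```python
# Minimal Lloyd-style k-means; coordinates are hypothetical, chosen to
# mimic the A-G layout of Figure 4.
def kmeans(points, centers, iters=20):
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # Update step: each center becomes the mean of its cluster
        # (an empty cluster keeps its previous center).
        centers = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else c
                   for cl, c in zip(clusters, centers)]
    return clusters

pts = {'A': (0, 0), 'B': (1, 0), 'C': (2, 1),
       'D': (10, 10), 'E': (11, 10), 'F': (10, 14), 'G': (11, 14)}
name_of = {v: k for k, v in pts.items()}
data = list(pts.values())

def partition(clusters):
    return sorted(''.join(sorted(name_of[p] for p in cl)) for cl in clusters if cl)

# One seed per natural group recovers the good (rectangular) partition.
print(partition(kmeans(data, [pts['A'], pts['D'], pts['F']])))  # ['ABC', 'DE', 'FG']
# Three seeds crowded into one corner converge to a poor local optimum.
print(partition(kmeans(data, [pts['A'], pts['B'], pts['C']])))  # ['A', 'BC', 'DEFG']
```

Seeding one center per natural group yields the balanced partition, while crowding all three seeds together converges to the inferior partition, which is exactly the sensitivity to initial centers noted in problem (d) above.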
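The DBSCAN procedure just described can likewise be sketched in a few lines. The data, eps, and min_pts values below are hypothetical, and the quadratic neighbor search is for clarity only; production implementations use spatial indexes.

```python
# Minimal DBSCAN-style sketch with a naive O(n^2) neighbor search.
def dbscan(points, eps, min_pts):
    UNSEEN, NOISE = None, -1
    labels = [UNSEEN] * len(points)

    def neighbors(i):
        # Indices of all points within eps of point i (including i itself).
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not UNSEEN:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:        # not a core point: tentatively noise
            labels[i] = NOISE
            continue
        labels[i] = cluster
        queue = list(seeds)
        while queue:                    # grow the cluster in every direction
            j = queue.pop()
            if labels[j] == NOISE:      # noise reachable from a core point
                labels[j] = cluster     # becomes a border point
            if labels[j] is not UNSEEN:
                continue
            labels[j] = cluster
            reach = neighbors(j)
            if len(reach) >= min_pts:   # j is itself a core point: keep expanding
                queue.extend(reach)
        cluster += 1
    return labels

# Two dense blobs and one isolated outlier (hypothetical data).
data = [(0, 0), (0, 1), (1, 0), (1, 1),
        (8, 8), (8, 9), (9, 8), (9, 9),
        (4, 20)]
print(dbscan(data, eps=1.5, min_pts=3))  # [0, 0, 0, 0, 1, 1, 1, 1, -1]
```

Each core point pulls its neighborhood into the current cluster; border points are absorbed without being expanded further, and points reachable from no core point keep the noise label -1.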
Figure 5. Sensitivity of density-based clustering

Hierarchical Clustering

Hierarchical algorithms create a hierarchical decomposition of a data set X. The hierarchical decomposition is represented by a dendrogram,
Figure 6. Hierarchical decomposition and the resulting dendrogram: (a) hierarchical decomposition; (b) dendrogram
a tree that iteratively splits X into smaller subsets until each subset consists of only one object. In such a hierarchy, each level of the tree represents a clustering of the data points in X. Figure 6 shows the hierarchical decomposition process and the corresponding dendrogram of the hierarchical clustering. Hierarchical clustering approaches are subdivided into agglomerative (bottom-up) and divisive (top-down) approaches (Han & Kamber, 2001). Agglomerative approaches start with each point residing in its own distinct cluster; then they successively merge clusters together until a stopping
criterion is satisfied. Divisive approaches operate in the opposite direction by starting with all points in a single cluster, which is then split recursively until a stopping criterion is met. Figure 7 shows a hierarchical clustering of five data points performed agglomeratively (upper part of the figure) and divisively (lower part of the figure). Agglomerative clustering algorithms are greedy by nature. At each step, the two most similar clusters are determined and merged into a new cluster. The algorithm stops when a certain stopping criterion is met. Usually, it proceeds until one large cluster containing all data objects is
formed.

Figure 7. Agglomerative vs. divisive hierarchical clustering of five points, a through e: agglomerative merging proceeds from step 0 to step 4, while divisive splitting runs from step 4 back to step 0

Divisive clustering is very similar except that it splits the most dissimilar clusters instead of merging the most similar ones.

There are several ways to define cluster similarity. Most hierarchical clustering algorithms utilize either the single-link or the complete-link method. In the single-link method, the distance between two clusters is defined as the minimum distance over all pairs of points from the two clusters; in the complete-link method, it is defined as the maximum over all pairwise distances between points in the two clusters. In either case, two clusters are merged to form a larger cluster (or a single cluster is split to form two smaller clusters) based on the minimum-distance (or maximum-similarity) criterion. Complete-link-based algorithms produce tightly bound or compact clusters, while the single-link-based ones suffer when there is a chain of noise between two given clusters. In general, it has been noted in the literature that algorithms based on complete-link clustering produce better results than their single-link counterparts, which usually produce skewed results (Jain & Dubes, 1988).

In summary, hierarchical clustering algorithms are considered more flexible than their partitioning counterparts, largely because they do not require the number of clusters as an input parameter from users. However, their computational complexities are typically higher than those of the partitioning algorithms. The time complexities of single-link-based and complete-link-based clustering algorithms are O(n²) and O(n³), respectively, while k-means requires only O(n). This observation spurred the development of many hybrid algorithms aimed at exploiting the benefits of both hierarchical clustering and partitioning clustering.
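The single-link versus complete-link contrast, including the chaining problem, can be demonstrated with a deliberately naive agglomerative sketch. The data are hypothetical: two tight groups connected by a sparse chain of bridge points.

```python
# Naive agglomerative clustering: repeatedly merge the two closest clusters
# until k remain. `linkage` is min (single link) or max (complete link)
# over all cross-cluster point pairs.
def agglomerate(points, k, linkage):
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    clusters = [[p] for p in points]          # every point starts alone
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = linkage(dist(p, q) for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)        # merge the closest pair
    return clusters

group1 = [(0, 0), (0, 1), (1, 0), (1, 1)]
group2 = [(10, 0), (10, 1), (11, 0), (11, 1)]
bridge = [(3, 0.5), (5, 0.5), (7, 0.5), (9, 0.5)]
data = group1 + bridge + group2

single = sorted(len(c) for c in agglomerate(data, 2, min))
complete = sorted(len(c) for c in agglomerate(data, 2, max))
print(single, complete)  # [4, 8] [5, 7]
```

Single-link follows the bridge, leaving one group alone (4 points) and chaining the bridge onto the other group (8 points), whereas complete-link splits the bridge between the two groups (5 and 7 points). This illustrates why single-link output is often described as skewed.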
HYBRID CLUSTERING

Several hybrid clustering algorithms have been proposed in the literature to reap the benefits of both hierarchical and partitioning clustering. In
general, these algorithms first partition the data set into preliminary clusters and then construct a hierarchical structure on top of these subclusters. Figure 8 shows a data set partitioned into 15 subclusters that are then merged into two clusters (Lin & Chen, 2005). One of the earliest hybrid algorithms was developed by combining k-means and hierarchical clustering (Murty & Krishna, 1981). This algorithm first partitions the data set into several groups and then performs k-means clustering on each partition to obtain several subclusters. Then, hierarchical clustering is used to build up levels using the centers of the subclusters from the previous level. This process continues until exactly k clusters are formed. In a final step, the algorithm reassigns all points of each subcluster to the cluster of their corresponding center. Another hybrid approach, BIRCH, is considered one of the most efficient clustering algorithms (Zhang, Ramakrishnan, & Livny, 1996). BIRCH performs a linear scan of all data points to create cluster summaries that are then stored in a memory-resident data structure called a CF-tree (clustering-feature tree). In a CF-tree, a nonleaf node represents a cluster consisting of all the subclusters represented by its children. BIRCH first partitions the data set into many small subclusters and then applies a global clustering algorithm on those subclusters to achieve the final results. The main strength of BIRCH derives from its highly efficient preprocessing of large input data sets, which makes the global clustering algorithm execute more efficiently. A third popular hybrid approach called CHAMELEON operates on a k-nearest-neighbor graph (Karypis, Han, & Kumar, 1999). The underlying algorithm consists of three basic steps: (a) constructing a k-nearest-neighbor graph, (b) partitioning the k-nearest-neighbor graph into many small subclusters, and (c) merging those subclusters to get the final clustering results.
CAMP is a fourth efficient hybrid clustering approach, one that utilizes density attractor trees (Wang & Perrizo, 2005). CAMP consists of two processes: (a) clustering using local density attractor trees and (b) merging clusters based on similarity. The data set is first grouped into local attractor trees via density-based clustering. Each resulting local attractor tree represents a preliminary cluster. In a subsequent step, the small clusters are then merged level by level based on their cluster similarity.
Figure 8. Hybrid clustering process: (a) obtain subclusters; (b) merge subclusters into larger clusters (Lin & Chen, 2005)
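The two-stage hybrid scheme can be sketched directly: partition the data into subclusters with a bare-bones k-means, then agglomeratively merge the subclusters by centroid distance until k clusters remain. This is a simplified sketch loosely following the Murty and Krishna idea, not any of the published algorithms; the data and parameters are made up.

```python
def centroid(cluster):
    """Mean point of a cluster of 2-D points."""
    n = len(cluster)
    return (sum(p[0] for p in cluster) / n, sum(p[1] for p in cluster) / n)

def kmeans(points, k, iters=10):
    """A bare-bones k-means: returns up to k subclusters (lists of points)."""
    centers = points[:k]  # naive initialization: first k points
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: (p[0] - centers[i][0]) ** 2 + (p[1] - centers[i][1]) ** 2)
            groups[i].append(p)
        centers = [centroid(g) if g else centers[i] for i, g in enumerate(groups)]
    return [g for g in groups if g]

def hybrid_cluster(points, n_sub, k):
    """Stage 1: partition into n_sub subclusters with k-means.
    Stage 2: agglomeratively merge subclusters (by squared centroid
    distance) until only k clusters remain."""
    subs = kmeans(points, n_sub)
    while len(subs) > k:
        best = None
        for i in range(len(subs)):
            for j in range(i + 1, len(subs)):
                ci, cj = centroid(subs[i]), centroid(subs[j])
                d = (ci[0] - cj[0]) ** 2 + (ci[1] - cj[1]) ** 2
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        subs[i] = subs[i] + subs[j]
        del subs[j]
    return subs
```

The cheap first stage reduces n points to n_sub summaries, so the expensive hierarchical stage runs on a much smaller input, which is the efficiency argument behind BIRCH-style preprocessing as well.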
AN ALTERNATIVE CLUSTERING CATEGORIZATION

Clustering techniques can alternatively be categorized into prototype-based, density-based, and graph-based approaches according to the type of the resulting clusters (Tan, Steinbach, & Kumar, 2006). Each such category can be further subdivided into partitioning, hierarchical, and hybrid clustering (see Figure 9). In prototype-based clustering, a cluster is a set of data points in which any point is closer to its cluster's prototype than to the prototype of any other cluster. For example, k-means is a simple prototype-based clustering algorithm where the prototype is the cluster mean. Other prototype-based approaches include fuzzy clustering and the SOM (self-organizing map) algorithm (Kohonen, 1997). Density-based clustering can be expanded
to include grid-based clustering and subspace clustering, which are more efficient than the traditional density-based clustering techniques discussed above. In graph-based clustering, data objects are represented by nodes, and the distance or similarity between two data objects is represented by the weight of the edge between the corresponding nodes. The graph is often made sparser by keeping only the edges among the nearest neighbors, and the similarity between two objects is then based on the number of nearest neighbors they share (Tan et al., 2006). A typical graph-based clustering algorithm is minimum spanning tree (MST) clustering (Xu, Olman, & Xu, 2001). Another, more recent, graph-based clustering approach is CHAMELEON.
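The MST approach mentioned above is simple enough to sketch in full: build a minimum spanning tree over the complete graph of points (Prim's algorithm here), then delete the k-1 longest tree edges; the surviving connected components are the clusters. A minimal illustrative implementation with hypothetical data:

```python
def mst_clusters(points, k):
    """MST clustering: build a minimum spanning tree over the complete
    graph of points (Prim's algorithm), delete the k-1 longest MST edges,
    and return the resulting connected components as clusters."""
    n = len(points)

    def d(i, j):
        return sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5

    # Prim's algorithm: grow the tree outward from node 0
    in_tree = {0}
    edges = []  # MST edges as (length, i, j)
    while len(in_tree) < n:
        length, i, j = min((d(i, j), i, j)
                           for i in in_tree for j in range(n) if j not in in_tree)
        edges.append((length, i, j))
        in_tree.add(j)

    # keep the n-k shortest MST edges (i.e., drop the k-1 longest ones)
    edges.sort()
    kept = edges[:n - k]

    # union-find to collect the connected components of the kept edges
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for _, i, j in kept:
        parent[find(i)] = find(j)
    comps = {}
    for idx in range(n):
        comps.setdefault(find(idx), []).append(points[idx])
    return list(comps.values())
```

Note that cutting the single longest edge is a single-link-style decision, so this sketch inherits single-link's sensitivity to chains of noise between clusters.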
FUTURE TRENDS

With the inexorable growth of volumes of data collected in various application areas, a number of new data clustering techniques are being developed to meet the following challenges in one way or another (Han & Kamber, 2001).

• The curse of scalability: Many clustering algorithms work well on small data sets. However, it is currently common for databases to contain millions or even billions of data records. Thus, highly scalable clustering algorithms are in great demand.
• Nonparametric clustering: Many clustering algorithms require users to provide certain parameters for the clustering process (such as k, the number of desired clusters, in k-means). As a result, clustering results can be quite sensitive to such input parameters.
• Ability to deal with noisy data: Most real-world data sets and databases contain erroneous or missing data. Consequently, clustering techniques sensitive to noise tend to produce clusters of poor quality.
• The curse of high dimensionality: Many databases store objects in high-dimensional spaces. This may complicate the clustering process due to the required time expense, especially when the data are sparse and highly skewed.
CONCLUSION

Clustering is an interesting, useful, and challenging process. Currently, many clustering techniques are already available and many more are still under development. Different clustering techniques may have completely different focal objectives. Some techniques consider clustering accuracy the primary concern, while others focus mostly on computational efficiency and scalability. A number of techniques aim mostly at relieving the user of the burden of predetermining input parameters without sacrificing the overall quality of the resulting clusters. Thus, selecting a good clustering technique can be quite challenging. For some applications, such as creating a biological taxonomy, hierarchical clustering is preferred. In the case of clustering for summarization, partitioning clustering is appropriate. In some other applications, the hybrid approach may prove more efficient.
FUTURE RESEARCH DIRECTIONS

Data clustering has gained more and more popularity as volumes of data increase in many different areas. Data clustering can be used in agriculture to provide farmers with information on irrigating and fertilizing based on images from the previous year; the USDA (U.S. Department of Agriculture) has funded grants for such projects. Data clustering is also widely used in medical image analysis, stock markets, and bioinformatics. Many research foundations, such as the NSF (National Science Foundation), GSA, NIH, and so forth, provide large grants for research on data clustering and mining. The International Data Corporation has recently projected that market demand for data mining tools will grow dramatically.
REFERENCES

Berkhin, P. (2002). Survey of clustering data mining techniques (Tech. Rep.). Accrue Software.

Clatworthy, J., Buick, D., Hankins, M., Weinman, J., & Horne, R. (2005). The use and reporting of cluster analysis in health psychology: A review. British Journal of Health Psychology, 10, 329-358.

Ester, M., Kriegel, H.-P., Sander, J., & Xu, X. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. Proceedings of the 2nd ACM SIGKDD (pp. 226-231).

Han, J., & Kamber, M. (2001). Data mining: Concepts and techniques. Morgan Kaufmann.

Hartigan, J. A., & Wong, M. A. (1979). A k-means clustering algorithm. Applied Statistics, 28, 100-108.

Hinneburg, A., & Keim, D. A. (1998). An efficient approach to clustering in large multimedia databases with noise. 4th International Conference on Knowledge Discovery and Data Mining (pp. 315-326).

Jain, A. K., & Dubes, R. C. (1988). Algorithms for clustering data. Prentice-Hall.

Jain, A. K., Murty, M. N., & Flynn, P. J. (1999). Data clustering: A review. ACM Computing Surveys, 31(3), 264-323.

Karypis, G., Han, E. H., & Kumar, V. (1999). CHAMELEON: A hierarchical clustering algorithm using dynamic modeling. IEEE Computer, 32(8), 68-75.

Kohonen, T. (1997). Self-organizing maps. New York: Springer-Verlag.

Lin, C., & Chen, M. (2005). Combining partitional and hierarchical algorithms for robust and efficient data clustering with cohesion self-merging. IEEE Transactions on Knowledge and Data Engineering, 17(2), 145-159.

Murty, M. N., & Krishna, G. (1981). A hybrid clustering procedure for concentric and chain-like clusters. International Journal of Computer and Information Sciences, 10(6), 397-412.

Tan, P., Steinbach, M., & Kumar, V. (2006). Introduction to data mining. Pearson Education.

Wang, B., & Perrizo, W. (2005). Tree-based clustering for gene expression data. 20th ACM Symposium on Applied Computing (pp. 204-205).

Xu, Y., Olman, V., & Xu, D. (2001). Minimum spanning trees for gene expression data clustering. Genome Informatics, 12, 24-33.

Zhang, T., Ramakrishnan, R., & Livny, M. (1996). BIRCH: An efficient data clustering method for very large databases. International Conference on Management of Data, ACM SIGMOD (pp. 87-94).
FURTHER READING

There is a vast amount of research in the data-clustering area. In order to have a solid understanding of clustering, the reader should read at least two data mining books and keep updated with a few research journals and the proceedings of some prestigious international conferences. The following is a suggested reading list.

• Han, J., & Kamber, M. (2001). Data mining: Concepts and techniques. Morgan Kaufmann.
• Jain, A. K., Murty, M. N., & Flynn, P. J. (1999). Data clustering: A review. ACM Computing Surveys, 31(3), 264-323.
• Tan, P., Steinbach, M., & Kumar, V. (2006). Introduction to data mining. Pearson Education.
• IEEE Transactions on Knowledge and Data Engineering
• International Journal of Computer and Information Sciences
• Data Mining and Knowledge Discovery
• International Conference on Management of Data, ACM SIGMOD
• International Conference on Knowledge Discovery and Data Mining
• IEEE International Conference on Data Mining
TERMS AND DEFINITIONS

Cluster: A cluster is a group of objects having some common natural characteristics.

Data Clustering: Data clustering is a discovery process that partitions a data set into groups such that data points within a group have high similarity in comparison to one another but are very dissimilar to points in other groups.
Data Mining: Data mining is a knowledge discovery process that focuses on extracting previously unknown, actionable information from very large databases.

Distance Measure: A distance measure is a metric used to compute the distance between two data objects. The most commonly used distance measures are Manhattan distance and Euclidean distance.

Hierarchical Clustering: Hierarchical clustering is the process of creating a hierarchical decomposition of a data set.
Hybrid Clustering: Hybrid clustering is a clustering process that partitions a data set into preliminary clusters and then constructs a hierarchical structure upon these subclusters based on a given similarity measure.

Partitioning Clustering: Partitioning clustering is the process of generating a single partition of the data in an attempt to recover any natural groupings hidden in the data.

Unsupervised Learning: Unsupervised learning is a machine-learning approach in which a model is fit to a given set of observations. It is distinguished from supervised learning by the fact that there is no a priori output.
Chapter LIII
Statistical Dissemination Systems and the Web

Giuseppe Sindoni
Eurostat, Luxembourg

Leonardo Tininini
CNR - Istituto di Analisi dei Sistemi ed Informatica “A. Ruberti”, Italy
INTRODUCTION

The Web is increasingly used as a preferred medium for A2C (administration-to-citizens) and A2B (administration-to-business) service delivery. An increasing number of government initiatives are aimed at making access to electronic records easier for the general public. For example, the Electronic Records Archives program of the U.S. National Archives and Records Administration is aimed at preserving virtually any kind of electronic record, free from dependence on any specific hardware or software, at enabling customers to find the records they want, and at delivering those records in formats suited to customers' needs (Lake, 2006). This will in particular include records coming from the 2010 U.S. census. International professional associations are paying increasing attention to the public availability of statistical data. For example, the last meeting of the International Association for Social Science Information Services & Technology (IASSIST, 2006) dedicated entire sessions to problems like knowledge and resource discovery, innovative
data dissemination systems, and data-intensive Web site design. In this context, the world of public statistics has the opportunity to exploit many efficient, flexible new technologies and standards to deliver better quality statistics in a more timely and accessible way. First, Web technologies can greatly reduce the gap between data producers and users as published data can be made available immediately after their production to a worldwide community of users. Second, as published statistical data are multidimensional, it would seem that the consolidated methodologies and techniques for data warehouse (DWH) modeling and navigation could be easily applied to support user navigation among the available data. However, this methodological and technological scenario cannot be implemented straightforwardly due to some specific features of statistical data in contrast with conventional business data, which require the introduction of specifically designed models and techniques. In this chapter we review the main concepts at the basis of multidimensional (data warehouse) modeling and navigation. We also illustrate some
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
peculiarities of statistical data that make the implementation of a statistical data warehouse, that is, a statistical dissemination system enabling the user to perform a multidimensional navigation, a challenging issue in many aspects. Finally, we analyze the main characteristics of some of the most important systems for the dissemination of statistical data on the Web, distinguishing two main approaches, the former based on a free navigation of specific subcubes, and the latter on a constrained navigation of a single data cube.
BACKGROUND

In this section we review the main terms and concepts related to data warehousing and statistical databases (SDBs). A key concept common to both contexts is that of (statistical) aggregate data. These are obtained by applying simple aggregation functions (Cabibbo & Torlone, 1999; Klug, 1982), like the standard count, sum, min, max, and avg, or more complex statistical analysis functions to groups of elementary data (usually called microdata in statistical terminology). In statistical surveys, microdata are commonly obtained from questionnaires, but data extracted from public registries are also becoming increasingly important.
Microdata are rarely processed in their original form, but are instead transformed by specifically designed tools for extraction, transformation, and loading (ETL), performing a general reconciliation of the source data and a reclassification of some attributes, as well as some preaggregations. The resulting data are stored in the so-called fact tables (see Figure 1), comprising both dimension codes, used to classify the data, and measures, on which the aggregation and statistical functions are applied (Cabibbo & Torlone, 1997; Kimball, 1996). Each fact table is typically linked to a collection of dimension tables to form the so-called star and snowflake schemas (see Figure 2). Dimension tables are used to both decode the dimension codes and define groups of codes, which represent the levels of detail of the classifying dimensions and consequently also of the aggregate data. These levels are commonly known as dimension levels and are organized in dimension hierarchies (Jagadish, Lakshmanan, & Srivastava, 1999). For example, if Di is an area dimension, Di,1, Di,2, and Di,3 may correspond to the national, regional, and municipality levels, and may also have a temporal evolution that affects the corresponding aggregate data (Tininini, Paolucci, Sindoni, & De Francisci, 2002). Figure 2 shows an example of star schema with three dimension tables, one of which (the area dimension) has three dimension levels.
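The fact-table and dimension-table organization just described can be made concrete with a small relational example using Python's built-in sqlite3 module. The schema and data below are a hypothetical miniature of the star schema of Figure 2, not an actual statistical database; the area dimension table encodes the municipality-region-nation hierarchy.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Dimension table: decodes area codes and defines the dimension hierarchy
# (municipality -> region -> nation).
cur.execute("CREATE TABLE dim_area (municip TEXT PRIMARY KEY, region TEXT, nation TEXT)")
cur.executemany("INSERT INTO dim_area VALUES (?, ?, ?)", [
    ("Florence", "Tuscany", "Italy"),
    ("Pisa",     "Tuscany", "Italy"),
    ("Perugia",  "Umbria",  "Italy"),
])

# Fact table: dimension codes (sex, marital status, area) plus a measure (income).
cur.execute("CREATE TABLE fact (sex TEXT, mar_st TEXT, area TEXT, income REAL)")
cur.executemany("INSERT INTO fact VALUES (?, ?, ?, ?)", [
    ("F", "div",   "Florence", 30000),
    ("M", "unmar", "Pisa",     25000),
    ("F", "mar",   "Perugia",  28000),
    ("F", "mar",   "Florence", 32000),
])

# Aggregate at the regional level by joining fact and dimension tables:
# the dimension table supplies the region each municipality rolls up into.
rows = cur.execute("""
    SELECT d.region, AVG(f.income)
    FROM fact f JOIN dim_area d ON f.area = d.municip
    GROUP BY d.region ORDER BY d.region
""").fetchall()
print(rows)
```

Grouping by d.nation instead of d.region in the same query would produce the national-level aggregate, which is precisely the roll-up operation discussed later in the chapter.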
Figure 1. From microdata to aggregate data: ETL transforms microdata records into a fact table with dimension codes (sex, age group, marital status) and an income measure; grouping by sex and aggregating then yields the aggregate data (average income by sex)
Figure 2. A star schema with one fact table (sex, marital status, area, income) and three dimension tables; the area dimension table has three levels (municipality, region, and nation, e.g., Florence, Tuscany, Italy)
As mentioned above, dimensions are used to classify data, defining the data groups to which aggregation functions are applied. Consequently, aggregate values are associated with specific combinations of dimension values and the mapping between them is commonly represented by the multidimensional data cube metaphor (Gray, Bosworth, Layman, & Pirahesh, 1996): For each of the n dimensions used to classify the data, a dimension of an n-dimensional space is
considered. Single dimension values are mapped to distinct points on the dimension axis and their combination in this n-dimensional space defines a multidimensional hypercube. Each cell of the hypercube is identified by a specific combination of dimension values and is associated with the corresponding aggregate value, namely, the value obtained by applying the aggregation function on the data group defined by that dimensional combination (see Figure 3).
Figure 3. The mapping from the combination of dimension values (female, married, Florence) to the corresponding aggregate value (resident population) in a three-dimensional hypercube with dimensions sex, marital status, and area
Figure 4. Roll-up and drill-down on the area dimension: rolling up produces resident population (x1,000) by nation and sex (France, Germany, Italy, ...), while drilling down produces resident population (x1,000) by region and sex (Sicily, Tuscany, Umbria, ...)
The presence of dimension levels may further complicate the structure of the hypercube, enabling the user to perform a multidimensional navigation, for example, to decrease or increase the level of detail by performing the well-known operations of roll-up and drill-down (Agrawal, Gupta, & Sarawagi, 1997; Li & Wang, 1996). For instance, the user may have accessed some data on the resident population by region and sex, and use the DWH user interface to drill down on the area dimension, thus obtaining data classified by municipality and sex, or conversely to roll up on the same dimension, visualizing population data by nation and sex (see Figure 4).
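A roll-up from the regional to the national level amounts to re-aggregating the cube cells along the area hierarchy. A minimal sketch in Python (the population figures below are illustrative, not actual census values):

```python
# Region-level cube cells: (region, sex) -> resident population (x1,000).
# These figures are made up for illustration.
by_region = {
    ("Sicily",  "F"): 2570, ("Sicily",  "M"): 2430,
    ("Tuscany", "F"): 1880, ("Tuscany", "M"): 1750,
    ("Umbria",  "F"): 440,  ("Umbria",  "M"): 420,
}
# The area dimension hierarchy: which nation each region belongs to.
region_to_nation = {"Sicily": "Italy", "Tuscany": "Italy", "Umbria": "Italy"}

def roll_up(cells, mapping):
    """Re-aggregate cube cells to the coarser level of the area hierarchy
    by summing all cells that map to the same (nation, sex) combination."""
    out = {}
    for (region, sex), value in cells.items():
        key = (mapping[region], sex)
        out[key] = out.get(key, 0) + value
    return out

by_nation = roll_up(by_region, region_to_nation)
```

Note the asymmetry: rolling up only needs the coarser mapping, whereas drilling down cannot be computed from the coarser cells at all; it requires the finer-level data (or the microdata) to be available, a point that matters for the dissemination constraints discussed in the next section.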
STATISTICAL DISSEMINATION SYSTEMS

The strict correspondence between SDBs, that is, systems specifically designed to disseminate statistical data, and DWHs, also known as online analytical processing (OLAP) systems, was pointed out a few years ago by Shoshani (1997). Consequently, as DWHs have well-established methodologies and techniques, as well as powerful and user-friendly tools supporting the design, storage, and multidimensional navigation of data, one may think to straightforwardly extend their use to the interactive dissemination of statistical data. However, despite the evident similarities, SDBs have several peculiarities that require conventional DWH techniques to be extended with more specific models and structures. In the following, we briefly analyze the most relevant of these peculiarities, the impact they have on the multidimensional navigation paradigm, and how the most well-known Web SDBs have dealt with the related issues, trying to achieve a good trade-off between the characteristic freedom and flexibility of DWH multidimensional navigation and the constraints arising in the statistical-dissemination context.

Statistical Dissemination vs. Multidimensional Navigation

In this section we show the reasons why conventional DWHs cannot be used straightforwardly to disseminate statistical data. Several features of such data will be analyzed as well as the impact that they have on the well-known data warehouse multidimensional interaction paradigm.

Sample surveys. The first peculiarity regards sample surveys. Unlike conventional DWHs, statistical surveys can only rarely observe the entire target population. Observations are instead often limited to fairly small subpopulations, commonly known as samples. Sample selection is driven by sophisticated methods in order to limit bias problems and make the sample representative of the entire population, at least for the specific
context of interest (Hansen, Hurwitz, & Madow, 1993; Thompson, 1997). However, the representativeness of any sample decreases with increased classificatory detail, and beyond a certain level the inferred results can produce a distorted or completely incorrect view of the phenomenon under consideration. For example, each individual in a sample survey on working conditions may be representative of 5,000 individuals of the actual population. As a consequence, in any aggregation operation each record will have a weight corresponding to 5,000 units. This assumption is certainly reasonable at the national level and for a moderate level of classificatory detail, but it will certainly produce distorted distributions for generic dimension combinations at the municipality level. In a multidimensional navigation perspective, this implies that drill-down operations should be significantly limited when dealing with sample data. Data quality. The problem of data quality is very relevant in any information system, but in conventional DWHs, fact tables are usually supplied with data extracted from transactional (electronic) information systems whose quality is usually much higher than that of data collected using paper forms compiled by nonexperts, as is the case in census surveys (Statistics Canada, 2003). This issue is closely related to the previous one as samples unavoidably introduce statistical approximations that negatively affect the quality of aggregate data, especially in cases of fine-grained classifications. Preserving privacy. A further issue regards privacy and secondary disclosure. Most published statistical data are subject to national and international laws protecting the privacy of citizens. 
For example, a simple table publishing the number of companies classified by country, economic activity, and revenue class may disclose sensitive information even if it publishes data at the national level: If in a given country there is only one active company in a specific economic sector (e.g., car manufacturing), it is possible to
deduce the name of the (only) company from the published table and consequently also its revenue. In general, the data production process should prevent this kind of problem through specifically designed techniques of secondary disclosure control (Kleinberg, Papadimitriou, & Raghavan, 2000; Malvestuto & Moscarini, 2003), providing a reasonable certainty that no sensitive information can be disclosed from the published data. This control is typically based on complex algorithmic techniques that are incompatible with online processing times and therefore with typical data warehouse and Web interactions. Microdata unavailability. All DWHs are based on the classification and aggregation of microdata in fact tables. Aggregate data are precomputed, stored in materialized views, and later used to speed up the computation of cube elements, but if the result cannot be obtained by using the materialized views alone, the microdata can be accessed and aggregated online. In SDBs it is often the case that microdata are not available and only statistical tables (i.e., a collection of already aggregated data) are available for dissemination. This is particularly true in international statistical organizations, whose mission is to collect aggregated data from their member states and publish them in a harmonized way. Even when microdata are available, data in some selected statistical tables represent the only data that the survey manager wants to be made accessible to users (United Nations [UN] Statistics Division, 2006). Filter questions and heterogeneous hierarchies. Statistical questionnaires have quite complex structures and cannot be easily mapped to data warehouse dimensions. This is due to filter questions used to drive the flow of responses through the questionnaire. For example, a respondent could be asked if she is employed. If the answer is yes, then the type of job and starting year are asked for, otherwise the questionnaire flow jumps to the next question. 
These questions cause several modeling problems as they require
complex generalization hierarchies and a consequent proliferation of fact tables, or alternatively a flat structure with null-value issues to be tackled. This problem is analogous to that of heterogeneous hierarchies described by Lehner (1998). The differences between multidimensional navigation in a conventional DWH and an SDB are depicted in Figure 5, where the dimension levels are represented with an increasing level of detail on the dimension axes (e.g., if D2 is an area dimension, D2,1, D2,2, and D2,3 may correspond to the national, regional, and municipality level) and the grey areas represent the dimension-level combinations that can be accessed by users. In conventional DWHs, the user is free to drill down and roll up along any dimensional hierarchy, independently of the detail level of the other dimensions. In contrast, drill-down on a dimension in an SDB can only be performed starting from certain combinations of the other dimensions and, conversely, rolling up on a dimension increases the number of possible explorations (drill-down) on other dimensions.
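The constrained navigation just described (only certain combinations of dimension levels are accessible, and the available drill-downs depend on the levels already selected) can be sketched with a permitted-combination table. The levels and permitted set below are hypothetical, standing in for the gray zones of Figure 5(b):

```python
# Hypothetical set of permitted dimension-level combinations
# (area level, time level), i.e., the gray zones of the SDB cube.
PERMITTED = {
    ("nation", "month"), ("nation", "year"),
    ("region", "year"),
    ("municipality", "year"),
}
AREA_LEVELS = ["nation", "region", "municipality"]  # coarse -> fine

def allowed_drill_downs(area_level, time_level):
    """Which finer area levels can be reached from the current combination
    without leaving the permitted region of the cube?"""
    finer = AREA_LEVELS[AREA_LEVELS.index(area_level) + 1:]
    return [lvl for lvl in finer if (lvl, time_level) in PERMITTED]
```

With this table, yearly data can be drilled down all the way to municipalities, but monthly data are only published at the national level, so from ("nation", "month") no drill-down on area is offered; rolling up the time dimension to "year" re-enables them, mirroring the behavior described for constrained cube navigation below.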
Dealing with Inaccessible Dimensional Combinations

There are many online Web-enabled data warehouse systems that try to fulfill statistical dissemination requirements. They are either modules developed as part of a complete commercial statistical analysis suite or general-purpose statistical dissemination systems developed by public statistics agencies. The former are based on commercial statistical analysis systems, for example, SAS, SPSS, and Business Objects, which usually include a module for the Web-enabled publishing of statistical reports in their software suites. These modules normally rely on the system's storage facilities and often provide several system-specific APIs to implement limitations and constraints required by the application domain. Examples of systems of the latter category are Beyond2020 from the Canadian Statistical Office; Data Explorer from Eurostat, the European Commission Statistical Office; StatLine from the Dutch Statistical Office; American FactFinder
Figure 5. Combinations of dimensional levels accessible in (a) a conventional data warehouse and (b) a statistical dissemination system
from the U.S. Census Bureau; and DaWinciMD from Istat, the Italian National Office of Statistics, which were originally developed to meet the dissemination requirements of their stakeholders but resulted in generic solutions suitable for any statistical domain and organization. All these systems aim at providing a good trade-off between the constraints arising in the statistical dissemination context and the characteristic freedom and flexibility of data-warehouse multidimensional navigation. Broadly speaking, there are two main approaches to the multidimensional navigation in SDBs and they are both based on the idea of splitting the entire cube into several subcubes, where free multidimensional navigation is safe. In the former approach, the user first selects the subcube of interest and then performs a conventional (completely free) multidimensional navigation on the selected subcube. We call this the free subcube navigation paradigm. In the latter approach, no preliminary subcube selection is required, although the navigation is constrained by the defined subcubes. In other words, the user can navigate on (the permitted subsections of) the entire cube, and the interface continuously adapts to the current selection, thus enabling the user to only reach permitted dimension combinations. We call this the constrained cube navigation paradigm. The main advantage of the free subcube navigation approach is that it enables the use of commercial data warehouse systems and conventional multidimensional interaction paradigms. Its main drawback lies in the subcube selection step: Due to the proliferation of subcubes, selecting the right one (i.e., the one corresponding to the desired combination of measure and dimension levels) from a collection of hundreds or even thousands of possible subcubes can be quite complex for the end user and may well lead to failure. 
The constrained cube navigation approach is harder to achieve as it requires a specifically designed interaction paradigm and can be implemented on commercial data warehouse systems
only by a substantial effort of customization using the proprietary APIs. However, it has the advantage of enabling the user to navigate across different subcubes, thus facilitating the construction of the desired measure-dimension combination. It is also more consistent with the typical exploratory spirit of multidimensional navigation. In the following we illustrate the approach adopted by some of the most important Web SDBs, as well as their support for the basic multidimensional navigation functions.
Systems Based on Free Subcube Navigation

Beyond2020 (http://www.beyond2020.com) is a statistical report dissemination system initially developed by Statistics Canada to produce reports for the 1991 census and later turned into commercial software due to the strong domain independence of the achieved solution. The system allows users to select the subcube of interest from a hierarchy of statistical domains, then to select the measure and dimensions from among those available for the selected subcube, and finally to perform a fully multidimensional navigation on the chosen dimensions. As noted above, its main weak point is in the initial subcube selection, which can be cumbersome and error prone, particularly if the global cube (i.e., the one from which the several subcubes are extracted) has a high number of available dimensions. Once the cube is visualized as a statistical table, many functions are available to modify its appearance and structure, for example, to drill down along one dimension and to remove one or more dimensions. StatLine (http://statline.cbs.nl) is a system developed by the Dutch Statistical Office, also available to other statistical public organizations under a specific licensing agreement. The subcube selection functions are similar to those in Beyond2020, so the desired measure and dimension combination must be chosen as the first step, which normally entails a complex navigation
Statistical Dissemination Systems and the Web
among tens of subcubes. Hierarchical code lists are not managed, so drill-down and roll-up are not allowed.

Eurostat's Data Explorer (http://ec.europa.eu/eurostat) is a system to disseminate European statistics through the Web. Though its general features are similar to StatLine's, two types of OLAP interaction are possible. Users can select predefined tables, on which only partial slicing and dicing operations can be performed, or customizable tables, on which fine-tuned selections can be made through a flexible query interface. The main drawback is once again in subcube selection, which can be very difficult, especially when the global cube has a high number of dimensions. Subcube selection is based on a hierarchical classification of measures; this can be misleading for nonexpert users with no knowledge of this classification, who may have to proceed by trial and error in order to find the desired subcube.

The American FactFinder (http://factfinder.census.gov) is the Web-enabled data warehouse of the United States Census Bureau. Subcubes are selected through a sequential process by which a file must first be chosen, corresponding to a set of subcubes referring to a given statistical domain. In the second step, the reference area is chosen, and finally the user can select the subcube to be visualized from a flat list of predefined cubes. This hierarchical information structure offers very flexible area-oriented data selection features. A drawback of this approach is that the nature of specific subcubes can be revealed only after the area selection has been made. In addition, selections can lead to meaningless combinations of areas and tables, where cube data are available only for a subset of the chosen areas.
Constrained Cube Navigation and DaWinciMD

The only currently available system implementing constrained cube navigation is DaWinciMD (Sindoni & Tininini, 2006), originally developed
to disseminate aggregate data from the 2001 Italian Population and Housing Census (http://dawinci.istat.it/MD) and more recently used to disseminate data from the household budget survey of the Bosnia and Herzegovina Agency of Statistics (http://dawinci.istat.it/dawincibosnia). The system is based on the definition of the maximum-detail dimensional combinations of each global cube, basically corresponding to all permitted subcubes (e.g., the gray zones in Figure 1a). The user can start by displaying the data corresponding to a certain combination of measure and dimension levels and then navigate to other subcubes through roll-up and drill-down without ever violating the dissemination constraints or returning to the data cube selection page. It is the system itself that proposes, on each navigation page, only the dimension levels compatible with the measure and dimension levels already selected, thereby always leading the user to a permitted dimensional combination.

Preliminary cube selection is based on the interdependent selection of the object and classifications of interest. The concept of object in the system basically corresponds to that of measure in a conventional data warehouse, although an object may also incorporate some slicing operations on the data cube. In order to guide the user in selecting the required cube, objects are organized into hierarchies, mainly based on generalization relationships, and the user can choose generic objects, that is, those located in the higher levels of the hierarchy. The system is able to combine the generic user choices and map them to the actual object-classification combinations specified by the metadata. DaWinciMD classifications basically correspond to specific dimension levels of the data cubes, although a classification's structure can be more complex than usual flat dimension levels.
Classifications can be shared by several cubes, enabling a user to perform classification-based navigations; the user can select a combination of classifications and ask the system to show all available statistical aggregates (cubes) classified in
that way, independently of the measure. As with objects, classifications are organized into hierarchies to enable the user to express generic queries and consequently facilitate access to data.
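The compatibility mechanism described above can be sketched in a few lines of Python. This is a minimal illustration with invented dimension levels and permitted combinations; the names `PERMITTED` and `compatible_levels` are hypothetical, and DaWinciMD's actual metadata model and APIs are considerably richer.

```python
# Sketch of constrained cube navigation. Each permitted subcube is
# modeled as a frozenset of dimension levels at which the measure may
# be disseminated (all values invented for illustration).
PERMITTED = [
    frozenset({"sex", "age_5yr", "region"}),
    frozenset({"sex", "age_5yr", "province"}),
    frozenset({"sex", "education", "region"}),
]

def compatible_levels(selected):
    """Return the dimension levels the interface should still offer:
    those appearing in at least one permitted subcube that contains
    everything the user has already selected."""
    selected = frozenset(selected)
    levels = set()
    for combo in PERMITTED:
        if selected <= combo:
            levels |= combo - selected
    return levels

# After choosing 'sex' and 'age_5yr', only 'region' and 'province'
# remain selectable; 'education' would violate the constraints.
print(sorted(compatible_levels({"sex", "age_5yr"})))  # -> ['province', 'region']
```

By offering only the result of `compatible_levels` on each navigation page, the interface can never lead the user to a forbidden dimensional combination, which is the essence of the constrained approach.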
Future Trends

Statistical software producers are continuously improving their systems to provide ever more sophisticated functions. Improvements are mainly focused on dashboard features, that is, an integrated set of tools to perform complex business performance analysis, starting from sales, customer behavior, and personnel data. These instruments can be very useful in the private sector, but their usefulness to public statistics is still far from proven. Statistical organizations are also trying to improve their systems to offer more usable interfaces to access and download statistical data.

In addition, there has recently been a certain emphasis on proposals for data sharing standards. For example, the Statistical Data and Metadata Exchange (SDMX) initiative proposes a set of XML- (extensible markup language) based formats and architectural standards to improve the exchange of statistical aggregates among national and international organizations (see http://www.sdmx.org). In this perspective, statistical data dissemination is likely to become a joint exercise where many organizations agree on standard structures and content in specific domains and set up a network of Web services to present users with a single interface to all their published data, avoiding redundancy and inconsistencies.
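The kind of XML-based aggregate exchange that SDMX enables can be illustrated with a deliberately simplified sketch. The element and attribute names below are hypothetical and do not follow the actual SDMX schemas or namespaces (see http://www.sdmx.org for the real standards); the point is only to show a series of observations serialized under a shared structural convention.

```python
# Illustrative only: a drastically simplified, SDMX-inspired message
# built with the standard library. Element/attribute names are invented.
import xml.etree.ElementTree as ET

def build_message(series_key, observations):
    """Serialize one time series (key attributes plus dated
    observations) into a small XML document."""
    root = ET.Element("DataSet")
    series = ET.SubElement(root, "Series", attrib=series_key)
    for period, value in observations:
        ET.SubElement(series, "Obs",
                      TIME_PERIOD=period, OBS_VALUE=str(value))
    return ET.tostring(root, encoding="unicode")

xml = build_message({"FREQ": "A", "GEO": "IT"},
                    [("2004", 57.9), ("2005", 58.1)])
print(xml)
```

A receiving organization that shares the agreed structure can parse such a message mechanically, which is what makes networked, multi-organization dissemination feasible.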
Conclusion

In this chapter we have illustrated the main issues related to the dissemination of statistical aggregate data on the Web. We have shown that, despite the multidimensional nature of this kind of data,
the well-known, consolidated methodologies and techniques for data warehousing cannot be applied straightforwardly, due to some specific features of statistical data with respect to conventional business data. Finally, we have analyzed the main features of some of the most important systems for the dissemination of statistical data on the Web, which aim at achieving a good trade-off between the flexibility of multidimensional navigation, the efficiency and usability requirements of any Web application, and the specific requirements of the statistical context, such as the protection of sensitive data, the significance of published aggregates, and compliance with the statistical production process.
Future Research Directions

The field of statistical dissemination is relatively unexplored by academic researchers, probably because it is highly interdisciplinary, involving competencies in statistics, data warehousing, Web technologies, and so forth. Nevertheless, many related problems deserve to be carefully explored and could be interesting research subjects, also suitable for doctoral programs. In this section, we present the two areas that the authors believe to be the most promising.
Online Secondary Disclosure Control

As stated above, secondary disclosure control is typically based on complex algorithmic techniques that are incompatible with online processing times, and therefore with the typical data warehouse and Web interactions provided by current technologies. In this respect, it could be very interesting to explore solutions to this problem that would enable the development of statistical Web dissemination systems allowing users to query a data warehouse directly and efficiently while preserving data privacy.
This could involve topics like materialized views, query decidability, query optimization, dynamic data scrambling, and anonymity assurance. Important results in this research field could have a big impact on the data warehouse system market, because they would substantially increase the amount of information that can safely be provided to the public while reducing the required effort, since there would no longer be any need to define and produce predefined hypercubes.
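As a point of reference, the simplest disclosure-control device is a threshold (minimum frequency) rule. The hard part, as noted above, is the *secondary* suppression needed so that suppressed cells cannot be reconstructed from marginal totals; the sketch below shows primary suppression only, with invented cell counts.

```python
# Minimal threshold-rule sketch: suppress any cell whose frequency is
# below k (primary suppression). Secondary suppression -- hiding further
# cells so suppressed values cannot be recovered from totals -- is the
# computationally hard step discussed in the text and is not shown here.
def primary_suppress(cells, k=3):
    return {key: (count if count >= k else None)  # None = suppressed
            for key, count in cells.items()}

cells = {("north", "male"): 41, ("north", "female"): 2,
         ("south", "male"): 17, ("south", "female"): 9}
print(primary_suppress(cells))
```

The research challenge sketched in this section is precisely to make decisions of this kind (and the far harder secondary ones) fast enough to be taken online, per query, rather than offline, per predefined hypercube.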
Languages for Web-Based Statistical Exchange

The use of Internet and Web technologies in the statistical production chain is ever increasing the level of process automation. Paper-based data collection is more and more being substituted by electronic processes, DBMSs (database management systems) and sophisticated data analysis tools are available for statistical production, and, as presented in this chapter, data-warehouse and Web technologies bring statistics closer to users. As pointed out in the section "Future Trends," this new scenario is also enforcing new ways of implementing collaboration efforts between organizations at both the national and international level. Now, what is disseminated via the Web by an organization can be directly used as input for further elaboration by another organization, or many organizations can decide to set up data sharing systems, where each one contributes a portion of the data available to the public. This high degree of system interoperability is made possible mainly by the use of Web and XML technologies. Specific XML standards have been proposed by the statistical communities and by their data providers. For example, the SDMX and XBRL (extensible business reporting language, http://www.xbrl.org) standards have been proposed to provide models, schemes, and query languages for efficient data exchange between statistical information systems.

Unfortunately, since these proposals came mainly from the business world and were developed according to mostly operational requirements and without the involvement of computer scientists, they lack robust and sound formal specifications. So, aspects like expressive power, decidability, and equivalence, which are theoretical but have important consequences on the efficient use of these languages, cannot be assessed. This leaves room for some interesting theoretical research.

References
Agrawal, R., Gupta, A., & Sarawagi, S. (1997). Modeling multidimensional databases. International Conference on Data Engineering (ICDE'97) (pp. 232-243).

Cabibbo, L., & Torlone, R. (1997). Querying multidimensional databases. International Workshop on Database Programming Languages (DBPL'97) (pp. 319-335).

Cabibbo, L., & Torlone, R. (1999). A framework for the investigation of aggregate functions in database queries. International Conference on Database Theory (ICDT'99) (pp. 383-397).

Gray, J., Bosworth, A., Layman, A., & Pirahesh, H. (1996). Data cube: A relational aggregation operator generalizing group-by, cross-tab, and sub-total. International Conference on Data Engineering (ICDE'96) (pp. 152-159).

Hansen, M. H., Hurwitz, W. N., & Madow, W. G. (1993). Sample survey methods and theory. John Wiley.

International Association for Social Science Information Services & Technology (IASSIST). (2006). Program of the International Association for Social Science Information Services & Technology 2006 meeting. Retrieved September 2006 from http://www.icpsr.umich.edu/iassist/index.html

Jagadish, H. V., Lakshmanan, L. V. S., & Srivastava, D. (1999). What can hierarchies do for data warehouses? International Conference on Very Large Data Bases (VLDB'99) (pp. 530-541).

Kimball, R. (1996). The data warehouse toolkit. John Wiley & Sons.

Kleinberg, J. M., Papadimitriou, C. H., & Raghavan, P. (2000). Auditing Boolean attributes. ACM Symposium on Principles of Database Systems (PODS 2000) (pp. 86-91).

Klug, A. (1982). Equivalence of relational algebra and relational calculus query languages having aggregate functions. Journal of ACM, 29(3), 699-717.

Lake, D. (2006). The Electronic Records Archives (ERA). Retrieved September 2006 from http://www.archives.gov/era/pdf/2006-saa-lake.pdf

Lehner, W. (1998). Modelling large scale OLAP scenarios. International Conference on Extending Database Technology (EDBT'98) (pp. 153-167).

Li, C., & Wang, X. S. (1996). A data model for supporting on-line analytical processing. Conference on Information and Knowledge Management (CIKM'96) (pp. 81-88).

Malvestuto, F. M., & Moscarini, M. (2003). Privacy in multidimensional databases. In M. Rafanelli (Ed.), Multidimensional databases (pp. 310-360). Idea Group Publishing.
Shoshani, A. (1997). OLAP and statistical databases: Similarities and differences. ACM Symposium on Principles of Database Systems (PODS 97) (pp. 185-196).

Sindoni, G., & Tininini, L. (2006). Statistical warehousing on the Web: Navigating troubled waters. International Conference on Internet and Web Applications and Services (ICIW'06).

Statistics Canada. (2003). Quality guidelines. Retrieved from http://www.statcan.ca/english/freepub/12-539-XIE/12-539-XIE03001.pdf

Thompson, M. E. (1997). Theory of sample surveys. Chapman & Hall.

Tininini, L., Paolucci, M., Sindoni, G., & De Francisci, S. (2002). Spatio-temporal information systems in a statistical context. International Conference on Extending Database Technology (EDBT'02) (pp. 307-316).

United Nations Statistics Division. (2006). Fundamental principles of official statistics. Retrieved June 2006 from http://unstats.un.org/unsd/methods/statorg/FP-English.htm

Further Reading

Bimonte, S., Tchounikine, A., & Miquel, M. (2005). Towards a spatial multidimensional model. International Workshop on Data Warehousing and OLAP (DOLAP 2005) (pp. 39-46).

Cabibbo, L., & Torlone, R. (2001). An architecture for data warehousing supporting data independence and interoperability. International Journal of Cooperative Information Systems, 10(3), 377-397.

Cabibbo, L., & Torlone, R. (2004). On the integration of autonomous data marts. International Conference on Scientific and Statistical Database Management (SSDBM 2004) (pp. 223-231).

Cabibbo, L., & Torlone, R. (2005). Integrating heterogeneous multidimensional databases. International Conference on Scientific and Statistical Database Management (SSDBM 2005) (pp. 205-214).

Calvanese, D., De Giacomo, G., Lenzerini, M., Nardi, D., & Rosati, R. (2001). Data integration in data warehousing. International Journal of Cooperative Information Systems, 10(3), 237-271.
Cuzzocrea, A. (2005). Providing probabilistically-bounded approximate answers to non-holistic aggregate range queries in OLAP. International Workshop on Data Warehousing and OLAP (DOLAP 2005) (pp. 97-106).

Cuzzocrea, A. (2006). Accuracy control in compressed multidimensional data cubes for quality of answer-based OLAP tools. International Conference on Scientific and Statistical Database Management (SSDBM 2006) (pp. 301-310).

Duncan, G. T., & Mukherjee, S. (2000). Optimal disclosure limitation strategy in statistical databases: Deterring tracker attacks through additive noise. Journal of the American Statistical Association, 95(451), 720-729.

Fan, H., & Poulovassilis, A. (2003). Using AutoMed metadata in data warehousing environments. International Workshop on Data Warehousing and OLAP (DOLAP 2003) (pp. 86-93).

Fayyoumi, E., & Oommen, B. J. (2006). A fixed structure learning automaton micro-aggregation technique for secure statistical databases. International Conference on Privacy in Statistical Databases (PSD 2006) (pp. 114-128).

Fidalgo, R. d. N., Times, V. C., & de Souza, F. d. F. (2002). Providing OLAP interoperability with OLAPWare. International Conference of the Chilean Computer Science Society (SCCC 2002) (pp. 167-176).

Ge, R., Ester, M., Jin, W., & Hu, Z. (2006). A disc-based approach to data summarization and privacy preservation. International Conference on Scientific and Statistical Database Management (SSDBM 2006) (pp. 321-332).

Gingras, F., & Lakshmanan, L. V. S. (1998). nD-SQL: A multi-dimensional language for interoperability and OLAP. International Conference on Very Large Data Bases (VLDB'98) (pp. 134-145).

Grumbach, S., & Tininini, L. (2000). Automatic aggregation using explicit metadata. International Conference on Scientific and Statistical Database Management (SSDBM 2000) (pp. 85-94).

Hua, M., Zhang, S., Wang, W., Zhou, H., & Shi, B. (2005). FMC: An approach for privacy preserving OLAP. International Conference on Data Warehousing and Knowledge Discovery (DaWaK 2005) (pp. 408-417).

Hurtado, C. A., & Mendelzon, A. O. (2001). Reasoning about summarizability in heterogeneous multidimensional schemas. International Conference on Database Theory (ICDT 2001) (pp. 375-389).

Jarke, M., & Vassiliou, Y. (1997). Data warehouse quality: A review of the DWQ project. 2nd Conference on Information Quality (IQ 1997) (pp. 299-313).

Malvestuto, F. M., & Mezzini, M. (2004). Privacy preserving and data mining in an on-line statistical database of additive type. International Workshop on Privacy in Statistical Databases (PSD 2004) (pp. 353-365).

Malvestuto, F. M., & Pourabbas, E. (2004). Customized answers to summary queries via aggregate views. International Conference on Scientific and Statistical Database Management (SSDBM 2004) (pp. 193-202).

Maniatis, A. S., Vassiliadis, P., Skiadopoulos, S., & Vassiliou, Y. (2003). Advanced visualization for OLAP. International Workshop on Data Warehousing and OLAP (DOLAP 2003) (pp. 9-16).

Mansmann, S., & Scholl, M. H. (2006). Extending visual OLAP for handling irregular dimensional hierarchies. International Conference on Data Warehousing and Knowledge Discovery (DaWaK 2005) (pp. 95-105).

Pourabbas, E., & Shoshani, A. (2006). The composite OLAP-object data model: Removing an unnecessary barrier. International Conference on Scientific and Statistical Database Management (SSDBM 2006) (pp. 291-300).

Priebe, T., & Pernul, G. (2003). Ontology-based integration of OLAP and information retrieval. International Workshop on Database and Expert Systems Applications (DEXA'03) (pp. 610-614).

Sampaio, M. C., de Sousa, A. G., & Baptista, C. d. S. (2006). Towards a logical multidimensional model for spatial data warehousing and OLAP. International Workshop on Data Warehousing and OLAP (DOLAP 2006) (pp. 83-90).

Scotney, B., Dunne, J., & McClean, S. (2002). Statistical database modelling and compatibility for processing and publication in a distributed environment. Research in Official Statistics, 5(1), 5-18.

Sung, S. Y., Liu, Y., Xiong, H., & Ng, P. A. (2006). Privacy preservation for data cubes. Knowledge and Information Systems, 9(1), 38-61.

Timko, I., Dyreson, C. E., & Pedersen, T. B. (2006). Pre-aggregation with probability distributions. International Workshop on Data Warehousing and OLAP (DOLAP 2006) (pp. 35-42).

Torlone, R., & Panella, I. (2005). Design and development of a tool for integrating heterogeneous data warehouses. International Conference on Data Warehousing and Knowledge Discovery (DaWaK 2005) (pp. 105-114).

Trottini, M., Franconi, L., & Polettini, S. (2006). Italian household expenditure survey: A proposal for data dissemination. International Conference on Privacy in Statistical Databases (PSD 2006) (pp. 318-333).

Westlake, A. (2006). Provenance and reliability: Managing metadata for statistical models. International Conference on Scientific and Statistical Database Management (SSDBM 2006) (pp. 204-216).
Terms and Definitions

Data Cube: A data cube is a collection of aggregate values classified according to several properties of interest (dimensions). Combinations of dimension values are used to identify the single aggregate values in the cube.

Data Sharing: This is a data exchange process where open, freely available data formats and process patterns are known and standard. Thus, any organization or individual can use any counterparty's data and metadata (assuming they are permitted access to it).

Data Warehouse: A data warehouse is a repository of an organization's data, specifically designed to support activities of analysis and decision making.

Dimension: A dimension is a property of data used to classify them and navigate the corresponding data cube. In data warehouses, dimensions are often organized into several hierarchical levels; for example, a time dimension can be organized into days, months, and years.

Drill-Down (Roll-Up): Drill-down is a typical data warehouse operation by which aggregate data are visualized at a finer (or coarser, for roll-up) level of detail along one or more analysis dimensions.

Fact Table: This is a table of reconciled elementary data (microdata in statistical terminology) to be grouped and aggregated in the process of data-cube construction.

Measure: A measure is a numeric value obtained by applying an aggregate function (such as count, sum, min, max, or average) to groups of data in a fact table.
Metadata: According to the ISO (International Organization for Standardization) standard, metadata are defined as data that define and describe other data and processes.
Statistical (Dissemination) Database: A statistical database is a database whose structure is specifically designed for the dissemination of statistical data (usually, but not necessarily, on the Web).
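The data cube, dimension, measure, fact table, and roll-up concepts defined above can be sketched together in a few lines of Python; this is a toy illustration with invented microdata.

```python
from collections import defaultdict

# Toy fact table (microdata): one row per person, with two dimensions
# (area, year) and a count measure of 1 per row. Values are invented.
facts = [("north", "2001", 1), ("north", "2001", 1),
         ("south", "2001", 1), ("south", "2002", 1)]

def aggregate(rows, dims):
    """Group the fact table by the chosen dimension positions and sum
    the measure -- the core of data-cube construction."""
    cube = defaultdict(int)
    for row in rows:
        key = tuple(row[d] for d in dims)
        cube[key] += row[-1]
    return dict(cube)

by_area_year = aggregate(facts, dims=(0, 1))   # finest-grained cube
by_area = aggregate(facts, dims=(0,))          # roll-up: drop the time dimension
print(by_area)  # {('north',): 2, ('south',): 2}
```

Drill-down is the inverse move: going from `by_area` back to `by_area_year`, that is, reintroducing a dimension to view the aggregates at a finer level of detail.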
Chapter LIV
Text Mining

Antonina Durfee
Appalachian State University, USA
Abstract

Massive quantities of information continue accumulating, at about 1.5 billion gigabytes per year, in numerous repositories held at news agencies, at libraries, on corporate intranets, on personal computers, and on the Web. A large portion of all available information exists in the form of text. Researchers, analysts, editors, venture capitalists, lawyers, help desk specialists, and even students are faced with text analysis challenges. Text mining tools aim at discovering knowledge from textual databases by isolating key bits of information from large amounts of text and identifying relationships among documents. Text mining technology is used for plagiarism and authorship attribution, text summarization and retrieval, and deception detection.
Introduction

The proliferation of computers, storage devices, and the World Wide Web makes access to various data sources very convenient. The availability and
accessibility of many up-to-date data sources offer extensive support for ordinary people and decision makers in the dynamic, complex, and demanding environment of today. Our technological abilities to collect, generate, distribute, and store data have outgrown our ability to process and understand them. Business and governmental units, collecting and storing information at the click of a mouse button, now want to understand the trends lying behind this information quickly. Understanding those trends allows them to optimize the processes of decision making and to reach customers more effectively. Although technology devices can deliver vital data for decision-making purposes anywhere, anytime, the utilization of these advances is mediocre. Technological devices such as computers, Internet-enabled mobile phones, laptops, and personal digital assistants contribute to data multiplication, leading to information overload. Information overload creates data tombs, resulting in unavoidable losses and missed opportunities. This environment dictates a strong need for intelligent solutions for data analysis and exploration.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
A substantial portion of the available information is stored in text or document databases. This information resides in a company's internal and external documents, technical and financial reports, customer feedback on products, market analyses and overviews, electronic mail and notice-board messages, advertisements, managerial notes, business-related publications, business plans, correspondence with partners and creditors, competitor releases, news articles, research papers, books, digital libraries, and various company-related Web pages.

Compounding the problem is that text, by its very nature, can have multiple meanings and interpretations. The structure of text is not only complex, but also not always directly obvious. Even the author of a text might not know the extent of what might be interpreted from the text. These features make text a very rich medium for conveying a wide range of meanings, but also very difficult to manage, analyze, and mine using computers (Nasukawa & Nagano, 2001). Therein lies the conundrum: There is too much internal and external text to analyze manually, but it is problematic for computer software to correctly interpret, let alone create, knowledge from text.

Text mining (TM) looks for a remedy for that problem. TM seeks to extract high-level knowledge and useful patterns from textual data. Text mining tools seek to analyze and learn the meaning of implicitly structured information automatically (Dörre, Gerstl, & Seiffert, 1999). TM, or data mining (DM) from textual databases, is an essential part of discovering previously unknown patterns useful for particular purposes from text (Dörre et al.; Hearst, 1999).
While users of numeric data can explore a new, previously unknown pattern in numeric data stored in a database, the users of natural-language text can only associate the discovery of previously unknown patterns in textual data with rediscovery, or with a new interpretation of what the author of a text had already written. Admittedly, it is contentious to claim that the author of a text might not know himself or herself what is stated in the text. A new interpretation of all the facts stated in the text can only appear because different readers understand the same text differently, based on their backgrounds. Witten, Bray, Mahoui, and Teahan (1998) argue that TM has potential because one does not have to understand the text in order to extract useful information from it.
Background

TM has its roots in computational linguistics, natural-language (NL) processing, text analysis, cognitive psychology, information retrieval (IR), machine learning, statistics, and information and library sciences. The confluence of multiple disciplines in the area of TM is presented in Figure 1. Some of the parental disciplines, for instance, statistics, artificial intelligence, and information science, are the same for TM and DM. TM can be seen as a subpart of DM that deals with one specific format of data, namely, text.
Figure 1. TM as a confluence of multiple disciplines

Working with Textual Data

Since text is the most popular and convenient way of transferring meaning from authors to readers, the amount of digitally available text is mounting. A recent study indicates that 80% of a company's information is contained in text documents (Tan, 1999). Managers and knowledge workers spend a lot of time dealing with textual-information overload, looking for useful points in it. For instance, a Gartner Group survey reveals that 75% of managers spend more than an hour per day sorting out and answering their e-mails (Marino, 2001). The dynamic business environment does not allow managers the luxury of devoting enough time to read and analyze all available documents that might contain information impacting managerial decisions.

Being the most common vehicle for written communication, text has a complicated and ambiguous multilevel structure. Structural principles exist in the formation of words (morphology of language), the creation of grammatical sentences (syntax), and the representation of meaning (semantics). The three components of text—word usage, grammatical construction, and content—vary greatly within every individual language. Furthermore, the authors and readers of a text often represent the same semantics using different words (synonymy), or use the same word to express different meanings (polysemy). For instance, in human-system communication, Furnas, Landauer, Gomez, and Dumais (1987) discovered that two people favored the same term in only one out of five cases, which consequently resulted in 80 to 90% failure rates in communication. This feature of NL words as the basic units of text confuses TM technologies such as document management systems, automatic thesauruses, and search engines. These technologies are based on keywords, indexes, or text properties, such as author, subject, type, word count, printed-page count, and time last updated. Those approaches are less effective in working with NL text because of the ambiguity, polysemy, synonymy, complexity of syntactic construction, and multivariance of interpretation discussed above. TM technologies aim to increase the productivity of managers, decision makers, and
knowledge workers and reduce textual-information overload by retrieving “golden nuggets” or insights from textual databases. The user of TM technology should be able to categorize, prioritize, compare documents, and understand and utilize the meaning of any particular document without browsing, reading, and analyzing an entire document collection.
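The vocabulary problem discussed above, where synonymy defeats exact keyword matching, can be illustrated with a toy example. The synonym table, document collection, and `search` function are all invented for illustration; real systems use far larger thesauruses and more sophisticated expansion.

```python
# Toy illustration of the vocabulary problem: a plain keyword match
# misses documents that express the same concept with different words;
# a hand-made synonym table recovers some of them.
SYNONYMS = {"car": {"car", "automobile", "vehicle"}}

docs = {1: "automobile sales rose sharply",
        2: "the car market stalled",
        3: "quarterly revenue report"}

def search(term, expand=False):
    """Return the ids of documents containing the term (or, with
    expand=True, any of its listed synonyms)."""
    terms = SYNONYMS.get(term, {term}) if expand else {term}
    return sorted(d for d, text in docs.items()
                  if terms & set(text.split()))

print(search("car"))               # -> [2]  (exact match only)
print(search("car", expand=True))  # -> [1, 2]  (synonyms find doc 1 too)
```

Document 1 is invisible to the literal query even though it is about the same topic, which is exactly the 80 to 90% communication failure the Furnas et al. study points to.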
Information Needs

The fulfillment of TM tasks, such as clustering, categorization, feature extraction, thematic indexing, and information retrieval by content, is associated with satisfying the primary information needs of text users: searching, browsing, and visualization.

In searching, the user specifies an information request in terms of a finely defined query and asks the system to locate individual documents that correspond to that query. Search engines, such as Google, AltaVista, and Overture, are the most successful Web applications that try to satisfy the needs of searching and information access.

In browsing, a person navigates the text collection with the help of links between individual documents that are provided by the system. Hypertext technology can link Web pages, creating a hierarchical structure of a document collection
such as that of Yahoo. Summaries and semantic maps can be created for navigating and browsing large textual databases. A summary represents the content of a document in a more compact form than the original, such as keywords or key sentences (Neto, Santos, & Kaestner, 2000). A semantic map graphically represents a concept and its relations; it assumes multiple relations between a concept and the knowledge associated with it. Searching is used to obtain a suitable starting point for browsing (Lagus, 2000).

Visualization, like a map, displays categories or associations. Learning and memorizing such a map can improve users' searching and browsing abilities (Lin, 1995). Searching and browsing systems require an explicit description of the information needed by the user in the form of queries. However, while searching and browsing text collections for relevant information, users face problems in constructing smart queries. It is easy to imagine situations where the user might not be fully acquainted with the established terminology in a field, or not fully sure about the content of the documents he or she needs to retrieve. Most users, as Anick and Vaithyanathan (1997) noticed, prefer to answer questions about the relevance of information already presented to them by the system rather than describe explicitly what they are looking for. Additionally, the vocabulary problem studied in human-system communication negatively influences users' ability to construct efficient queries.

For the visualization of any type of information, something familiar, such as a hierarchy or map, is used as a means of illustrating something more complex or unfamiliar. Under text visualization, researchers consider the illustration of similarities, differences, overlaps, and other relationships existing in documents and document collections.
Text visualization contributes to faster and more intuitive understanding of the entire document collection by filtering out uninteresting items. Graphical interpretation of the
relationships in mailing lists and semantic maps is one example of text visualization, as in WebSOM (Kohonen, 1999).
Text Mining Tasks

The most general and common task of textual data analysis is exploring patterns in text. This general task can be divided into finer, more specific tasks: categorization, clustering, feature extraction, thematic indexing, and, according to Hand, Mannila, and Smyth (2001), information retrieval by content. Categorization assigns documents to preexisting categories, called topics or themes. Applications of text categorization include indexing text to support document retrieval and extracting data from text. According to Hidalgo (2002), the classes are usually content based and can be labeled, for example, with topics, keywords, or subject headings, but can also reflect genres or authors. Automatic document categorization for knowledge-sharing purposes, document indexing in libraries, Web page classification into Internet directories, and other such tasks can be accomplished by implementing categorization algorithms. Clustering in TM is the process of partitioning a given collection into a number of previously unknown groups of documents with similar content. Clustering allows for the discovery of unknown or previously unnoticed links among the documents or terms in a particular document collection. Document clustering has been extensively explored for TM since researchers have historically perceived clustering techniques as discovery tools. As opposed to categorization, clustering does not require any predefined categories for grouping the documents (Jain, Murty, & Flynn, 1999) and is considered exploratory in nature. Document clustering not only allows the classification of text domains and improved document search and retrieval (Willett, 1988), but also provides more information about
Text Mining
text collections by finding similarities among documents. Feature extraction refers to the extraction of linguistic items from the documents to provide a representative sample of their content. Distinctive vocabulary items found in a document are assigned to different categories by measuring the importance of those items to the document content. Thematic indexing, or topic tracking, refers to the identification of the significant terms for a particular document collection. Indexing identifies a given document or query text by a set of weighted or unweighted terms obtained from the document or query text. Those terms are often referred to as index terms or keywords. According to van Rijsbergen (1979), IR is the process of locating the subset of the documents that is deemed to be relevant to a posed query. IR by content is the process of inferring which parts of a document are semantically similar to a query pattern. IR is the oldest and most established field in text processing and, according to Hand et al. (2001), can be regarded as a subtask of TM. IR deals with "the representation, storage, organization, and access of information items" (Baeza-Yates & Ribeiro-Neto, 1999). To perform IR, the user is supposed to have in mind an idea of what question should be answered by the retrieved information. An IR system does not directly provide an answer, but rather points a user to an appropriate document. It is assumed that the user has a classification system in mind that separates the relevant documents from irrelevant ones. Successful IR systems aim to discover and characterize this dichotomy to assist users in meeting their information needs during a search. Traditionally, IR systems are query based, and it is assumed that users can describe their information needs explicitly and adequately in the form of a query.
The IR system aims at satisfying the searching and browsing needs of a user by performing text indexing and using a particular searching strategy.
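Query-based retrieval with weighted index terms can be sketched as follows: documents are indexed as tf-idf-weighted term vectors and ranked by cosine similarity to the query. This is a common textbook baseline, not the design of any particular IR system discussed here; all identifiers and the sample corpus are hypothetical.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def build_index(docs):
    """Index each document as a tf-idf-weighted term vector."""
    tfs = [Counter(tokenize(d)) for d in docs]
    n = len(docs)
    df = Counter(t for tf in tfs for t in tf)          # document frequency
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}   # smoothed idf weight
    return [{t: c * idf[t] for t, c in tf.items()} for tf in tfs], idf

def search(query, vectors, idf):
    """Rank documents by cosine similarity to the query vector."""
    q = {t: c * idf.get(t, 0.0) for t, c in Counter(tokenize(query)).items()}
    qn = math.sqrt(sum(w * w for w in q.values())) or 1.0
    scores = []
    for i, v in enumerate(vectors):
        dot = sum(w * v.get(t, 0.0) for t, w in q.items())
        vn = math.sqrt(sum(w * w for w in v.values())) or 1.0
        scores.append((dot / (qn * vn), i))
    return sorted(scores, reverse=True)

docs = ["clustering groups similar documents",
        "categorization assigns documents to topics",
        "magnesium deficiency linked to migraine"]
vectors, idf = build_index(docs)
print(search("document clustering", vectors, idf))
```

Note that without stemming (mentioned in the general model below), the query term "document" does not match the indexed term "documents"; the match here rests entirely on "clustering," which illustrates why distillation steps such as stemming matter in practice.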
General Model of TM

TM solutions tend to automatically provide an overview of the documents in order to grant a user an overall understanding of what the text documents are about without the need to read them. Visually, the TM framework consists of three parts and can be represented as a sequence of the following processes.

1. Text representation and distillation transforms and represents free-form text in a chosen format and/or consolidates documents from various sources. Text, as a string of symbols, has to be encoded in some numeric format within a document or across document collections; that is, words are represented as numeric vectors or ranges. The encoded text from every individual document is then transformed into a lower dimensional format that is more appropriate for computers. This transformation is achievable via different procedures, such as word stemming (using only the canonical form of a word); word disambiguation (determining which of the senses of an ambiguous word is invoked in a particular use of the word, in order to fight both word polysemy and synonymy by constructing a dictionary of word senses); and word removal (excluding words that do not contribute to document meaning).
2. Knowledge sophistication deduces concepts (tokens or meaningful concepts) and patterns from the distilled text by applying knowledge discovery algorithms. A document or an entire document collection can be clustered, categorized, or visualized to reveal interdocument or interterm relationships. The features extracted from a document or collection can be summarized to present new knowledge to a user.
3. Knowledge (relationship) representation delivers and presents the deduced knowledge to a user. The relationships discovered in the previous part are presented in some graphical or other visual form that a user can easily interpret (e.g., lists and tags, hierarchies, hypertext diagrams, semantic maps, tables, or matrices).

A general TM framework is presented in Figure 2, where the text distillation step consolidates documents into a subbase, filtering and converting them from various formats into one acceptable by the TM method, such as conceptual graph representations or vectors. The knowledge sophistication step identifies and extracts concepts and patterns from the distilled documents, which supposedly capture the main meaning of the text in the form of tokens, and consolidates discovered patterns into evocative knowledge. The former can be achieved via clustering, categorization, or associative discovery methods. In most of the available TM programs, tokens are picked out by calculating word frequencies and co-occurrences. Tokens are chosen according to Zipf's law and collocations. Collocations are frequently used phrases, such as fixed expressions accompanied by certain connotations (Hearst, 1997). The knowledge representation step retrieves the documents that have notable tokens or patterns in them. This part visualizes documents with tokens and patterns and/or interrelationships among them by building semantic maps, hierarchies, summaries, or hypertexts with highlighted links.
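The token-selection step, choosing tokens from word frequencies, co-occurrences, and collocations, can be sketched with a simple pointwise mutual information (PMI) bigram scorer. This is a standard textbook technique for finding collocations, not necessarily what any particular TM product uses; the function name and sample text are illustrative.

```python
import math
import re
from collections import Counter

def collocations(text, min_count=2):
    """Score adjacent word pairs by pointwise mutual information (PMI).
    Frequent pairs whose member words rarely occur apart score highest,
    which is the statistical signature of a collocation."""
    words = re.findall(r"[a-z']+", text.lower())
    unigrams = Counter(words)
    bigrams = Counter(zip(words, words[1:]))
    n = len(words)
    scored = {}
    for (a, b), c in bigrams.items():
        if c < min_count:            # ignore pairs seen too rarely to trust
            continue
        p_pair = c / (n - 1)
        p_a, p_b = unigrams[a] / n, unigrams[b] / n
        scored[(a, b)] = math.log(p_pair / (p_a * p_b))
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

text = ("text mining finds patterns. text mining uses machine learning. "
        "machine learning drives text mining.")
print(collocations(text))
```

In the sample, both "text mining" and "machine learning" surface as collocations; "machine learning" ranks higher because its member words occur almost exclusively together.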
Figure 2. General TM framework

Text Mining Applications

Text mining is extensively used in business and competitive intelligence. Zanasi (2000) defines competitive intelligence as timely and fact-based data on which management may rely in decision making and strategy development. TM helps in information extraction, topic tracking, summarization, document comparison, categorization, clustering, concept linkage, and information visualization. By performing all of those tasks, TM can help detect deceptive messages in financial reports or insurance claims, attribute the authorship or source of copyrighted material, find similar concepts and link them together in medical or scientific reports in search of a cure or an answer to a problem, create summaries of lengthy documents, compare stories on the same subject from different sources, build thesauruses of new information domains, or visualize the content of a document or a set of documents in an easy-to-understand form of concept maps or topics. TM software can determine that a main topic in document A is X, and link this topic to a topic Y in document B, which in turn is commonly related to a well-known topic Z. As a result, a relationship is built between topics X, Y, and Z. Don Swanson's research in the 1980s identified a relationship between migraines and magnesium deficiency by looking up medical articles that contain the keyword migraine, linking it to "spreading depression," and finally arriving at the key term "magnesium deficiency" (Fan, Wallace, Rich, & Zhang, 2005). Although Swanson's work was done manually, modern TM products are capable of deploying the same method automatically. Deception detection is a growing concern for information quality. Plagiarism detection and authorship attribution raise many concerns in the easy-to-copy-and-paste environment of today. TM software can pick up distinctive linguistic cues and check the validity or integrity of an entire document or set of documents. Every author has his or her own unique writing style, which includes sentence and clause length, the order and use of particular words, parts of speech, and tenses. Lying is considered to be a cognitively more demanding process: When a person lies, his or her style becomes less elaborate, more simplistic, and direct (there is no time to come up with many details and comments on the fly). This cognitive peculiarity can help in flagging faulty insurance claims. TM software can cluster a claim written by a person and estimate how truthful the parts of the claim are.
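The linguistic-cue idea behind authorship attribution can be illustrated with function-word profiles, a classic stylometry baseline: function words are hard to manipulate consciously, so their relative frequencies act as a crude fingerprint of writing style. This toy sketch is simplified far beyond real forensic systems, and all names and sample texts are our own.

```python
import math
import re
from collections import Counter

# A small, illustrative set of English function words.
FUNCTION_WORDS = ["the", "and", "of", "to", "a", "in", "that", "it", "is", "was"]

def style_profile(text):
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def style_distance(text_a, text_b):
    """Euclidean distance between two function-word profiles;
    smaller distance suggests more similar writing styles."""
    pa, pb = style_profile(text_a), style_profile(text_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(pa, pb)))

known   = "The report was filed in the office and it was signed."
claim_a = "The car was parked in the lot and it was locked."
claim_b = "Car crash. Total loss. Send money now."
print(style_distance(known, claim_a), style_distance(known, claim_b))
```

Here claim_a shares the known author's elaborate, function-word-rich phrasing, while the terse claim_b produces a larger distance, mirroring the "less elaborate, more direct" cue described above.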
Future Trends

As a future development, TM systems should offer an easy-to-use graphical interface, the ability to discover knowledge, and the ability to process relevant data in different languages and formats automatically. Many currently available tools are generic tools from the machine learning or statistical
communities. The tools operate separately from the data source and require significant preprocessing. At the same time, a realistic knowledge discovery process is iterative and interactive. Intelligent TM tools should include tight integration with database management systems for data selection, preprocessing, and result validation. The capability to directly access different data sources, online as well as offline, will greatly reduce the data transformation task. In light of the increasing number of proposed algorithms and mathematical models for text mining, it is important to provide an architecture for the easy synthesis and adaptation of new methods by experienced and novice users alike.
Conclusion

Knowledge workers and decision makers are intuitive thinkers who observe reality, collect and analyze facts about it, and act upon the produced information. Good knowledge and a deep understanding of reality, together with observations and predictions about the business environment, constitute a basic premise for success. With huge amounts of information available on the Internet and in internal databases, efficient and effective discovery of knowledge has become a pressing issue. Despite the availability and potential benefits of analyzing existing digital information, its usefulness is limited by the capabilities of IT. Data and text mining methods attempt to resolve information overload by finding and delivering valuable nuggets to knowledge workers and decision makers automatically. Modern information technology aspires to offer its users built-in data and text mining capabilities to discover those value-adding nuggets. Text mining is becoming an integral part of predictive analytics, or so-called decision automation systems. The intelligence in all of those systems comes from humans who have a deep
understanding of the business and know where to point the tools, how to prepare the data, and how to interpret the results (Eckerson, 2007).
Future Research Directions

Several directions are emerging based on the domain types of unstructured text: Web applications, medicine, biology, science, finance, and marketing. Regardless of the domain, the types of text-related tasks remain the same: visualization; security monitoring, such as deception detection or intrusion detection; authorship attribution; summarization and thematic hierarchy building; and clustering and categorization. One of the most prominent application domains for text mining is becoming the Internet, with its abundance of digital text and its challenges in creating the semantic Web, detecting authenticity, and monitoring activity. Pons-Porrata, Berlanga-Llavori, et al. (2007) present a topic discovery system aimed at revealing the implicit knowledge present in news streams by creating a hierarchy of topics and subtopics, where each topic contains the set of documents related to it and a summary extracted from those documents. Their methodology consists of a new incremental hierarchical clustering algorithm that combines both partitional and agglomerative approaches, and a new summarization method based on Testor Theory. Adeva, José, et al. (2007) investigated the use of text mining for intrusion detection, that is, detecting attempts to either gain unauthorized access to or misuse a Web application. Their intrusion detection software uses text categorization: It learns the characteristics of both normal and malicious user behavior from the regular, high-level log entries generated by the Web application through its application server. The Web-based telemedicine domain was used for testing the robustness of this text mining approach to intrusion detection. Another online challenge is the multilingual nature of the Internet. Lam, Chan, et al. (2007) introduced
a named entity matching model that makes use of both semantic and phonetic patterns in text. A mining system based on the entity matching model was developed for discovering, from daily Web news, new named entity translations not found in a bilingual dictionary. Köhler, Philippi, et al. (2006) tried to narrow the gap between the HTML-based (hypertext markup language) Internet and the RDF-based vision of the semantic Web by linking words in texts to concepts of ontologies, as opposed to using indexes. The authors developed fully automated methods for mapping equivalent concepts of imported RDF ontologies. The method's seamless integration of domain-specific ontologies for concept-based information retrieval was tested on a set of Web pages that contain synonyms and homonyms. Text mining has great potential in the medicine and biology domains (Karamanis, 2007). By adding meaning to text, text mining techniques produce a more structured analysis of textual knowledge than simple word searches and can provide powerful tools for the production and analysis of systems biology models (Ananiadou, Kell, et al., 2006). Currently, literature in systems biology studies is integrated in three ways: handpicking models, concepts represented by the Medical Subject Headings and Gene Ontologies, or functional relationships captured in protein databases. Mining text directly for specific types of information is on the rise as text analytics methods become more accurate and accessible. Roberts (2006) states that using text mining for the manual curation of related literature and for deriving concepts captured in ontologies and databases is a very beneficial, direct application of text mining for the development of systems biology. Van Driel, Bruggeman, et al. (2006) systematically classified the relationships between genes and proteins in various species at the phenotype
level. The researchers used text mining to classify over 5,000 human phenotypes contained in the Online Mendelian Inheritance in Man database. They found that similarity between phenotypes reflects biological modules of interacting, functionally related genes.
References

Adeva, G., José, J., et al. (2007). Intrusion detection in Web applications using text mining. Engineering Applications of Artificial Intelligence, 20(4), 555-566.

Ananiadou, S., Kell, D., et al. (2006). Text mining and its potential applications in systems biology. Trends in Biotechnology, 24(12), 571-579.

Baeza-Yates, R., & Ribeiro-Neto, B. (1999). Modern information retrieval. New York: ACM Press.

Dörre, J., Gerstl, P., & Seiffert, R. (1999). Text mining: Finding nuggets in mountains of textual data. Proceedings of KDD-99, Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA.

Eckerson, W. (2007). Predictive analytics: Extending the value of your data warehousing investment (TDWI best practices report). Renton, WA.

Fan, W., Wallace, L., Rich, S., & Zhang, Z. (2005). Tapping into the power of text mining. Communications of the ACM.

Fayyad, U., & Uthurusamy, R. (2002). Evolving data mining into solutions for insights. Communications of the ACM, 45(8), 28-31.

Furnas, G. W., Landauer, T. K., Gomez, L. M., & Dumais, S. T. (1987). The vocabulary problem in human-system communication. Communications of the ACM, 30(11), 964-971.

Hand, D., Mannila, H., & Smyth, P. (2001). Principles of data mining. Boston: MIT Press.

Hearst, M. (1999). Untangling text data mining. Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL'99), MD.

Hidalgo, J.-M.-G. (2002). Text mining and Internet content filtering. Proceedings of ECML/PKDD-2002, Helsinki, Finland.

Jain, A., Murty, M., & Flynn, P. (1999). Data clustering: A review. ACM Computing Surveys, 31(3), 265-323.

Karamanis, N. (2007). Text mining for biology and biomedicine. Computational Linguistics, 33(1), 135-140.

Köhler, J., Philippi, S., et al. (2006). Ontology based text indexing and querying for the semantic web. Knowledge-Based Systems, 1.

Kohonen, T. (1999). WEBSOM. Helsinki, Finland: Helsinki Technological University.

Lagus, K. (2000). Text mining with WEBSOM. Espoo, Finland: Department of Computer Science and Engineering, Helsinki University of Technology.

Lam, W., Chan, S., et al. (2007). Named entity translation matching and learning: With application for mining unseen translations. ACM Transactions on Information Systems, 25(1), 1-32.

Lin, X. (1995). Searching and browsing on map displays. Proceedings of American Society for Information Science (ASIS-95), Chicago, IL.

Marino, G. (2001). Workers mired in e-mail wasteland. CNET. Retrieved from http://www.news.com

Nasukawa, T., & Nagano, T. (2001). Text analysis and knowledge mining system. IBM Systems Journal, 40(4).

Neto, J. L., Santos, C., & Kaestner, A. (2000). Document clustering and text summarization. Proceedings of the Fourth International Conference on Practical Applications of Knowledge Discovery and Data Mining (PADD-2000): The Practical Application Company, London.

Pons-Porrata, A., Berlanga-Llavori, R., et al. (2007). Topic discovery based on text mining techniques. Information Processing & Management, 43(3), 752.

Roberts, P. (2006). Mining literature for systems biology. Briefings in Bioinformatics, 7(4), 399-406.

Tan, A. (1999). Text mining: The state of the art and the challenges. Proceedings of PAKDD-99, Workshop on Knowledge Discovery from Advanced Databases (KDAD'99), Beijing, China.

van Driel, M., Bruggeman, J., et al. (2006). A text-mining analysis of the human phenome. European Journal of Human Genetics, 14(5), 535-542.

van Rijsbergen, C. (1979). Information retrieval (2nd ed.). London: Butterworths.

Willett, P. (1988). Recent trends in hierarchic document clustering: A critical review. Information Processing and Management, 24(5), 577-597.

Witten, I., Bray, Z., Mahoui, M., & Teahan, B. (1998). Text mining: A new frontier for lossless compression. Proceedings of the Data Compression Conference '98.

Zanasi, A. (2000). Web mining through the online analyst. In N. F. F. Ebecken & C. Brebbia (Eds.), Data mining II (pp. 3-14). WIT Press.
Further Reading

Adeva, G., Jose, J., et al. (2006). Web misuse detection through text categorization of application server logs. International Journal on Artificial Intelligence Tools, 15(5), 849-854.
Adeva, G., José, J., et al. (2007). Intrusion detection in Web applications using text mining. Engineering Applications of Artificial Intelligence, 20(4), 555-566. Ananiadou, S., Kell, D., et al. (2006). Text mining and its potential applications in systems biology. Trends in Biotechnology, 24(12), 571-579. Berry, M. (2004). Survey of text mining: Clustering, classification, and retrieval. Springer. Byrne, T. (2005). Text mining’s next step. KM World, 14(9), 1-3. Chen, Z. (2006). From data mining to behavior mining. International Journal of Information Technology & Decision Making, 5(4), 703-711. Dozier, C., & Jackson, P. (2005). Mining text for expert witnesses. IEEE Software, 22(3), 94-100. Feldman, R., & Sanger, J. (2006). The text mining handbook: Advanced approaches in analyzing unstructured data. Cambridge: Cambridge University Press. Grimes, S. (2006). Search for meaning. Intelligent Enterprise, 9(1), 10. Halliman, C. (2006). Business intelligence using smart techniques: Environmental scanning using text mining and competitor analysis using scenarios and manual simulation. Houston, TX: Information Uncover. Juhl, L., Saric, J., et al. (2006). Literature mining for the biologist: From information retrieval to biological discovery. Nature Reviews Genetics, 7, 119-129. Karamanis, N. (2007). Text mining for biology and biomedicine. Computational Linguistics, 33(1), 135-140. Köhler, J., Philippi, S., et al. (2006). Ontology based text indexing and querying for the semantic web. Knowledge-Based Systems, 1.
Lam, W., Chan, S., et al. (2007). Named entity translation matching and learning: With application for mining unseen translations. ACM Transactions on Information Systems, 25(1), 1-32.

Masao, F., Yuki, K., et al. (2005). A method of extracting and evaluating good and bad reputations for natural language expressions. International Journal of Information Technology & Decision Making, 4(2), 177-196.

Mittermayer, M. (2004). Forecasting intraday stock price trends with text mining techniques. The 37th Hawaii International Conference on System Sciences, HI.

Monash, C. (2005). Text mining market scoping.

Pons-Porrata, A., Berlanga-Llavori, R., et al. (2007). Topic discovery based on text mining techniques. Information Processing & Management, 43(3), 752.

Porter, A. (2007). How "tech mining" can enhance R&D management. Research Technology Management, 50(2), 15.

Roberts, P. (2006). Mining literature for systems biology. Briefings in Bioinformatics, 7(4), 399-406.

San Juan, E., & Ibekwe-San Juan, F. (2006). Text mining without document context. Information Processing & Management, 42(6), 1532.

Trumbach, C. (2006). Addressing the information needs of technology managers: Making derived information usable. Technology Analysis & Strategic Management, 18(2), 221.

van Driel, M., Bruggeman, J., et al. (2006). A text-mining analysis of the human phenome. European Journal of Human Genetics, 14(5), 535-542.

Weber, M., Kors, J. A., & Mons, B. (2005). Online tools to support literature-based discovery in the life sciences. Briefings in Bioinformatics, 6(3), 277-286.

Wei, C., & Chang, Y. (2007). Discovering event evolution patterns from document sequences. IEEE Transactions on Systems, Man & Cybernetics: Part A, 37(2), 273-283.

Wu, B., Li, Q., et al. (2006). Finding nuggets in documents: A machine learning approach. Journal of the American Society for Information Science and Technology, 57(6), 740.

Terms and Definitions
Collocation: Collocation is defined as a sequence of words or terms that co-occur more often than would be expected by chance. Collocation is the way in which words are used together regularly. Data Mining (DM): DM is the essential and arduous step in the process of knowledge discovery in databases with the goal of extracting high-level knowledge from low-level data. Feature Extraction: Feature extraction refers to the extraction of linguistic items from the documents to provide a representative sample of their content. Distinctive vocabulary items found in a document are assigned to the different categories by measuring the importance of those items to the document content. Knowledge Discovery in Databases (KDD): KDD is the nontrivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data (Fayyad, 1996). Natural-Language Processing: Natural-language processing is a subfield of artificial intelligence and linguistics that addresses the problems of automated generation and understanding of human languages. Semantic Web: The semantic Web is a web of data, like a global database. The Web was designed as an information space, with the goal that it should be useful not only for human-human
communication, but also that machines would be able to participate and help. The semantic Web approach aims at developing languages for expressing information in a machine-processable form. Text Categorization: Text categorization assigns documents to preexisting categories, called topics or themes. Automatic document categorization for knowledge-sharing purposes, document indexing in libraries, Web page classification into Internet directories, and some other tasks can be accomplished by implementing categorization algorithms. Text Clustering: Text clustering is a process of partitioning a given collection into a number of previously unknown groups of documents
with similar content. Clustering allows for the discovery of unknown or previously unnoticed links in the subset of documents or terms in any particular document collection. Text Mining (TM): Also known as intelligent text analysis, text data mining, and knowledge discovery in text, TM is an essential part of discovering previously unknown patterns useful for particular purposes from textual databases. Thematic Indexing or Topic Tracking: Thematic indexing refers to the identification of the significant terms for a particular document collection. Indexing identifies a given document or a query text by a set of weighted or unweighted terms obtained from a document or a query text.
Chapter LV
Statistical Data and Metadata Quality Assessment Maria Vardaki University of Athens, Greece Haralambos Papageorgiou University of Athens, Greece
Introduction

Quality was defined in the ISO (International Organization for Standardization) 8402-1986 standard as "the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs," a definition that has changed only slightly in subsequent ISO updates. Regarding quality in statistics, however, "stated or implied needs" are mainly identified by considering several quality dimensions, criteria, or components for the collection, processing, and dissemination of statistical information for the public (see, for example, Eurostat, 2002a, 2002b; Office of Management and Budget [OMB], 2002; Organization for Economic Cooperation and Development [OECD], 2003; Statistics Canada, 2003; Statistics Finland, 2002).
Statistics disseminated by national public administrations (PAs) aim at monitoring the economic and social development of their country and analyzing the current state of the economy and trends with as much accuracy, timeliness, and comparability as possible. This attempt is affected by certain constraints, some of which are as follows.

• The effort to satisfy different user categories having diverse needs and requests
• The regulations and requirements of international organizations, as well as the quality standards that should be followed each time
• The strict budget and burden of human resources of each institute and the legal constraints of the countries
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
• The adaptation to the new technologies introduced
Therefore, currently, PAs and especially national statistical institutes (NSIs) are facing a dual task: (a) to embed harmonization and transformation procedures in their workflow processing, since methodological peculiarities in the way information is collected, stored, and disseminated may lead to inconsistencies in their statistical results, and (b) to upgrade their infrastructure with new meta-information systems aimed at increasing the quality of the provided services. Since NSIs are usually the primary collectors of statistical data while other PAs operate as secondary data sources, emphasis is given to NSIs when considering the quality of collected statistics. On the other hand, international organizations make considerable efforts to implement a common framework for monitoring statistics between countries with as much comparability and coherence as possible. Examples are initiatives like the proposal of standard quality indicators for monitoring quality by the European Statistical Service (Eurostat, 2002b), quality guidelines for statistics dissemination by the Organization for Economic Cooperation and Development (2003) and the International Monetary Fund (IMF, 2002), the cooperation of Eurostat and the IMF in presenting quality information through the Special Data Dissemination Standard (Eurostat, 2006), and so forth. These initiatives target a unified approach to the terminology used by various countries and organizations, the integration of common indicators into their dissemination requirements, and the implementation of those indicators as a benchmark for assessing the quality and sufficiency of published information.
reliable statistics used for economy- and policymonitoring purposes. Topics that are covered include quality criteria proposed by national and international organizations, metadata requirements for quality reporting, and transformations that should be integrated in the workflow process of public administrations’ information systems for automatic manipulation of both data and metadata, thus minimizing errors and assuring the quality of results.
Background

The quality of statistics is commonly assessed by public administrations with the use of quality dimensions and criteria, such as relevance (the degree to which statistics meet current and potential users' needs), accuracy (the closeness between the values provided and the unknown true values), timeliness, the accessibility of information, the comparability of the statistics (over time, across domains, and between countries), and so forth. It is worth noting that there are trade-offs between the different components of quality, especially between timeliness and accuracy (Bier & Henning, 2001), accuracy and geographic comparability, relevance and comparability over time, relevance and accuracy, coherence for large domains and relevance for subdomains, and so forth; therefore, the best balance needs to be sought. In addition, metadata, preferably the metadata items designated as reference metadata, play a crucial role in describing the contents and the quality of the statistical data. These are metadata relevant to all instances of the data described, for example, entire collections of data, data sets from a given country, or a data item concerning a specific country and year. However, since reference metadata are sometimes produced, collected, or disseminated separately from the statistical data to which they refer, a unified approach to their dissemination highly improves the relevance and
comparability of different countries’ statistics. The breakdown of quality into components is not unique or invariant over time. Organizations use slightly different sets of quality dimensions, and there is a need for agencies’ cooperation to develop a common quality framework in order to contribute to a harmonized approach, at least by doing the following:

•	Facilitating the comparison of evaluations across countries
•	Enabling countries to make better use of each other’s evaluation findings and reports
•	Providing guidance to evaluators on the key elements required for a quality evaluation process
•	Facilitating partnerships and collaboration on joint evaluations
The necessity of a common quality framework is evident when we consider that the amount of information processed by PAs is constantly growing as demands for comparable, well-timed data of high quality are increasing. At the same time, organizations operate under strict budgets and cannot indefinitely increase the workload of their personnel; therefore, the statistical producer usually aims at a quality level that simply justifies the cost. However, if a set of data reported to an international organization by a national data source is assessed as not comparable in terms of international standards, it is not easy in most cases for the international organization to increase the cross-country comparability of the data, since the data were produced in accordance with national standards that deviated from international ones. Therefore, international organizations’ efforts in this area are often limited to only the assessment and documentation of the existing deviations from international standards (Yamada, 2004). To overcome this problem, international organizations try to develop updated and well-established quality standards for the disseminated statistics, which, in the form of recommendations,
council regulations (in European Union cases), declarations, or guidelines, are imposed or proposed to be used by national PAs or even by the international organizations under consideration in an attempt to achieve comparability and data harmonization. Examples of such efforts include the OECD’s Quality Framework and Guidelines (OECD, 2003), Eurostat’s Standard Quality Report (Eurostat, 2002b) and Standard Quality Indicators (Linden & Papageorgiou, 2004), and the IMF’s Data Quality Assessment Framework (IMF, 2002). Documents and papers presenting related national quality-assessment methods and practices include, among others, Statistics Finland (2002), Statistics Canada (2003), and Viggo, Byfuglien, and Johannessen (2003). Finally, while national PAs allocate considerable resources to automate their internal procedures, international organizations try to provide them with a solid quality-assessment standard practice to integrate in their metadata-guided statistical processing. This attempt is influenced by the use of the Internet, as well as by the construction of new metadata-enabled statistical information systems (SIS). As an example of such an effort in the public sector, we refer to the European Statistics Code of Practice (legal document Reference No. COM/2005/0217), which is addressed to governance and statistical authorities for implementation regarding professional independence, data collection, the adequacy of resources, quality commitment, statistical confidentiality, cost effectiveness, objectivity, and so forth.
Quality Assurance Framework and Quality Reporting

The quality assurance framework in any public administration, and especially a statistical institute, can be examined in two dimensions: (a) the quality of services, where total quality management (TQM), Deming’s cycle, and quality function deployment (QFD) are implemented in the series of processes targeting increased efficiency of operations and customer-user satisfaction, and (b) the quality of the collection and processing of the data they compile as well as the dissemination of high-quality economic indicators and, in particular, comparable and harmonized policy-making statistics, which is especially considered by NSIs. In the latter case, the harmonization and coherence of existing statistics require a process of reconciliation of the different statistical metadata standards used in a common framework, maximizing the correspondence between statistical results and policy-making indicators. Ad hoc national and international standards, depending on the authority that develops or describes them, are given in Vardaki (2004). Breaks frequently occur in time series and involve changes in standards and methods that affect the comparability of statistics over time; data produced before and after the change are not fully comparable, thus endangering the coherence of statistics. Information about breaks in time series is quite an important piece of statistical metadata because of the adverse effects breaks can have on statistical inference based on fragmented data. Breaks can appear in space as well as in time, rendering data from different countries not fully comparable. In such cases, international practices and guidelines minimize the appearance of relevant inconsistencies. At the same time, a set of instructions should be followed to gain better control over the organization’s software development process, integrating best practices like, for example, the hierarchy presented by the capability maturity model (CMM; CMMI Reports and Technical Notes, 2006).
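Returning to the breaks in time series discussed in this section, their effect on downstream computation can be sketched as follows. The series, break period, and figures below are invented for illustration; a real SIS would read the break from the series’ reference metadata.

```python
# Hypothetical sketch: handling a documented break in a time series.
# The break year (2015) and the employment figures are invented;
# real metadata would identify the break from a statistical metadata record.

def split_at_break(series, break_period):
    """Split {period: value} data into pre- and post-break segments,
    so that growth rates are never computed across the break."""
    pre = {p: v for p, v in series.items() if p < break_period}
    post = {p: v for p, v in series.items() if p >= break_period}
    return pre, post

def growth_rates(segment):
    """Year-on-year growth within a single, internally comparable segment."""
    periods = sorted(segment)
    return {p2: segment[p2] / segment[p1] - 1.0
            for p1, p2 in zip(periods, periods[1:])}

employment = {2012: 100.0, 2013: 102.0, 2014: 104.0,
              2015: 95.0, 2016: 97.0}   # 2015: new survey methodology
pre, post = split_at_break(employment, 2015)
rates = {**growth_rates(pre), **growth_rates(post)}
# 2015 is absent from `rates`: the apparent 2014-to-2015 drop reflects the
# methodological change, not a real decline, so it is not reported.
```

The design choice here mirrors the text: the break is metadata, and inference (growth rates) is confined to internally comparable fragments.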
Quality Criteria and Agencies’ Cooperation

Quality in statistics is defined by national and international organizations as being comprised
of a number of usually overlapping dimensions (criteria) and quality indicators. Regarding the international organizations, some examples are as follows. The OECD (2003) proposes the following quality criteria: relevance, accuracy, credibility, timeliness, punctuality, accessibility, interpretability, and coherence. The IMF (2002) has developed its own data quality assessment framework incorporating the assurance of integrity, methodological soundness, accuracy and reliability, serviceability, and accessibility. On the other hand, Eurostat (2002b) follows six components to assess quality in statistics, namely, relevance, accuracy, timeliness and punctuality, accessibility and clarity, coherence, and finally comparability. Eurostat has also defined a list of Standard Quality Indicators (Linden & Papageorgiou, 2004), which are constantly adapted to the increasing requirements with the help of national representatives participating in regular workgroups for quality assessment. However, national statistical institutes and other PAs have also developed their own quality frameworks, mainly taking into account the criteria proposed by the international organizations they are obliged or willing to report their results to. Some cases worth mentioning are the following. In Statistics Canada (2003), the following dimensions are proposed: relevance, accuracy, timeliness, accessibility, interpretability, and coherence. Alternatively, three quality dimensions are included in the framework of FedStats (American federal statistical organizations, including over 100 U.S. federal organizations) in response to the OMB directive (OMB, 2002), namely, utility, objectivity, and integrity. However, for specific procedures of PAs like those of the Social Security Administration, utility, objectivity, integrity, transparency, and reproducibility are employed to ensure the quality of the information.
More quality dimensions are frequently supported by large national PAs, like, for example, at the Brazilian Institute of Geography and Statistics (IBGE), where nine
components are required for the assessment of quality in services and products, including exhaustiveness as a dimension. It is also interesting to examine the Korean National Statistical Office (KNSO), where efficiency is indicated as part of the quality framework, and the Swiss Federal Statistical Office (SFSO), where security and neutrality are considered among their seven quality criteria. Finally, international organizations have proceeded with collaborations on metadata standards development for quality assurance. The Statistical Data and Metadata Exchange (SDMX) initiative is a collaborative venture sponsored by the Bank for International Settlements, the European Central Bank (ECB), Eurostat, the IMF, the OECD, the United Nations Statistics Division, and the World Bank (see SDMX, 2004). There are also initiatives toward the unification of components. For example, Eurostat recently reduced the quality
framework dimensions from seven to six, removing completeness (Eurostat, 2002b). It should also be noted that, although not usually identified as a measure of quality, the cost involved in the production and dissemination of statistics, as well as the burden on respondents, acts as a constraint on quality. Table 1 illustrates a comparative analysis of several already-mentioned international and national data analysis public organizations regarding the quality components they have adopted. It is worth noticing that accuracy and accessibility are considered essential by nearly all organizations, with relevance and comparability also being of high importance. In addition, there is partial overlap between the above criteria. For example, the methodological soundness of the IMF corresponds to Eurostat’s comparability across countries; serviceability includes parts of timeliness, punctuality, and
Table 1. Comparison of quality components used by national and international public organizations. [Rows list the quality components (integrity, methodological soundness, accuracy, reliability, serviceability, accessibility, clarity, relevance, comparability, coherence, timeliness, punctuality, transparency, neutrality, utility, objectivity, security, credibility, interpretability, efficiency, availability, and exhaustiveness); columns mark, with a check, which of the IMF, FedStats, OECD, Eurostat, ECB, Statistics Canada, SFSO, IBGE, and KNSO has adopted each component.]
coherence; reliability enhances accuracy; integrity presupposes transparency; and efficiency takes into consideration the burden on personnel. Finally, it should be mentioned that, although different terminologies are used, the components supported by international organizations like the OECD, the IMF, Eurostat, and so forth, as examined above, are the internationally approved ones.
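The partial overlap between the frameworks can be captured as a simple crosswalk. The mapping below records only the correspondences named in the text (for example, IMF serviceability covering parts of Eurostat’s timeliness, punctuality, and coherence); it is an illustrative sketch, not an official concordance published by either organization.

```python
# Illustrative crosswalk between IMF DQAF dimensions and Eurostat quality
# components, based only on the correspondences discussed in the text.
IMF_TO_EUROSTAT = {
    "methodological soundness": ["comparability"],  # comparability across countries
    "serviceability": ["timeliness", "punctuality", "coherence"],
    "accuracy and reliability": ["accuracy"],
    "accessibility": ["accessibility and clarity"],
}

def eurostat_components(imf_dimension):
    """Return the Eurostat components that (partially) cover an IMF dimension;
    an empty list means no correspondence is recorded in this sketch."""
    return IMF_TO_EUROSTAT.get(imf_dimension.lower(), [])
```

Such a crosswalk is exactly the kind of artifact an agencies’ cooperation effort would need to maintain, since a missing entry signals a dimension with no agreed counterpart.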
Metadata and Quality Reporting

Quality reporting is the preparation and dissemination, on a regular or irregular basis, of reports conveying information about the quality of a statistical product or survey. Such reports are produced and generally published by international organizations monitoring the quality of the information collected and compiled by the national institutes. Of course, the related national PAs can also prepare more extended quality reports for their own assessment purposes, which can be internal documents or published reports. The preparation of quality reports by a public administration presupposes that a set of metadata items has been identified as necessary to be reported for the assessment of the quality of the national statistics disseminated. These metadata should reflect the entire statistical data collection process and thus include metadata items regarding the following (Vardaki & Papageorgiou, 2004):

•	Data collection, directly from databases or using questionnaires and various collection methods and standards
•	Data processing and analysis, including statistical editing, imputation, error detection and correction, integration and harmonization, and the analysis of results
•	The output dissemination process, which consists of tabulation, publication, and reporting
Therefore, at least the following metadata items should be considered in quality assessment reporting in order to assure comparability over time, across countries, and across domains:

•	Survey or collection-of-information metadata (including metadata of surveys and administrative registers)
•	Data storage metadata (i.e., format, location)
•	Data process metadata (editing, coding, imputation, standards, etc.)
•	Metadata for quality enhancement
•	Metadata resulting from the integration of multiple data sources
Furthermore, it should be noted that the reported metadata should be consistent for both primary and secondary data sources and thus consider the quality assessment requirements of all public administrations.
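A minimal sketch of how the metadata item groups listed above might be represented in an information system follows. The field names and the representation are assumptions chosen for illustration; they are not taken from any particular standard (SDMX, for instance, defines its own cross-domain concepts).

```python
# Sketch of a reference-metadata record covering the metadata item groups
# listed above. Field and group names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ReferenceMetadata:
    collection: dict = field(default_factory=dict)   # survey / register details
    storage: dict = field(default_factory=dict)      # format, location
    processing: dict = field(default_factory=dict)   # editing, coding, imputation
    quality: dict = field(default_factory=dict)      # quality-enhancement notes
    integration: dict = field(default_factory=dict)  # multi-source integration

    def missing_groups(self):
        """Groups left empty; a quality report should explain any such gap."""
        return [name for name, items in vars(self).items() if not items]

meta = ReferenceMetadata(
    collection={"instrument": "household survey", "frequency": "annual"},
    storage={"format": "CSV", "location": "nsi-archive"},
)
# meta.missing_groups() lists the groups still to be documented.
```

A record like this makes the consistency requirement checkable: the same `missing_groups` test applies whether the source is a primary NSI survey or a secondary administrative register.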
Quality Assurance in Public Statistical Information Systems

The quality reporting and assurance requirements discussed in the previous section, as well as the metadata items that should be integrated in this framework, will certainly be more effective and less costly if the procedure is automated, thus reducing the possibility of data and metadata mismatches. To automate the processing of statistical data, a set of operators or transformations must be defined and utilized for building SISs that support metadata-guided statistical processing. Examples of recent attempts at defining sets of operators are given in Papageorgiou, Pentaris, Theodorou, Vardaki, and Petrakos (2001), Denk, Froeschl, and Grossmann (2002), Scotney, Dunne, and McClean (2002), Vardaki and Papageorgiou (2004), and the references therein. Such operators are extensively used in online analytical processing (OLAP) technologies and are part of data mining techniques for extracting relevant information from large data sets or databases. The transformations under consideration represent elementary processing steps that can be applied on statistical data tables. Their importance in an SIS is that they allow for the simultaneous manipulation of both data and metadata, which results in the output of a new, automatically documented table. All such operators have the closure property, meaning that the application of a transformation on a table always produces a new table. Consequently, transformations severely reduce the possibility of data and metadata mismatches and also automatically hold the history of processes. Several different transformations can be defined. We elected to refer to five important ones frequently used by database processes, namely, the following:
•	Selection: The result of applying this transformation is a new table holding only the subset of the initial data satisfying the selection criterion
•	Projection: The result is a new table holding only a subset of the variables and indicators of the initial data
•	Group by: Used for creating new indicators from existing data, after grouping (partitioning) the values of a table according to the distinct values of one or more existing variables, denoted as grouping variables
•	Join: Applied on two tables having one or more common variables. The result is a new table having all the variables and indicators of both tables. Since the result of the operator is a new table, the indicators of this table must all have the same grouping variables.
•	Algebraic transformations: A general transformation used to denote the mathematical operations involving constants (e.g., additions, multiplications, etc.) that are frequently applied to a table
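A toy sketch of how closure-preserving transformations of this kind might look in code follows. The table representation, metadata keys, and history format are invented for illustration; real SIS operators such as those in Papageorgiou et al. (2001) are far richer.

```python
# Sketch of closure-property transformations on a statistical table: every
# operator returns a new (data, metadata) pair, so the table stays documented
# and the processing history is carried automatically.

def select(table, meta, predicate, description):
    """Selection: keep only the rows satisfying the predicate."""
    rows = [r for r in table if predicate(r)]
    return rows, {**meta, "history": meta.get("history", []) + [description]}

def project(table, meta, columns):
    """Projection: keep only a subset of the variables."""
    rows = [{c: r[c] for c in columns} for r in table]
    new_meta = {**meta, "variables": list(columns),
                "history": meta.get("history", []) + [f"project {columns}"]}
    return rows, new_meta

table = [{"country": "EL", "year": 2006, "value": 10},
         {"country": "DE", "year": 2006, "value": 20},
         {"country": "EL", "year": 2007, "value": 12}]
meta = {"variables": ["country", "year", "value"], "history": []}

t1, m1 = select(table, meta, lambda r: r["country"] == "EL", "select country=EL")
t2, m2 = project(t1, m1, ["year", "value"])
# m2["history"] records both steps, and t2 is itself a valid table, so further
# transformations can be applied to it (the closure property).
```

Because each operator outputs a new table plus updated metadata, the data and its description cannot drift apart, which is precisely the mismatch-reduction argument made above.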
Future Trends

There is still no unified mechanism in place for monitoring data quality across domains and over time, at least for the data produced in the
European Statistical System (ESS). In addition, the production of standard quality-reporting documents and the revision of related metadata files are essential in order to ensure the use of well-established terminology and coherent structures, and to put more emphasis on process data quality and related indicators (Eurostat, 2006). However, the ultimate purpose would be to implement a common framework for monitoring statistics, which would be a baseline for quality assessment comparisons between different countries, as well as, for example, between the USA and the European Union. Probably, a framework like the Standard Quality Indicators (Linden & Papageorgiou, 2004) should be defined from both a producer’s and a user’s perspective, aiming at complementary use. Such a framework would be the baseline for cross-country comparability, and its implementation would ensure coherence. Finally, further considerations include the development of a composite indicator (quality barometer) measuring overall quality, defined by giving weights to representative quality indicators. Future plans should embrace transformation integration in relevant public information technologies.
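A quality barometer of this kind could be as simple as a weighted mean of normalized indicator scores. The indicator names, scores, and weights below are purely illustrative, since choosing representative indicators and their weights is exactly the open question noted above.

```python
# Sketch of a composite "quality barometer": a weighted average of normalized
# quality indicators. Indicator names and weights are illustrative assumptions.

def quality_barometer(indicators, weights):
    """indicators: {name: score in [0, 1]}; weights: {name: non-negative weight}.
    Returns the weighted mean over the indicators that have a weight."""
    total = sum(weights.values())
    if total == 0:
        raise ValueError("at least one positive weight is required")
    return sum(indicators[name] * w for name, w in weights.items()) / total

scores = {"accuracy": 0.9, "timeliness": 0.6, "comparability": 0.8}
weights = {"accuracy": 2.0, "timeliness": 1.0, "comparability": 1.0}
overall = quality_barometer(scores, weights)   # (1.8 + 0.6 + 0.8) / 4 = 0.8
```

Keeping the weights explicit makes the producer-versus-user distinction concrete: the two perspectives would simply supply different weight vectors over the same indicator set.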
Conclusion

The implementation of quality monitoring strategies in the collection, compilation, and dissemination of data in national statistical institutes and public administrative registers can meet the increasing demands for comparable, high-quality, and reliable statistics used for economy- and policy-monitoring purposes. We have discussed various components for the quality assessment of information produced in international statistical organizations and national agencies, and highlighted the need for cooperation and initiatives toward the unification of quality assessment criteria. A template has been examined comparing various organizations’ quality frameworks.
The topics covered included the quality criteria proposed by national and international organizations, metadata requirements for quality reporting, and the transformations that should be integrated in the workflow process of public administrations’ information systems for the automatic manipulation of both data and metadata, thus minimizing errors and assuring the quality of results. Finally, future trends include a common, unified framework: international organizations keep cooperating toward implementing a common framework and possibly a composite indicator monitoring overall quality.
Future Research Directions

In the case of large data warehousing environments, the need for an integrated data-metadata model for large data-set management has become evident, and a number of such UML (unified modeling language) or XML (extensible markup language) models have been developed and/or used by public- and private-sector organizations. However, the quality assurance of the automatic processing and interchange of various sources of information still lags behind, since quality metrics and indicators have yet to be developed and incorporated in the systems. Future research directions in quality integration modeling are not limited to statistics and statistical institutes, but are generally twofold: (a) service- and product-development quality assessment in areas and sectors including health (bioinformatics, biobanks, etc.), business (banks, private companies, etc.), and even the challenge Europe is facing nowadays with the New Economy, and (b) software-development life-cycle modeling to ensure software quality. Regarding the first direction, a number of organizations performing research in areas like health, for example, insist on quality assurance for their results. In cases of multicenter, large-scale clinical trials and studies with different eligibil-
ity criteria, with a combination of trials with retrospective surveys, and so forth, a metadata model automating the stages of a trial is essential but not sufficient unless quality metrics at least for accuracy, integrity, comparability, and punctuality are incorporated. Biobanks are compared and large data sets are processed considering thousands of variables and other metadata items, while organizations try to ensure confidentiality and objectivity without sacrificing accuracy and transparency. Furthermore, the business sector, banks, and private companies demand service and/or product quality. The cost of quality and the cost of poor quality are essential components in their project control objectives. Each organization attempts to strengthen its quality assurance framework every year, aiming to obtain new or improved ISO or other quality standards. A number of banks tend to develop their own metadata models incorporating quality indicators to gain control over loan candidates and deposits, as well as to perform cost-benefit analysis for new products and services. On the other hand, companies improve their logistics systems by using quality parameters to improve timeliness, serviceability, and efficiency. In addition, Europe is currently facing a challenge: to produce indicators for the New Economy incorporating candidate and newly entered countries that have not yet become accustomed to new technologies and quality-assurance techniques. Related research cooperation programs, with a deadline of 2013, focus on the quality assessment of member state systems and methods targeting economic reform. Finally, the second direction of quality integration refers to software-development life-cycle modeling to ensure software quality. The quality of the processes and management control methods has an important impact on software performance. Several software implementations of nuclear safety systems have failed in the past due to costly delays caused by difficulties in co-
ordination of the development and qualification process (Kececi & Modarres, n.d.). The prediction and estimation of quality factors like safety, reliability, and security are strongly affected by software-development life-cycle models. Therefore, based on the criteria for evaluating high-integrity safety-system software-development processes, a unified quality model should be defined after an evaluation of the feasibility of the quality requirements, targeting reliability (the capability of the software product to maintain its level of performance under defined conditions for a given period of time), usability, and the capability to be improved or modified.
References

Bier, W., & Henning, A. (2001). Trade-off between timeliness and accuracy: ECB requirements for general economic statistics. Economisch Statistische Berichten (ESB), 4299, 86.

CMMI reports and technical notes. (2006). Retrieved from http://www.sei.cmu.edu/cmmi/adoption/reports.html

Denk, M., Froeschl, K. A., & Grossmann, W. (2002). Statistical composites: A transformation-bound representation of statistical datasets. Fourteenth International Conference on Scientific and Statistical Database Management (SSDBM) (pp. 217-226).

Eurostat. (2002a). Definition of quality in statistics. Retrieved from http://forum.europa.eu.int/Public/irc/dsis/Home/main

Eurostat. (2002b). Standard quality report. Retrieved from http://forum.europa.eu.int/Public/irc/dsis/Home/main

Eurostat. (2006). Quality in statistics. Retrieved from http://www.em.gov.lv/em/images/modules/items/item_file_12629_27.-28.02.2006.quality_in_statistics.doc
International Monetary Fund (IMF). (2002). Data quality assessment framework and data quality program. Retrieved from http://www.imf.org/external/np/sta/dsbb/2003/eng/dqaf.htm

Johanis, P. (2002). Assessing the quality of metadata: The next challenge. UNECE-CES. Retrieved from http://www.unece.org/stats/documents/2002/03/metis/19.e.pdf

Kasprzyk, D., & Lee, G. (2004). Reporting sources of error in U.S. federal government surveys. Journal of Official Statistics (JOS), 19(4), 343-363.

Kececi, N., & Modarres, M. (n.d.). Software development life cycle model to ensure software quality. Retrieved from http://www.enre.umd.edu/ctrs-lab/psam-paper2.PDF

Linden, H., & Papageorgiou, H. (2004). Standard quality indicators. European Conference on Quality and Methodology in Official Statistics (Q2004), Mainz, Germany.

Office of Management and Budget (OMB). (2002). Information quality guidelines. Retrieved from http://www.whitehouse.gov/omb/inforeg/iqg_oct2002.pdf

Organization for Economic Cooperation and Development (OECD). (2003). Quality framework and guidelines for OECD statistical activities. Retrieved from http://www.oecd.org/dataoecd/26/42/21688835.pdf

Papageorgiou, H., Pentaris, F., Theodorou, E., Vardaki, M., & Petrakos, M. (2001). A statistical metadata model for simultaneous manipulation of data and metadata. Journal of Intelligent Information Systems (JIIS), 17(2/3), 169-192.

Scotney, B., Dunne, J., & McClean, S. (2002). Statistical database modeling and compatibility for processing and publication in a distributed environment. Research in Official Statistics (ROS), 5(1), 5-18.
Statistical Data and Metadata Exchange (SDMX). (2004). Standards. Retrieved from http://www.sdmx.org/standards/index.aspx

Statistics Canada. (2003). Quality guidelines (4th ed.). Retrieved from http://www.statcan.ca/english/freepub/12-539-XIE/12-539XIE03001.pdf
Statistics Finland. (2002). Quality guidelines for official statistics. Helsinki, Finland: Author.

Vardaki, M. (2004). Statistical metadata in data processing and interchange. In J. Wang (Ed.), Encyclopedia of data warehousing and mining (Vol. 2, pp. 1048-1053). Idea Group Publishing.

Vardaki, M., & Papageorgiou, H. (2004). An integrated metadata model for statistical data collection and processing. Proceedings of the Sixteenth International Conference on Scientific and Statistical Database Management (SSDBM) (pp. 363-372).

Viggo, S. H., Byfuglien, J., & Johannessen, R. (2003). Quality issues at Statistics Norway. Journal of Official Statistics (JOS), 19(3), 287-303.

Yamada, T. (2004). Role of metadata in quality assurance of multi-country statistical data: In the case of UNIDO Industrial Statistics. Conference on Data Quality for International Organizations. Retrieved from http://unstats.un.org/unsd/accsub/2004docs-CDQIO/3i-UNIDO.pdf
Further Reading

Cluster of Systems of Metadata for Official Statistics (COSMOS). (n.d.). Retrieved from http://www.epros.ed.ac.uk/cosmos/index.html

Dromey, G. (1998). Software product quality: Theory, model, and practice. Retrieved from http://www.sqi.gu.edu.au/docs/sqi/misc/SPQTheory.pdf

Environmental Data Standards Council (EDSC). (2006). Quality assurance and quality control
data standard. Retrieved from http://www.epa.gov/edr/QAQC_01062006.pdf

European Central Bank (ECB). (2005). Quality of public finances and growth. Retrieved from http://www.ecb.int/pub/pdf/scpwps/ecbwp438.pdf

European Conference on Quality and Methodology in Official Statistics. (2004). Retrieved from http://q2004.destatis.de/download/Abstracts.pdf

European Conference on Quality in Survey Statistics. (2006). Retrieved from http://www.statistics.gov.uk/events/q2006/agenda.asp

Eurostat handbook on improving quality by analysis of process variables. (n.d.). Retrieved from http://epp.eurostat.ec.europa.eu/pls/portal/docs/PAGE/PGP_DS_QUALITY/TAB47143233/HANDBOOK%20ON%20IMPROVING%20QUALITY.pdf

Eurostat’s quality Web site. (n.d.). Retrieved from http://epp.eurostat.ec.europa.eu/portal/page?_pageid=2273,1,2273_47140765&_dad=portal&_schema=PORTAL

Food and Agriculture Organization (FAO). (2006). Data quality measures and related stress factors: A conceptual framework to account for differences in statistical environments at country and international levels. Conference on Data Quality for International Organizations. Retrieved from http://faostat.fao.org/Portals/_Faostat/documents/data_quality/FAO_CCSA_newport.pdf

Hyman, L., Lamb, J., & Bulmer, M. (2006). The use of pre-existing survey questions: Implications for data quality. European Conference on Quality in Survey Statistics. Retrieved from http://www.statistics.gov.uk/events/q2006/downloads/T21_Hyman.doc

MetaNet network of excellence. (n.d.). Retrieved from http://www.epros.ed.ac.uk/metanet/index.html

New Zealand Official Statistics System. (2006). Quality protocols. Retrieved from http://www.statisphere.govt.nz/NR/rdonlyres/C619DEA6BE63-413D-9CA3-C4DEC038367F/0/QualityProtocols.pdf

Papageorgiou, H., Pentaris, F., Theodorou, E., Vardaki, M., & Petrakos, M. (2001). Modeling statistical metadata. Thirteenth International Conference on Scientific and Statistical Database Management (SSDBM) (pp. 25-35).

Thiyagalingam, J., & Getov, V. (2006). A metadata extracting tool for software components in grid applications. Modern Computing 2006, JVA ’06 (pp. 189-196).

Winkel, P., & Zhang, N.-F. (2007). Statistical development of quality in clinical medicine. Wiley.
Terms and Definitions

Accuracy in Statistics: This denotes the closeness of computations or estimates to the exact or true values.

Break in Time Series: When data collected in a specific year are not fully comparable with the data of the previous and/or following years, we say that we have a break in the time series.

Clinical Trial or Study: A clinical trial is a research study to answer specific questions about vaccines, new therapies, or new ways of using known treatments. Clinical trials (also called medical research and research studies) are used to determine whether new drugs or treatments are both safe and effective.

Coherence of Statistics: This describes the adequacy of statistics to be reliably combined in different ways and for various uses.
Comparability: It is the extent to which differences between statistics can be attributed to differences between the true values of the statistical characteristic.

Quality Function Deployment (QFD): It is a decision-making technique used in product or service development, brand marketing, and product management, transforming customer needs into engineering characteristics of a product or service (also called the house of quality).

Quality Indicators: These are indicators measuring the quality of the statistics produced; they can be classified as producer oriented or user oriented.

Statistical Information System (SIS): It is an information system oriented toward the collection, storage, transformation, and distribution of statistical information.

Statistical Metadata: They are data about statistical data. They describe statistical data and, to some extent, the processes and tools involved in the production and usage of statistical data.

Timeliness: It reflects the length of time between the data’s availability and the occurrence of the event or phenomenon they describe.

Total Quality Management (TQM): TQM is a management approach of an organization centered on quality, based on the participation of all its members, and aimed at long-term success through customer satisfaction and benefits to all members of the organization and to society.
Chapter LVI
Probability Association Approach in Automatic Image Annotation Feng Xu Tsinghua University, Beijing, China Yu-Jin Zhang Tsinghua University, Beijing, China
Introduction

Content-based image retrieval (CBIR) has wide applications in public life. Either from a static image database or from the Web, one can search for a specific image, generally browse to make an interactive choice, and search for a picture to go with a broad story or to illustrate a document. Although CBIR has been well studied, it is still a challenging problem to search for images from a large image database because of the well-acknowledged semantic gap between low-level features and high-level semantic concepts. An alternative solution is to use keyword-based approaches, which usually associate images with keywords by either manually labeling them or automatically extracting surrounding text from Web pages. Although such a solution is widely adopted by
most existing commercial image search engines, it is not perfect. First, manual annotation, though precise, is expensive and difficult to extend to large-scale databases. Second, automatically extracted surrounding text might be incomplete and ambiguous in describing images; moreover, surrounding text may not be available in some applications. To overcome these problems, automatic image annotation is considered a promising approach to understanding and describing the content of images. Automatic image annotation is derived from manual annotation for CBIR. Since the semantic gap degrades the results of image search, text descriptions are considered. It is desired that the text and the visual features cooperate to drive more effective search. The text labels, as the high-level features, and the visual features,
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
as the low-level features, are complementary for image content description. Therefore, automatic image annotation has become an important research issue in image retrieval. In this chapter, some approaches for automatic image annotation are reviewed and one typical approach is described in detail. Keyword-based image retrieval is then introduced, and the general applications of automatic image annotation are summarized and illustrated with figure examples.
Background

As an effective way to support keyword-based image retrieval, automatic image annotation has been an active research topic in recent years. Since the purpose of image annotation is to bridge the semantic gap between low-level features and high-level concepts, the process of image annotation can be regarded as semantic concept discovery in image collections. Many sophisticated techniques have been applied to automate image annotation, drawing on statistical theory, machine learning (supervised and unsupervised methods), information theory, graph theory, and so forth, and a large number of good results have been reported. Jeon and Manmatha (2004) proposed the use of the maximum-entropy approach for automatic image annotation: Given labeled training data, maximum entropy allows one to predict the probability of a label given test data. Pan, Yang, Faloutsos, and Duygulu (2004) proposed a graph-based approach (GCap) for automatic image captioning. Carneiro and Vasconcelos (2005) introduced a supervised method to automatically annotate and retrieve images using a vocabulary of image semantics. Its novel contributions include a discriminant formulation of the problem, a multiple-instance learning solution that enables the estimation of concept probability distributions without prior image segmentation, and a hierarchical description of the density of each image class that enables very efficient training.
One of the most important processes is the propagation of keywords. Jing, Li, Zhang, and Zhang (2004) proposed a keyword propagation framework to seamlessly combine keyword and visual representations in image retrieval. Shevade and Sundaram (2004) presented a novel annotation paradigm with an emphasis on two facets: (a) semantic propagation and (b) an end-user experience that provides insight. Feng and Chua (2003) analyzed the main limitation of supervised learning approaches and explored the use of bootstrapping to tackle it. Active learning is also effective in image annotation. Jin, Chai, and Si (2004) proposed a coherent language model for automatic image annotation that takes word-to-word correlation into account by estimating a coherent language model for an image. This approach has two important advantages: (a) It is able to automatically determine the annotation length to improve the accuracy of retrieval results, and (b) it can be used with active learning to significantly reduce the required number of annotated image examples. For semantic discovery, Li, Goh, and Chang (2003) proposed a confidence-based dynamic ensemble (CDE) that can make dynamic adjustments to accommodate new semantics, assist in the discovery of useful low-level features, and improve class-prediction accuracy. Metadata are also useful in image annotation. Tsai, McGarry, and Tait (2004) presented a two-level supervised learning framework for effective image annotation: In the first-level induction stage, color and texture feature vectors are classified individually into their corresponding outputs; then the color and texture terms, as middle-level features, are classified into the target high-level conceptual classes during the second-level induction stage. A suite of the most effective approaches for automatic image annotation is based on statistical learning, among which one typical approach is the probability association method.
An annotation
method is implemented by classification (Fan, Gao, & Luo, 2004), in which multilevel annotation is used: Images are segmented, and salient objects are detected by region classification. In this method, semantic concepts correspond exactly to salient objects; however, the salient-object detection depends on a syntax classification tree and is difficult to scale up. Generative models are successful in automatic image annotation. Mori, Takahashi, and Oka (1999) proposed a co-occurrence model in which they examined the co-occurrence of words with image regions created using a regular grid. Duygulu, Barnard, Freitas, and Forsyth (2002) proposed to describe images using a vocabulary of “blobs.” First, regions are created using a segmentation algorithm such as normalized cuts. For each region, features are computed, and blobs are then generated by clustering the image features for these regions across images. Each image is generated from a certain number of these blobs. Their translation model applies one of the classical statistical machine-translation models to translate from the set of keywords of an image to the set of blobs forming the image. Jeon, Lavrenko, and Manmatha (2003) instead viewed this as analogous to the cross-lingual retrieval problem and used a cross-media relevance model (CMRM) to perform both image annotation and ranked retrieval. Lavrenko, Manmatha, and Jeon (2003) proposed the continuous-space relevance model (CRM), which assumes that every image is divided into regions and each region is described by a continuous-valued feature vector. Given a training set of images with annotations, a joint probabilistic model of image features and words is computed; then the probability of generating a word given the image regions can be predicted. Compared with CMRM, CRM directly models continuous features, does not rely on clustering, and consequently does not suffer from granularity issues.
Feng, Manmatha, and Lavrenko (2004) also proposed a probabilistic generative model that uses a Bernoulli process to generate words and a kernel
density estimate to generate image features; the results showed that it outperforms CRM and other models. Blei and Jordan (2003) extended the latent Dirichlet allocation (LDA) model and proposed a correspondence LDA model that relates words and images. This model assumes that a Dirichlet distribution can be used to generate a mixture of latent factors, which is then used to generate words and regions. The EM (expectation-maximization) algorithm is used to estimate this model.
Typical Approach for Automatic Image Annotation

Since images are generally represented by their visual features, the key to automatic image annotation is associating image features with text labels through reasoning and learning. One of the typical and successful methods is based on probability association. The goal of semantic image labeling is, given an image I, to extract from a vocabulary W of semantic descriptors the set of keywords or captions, w, that best describes I. Learning is based on a training set D = {(I_1, w_1), …, (I_N, w_N)} of image-caption pairs. Bayes’ theorem can be used to invert the conditional dependence as:

p(w|I) = p(I, w) / p(I) = p(I|w) p(w) / p(I)    (1)

where p(I) is interpreted as the probability density of image I and p(I|w) as the probability density of I conditional upon the assignment of annotation w. Traditionally, the joint probability density p(I, w) is estimated by the parametric method, while the probability densities p(I|w) and p(w) are estimated by the nonparametric method.
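As a minimal numerical illustration of the Bayes inversion in Equation 1, the posterior over a tiny hypothetical vocabulary can be computed from class-conditional densities and priors; all probability values below are made-up assumptions, not outputs of a real annotation model.

```python
# Toy illustration of Equation 1: p(w|I) = p(I|w) p(w) / p(I).
# All density values are assumed for illustration only.

def posterior(likelihoods, priors):
    """Return p(w|I) for each word w, given p(I|w) and p(w)."""
    joint = {w: likelihoods[w] * priors[w] for w in likelihoods}
    evidence = sum(joint.values())          # p(I) = sum_w p(I|w) p(w)
    return {w: joint[w] / evidence for w in joint}

# Assumed densities: the image's features are more likely under "tiger".
p_I_given_w = {"tiger": 0.30, "beach": 0.05}
p_w = {"tiger": 0.40, "beach": 0.60}

print(posterior(p_I_given_w, p_w))   # "tiger" wins: joint 0.12 vs 0.03 before normalizing
```

The normalization by p(I) is what turns unnormalized joint scores into a proper posterior distribution over candidate annotation words.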
The basic idea of parametric density estimation is to introduce a variable L that encodes hidden states of the world. Each of these states then defines a joint distribution over keywords and image features. The various methods differ in the definition of the states of the hidden variable: Some associate a state with each image in the database, while others associate it with image clusters. The overall model is of the form:

p_{X,W}(x, w) = Σ_{l=1}^{S} p_{X,W|L}(x, w | l) p_L(l)    (2)

where S is the number of possible states of L, X is the set of feature vectors extracted from I, and W is the vector of keywords associated with this image. Since this is a mixture model, learning is usually based on the EM algorithm (Dempster, Laird, & Rubin, 1977), but the details depend on the particular definition of the hidden variable and the probabilistic model adopted for p_{X,W}(x, w). The simplest model of this kind makes each image in the training database a state of the latent variable and assumes conditional independence between image features and keywords; that is:
p_{X,W}(x, w) = Σ_{l=1}^{N} p_{X|L}(x | l) p_{W|L}(w | l) p_L(l)    (3)

where N is the training set size. This enables individual estimation of p_{X|L}(x | l) and p_{W|L}(w | l), as is common in the probabilistic retrieval literature, thereby eliminating the need to iterate the EM algorithm over the entire database (a procedure of large computational complexity). Generally, words are associated with an image according to the posterior probability. The words describe the content of the entire image without specifying regions, similar to the surrounding text of Web images. In this type of annotation problem, a state of the hidden variable is associated with each image.
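A rough sketch of this simplest latent-variable model (Equation 3), in which each training image is a state of L with uniform p_L(l) = 1/N: here p(x|l) is modeled with an isotropic Gaussian kernel around the training image's feature vector and p(w|l) with a uniform distribution over that image's caption words. The training data, bandwidth, and kernel choice are illustrative assumptions, not the chapter's specific model.

```python
import math

def annotate(x, train, bandwidth=1.0):
    """Score words for feature vector x via Equation 3, with each
    training image l a latent state (uniform p_L(l) = 1/N)."""
    n = len(train)
    scores = {}
    for feats, words in train:                       # one (x_l, w_l) pair per state
        # p(x|l): isotropic Gaussian kernel centred on the training image
        d2 = sum((a - b) ** 2 for a, b in zip(x, feats))
        px = math.exp(-d2 / (2 * bandwidth ** 2))
        for w in words:                              # p(w|l): uniform over the caption
            scores[w] = scores.get(w, 0.0) + px * (1 / len(words)) * (1 / n)
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()} # posterior p(w|x)

# Tiny assumed training set: (feature vector, caption words)
train = [((0.9, 0.1), ["tiger", "grass"]), ((0.1, 0.9), ["sky", "water"])]
ranked = sorted(annotate((0.8, 0.2), train).items(), key=lambda kv: -kv[1])
print(ranked)   # "tiger"/"grass" rank above "sky"/"water"
```

Because each state is a single training image, no EM iterations are needed: the per-state densities are fixed by the training pair itself, exactly the simplification the text describes.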
Another problem is the correspondence between words and specific regions in images. First, images are segmented into regions. Second, each region is described by some set of features. Usually, the image-region representations are quantized so that a collection of blobs is obtained; the label associated with a region is then referred to as a blob. Finally, words are predicted using regions. In this type of annotation problem, a state of the hidden variable is associated with each image region. The typical approach of parametric density estimation adopts the generative model, and the posterior probability in Equation 1 gives the annotation words. Although the annotation decisions are not always optimal in a minimum-probability-of-error sense in some specific models, this approach has achieved impressive performance. Since the distributions of image features have irregular shapes, not resembling any simple parametric form, the nonparametric estimation approach is useful for Equation 1. In this approach, the conditional probability density p(I|w) and the prior probability p(w) are estimated directly from the training set. The simplest nonparametric estimator of a distribution function is the empirical distribution function, but it is known that smoothing can improve efficiency for finite samples; kernel smoothing, first used by Parzen (1962), is a general formulation of this. As Yavlinsky, Schofield, and Ruger (2005) proposed, the conditional probability density and the prior probability can be estimated as a sum of kernel functions placed over each point in the training set. A nonparametric estimator of the true density makes no prior assumptions about the true density, and the resulting irregularity helps characterize and distinguish the distributions under different word classes.
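Following the nonparametric route just described, p(x|w) can be estimated with a Parzen-style kernel sum over the training features of images annotated with w, and p(w) with the word's relative frequency; the Gaussian kernel, bandwidth, and toy data are assumptions for illustration, not the model of Yavlinsky et al.

```python
import math

def gaussian_kernel(x, xi, h):
    """Isotropic Gaussian kernel centred at training point xi (unnormalized)."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, xi))
    return math.exp(-d2 / (2 * h * h))

def estimate_densities(train, x, h=0.5):
    """Parzen estimates of p(x|w) and empirical priors p(w)."""
    by_word = {}
    for feats, words in train:
        for w in words:
            by_word.setdefault(w, []).append(feats)
    n_images = len(train)
    # p(x|w): average kernel value over training images annotated with w
    p_x_given_w = {w: sum(gaussian_kernel(x, xi, h) for xi in pts) / len(pts)
                   for w, pts in by_word.items()}
    # p(w): fraction of training images carrying word w
    p_w = {w: len(pts) / n_images for w, pts in by_word.items()}
    return p_x_given_w, p_w

train = [((0.9, 0.1), ["tiger"]), ((0.8, 0.2), ["tiger"]), ((0.1, 0.9), ["sky"])]
px_w, pw = estimate_densities(train, (0.85, 0.15))
scores = {w: px_w[w] * pw[w] for w in px_w}          # numerator of Equation 1
print(max(scores, key=scores.get))                   # "tiger"
```

Because no parametric form is assumed, the estimated p(x|w) can take whatever irregular shape the training features of word class w exhibit, which is exactly the property the text highlights.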
Keyword-Based Image Retrieval

In multimedia information retrieval, representations of different data types (such as text and images) are used to retrieve documents that contain both. Much attention has been focused on keyword-based image retrieval, an arguably more difficult task in which a user submits a text query to find matching images for which there is no related text. Assume that a text query Q = {w_i, i = 1, …, k} and a collection C of images are given. The goal is to retrieve the images that contain objects described by the keywords w_1, …, w_k, and then rank the images by the likelihood that they are relevant to the query. Text-retrieval systems cannot simply be used because the images I ∈ C are assumed to have no captions.

Two types of image retrieval are implemented based on keywords. First, a simple approach is to annotate each image in C, using the techniques described in the previous section, with a small number of keywords; the annotations are then indexed, and text retrieval is performed in the usual manner. This approach is very straightforward and quite effective for single-word queries. However, it has several disadvantages, as Jeon et al. (2003) described. First, the approach does not allow users to perform ranked retrieval because of the binary nature of word occurrence in automatic annotations. Second, an appropriate annotation length must be decided, and the number of words in the annotation directly influences recall and precision. In general, shorter annotations lead to higher precision and lower recall, since fewer images are annotated with any given word; a professional user, however, may be interested in higher recall and thus need longer annotations.

Consequently, the second method is to use probabilistic annotation. In the previous section, every word w_i in the vocabulary is assigned a probability p(w_i|I). Rather than matching the query against the few top words, users can use the entire probability distribution P(·|I) to score images with a language-modeling approach, in which documents (images) are scored by the probability that the query would be observed during random sampling from the document (image) language model. Given the query Q = {w_i, i = 1, …, k} and the image I = {b_1, …, b_m}, the probability of drawing Q from the model of I is:

P(Q|I) = Σ_{j=1}^{k} p(w_j|I)    (4)

where p(w_j|I) is computed according to Equation 1. This model of retrieval does not suffer from the drawbacks of fixed-length annotation and allows ranked lists of images that are more likely to satisfy diverse users. The framework of keyword-based image retrieval is illustrated in Figure 1.
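Given per-image posteriors p(w|I) from an annotation model, ranked retrieval under Equation 4 reduces to summing the query words' probabilities for each image. The image names and probability tables below are assumed for illustration only.

```python
def rank_images(query, image_models):
    """Score each image by P(Q|I) = sum_j p(w_j|I)  (Equation 4)."""
    scores = {img: sum(probs.get(w, 0.0) for w in query)
              for img, probs in image_models.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])  # best match first

# Assumed posteriors p(w|I) produced by some annotation model
image_models = {
    "img1.jpg": {"car": 0.5, "red": 0.3, "road": 0.2},
    "img2.jpg": {"sky": 0.6, "water": 0.4},
    "img3.jpg": {"car": 0.2, "tree": 0.8},
}

print(rank_images(["red", "car"], image_models))
# img1.jpg (0.8) ranks above img3.jpg (0.2) and img2.jpg (0.0)
```

Note how, unlike fixed-length annotation, every image receives a graded score even when only some query words appear in its distribution, which is what makes ranked lists possible.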
Applications

A good automatic image annotation method can enable many applications for the general public. First, images can be retrieved by the annotated keywords. For example, one can query images with the statement, “Find me all the photos with a red car in the image collection,” as illustrated in Figure 2. The red color can easily be matched by color features, but the car is difficult to represent with low-level features; therefore, the annotation words are useful. Second, photos can be classified and browsed within a private album. For example, all the photos are classified according to semantic topic, from indoor to outdoor, from city to landscape; one can then browse the photos within the same topic, as illustrated in Figure 3. Third, Web image search can filter out most irrelevant images. Since the current Web image search is based on the surrounding text,
Figure 1. Framework of keyword-based image retrieval (a query word and a database of automatically annotated images feed a word-scoring step that returns the retrieved images)
Figure 2. Example of keyword-based image retrieval for the query “Find me red cars”
Figure 3. Example of image browsing by classification into topics such as outdoor, indoor, city, and landscape
Table 1. Applications of automatic image annotation

General public domain:
- Keyword-based image retrieval for general image collections (e.g., the query “find me all the red car pictures”)
- Photo classification and browsing (e.g., a private photo album)
- Web image search (based on content instead of surrounding text)

Specialized domain:
- Medical image retrieval (medical image annotation with diagnosis results)
- Satellite image retrieval (satellite image annotation based on the analysis of the earth’s surface)
Figure 4. Example of annotation and retrieval of satellite images: a query retrieves images annotated as urban, mountain, residential, or crop field
some image content is irrelevant to the query keyword. The automatic annotation techniques, however, are based on the image content; if automatic image annotation can be extended to Web image search, the search results will be more satisfactory. On the other hand, automatic image annotation can also enable professional applications, such as medical image retrieval and satellite image retrieval. In Figure 4, each satellite image is annotated, and the search result is returned according to the annotated keywords (Parulekar, Datta, Li, & Wang, 2005). Some applications of automatic image annotation are summarized in Table 1.
Future Trends

Information retrieval based on multimodal data has been explored extensively due to the significant growth of the World Wide Web. People can access a huge amount of information in any modality, among which image search by keywords is an important application. Automatic image annotation will remain an active research topic, and two trends deserve particular attention.

1. New, more effective and efficient annotation approaches are desired, and these approaches will need to be extended to larger data sets.
2. In practical scenarios, annotation approaches will drive more applications in both the general and the specialized domain. In Web image search, using the annotated words instead of the surrounding text is a crucial problem that needs further investigation.
Conclusion

Some approaches for automatic image annotation have been reviewed, and one typical approach, the probability association approach, has been described in detail. This approach associates images with text keywords by posterior probability and includes both parametric and nonparametric density estimation. In the former, images and words are related through a hidden variable and the joint distribution is estimated; in the latter, the conditional distribution and the prior distribution are estimated directly from the training data, and the posterior distribution is then obtained by Bayes’ theorem. Keyword-based image retrieval adopts the annotated words to search for images similar to the query. Some current applications of automatic image annotation have also been summarized. In the future, these approaches will be further investigated to drive more applications.
Future Research Directions

Semantics-based multimedia retrieval will continue to attract much attention and make progress in both research and applications. Several specific research topics deserve more attention. First, image and video understanding is necessary. Both categorization and annotation serve semantic content understanding, and many techniques from computer vision will be useful for image and video recognition. Machine learning is another avenue, including supervised, semisupervised, and unsupervised methods. In addition to content, contextual information is also important: In multimedia applications, multimodal data are often available, and mining the relationships among them provides contextual information. On the other hand, some traditional techniques cannot be applied to search problems directly because of scalability. In multimedia retrieval, every problem must be considered on a large-scale data set, possibly in an open environment; large scale can lower accuracy and efficiency, so special techniques must be applied.

Second, a more comprehensive and practical image database is necessary for experiments in image retrieval. Most previously reported results have been obtained on the Corel image database. Although the Corel database contains plenty of images with class labels, it is specific and simple, and approaches that succeed on it may fail on real data sets. Some research experiments are now conducted on real data sets, but there is no standard database for research. A standard data set should be comprehensive, with varied and complex concepts, so that all approaches can be evaluated against it. Hence, a standard image database should be developed.

Finally, multimedia information should be accessible according to affective attributes. Almost all current research focuses on cognitive attributes, yet in the semantic hierarchy, affective attributes sit at the highest level of content understanding: They convey meaning behind the appearance of multimedia content. Although difficult, this is worth investigating. At the affective level, results from psychology and affective computing will provide much help, in addition to traditional techniques in computer vision and machine learning. For example, one can analyze the personal characteristics in films according to illumination distributions and photographic rules. This is an attractive and challenging topic.
References

Blei, D. M., & Jordan, M. I. (2003). Modeling annotated data. Proceedings of the 26th International ACM SIGIR Conference (pp. 127-134). Carneiro, G., & Vasconcelos, N. (2005). Formulating semantic image annotation as a supervised learning problem. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Vol. 2, pp. 163-168). Dempster, A., Laird, N., & Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1), 1-38. Duygulu, P., Barnard, K., Freitas, N. de, & Forsyth, D. (2002). Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary. Proceedings of the 7th European Conference on Computer Vision (pp. 97-112). Fan, J., Gao, Y., & Luo, H. (2004). Multi-level annotation of natural scenes using dominant image components and semantic concepts. Proceedings of the 12th Annual ACM International Conference on Multimedia (pp. 540-547). Feng, H. M., & Chua, T.-S. (2003). A bootstrapping approach to annotating large image collection. Proceedings of the 5th ACM SIGMM International Workshop on Multimedia Information Retrieval (pp. 55-62). Feng, S., Manmatha, R., & Lavrenko, V. (2004). Multiple Bernoulli relevance models for image and video annotation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1002-1009).
Jeon, J., Lavrenko, V., & Manmatha, R. (2003). Automatic image annotation and retrieval using cross-media relevance models. Proceedings of the 26th International ACM SIGIR Conference (pp. 119-126). Jeon, J., & Manmatha, R. (2004). Using maximum entropy for automatic image annotation. Proceedings of the 3rd International Conference on Image and Video Retrieval (pp. 24-32). Jin, R., Chai, J. Y., & Si, L. (2004). Effective automatic image annotation via a coherent language model and active learning. Proceedings of the 12th Annual ACM International Conference on Multimedia (pp. 892-899). Jing, F., Li, M., Zhang, H.-J., & Zhang, B. (2004). Keyword propagation for image retrieval. Proceedings of the 2004 International Symposium on Circuits and Systems (Vol. 2, pp. 53-56). Lavrenko, V., Manmatha, R., & Jeon, J. (2003, December). A model for learning the semantics of pictures. Proceedings of Advances in Neural Information Processing Systems, Whistler, British Columbia, Canada. Li, B., Goh, K., & Chang, E. Y. (2003). Confidence-based dynamic ensemble for image annotation and semantics discovery. Proceedings of the 11th Annual ACM International Conference on Multimedia (pp. 195-206). Mori, Y., Takahashi, H., & Oka, R. (1999, October). Image-to-word transformation based on dividing and vector quantizing images with words. Proceedings of the First International Workshop on Multimedia Intelligent Storage and Retrieval Management, Orlando, FL. Pan, J.-Y., Yang, H.-J., Faloutsos, C., & Duygulu, P. (2004). GCap: Graph-based automatic image captioning. Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop (pp. 146-155).
Parulekar, A., Datta, R., Li, J., & Wang, J. Z. (2005). Large-scale satellite image browsing using automatic semantic categorization and content-based retrieval. Proceedings of the IEEE International Workshop on Semantic Knowledge in Computer Vision, in conjunction with the IEEE International Conference on Computer Vision (pp. 1873-1880). Parzen, E. (1962). On estimation of a probability density function and mode. Annals of Mathematical Statistics, 33, 1065-1076. Shevade, B., & Sundaram, H. (2004). Incentive based image annotation (AME-TR-2004-02). Tempe, AZ: Arts Media and Engineering Program, Arizona State University. Tsai, C.-F., McGarry, K., & Tait, J. (2004). Automatic metadata annotation of images via a two-level learning framework. Proceedings of the 2nd International Workshop on Semantic Web, in conjunction with ACM SIGIR ’04 (pp. 32-42). Yavlinsky, A., Schofield, E., & Ruger, S. (2005). Automated image annotation using global features and robust nonparametric density estimation. Proceedings of the 4th International Conference on Image and Video Retrieval (pp. 507-516).
Further Reading

Allen, N. (2001). Telling our stories in new ways. Computers and Composition, 18, 187-194. Aner-Wolf, A. (2004). Extracting semantic information through illumination classification. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. I269-I274). Batlle, J., Casals, A., Freixenet, J., & Marti, J. (2000). A review on strategies for recognizing natural objects in colour images of outdoor scenes. Image and Vision Computing, 18, 515-530.
Boutell, M., & Luo, J. (2004). Bayesian fusion of camera metadata cues in semantic scene classification. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Vol. 2, pp. 623-630). Chen, Y., & Wang, J. Z. (2004). Image categorization by learning and reasoning with regions. Journal of Machine Learning Research, 5, 913-939. Csurka, G., Dance, C. R., Fan, L., Willamowski, J., & Bray, C. (2004). Visual categorization with bags of keypoints. Proceedings of the 8th European Conference on Computer Vision (pp. 11-14). Eakins, J. P. (2002). Towards intelligent image retrieval. Pattern Recognition, 35, 3-14. Feng, D., Siu, W. C., & Zhang, H. J. (2003). Multimedia information retrieval and management. Berlin, Germany: Springer Verlag. Fergus, R., Perona, P., & Zisserman, A. (2003). Object class recognition by unsupervised scale-invariant learning. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. II/264-II/271). Fergus, R., Perona, P., & Zisserman, A. (2004). A visual category filter for Google Images. Proceedings of the 8th European Conference on Computer Vision (pp. 242-256). Forman, G. (2003). An extensive empirical study of feature selection metrics for text classification. Journal of Machine Learning Research, 3, 1289-1305. Friedman, N., Geiger, D., & Goldszmidt, M. (1997). Bayesian network classifiers. Machine Learning, 29, 131-163. Hanjalic, A. (2001). Video and image retrieval beyond the cognitive level: The needs and possibilities. Proceedings of SPIE on Storage and Retrieval for Media Databases (Vol. 4315, pp. 130-140).
Li, F.-F., Fergus, R., & Perona, P. (2003). A Bayesian approach to unsupervised one-shot learning of object categories. Proceedings of the IEEE International Conference on Computer Vision (pp. 1134-1141). Liu, X., Zhang, L., Li, M., Zhang, H. J., & Wang, D. (2005). Boosting image classification with LDA-based feature combination for digital photograph management. Pattern Recognition, 38, 887-901. Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91-110. Picard, R. (1998). Affective computing. MIT Press. Rifkin, R., & Klautau, A. (2004). In defense of one-vs-all classification. Journal of Machine Learning Research, 5, 101-141. Rubner, Y., Tomasi, C., & Guibas, L. J. (2000). The earth mover’s distance as a metric for image retrieval. International Journal of Computer Vision, 40, 99-121. Scholkopf, B., & Smola, A. (2002). Learning with kernels: Support vector machines, regularization, optimization, and beyond. Cambridge, MA: MIT Press. Vapnik, V. (1995). The nature of statistical learning theory. Springer Verlag. Vapnik, V. (1998). Statistical learning theory. Wiley. Vasconcelos, N., & Vasconcelos, M. (2004). Scalable discriminant feature selection for image retrieval and recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Vol. 2, pp. 770-775). Wang, J. Z., Li, J., & Wiederhold, G. (2001). Simplicity: Semantics-sensitive integrated matching
for picture libraries. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(9), 947-963. Wu, Y., Chang, E. Y., & Tseng, B. L. (2005). Multimodal metadata fusion using causal strength. Proceedings of ACM Multimedia (pp. 872-881). Zhang, Y.-J. (2007). Semantic-based visual information retrieval. IRM Press.
Terms and Definitions

Automatic Image Annotation: This is the process in which images are associated with symbols (keywords or tags) automatically instead of being manually labeled as in some approaches. Most approaches are implemented by machine learning, pattern recognition, and computer vision. The annotated symbols describe the image content at the semantic concept level.

Content-Based Image Retrieval (CBIR): CBIR is the process by which one searches for similar images according to the content of the query image, such as color, texture, shape, and so forth.

Correspondence between Words and Specific Regions in Images: This is the result of annotating a specific image region instead of the entire image.

Nonparametric Density Estimation: If there is no assumption about the distribution, the density is estimated directly from the training data.

Parametric Density Estimation: Assume that the images and text words follow a certain distribution whose joint form is defined through a hidden variable. The parameters of the distribution are estimated by the EM algorithm.

Photo Classification and Browsing: This is designed for private albums. All the photos are classified into several categories corresponding
to some topics. When users want to search for a photo or just randomly browse, they can first select an interesting topic.

Probability Association Approach: According to Bayes’ theorem, the annotated text words for an image are selected by the posterior probability (the conditional probability of the word, given the image).
Web Image Search: This is one aspect of Web search. Current image search is based on surrounding text; in the future, it is desirable to apply content-based search to Web images via automatic image annotation.
Chapter LVII
Online Analytical Processing and Data-Cube Technologies Lixin Fu University of North Carolina, Greensboro, USA Wenchen Hu University of North Dakota, USA
Introduction

Since the late ’80s and early ’90s, database technologies have evolved to a new level of applications: online analytical processing (OLAP), in which executive management can make quick and effective strategic decisions based on knowledge derived from queries against large amounts of stored data. Some OLAP systems are also regarded as decision support systems (DSSs) or executive information systems (EISs). The traditional, well-established online transactional processing (OLTP) systems, such as relational database management systems (RDBMSs), mainly deal with mission-critical daily transactions. Typically, there are a large number of short, simple queries such as lookups, insertions, and deletions, and the main focus is transaction throughput, consistency, concurrency, and failure recovery. OLAP systems, on the other
hand, are mainly analytical and informational. OLAP systems are usually closely coupled with data warehouses, which can contain very large data sets that include historical data as well as data integrated from different departments and geographical locations, so data warehouses are usually significantly larger than common OLTP systems. In addition, the workloads of OLAP are quite different from those of traditional transaction systems: The queries are unpredictable and much more complicated. For example, an OLAP query could be, "For each type of car and each manufacturer, list the market-share change in terms of car sales between the first quarter of 2005 and the first quarter of 2006." The purpose of such queries is not the daily operational maintenance of data; instead, it is to extract deeper knowledge from the data for decision support.
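The market-share query above can be sketched over plain in-memory records. The data and field names below are invented for illustration:

```python
# Illustrative sketch of an OLAP-style analytical query:
# "for each maker, market-share change between Q1 2005 and Q1 2006".
from collections import defaultdict

sales = [  # (maker, quarter, amount) -- hypothetical figures
    ("Ford", "2005Q1", 120), ("Toyota", "2005Q1", 80),
    ("Ford", "2006Q1", 100), ("Toyota", "2006Q1", 140),
]

def market_share(records, quarter):
    """Each maker's fraction of total sales in the given quarter."""
    totals = defaultdict(float)
    for maker, q, amt in records:
        if q == quarter:
            totals[maker] += amt
    grand = sum(totals.values())
    return {m: t / grand for m, t in totals.items()}

change = {m: market_share(sales, "2006Q1")[m] - market_share(sales, "2005Q1")[m]
          for m in {r[0] for r in sales}}
```

The query scans the whole data set twice and aggregates over every record, which illustrates why such workloads differ so sharply from short OLTP lookups.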
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Closely related to OLAP technologies are data-cube technologies. A data cube is a data model that allows data to be viewed in multiple dimensions. For example, in a large retail database (e.g., Wal-Mart's), there may be several dimensions, such as time, location, store, and product, that track particular sales of certain dollar amounts. We may need to know the sales by time; by location; by store; by category; by time and location; by store, time, and category; and so forth. In fact, we may navigate by any combination of the dimensions. Furthermore, these dimensions may be hierarchical (e.g., time with a day-month-year hierarchy, location with a city-state-region hierarchy, product with categories, and so on). Data cubes allow users to navigate different parts of the data at different granularity levels. For instance, users can report sales grouped by month and state, or query the total sales of the product category electronics in the southeastern region. OLAP and data-cube technologies have broad applications. With such decision support systems in place, enterprises can gain competitive advantages by making timely and informative business decisions based on patterns computed from large data sets. Without data warehouse and OLAP systems, the data would be scattered and possibly in different formats, and it would be very hard to maintain the currency of the databases. To answer an analytical query, we might have to coordinate different databases stored and run on different systems. Coupled with the formatting problems, data analysts might have to do much of this work manually, which greatly reduces overall system performance. Even if we could somehow evaluate the complex queries, the evaluation and coordination process might interfere with the existing operational OLTP systems. In view of these difficulties, data warehousing and data-cube technologies are needed to satisfy modern data analysis requirements.
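The idea of aggregating over every combination of dimensions can be sketched in a few lines. This is a minimal illustration of the data-cube concept, not any particular system's implementation; the facts and dimension names are invented:

```python
# Minimal data-cube sketch: aggregate a tiny fact table over every
# combination (subset) of its dimensions -- 2^d group-bys in total.
from itertools import combinations
from collections import defaultdict

DIMS = ("time", "location", "product")
facts = [
    {"time": "Jan", "location": "FL", "product": "tv",    "sales": 5},
    {"time": "Jan", "location": "GA", "product": "tv",    "sales": 3},
    {"time": "Feb", "location": "FL", "product": "radio", "sales": 2},
]

def cube(facts, dims):
    """Map each group-by (a subset of dims) to its aggregated cells."""
    result = {}
    for r in range(len(dims) + 1):
        for group in combinations(dims, r):   # one group-by per subset
            cells = defaultdict(float)
            for f in facts:
                key = tuple(f[d] for d in group)
                cells[key] += f["sales"]
            result[group] = dict(cells)
    return result

c = cube(facts, DIMS)
```

With three dimensions the cube holds eight group-bys, from the fully aggregated apex (total sales) down to the base table grouped by time, location, and product.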
Both the total revenues in this industry and the number of vendors have experienced explosive growth in recent years. In academia, OLAP and data cubes pose new challenges and many interesting research problems. One challenge is that the queries can be very complex, in contrast to the normal short queries in relational database systems. For example, in a car sales data warehouse, one might want to know which vendors or types of cars are selling fastest in terms of their market-share change over the past 5 years. To answer such queries, we often need to scan a large part of the overwhelming data sets. It is very challenging to evaluate complex queries against large data sets with hundreds of gigabytes (1 gigabyte = 10^9 bytes) or terabytes (1 terabyte = 1,000 gigabytes) of data. Another challenge is the notorious problem of "the curse of dimensionality": When the number of dimensions of the data increases, the queries become more complex and harder to evaluate. No existing method or system so far performs efficiently on large data sets of high dimensionality.
Background

An OLAP system is usually deployed as part of a larger warehousing system. Data from different sources are extracted, cleaned, transformed, integrated, and loaded into a central place called a data warehouse (Chaudhuri & Dayal, 1997). The centralized data can further be partitioned into subsets corresponding to different departments. These smaller data sets are called data marts and are suitable for particular classes of queries. The back-end processing steps are important because the source data more than likely contain noisy and even incorrect data, and the formats and schemas of different sources often differ. The quality and structure of the integrated data are critical in determining whether we can perform meaningful data analysis. The OLAP server is a front-side data analysis tool: OLAP works on top of the warehouse database to provide sophisticated querying capabilities.
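The back-end steps (extract, clean, transform, load) can be sketched as a toy pipeline. The sources, schemas, and cleaning rules below are all invented for illustration:

```python
# Toy ETL sketch: two "sources" with different schemas are extracted,
# cleaned, transformed to a common schema, and loaded into the warehouse.

source_a = [{"city": "Atlanta ", "amt": "120"}, {"city": None, "amt": "50"}]
source_b = [{"location": "MIAMI", "amount": 80}]

def extract():
    yield from (("a", rec) for rec in source_a)
    yield from (("b", rec) for rec in source_b)

def clean_transform(tagged):
    for src, rec in tagged:
        if src == "a":
            city, amt = rec["city"], rec["amt"]
            if city is None:          # drop noisy rows during cleaning
                continue
            yield {"city": city.strip().title(), "sales": float(amt)}
        else:                          # source b uses a different schema
            yield {"city": rec["location"].title(), "sales": float(rec["amount"])}

warehouse = list(clean_transform(extract()))   # "load" into the central store
```

However trivial, the sketch shows why back-end processing matters: without the cleaning and schema reconciliation steps, the integrated data would mix formats and carry noisy rows into every downstream analysis.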
There are other front-end tools as well, such as reporting, visualization, and data mining tools; here, we mainly focus on OLAP. To better understand how OLAP and cube technologies fit into the big picture of a warehousing system, a typical data warehouse and OLAP system architecture is depicted in Figure 1. As for how OLAP systems are built, OLAP servers fall into three categories: relational OLAP (ROLAP), multidimensional OLAP (MOLAP), and hybrid OLAP (HOLAP; Han & Kamber, 2001). ROLAP servers use RDBMSs or extended RDBMSs to store and manage warehouse data, but use additional OLAP middleware to implement navigation and optimization. MetaCube from Informix is based on the ROLAP approach. ROLAP servers can take advantage of the proven relational database technologies that have been in use for decades. Data are stored in files and tables in traditional row-based record format, and the query engine and related query optimization methods can be reused. The middleware provides an interface to users and parses user queries into regular SQL (structured query language) queries, which are then evaluated by the RDBMS query execution engine. It also creates and manages derived data structures
such as specialized indexes (for example, the star index) and materialized views. MOLAP servers use array-based multidimensional engines to store and process data; for instance, Essbase from Arbor belongs to the MOLAP camp. MOLAP servers usually use proprietary data structures to achieve better performance. However, the OLAP query language of one product is difficult to port to other systems, since there is no standardized OLAP language for the industry. ROLAP is usually more scalable than MOLAP, but the latter holds a performance advantage; HOLAP tries to combine the benefits of both. Microsoft SQL Server OLAP Services is a HOLAP server. The data-cube concept was first introduced by Gray et al. (1997). In the data-cube model, dimensions are the attributes used to track certain numerical attributes called measures. In the car sales example below, the dollar amount is the measure while the maker, time, and location are dimensions. There are one or more fact tables and several dimension tables. Star schema and snowflake schema are data warehouse design models. In a star schema, a fact table is associated with several dimension tables, forming a star; the snowflake schema further allows the dimension tables to have hierarchies. Roll-up, drill-down,
Figure 1. Data warehouse architecture. (Back end: data from multiple sources flows through data extraction and integration into the data warehouse (DW) and data marts. Front end: an OLAP server supporting query, reporting, analysis, and data mining tools.)
slice, and dice are common OLAP operations. Roll-up performs an aggregation by climbing up a dimension hierarchy or by reducing the number of dimensions; for example, aggregating sales from month to year is a roll-up operation. The drill-down operation is just the opposite of roll-up: From yearly sales, a user may drill down for more details of the sales at each location and may drill down further to the sales of each vendor. The slicing operation performs a selection on one dimension of a cube; for example, accessing the total January sales is a slice operation. Figure 2 shows a data-cube example for car sales in a three-dimensional diagram. We may have more dimensions, but cannot visualize them directly. OLAP and data-cube technologies have important applications in the public sector, and there are many success stories; here we point briefly to two cases. One case is data analysis for student retention. A major goal of many universities is to improve their retention and graduation rates, which are typically indicated by the first-year retention rate and the 4-year and 6-year (for part-time students) graduation rates. One may be interested in the rates among different categories of students, for example, in state vs. out of state, male vs. female,
different ethnic groups, on campus vs. off campus, and so forth. This can be implemented easily with OLAP and cube technologies. Another case is the analysis of NSF (National Science Foundation) grant awards. One may want to know the number of awards grouped by school, by discipline, by region, by amount, by date, and so forth, and by any arbitrary combination of these dimensions. Again, OLAP and cube technologies are well suited to solving this problem effectively.
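The roll-up and slice operations described above can be sketched on a toy cube. The cells and the month-to-year mapping below are invented for illustration:

```python
# Sketch of two common OLAP operations on a tiny two-dimensional cube:
# rolling the time dimension up from month to year, and slicing on one month.
from collections import defaultdict

MONTH_TO_YEAR = {"Jan05": "2005", "Feb05": "2005", "Jan06": "2006"}
cells = {("Jan05", "FL"): 5.0, ("Feb05", "FL"): 3.0, ("Jan06", "GA"): 4.0}

def roll_up(cells):
    """Roll up the time dimension from month to year."""
    out = defaultdict(float)
    for (month, loc), value in cells.items():
        out[(MONTH_TO_YEAR[month], loc)] += value
    return dict(out)

def slice_time(cells, month):
    """Slice: select a single value on the time dimension."""
    return {k: v for k, v in cells.items() if k[0] == month}

yearly = roll_up(cells)          # coarser view along the time hierarchy
jan05 = slice_time(cells, "Jan05")
```

Drill-down is simply the inverse navigation: moving from the `yearly` view back to the finer monthly `cells`.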
OLAP and Data-Cube Computation Technologies

There is abundant literature on the evaluation of OLAP queries and data cubes. We briefly summarize some of the top-performing algorithms and systems, as well as our own research findings on this subject. Since introductions to and descriptions of existing OLAP systems and products are generally readily available, we focus more on the technical, algorithmic side of OLAP and cube systems than on the usage and applications of the systems. In BUC (bottom-up cubing; Beyer & Ramakrishnan, 1999), a new
Figure 2. A 3-D presentation of a data cube. (Measure: sales in US$. Dimensions: time (day, month, year), location (city, state, region), and maker. The cube shown plots makers such as Ford and Toyota against locations FL, GA, AL, and SC and months Jan through Apr.)
iceberg-cube problem is introduced: computing the cube cells whose aggregates are above a threshold. The rationale is that computing only the cells that aggregate above the threshold saves significant time and space for sparse data. The BUC algorithm computes large one-attribute group-bys, two-attribute group-bys, and so on using Apriori pruning, an idea inspired by PartitionedCube and Apriori. For large data sets, BUC-external switches to BUC-internal. BUC outperforms earlier algorithms for computing sparse cubes, such as ArrayCube (Zhao, Deshpande, & Naughton, 1997) and PartitionedCube (Ross & Srivastava, 1997). Unfortunately, BUC is not suitable for dense data or for highly skewed data. ArrayCube is a MOLAP algorithm that pipelines the computation of dependent cuboids to save I/O (input/output) by overlapping as much work as possible and fitting many partitions in memory to avoid writing intermediate results; it also uses in-memory, array-based multidimensional chunks to avoid sorting. The PartitionedCube algorithm first partitions the input data into subcubes according to an attribute, and then recursively partitions each subcube according to the next attribute until the data fit in memory, at which point an in-memory cube algorithm is used. Dwarf (Sismanis, Deligiannakis, Roussopoulos, & Kotidis, 2002) provides a compressed structure with ALL pointers to compute, store, and query cubes. It removes prefix redundancy for dense cubes and suffix redundancy for sparse cubes; however, updating the data structure is complex and inconvenient. QC-tree (Lakshmanan, Pei, & Zhao, 2003) builds a structure for storing and searching a quotient cube (Lakshmanan, Pei, & Han, 2002) computed from the input data. A quotient cube is an intermediate data structure that lets a user navigate the data at different levels of granularity. The algorithm partitions the cube cells with identical aggregate values into equivalence classes while preserving the semantics of the drill-down links in the lattice structure.
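Returning to BUC, the bottom-up, pruned computation of an iceberg cube can be illustrated with a much-simplified sketch. This is a toy rendering of the pruning idea, not the published algorithm, and the data are invented:

```python
# Simplified BUC-style iceberg-cube sketch: recursively partition records
# dimension by dimension, pruning any partition whose count falls below the
# iceberg threshold -- no refinement of a pruned partition can qualify either
# (Apriori-style pruning).
from collections import defaultdict

def buc(records, dims, start, cell, minsup, out):
    out[tuple(cell)] = len(records)            # emit the current aggregate cell
    for d in range(start, len(dims)):
        parts = defaultdict(list)
        for rec in records:                    # partition on dimension d
            parts[rec[dims[d]]].append(rec)
        for value, part in parts.items():
            if len(part) >= minsup:            # prune sparse partitions early
                buc(part, dims, d + 1, cell + [(dims[d], value)], minsup, out)

data = [{"maker": "Ford", "state": "FL"}, {"maker": "Ford", "state": "GA"},
        {"maker": "Toyota", "state": "FL"}]
iceberg = {}
buc(data, ("maker", "state"), 0, [], 2, iceberg)
```

With a threshold of 2, only three cells survive: the apex, (maker = Ford), and (state = FL); every finer cell is pruned without ever being aggregated.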
A quotient cube is thus the reduced (or summarized) lattice in terms of classes
of cells. One serious and common problem with the above-mentioned algorithms and systems is that they are based on sorting the input data sets in certain ways as a preprocessing step. In real applications, the input data sets are usually much larger than the available computer memory and thus require external sorting, which incurs a large number of I/O operations (i.e., accesses to computer disks). An I/O operation is roughly a million times slower than an operation in memory. Even worse, most existing algorithms need multiple scans of the massive input data sets; this takes a prohibitively long time and is often infeasible. We have proposed a data structure called statistics trees (STs) and an algorithm called CUBIST (cubing with ST; Fu & Hammer, 2000), which introduced star pointers for the first time. A statistics tree is a multiway, multilevel, balanced tree in which each level represents a dimension; the cardinality of a dimension is indicated by the degree of the nodes at that level. The extra star pointer is used for data-cube queries that place no constraint on that level. The leaves of the tree store the aggregated values of the data-cube cells indicated by the corresponding root-to-leaf paths. This chapter also gives a loading program that inserts all the input records. Once the statistics tree is initialized, a query evaluation algorithm computes each input query and returns an answer to the user. Continuous and ordinal dimensions must first be converted to integers from zero up to the cardinality (size) of the dimension. The advantage of the CUBIST algorithm is that the query evaluation time does not depend on the number of records and thus on the size of the input database; the disadvantage is that when the dimensions are large, the statistics tree is also large. To improve the performance of CUBIST for queries with constraints on dimension hierarchies, we developed the algorithm CUBIST++ (Hammer & Fu, 2001), which uses a family of materialized ST trees.
In addition to the base tree, we select a set of candidate trees that have the same dimensions as the base tree but at different granularity levels. In our greedy strategy, we roll up the dimension with the largest cardinality one after another until all dimensions are at their highest levels. Each derived tree is computed from the previous one by merging subtrees. Given a query, a matching algorithm finds the smallest tree that can provide the answer; if necessary, the query is rewritten according to the inclusion relationship. Experiments show that CUBIST++ is four times faster in setup and two orders of magnitude faster (seconds vs. hundreds of seconds) in query computation than the leading commercial database, Oracle. Our other related work includes the optimization of large-range queries (Fu, 2002b) and the efficient evaluation of holistic operators (Fu, 2002a; Fu & Rajasekaran, 2001). Both CUBIST and CUBIST++ require that the ST trees fit into memory. However, in real-world applications data are usually sparse, and this requirement may not be met. To address this issue, our recent work on sparse statistics trees (SSTs) removes this drawback while preserving the advantages of ST trees (Fu, 2004). In an SST, the data structure itself is dynamic, meaning that the branching values do not have to be contiguous. We recently proposed a new dynamic data structure called RSST (restricted sparse statistics trees) and a novel cube evaluation algorithm that efficiently computes dense subcubes embedded in high-dimensional sparse data sets. RSST restricts the number of non-star values along any root-to-leaf path; in this way, the size of the RSST tree grows more slowly than those of the straightforward SST or CUBIST algorithms. We have also compared the performance of RSST with Dwarf, QC-tree, and BUC (Fu, 2006) and shown its superior performance.
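The star-pointer idea can be illustrated with a heavily simplified sketch using nested dictionaries. This is not the authors' ST implementation (which uses fixed-arity balanced trees); it only shows how each record updates every value/star path so that unconstrained dimensions are answered from the star branch. The records are invented:

```python
# Heavily simplified star-pointer sketch: each inserted record accumulates its
# measure along every root-to-leaf path in which each dimension is either the
# record's value or the star "*" (meaning "ALL" / no constraint).
from itertools import product

STAR = "*"

def insert(tree, record, measure):
    """Accumulate `measure` at the leaf of every value/star path."""
    for path in product(*[(v, STAR) for v in record]):
        node = tree
        for key in path[:-1]:
            node = node.setdefault(key, {})
        node[path[-1]] = node.get(path[-1], 0.0) + measure

def query(tree, constraints):
    """Follow one root-to-leaf path; None means unconstrained (use the star)."""
    node = tree
    for c in constraints[:-1]:
        node = node[STAR if c is None else c]
    last = constraints[-1]
    return node[STAR if last is None else last]

tree = {}
insert(tree, ("Jan", "FL"), 5.0)   # (time, location) records
insert(tree, ("Jan", "GA"), 3.0)
```

As in CUBIST, answering a query is a single root-to-leaf traversal, independent of how many records were loaded; the cost moved to loading time and tree size.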
Future Trends

Despite intensive research and great achievements, there are still some issues to resolve in
OLAP and data cubes. First, we need a standard language for writing OLAP queries; currently, different vendors use different languages, which makes portability difficult. Second, more effective and efficient algorithms are still to be explored for evaluating complex queries on large warehouses with high-dimensional data; in particular, the I/O efficiency issue must be addressed. Third, integration between OLAP, databases, and data mining to form a comprehensive information-oriented decision system will be a major trend of research.
Conclusion

OLAP and data cubes have been a hot research area for the past several years and will continue to be. We have briefly introduced the basic concepts and principles of OLAP and data-cube systems and, specifically, surveyed the state-of-the-art technologies for computing OLAP queries and data cubes efficiently. Once a few issues such as portability, efficiency, and integration are resolved, this field promises an even brighter future. We live in an information age; being able to dig precious gold automatically out of large mountains of heterogeneous, possibly noisy data will be a must for gaining a competitive edge.
Future Research Directions

One interesting research direction for OLAP and data cubes is how to apply cube computation to real-time databases, that is, how to combine data-cube technologies with online stream processing, both hot topics in their own right. Many applications need real-time processing of incoming streams of data generated from various sources. For example, many Internet sites pump out data on a continuous basis. In weather forecasting, data from different locations and different instruments are created and transported across
a network for monitoring and analysis. Traffic monitoring and temperature control of rooms in buildings are other interesting examples. In these streaming applications, data are continuously generated and processed, and rarely, if ever, stored in full; instead, only a time window of data is stored, processed, and updated. How, then, can data warehousing and data cubes fit into these applications, given that data warehouses try to store all data? First, for trend analysis over a longer time period, to find patterns in how parameters change, we have to store historical data somewhere, and a data warehouse can collect the historical data from various sources. Second, a data warehouse is suitable for analyzing data from a global perspective; this is essential in the weather-monitoring example. Third, data-cube technology can be applied to obtain data summaries of streams for stream processing. Another direction of data warehouse research is distributed data warehouses. By design, data warehouses are large, centralized, reconciled, and cleaned copies of the data from various sources. However, when warehouses grow too large, they can become a bottleneck for data access and queries. One idea is to repartition the global data across a few warehouse sites; for example, a data warehouse for Wal-Mart could be partitioned into several submarts for large regions. In this way, data, queries, and resources may be better utilized.
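The time-window processing mentioned above can be sketched with a bounded buffer per source. The scenario, sensor names, and readings are invented for illustration:

```python
# Sketch of window-based stream summarization: keep only the last W readings
# per source and answer aggregate queries from that window, rather than
# storing the full stream.
from collections import defaultdict, deque

WINDOW = 3
windows = defaultdict(lambda: deque(maxlen=WINDOW))  # per-source ring buffer

def ingest(source, value):
    windows[source].append(value)    # the oldest reading is evicted automatically

def window_avg(source):
    w = windows[source]
    return sum(w) / len(w)

for reading in [20.0, 21.0, 23.0, 30.0]:
    ingest("sensor-1", reading)
```

A warehouse would sit alongside such a pipeline, periodically absorbing window summaries (e.g., per-window averages) as the durable historical record the stream itself never keeps.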
References

Beyer, K., & Ramakrishnan, R. (1999). Bottom-up computation of sparse and iceberg CUBEs. Proceedings of the 1999 ACM SIGMOD International Conference on Management of Data (SIGMOD '99) (pp. 359-370).

Chaudhuri, S., & Dayal, U. (1997). An overview of data warehousing and OLAP technology. SIGMOD Record, 26(1), 65-74.
Fu, L. (2002a). Querying and clustering very large data sets using dynamic bucketing approach. Proceedings of the Third International Conference on Web-Age Information Management (WAIM '02) (pp. 279-290).

Fu, L. (2002b, June). Range query optimization for multidimensional Web data. Proceedings of the 2002 International Conference on Information and Knowledge Engineering (IKE '02), Las Vegas, NV.

Fu, L. (2004). Processing ad-hoc cube queries over sparse data efficiently and interactively. Proceedings of ICDE '04, Boston.

Fu, L. (2006). Computing dense cubes embedded in sparse data. In D. Taniar (Ed.), Research and trends in data mining technologies and applications: Advances in data warehousing and mining (Vol. 1).

Fu, L., & Hammer, J. (2000). CUBIST: A new algorithm for improving the performance of ad-hoc OLAP queries. ACM Third International Workshop on Data Warehousing and OLAP (pp. 72-79).

Fu, L., & Rajasekaran, S. (2001). Novel algorithms for computing medians and other quantiles of disk-resident data. IEEE International Data Engineering and Application Symposium (pp. 145-154).

Gibbons, P. B., & Matias, Y. (1998). New sampling-based summary statistics for improving approximate query answers. Proceedings of the 1998 ACM SIGMOD International Conference on Management of Data (SIGMOD '98) (pp. 331-342).

Gray, J., Chaudhuri, S., Bosworth, A., Layman, A., Reichart, D., Venkatrao, M., et al. (1997). Data cube: A relational aggregation operator generalizing group-by, cross-tab, and sub-totals. Data Mining and Knowledge Discovery, 1(1), 29-53.
Hammer, J., & Fu, L. (2001). Improving the performance of OLAP queries using families of statistics trees. Proceedings of the 3rd International Conference on Data Warehousing and Knowledge Discovery (DaWaK '01) (pp. 274-283).

Han, J., & Kamber, M. (2001). Data mining: Concepts and techniques. Morgan Kaufmann Publishers.

Lakshmanan, L. V. S., Pei, J., & Han, J. (2002). Quotient cube: How to summarize the semantics of a data cube. Proceedings of the 28th International Conference on Very Large Databases (VLDB '02) (pp. 778-789).

Lakshmanan, L. V. S., Pei, J., & Zhao, Y. (2003). QC-trees: An efficient summary structure for semantic OLAP. Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data (pp. 64-75).

Ross, K. A., & Srivastava, D. (1997). Fast computation of sparse datacubes. Proceedings of the 23rd VLDB Conference (VLDB '97) (pp. 116-125).

Sismanis, Y., Deligiannakis, A., Roussopoulos, N., & Kotidis, Y. (2002). Dwarf: Shrinking the PetaCube. Proceedings of the 2002 ACM SIGMOD International Conference on Management of Data (SIGMOD '02) (pp. 464-475).

Zhao, Y., Deshpande, P. M., & Naughton, J. F. (1997). An array-based algorithm for simultaneous multidimensional aggregates. SIGMOD Record, 26(2), 159-170.
Further Reading

Aboulnaga, A., & Chaudhuri, S. (1999). Self-tuning histograms: Building histograms without looking at data. Proceedings of the 1999 ACM SIGMOD International Conference on Management of Data (SIGMOD '99) (pp. 181-192).

Acharya, S., Gibbons, P. B., & Poosala, V. (2000). Congressional samples for approximate answering
of group-by queries. Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data (SIGMOD '00) (pp. 487-498).

Agarwal, S., Agrawal, R., Deshpande, P., Naughton, J., Sarawagi, S., & Ramakrishnan, R. (1996). On the computation of multidimensional aggregates. Proceedings of the International Conference on Very Large Databases (pp. 506-521).

Agrawal, D., & Aggarwal, C. C. (2001, May). On the design and quantification of privacy preserving data mining algorithms. Proceedings of the 20th ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, Santa Barbara, CA.

Chan, C. Y., & Ioannidis, Y. E. (1998). Bitmap index design and evaluation. Proceedings of the 1998 ACM SIGMOD International Conference on Management of Data (SIGMOD '98) (pp. 355-366).

Denning, D. (1980). Secure statistical databases with random sample queries. ACM Transactions on Database Systems (TODS), 5(3), 291-315.

Denning, D., & Lunt, T. (1987). A multilevel relational data model. Proceedings of the IEEE Symposium on Research in Security and Privacy (pp. 220-234).

Fu, L. (2005). Novel efficient classifiers based on data cube. International Journal of Data Warehousing and Mining, 1(3), 15-27.

Gibbons, P. B., & Matias, Y. (1998). New sampling-based summary statistics for improving approximate query answers. Proceedings of the 1998 ACM SIGMOD International Conference on Management of Data (SIGMOD '98) (pp. 331-342).

Gupta, H., & Mumick, I. (1999). Selection of views to materialize under a maintenance cost constraint. Proceedings of the International Conference on Management of Data (pp. 453-470).
Harinarayan, V., Rajaraman, A., & Ullman, J. D. (1996). Implementing data cubes efficiently. SIGMOD Record, 25, 205-216.

Ho, C.-T., Agrawal, R., Megiddo, N., & Srikant, R. (1997). Range queries in OLAP data cubes. Proceedings of the 1997 ACM SIGMOD International Conference on Management of Data (pp. 73-88).

Johnson, T., & Shasha, D. (1997). Some approaches to index design for cube forests. Bulletin of the Technical Committee on Data Engineering, 20(1), 27-35.

Labio, W., Quass, D., & Adelberg, B. (1997). Physical database design for data warehouses. Proceedings of the International Conference on Database Engineering (pp. 277-288).

Lehner, W., Sidle, R., Pirahesh, H., & Cochrane, R. W. (2000). Maintenance of cube automatic summary tables. Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data (SIGMOD '00) (pp. 512-513).

Mumick, I. S., Quass, D., & Mumick, B. S. (1997). Maintenance of data cubes and summary tables in a warehouse. Proceedings of the 1997 ACM SIGMOD International Conference on Management of Data (SIGMOD '97) (pp. 100-111).

Xin, D., Han, J., Li, X., & Wah, B. W. (2003). Star-cubing: Computing iceberg cubes by top-down and bottom-up integration. VLDB 2003: Proceedings of the 29th International Conference on Very Large Data Bases (pp. 476-487).

Yan, W. P., & Larson, P. (1995). Eager aggregation and lazy aggregation. Proceedings of the Eighth International Conference on Very Large Databases (pp. 345-357).
Terms and Definitions

Data Cube: A multidimensional data model for viewing the data stored in a data warehouse.

Data Warehousing: A technology for extracting, cleaning, integrating, and loading data into a central warehouse database; front-end tools such as OLAP and data mining are then used to query the system for decision support.

Dimension Hierarchy: The granularity-level structure of an attribute (called a dimension) that tracks a measure. For example, the time dimension may have a year-month-day hierarchy.

Online Analytical Processing (OLAP): OLAP serves mainly analytical and informational purposes rather than operational ones. Through a friendly interface and efficient query evaluation, OLAP can help enterprise managers make informed, strategic decisions.

Relational OLAP (ROLAP): A type of OLAP server that mainly uses relational technologies. Other types of OLAP servers include MOLAP (multidimensional OLAP) and HOLAP (hybrid OLAP).

Roll-Up and Drill-Down: Common OLAP operations. Roll-up allows us to look at coarser, "big picture" data by dropping one or more dimensions or by climbing up the dimension hierarchies; a drill-down operation is the opposite of roll-up, moving toward more detailed data.

Statistics Trees: Tree data structures used to evaluate OLAP queries efficiently. The levels represent dimensions, and the branching links represent attribute values of those dimensions; a star pointer represents ALL values. Dwarf trees and QC-trees are other examples of tree structures for evaluating OLAP queries.
Section V
Project Management and IT Evaluation
Information technology frequently succeeds or fails on the strength or weakness of project management. In the United States, the Bush administration's fiscal year (FY) 2007 budget increased emphasis on project management and on hiring qualified project managers, backing with dollars the earlier recommendations of the Office of Management and Budget (OMB) and the Government Accountability Office for much greater attention to project management. The push for project management professionalization was, in part, a reaction and solution to past information technology failures in enterprise resource planning systems and other IT arenas. In fact, in 2006, the OMB even authorized and encouraged agencies to shift personnel from hitherto top-priority enterprise architecture work to project management instead. Project management is often tied to enforcing the IT enterprise architecture, which reflects IT policies set at the national level and by chief information officers at the state level. Illustrative was a 2005 study by the National Association of State Chief Information Officers (NASCIO), which surveyed 34 state project management offices. The report endorsed a stricter approach to the management of IT investments to assure that investments conformed to state enterprise architectures. Implementation of such an approach, the report argued, rested on improving project management capacities through better training, certification programs, and support for the career advancement of project managers. While the federal and state levels have seen standardization and centralization under the banner of "enterprise architecture," there has always been an implementation gap for specific IT projects at lower organizational levels. The belief reflected in the NASCIO report and in the OMB emphasis on project management is that properly trained project managers can assure awareness of and compliance with mandates from central IT authorities.
Evaluation is coequal in importance with project management. Project management may be more critical for short-term IT success, but in the long run, the success of IT initiatives requires that they be proven to work in a cost-effective manner; hence the critical importance of evaluation. While evaluation is a broad topic, it is illustrated by the reliance on the Program Assessment Rating Tool (PART) in the United States. PART was introduced in 2003 and incorporated into the federal enterprise architecture (FEA) framework for strategic planning mandated for all federal agencies, and the government made submission to PART mandatory from 2005. Congress, however, has found PART to be less than satisfactory as a tool of evaluation. For instance, in its FY 2007 consideration of e-government funding, the report of the House Appropriations Committee said that the Program Assessment Rating Tool was devoid of useful information. The committee forcefully expressed its displeasure with administrators who went through the motions of PART evaluation but could not explain the rationale behind a program's funding level other than by citing its PART score. The committee strongly encouraged the Bush administration to use a more meaningful system of evaluation to justify proposed program funding levels. The moral of this story is that, on a worldwide basis, IT administrators can expect legislative decision makers to insist on meaningful evaluation and not to settle for symbolic evaluation mechanisms. G. David Garson, September 2007
Chapter LVIII
Managing People and Information in Complex Organizations

Kalu N. Kalu, Auburn University Montgomery, USA
Abstract

Information technology affects organizations and society itself as it redefines work content, reorganizes leadership styles and cultures, reshuffles power hierarchies, and spawns a series of both deliberately designed and spontaneous adaptations. Information technology oftentimes necessitates a new division of labor that creates policy problems and a loss of accountability. Organizational leadership, especially in the public sector, urgently requires a theoretical as well as a practical reevaluation to cope with the structural and functional changes within work and administrative organizations. This chapter seeks to elucidate three leadership models in the context of IT-induced changes in organizational forms and processes: the networked, organic, and gatekeeper leadership models.
Introduction

As we enter a period of dramatic change—a shift from the command-and-control organization to the information-based organization, the organization of knowledge specialists—Peter Drucker (1988, p. 53) cautions that “we can perceive only dimly, what this organization will look like; the job of actually building the information-based organization is still ahead of us—it is the managerial (administrative) challenge of the future.” In the public sector, a crucial concern is what kinds of decision-making processes, administrative structures, and political responsiveness this transformation will engender. To the extent that information technology decentralizes administrative power, it will have a transformational effect on the leadership role (Avison, Kendall, & Degross, 1993). The leadership function is a process that not only involves managerial and decision-making roles but also sets organizational goals and facilitates and clarifies the means for achieving them. While we have generally articulated these attributes within the classical or Weberian bureaucratic organization, examining how they will be manifested in the dynamic environment of IT-intensive administration is the purpose of this chapter.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
In assessing the impact of the emerging technological orientation on contemporary administrative practice, we are thus confronted with a serious challenge regarding the epistemology and nature of understanding in orthodox leadership theory. What the leaders of today and of the future will face is quite different from what came before. They will be faced with the dual obligation of managing people and information while at the same time securing the primary mission that informs the organization’s existence. The irony is that while it is easier to manage people (by issuing orders and directives), technology, once set on its own routine, quickly takes on a life of its own and becomes more readily taken for granted. Once people are set into using it, its impersonal nature offers a means of escape from the traditional notion of an overbearing boss, but at the same time takes away from the boss a most crucial instrument of leadership (command and control). The argument is not that leaders have become moribund, but rather that much of the leadership function has become depersonalized, to the extent that traditional notions of leadership theorizing do not offer a complete portrait of what changes organizations and agencies will face in the future and how they can adapt to them.
Background: Restructuring Work Organizations

According to Laudon and Laudon (2003, p. 386; see Table 1), “New information systems can be powerful instruments for organizational change, enabling organizations to redesign their structure, scope, power relationships, work flows, products, and services.” To the extent that information technology decentralizes administrative power, it will have a transformational effect on the leadership role. Organizational leaders occupy roles that are both personalized and institutionalized
(Avison et al., 1993). Hence, “managers (leaders) will need to change the organization to fit the technology or to adjust both the organization and the technology to achieve an optimal fit” (Laudon & Laudon, p. 72). As technology transforms the administrative context of public administration, the principles, culture, and values that undergird its public purpose will face dramatic but uncertain changes: “While technology itself is neither enslaving nor liberating, it is the management of technology (and the information it carries) that is important” (Gabriel, Fineman, & Sims, 2000, p. 125). This is quite relevant in situations where the power of a leader to make daily decisions is isolated by a data processing system tucked away miles from the leader’s desk, and also in situations where the mundane considerations and affections that require a personal presence and close proximity are removed. Suffice it then to say that the changing nature of the work environment (a technology-driven division of labor) will determine how information is exchanged, how decisions are made, how new organizational forms emerge, and what type of leadership would fit a particular organizational form.
Table 1. How information technology can transform organizations (Adapted from Laudon & Laudon, 2004, p. 386)

Information Technology | Organizational Change | Leadership Model*
Global Networks | International division of labor: The operations of a firm and its business processes are no longer determined by location; the global reach of firms is extended. The cost of global coordination declines and transaction costs decline. | Networked Leadership
Enterprise Networks | Collaborative work and teamwork: The organization of work can now be coordinated across divisional boundaries. The cost of management declines. Multiple tasks can be worked on simultaneously from different locations. | Networked Leadership
Distributed Computing | Empowerment: Individuals and work groups now have the information and knowledge to act. Business processes can be streamlined. Management costs decline. Hierarchy and centralization decline. | Organic Leadership
Portable Computing | Virtual organizations: Work is no longer tied to physical location. Knowledge and information can be delivered anywhere they are needed, anytime. Work becomes portable. | Networked Leadership
Multimedia and Graphic Interfaces | Accessibility: Everyone in the organization—even senior executives—can access information and knowledge. Organizational costs decline as workflows move from paper to digital images, documents, and voice. Complex knowledge objects can be stored and represented as objects containing graphics, audio, video, or text. | Gatekeeper Leadership (Management and Monitoring of Information and Access)

*Last column added by author

Structure and Boundary Redefinition

One of the things that leaders do is to initiate structure, that is, exhibit a series of behaviors through which they define their roles and their subordinates’ roles in achieving the organization’s formal goals. While this process involves the person-in-situation, the new scenario advanced in this chapter supports a move toward a depersonalization of leadership roles due to the changing nature of work and the tools needed to achieve key organizational objectives. Because of the peculiar characteristics of information technology, organizational structures and boundaries have become more fluid and dynamic. The unsettled nature of these boundaries and the frequency with which they can change have brought a greater level of uncertainty to the way traditional organizational roles relate to each other. Furthermore, the speed with which generic ideas are replaced by new information means that as functional relationships change within organizations, so will the tools needed to perform new and more sophisticated roles. To the extent that organizational leaders and their subordinates are affected, they will need to learn new ways of thinking, coordinating, and, most importantly, organizing. They will need to embrace learning organizations, but to do that they must first unlearn old habits of doing things: “Organizing and learning will absorb increasing amounts of resources, time and effort, as information proliferates and as times and distances shrink” (Gabriel et al., 2000, pp. 264-265). “Unlearning is a condition for learning: unlearning theories, unlearning habits, and unlearning lazy shortcuts that stand in the way of new understanding. It takes courage and requires the ability to drag ourselves out of our comfort zone, the zone we create with the help of our existing stock of concepts, ideas, and theories” (Gabriel et al., 2000, p. 265). This is the paradox that is often concealed by political cliché: Old learning and old ideas can act as a hindrance to new learning and new ideas (Gabriel et al.), but in the unfolding IT-driven work environment, the choices are limited.
Hierarchy and Systems of Accountability

As Morgan (1997, p. 266) observes, where “structure and hierarchy have no fixed form, they cannot function as predetermined modes of control.” Leaders of today have to rely less on hierarchical (vertical) power and more on distributed or horizontal power within or across organizational units or subunits. Nonetheless, the interactive nature of functional roles means that “leaders need to know what is important and meaningful to the people they work with, and somehow shape their beliefs or meanings in a direction that makes organizational sense” (Gabriel et al., 2000, p. 139). However, when such functional roles are located at a distance from one another but have become integral elements of an IT-driven operational routine, systems of accountability become decentralized as executive authority yields to what Terry (2003, p. 94) calls “competence authority.” Thus, the leader’s role in interpreting and making sense of critical organizational matters is neutralized: “As the image of the omnipotent ‘boss’ fades, we sometimes have to search harder in the organization for the leader” (Gabriel et al., p. 134). The distributed nature of the leadership function thus cuts across organizational units whose collective cooperation would be needed to achieve the organization’s mission in a purposeful and goal-oriented direction.
Beyond Orthodoxy: Redefining the Leadership Role

The fast pace at which technology has invaded the organizational workplace has rendered administrative practice a system in flux. The conventional notion of leadership regarding who decides, directs, or authorizes has become ambiguous and, at best, tentative. Traditional axioms, embedded cultures, and taken-for-granted everyday routines must confront the emerging realities
of a new leadership ethos: “Generic management problems such as overcoming resistance to change, motivating cooperative behavior, or coordinating and integrating distinct horizontal activities” (Lynn, 1998, p. 235) will invoke new administrative learning and structural adaptation. Hence, in such organizational contexts where information technology necessitates a decentralization of administrative authority, and to the extent that they are provided with more comprehensive, immediate, and relevant data relating to their tasks, we can witness “an increase in the power of those at the periphery or local level, thereby facilitating self, rather than central control” (Korac-Boisvert & Kouzmin, 1994). As Morgan (1997, pp. 266-267) points out, “transformational change ultimately involves the creation of new ‘contexts,’ in which new patterns have to emerge, but cannot be imposed.” Unfortunately, “complex IT-driven tasks, with a high degree of uncertainty and variability, often confront an administrative experience which has not yet developed a formal model of the problem of departmentalism which takes into account the contingent nature of new activities” (Davis & Lawrence, 1977; Kingdon, 1973; Kouzmin, 1979; March & Simon, 1958; Thompson, 1967). Hence, as “technology evolves through periods of incremental improvement or routinization, initial contingencies decrease and the order-of-magnitude of maximum achievable performance continues to rise until the next discontinuity” (Kouzmin & Korac-Boisvert, 1995, p. 107). How emerging discontinuities are managed and who does the managing have become critical questions in the unfolding administrative environment. Three leadership models are possible in the emerging organizational scenario: networked leadership, organic leadership, and gatekeeper leadership.
Networked Leadership

The networked leadership scenario would reflect organizational systems that are linked only by
technology and exchange of information with very minimal individual contact; it is similar to the hub-and-spoke network used by the airline industry except that in this case, we are dealing with information, not physical or moveable objects. Information and data are pulled into a central hub for programmed decision-making, dissemination, coordination, and feedback. It draws upon integrated global networks, enterprise and mobile portal IT systems that can be linked to a central hub through the Internet or Ethernet protocols, and videoconferencing and other modes of secured work interface.
Organic Leadership

According to Morgan (1997, p. 66), “By exploring the parallels between organisms and organizations in terms of organic functioning, relations with the environment, relations between species and the wider ecology, it has been possible to produce different theories and explanations that have very practical implications for organization and management.” Organic leadership reflects leaderless organizations in which decision making will be based on situational or contingency factors and a critical level of synchronization between and across an array of multiple data-collection points (organizational units and subunits). In place of a “hierarchical leader specifying standards and ways of achieving goals (task-oriented behaviors), the team might set its own standards and substitute those for the leaders” (Schermerhorn, Hunt, & Osborn, 2000, p. 297), just as in traditional matrix and other team-based forms of organization. The nature of work would be one of coordination for mutual deliberation, or trial and error to deal with uncertainty, rather than one of discretionary or independent decision making. In fact, the demands of the work itself neutralize and limit any attempt at unilateral or arbitrary action. As the characteristics of individual workers, their jobs, and their organizations change and adapt over time, a new leadership
model would be needed to accommodate both an increasing level of uncertainty and technological and information diversity in the work environment. “The management of organizations can, nonetheless, be improved through systematic attention to the needs that must be satisfied if the organization is to survive,” says Morgan (p. 67), and as the needs change, so does the leadership framework that drives it.
Gatekeeper Leadership

The gatekeeper (monitoring) leadership is situated at the boundary between the organization and its environment. It is a bridge between the internal processes within the organization and the environmental turbulence and uncertainty it faces from the outside. The critical role of the leader is to “continually gather information, reduce equivocality, provide structure, and overcome barriers” (Northouse, 2004, p. 208). The primary obligation, thus, is to network with outside organizations or agencies, secure information from internal and external sources, and structure it into a usable form so that plans can be developed and acted upon. The gatekeeper also plays a mediating role between internal organizational processes and the regulatory requirements of the external environment, and develops contingencies to deal with environmental complexity and uncertainty. Organizational leaders are equally affected by change. To the extent that many of the internal processes have essentially become rationalized and self-regulatory, the gatekeeper’s role is more focused on external factors. During critical incidents such as (technology-driven) cutbacks and retrenchment, the emotional well-being and integrity of leader-follower relationships are decisive factors in organizational adaptation; hence, leaders “must be capable of balancing organizational demands for stability and adaptation to external influences” (Diamond, 1992, pp. 271-272).
Table 2. Modifying the Substitute for Leadership framework: Emerging constructs in contemporary IT-driven contexts

Characteristics of Individuals | Impact on Leadership | Leadership Models*
Experience, ability, training | Task-oriented leadership (S) | Networked
Professional orientation | Task-oriented/supportive leadership (S) | Gatekeeper
Indifference toward rewards | Motivation-oriented/supportive leadership (S) | Organic

Characteristics of Job | Impact on Leadership | Leadership Models*
Highly structured/routine | Task-oriented leadership (S) | Networked
Intrinsically satisfying | Supportive leadership (S) | Organic
Technology-driven work | Task-oriented/supportive leadership (N) | Gatekeeper
Knowledge-intensive work | Supportive leadership (S) | Gatekeeper/Networked

Characteristics of Organization | Impact on Leadership | Leadership Models*
Cohesive work group | Task-oriented/supportive leadership (S) | Organic
Low leader position | Task-oriented/supportive leadership (N) | Networked
Leader physically separated | Task-oriented/supportive leadership (N) | Networked/Organic

S = Substitute for Leadership; N = Neutralizer of Leadership
*Last column added by author
Organizational Implications of New Leadership Models

To capture, or at best predict, the three leadership constructs, I modify the Substitute for Leadership framework (Schermerhorn et al., 2000, p. 297; see also Kerr & Jermier, 1978; Van Wart, 2005) as a way of elaborating the multiple ways that information technology can affect both structure and function in the changing work environment (Table 2). What is indicated here is that the conventional emphasis on traditional models of leadership may not be adequate to capture the various dyadic or horizontal relationships that occur in contemporary organizational work environments. Information technology and organizations exert a reciprocal influence on one another: “While the interaction between them is, oftentimes, very complex, it is influenced by a great many mediating factors, including the organization’s
structure, standard operating procedures, politics, culture, surrounding environment, and management decisions” (Laudon & Laudon, 2003, p. 73). The functional changes in work processes will necessitate equivalent changes in the structure of organizations. As traditional organizational roles change, the kinds of leadership models needed to coordinate these roles will also, out of necessity, need to change (Table 3).
Table 3. Organizational characteristics of emerging leadership constructs

Organizational Characteristics | Networked | Organic | Gatekeeper
Hierarchy | Horizontal | Amorphous | Conical
Formalization | Integrative | Spontaneous | Flexible/Routine
Decentralization | High | Contingent | Discretionary
Span of Control | Wide | Transitory | Narrow
Organization Culture | Dynamic | Unstable | Durable
Communication | Reciprocal | Flexible Channels | Scripted
Decision Making | Atomized | Mutual Adjustment | Scripted
Accountability | Wide | Wide | Narrow
Environmental Scanning | Insular | Adaptive | Boundary Spanner
Strategic Planning | Scripted | Contingent | Scripted
Crisis Management | Programmed | Nonprogrammed | Programmed
OSHA | Low Uncertainty | High Uncertainty | Routine
Government Regulation | Standard | Adaptive | Standard
EEOC/Equal Opportunity | Work Factors | Environmental Factors | Organizational Factors
Workforce Diversity | Skills Driven | Politics Driven | Strategic Factors
Planning | Technology Driven | Human/Technology | Organizational Factors
Organizing | Functional Linkages | Indeterminate | SOP
Directing | Self-Directing | Adaptive | Scripted
Staffing | Skills Driven | Skills/Environmental Factors | Organizational Factors
Coordinating | Unit Autonomy | Coordination | SOP
Reporting | Centralized SOP | Contingent | SOP
Budgeting | Collective Action | Routine/Innovative | SOP

Future Trends

Because the increasing application of information technology in the private and public sectors continues to challenge traditional methods of organization and service delivery, the governmental response will be reflected in an increasing use of regulations, laws, and institutions as a way of consolidating governance efforts within the scope of state authority. In the private sector, “organizational strategies may involve influencing the policy and regulations that shape how emergent global networks operate, the hardware that comprises the network, the content that flows through the network, or the software that mediates between the hardware and content” (Baylis & Smith, 2005, p. 626). Experts and technicians in information technology will continue to play central roles in setting the parameters for organizational action as well as regulatory policy; hence, they will become enormously powerful in public- and private-sector decision making. At the same time, the preponderance of virtual organizations and the mediating distance between leaders and subordinates would mean that authority and control would become increasingly dispersed as most organizational subunits seek to become quasi-autonomous decision-making operational turfs, nonetheless by default. Leadership activities would become more skills- and task-oriented, with a primary focus on technologically linked teams that work independently but at the same time pool their resources toward achieving the organizational mission: “The benefits of teams include the ability to independently select appropriate skills for a particular project, the creativity and synergy they engender, their flexibility of structure, and the fact that most of them can be easily disbanded” (Van Wart, 2005, p. 216) or reconfigured according to the demands and contingencies of the moment. Hence, the trend would reflect a gradual disappearance of vertical management and a movement toward a more decentralized, horizontal managerial style. The ability of the emergent leadership model to motivate
employees would be based more on instrumental rewards as opposed to the psychologically-based approaches to employee motivation.
Conclusion

In essence, the evolving nature of organizations will determine how information is exchanged, how decisions are made, and how new organizational forms emerge. The basis of administrative practice, especially in the public sector, would become increasingly value-neutral and impersonal, which, invariably, would pose a serious challenge to the effort of the human-relations school to integrate both the formal and informal aspects of organizational work. Barrett and Greene (2001, p. 65) state, “In the complex world of IT, good planning is a necessity at all levels—to provide an entity-wide vision, to coordinate disparate efforts among agencies, to minimize technological redundancy, and to prioritize decision-making in a world in which spending possibilities are endless and resources are limited.” In the new IT-driven work environment, the keys to successful strategic planning would include updating plans regularly, widening the sources of policy input, linking strategic plans to the exigencies of the budgetary process, and integrating IT planning into the wider sociopolitical framework of citizenship and value-added governance. As tempting as it may be, and as versatile as it seems to be, particularly for the administrative process in the public sector, technology remains a two-edged sword. Not only does it have major consequences for how leaders should lead, but it also portends to transform the very essence of leadership both in theory and in practice. As the structure and boundaries of organizational work become more fluid and unsettled, the leadership model needed to properly articulate agency mission and the public interest would, invariably, become more innovative and adaptive. Leadership activities would revolve more around organizing as opposed to
directing. This work, therefore, offers a pointer in a direction that seeks adaptation but at the same time opens up great possibilities for new learning and research that focuses on novel approaches to leadership theorizing, emergent organizational forms, and the central role of information technology in changing the traditional structure and norms of organizational action.
Future Research Directions

Because organizations and societies at large involve people engaged in various activities, either as individuals or as groups, the cumulative effect of these activities will have enormous consequences for the evolving social organization. As information technology permeates all aspects of social endeavor beyond the typical organization, it will seek to transform society itself and how people respond to the demands it imposes on its members. There is a need to study how electronic voting affects voter participation across demographic, racial, and socioeconomic groups, and how e-filing of tax returns increases the number of those who file their tax returns on time as well as leads to a decline in unclaimed tax refunds. The Department of the Treasury currently holds about $2 to $3 billion worth of unclaimed tax refunds. In recent research by Christopher J. Collins and Ken G. Smith (2006), titled “Knowledge Exchange and Combination: The Role of Human Resource Practices in the Performance of High-Technology Firms,” a field study of 136 technology companies showed that commitment-based human resource practices were positively related to the organizational social climates of trust, cooperation, and shared codes and languages (organizational culture). The future will see more research focused specifically on the implications of information technology for the durability and effectiveness of the informal organization as a constructive element in organizational learning, productivity, and overall effectiveness.
In another study, Srivastava, Bartol, and Locke (2006) surveyed management teams in 102 hotel properties in the United States to examine the intervening roles of knowledge sharing and team efficacy in the relationship between empowering leadership and team performance. They found that empowering leadership was positively related to both knowledge sharing and team efficacy, which, in turn, were both positively related to performance. To the extent that information technology helps to advance knowledge sharing between leaders and subordinates in organizations, future research should focus on how this nuanced power play translates into eventual performance. Of equal relevance is a study that focuses on how the increased application of information technology could induce work-related stress, or what is called “technostress,” which results from an inability to adapt to the introduction and operation of new technology (see Nykodym, Miners, Simonetti, & Christen, 1989). The frequency with which new technology is utilized in the work environment means that both leaders and subordinates will need to develop the new skills needed to deal with the technical requirements of traditional job roles. Furthermore, the short time span allowed to complete what may be a very intensive training and development program might create more stress for employees already reluctant to learn new ways of performing their jobs. Future research should focus on how these interconnections could affect employee motivation, performance, and potential turnover.
References

Avison, D., Kendall, J. E., & Degross, J. I. (Eds.). (1993). Human, organizational and social dimensions of information systems development. Proceedings of the IFIP WG 8.2 Working Group on Information Systems Development, Amsterdam, The Netherlands.
Barrett, K., & Greene, R. (2001). Powering up: How public managers can take control of information technology. Washington, DC: CQ Press.

Baylis, J., & Smith, S. (2005). The globalization of world politics: An introduction to international relations (3rd ed.). New York: Oxford University Press.

Collins, C. J., & Smith, K. G. (2006). Knowledge exchange and combination: The role of human resource practices in the performance of high-technology firms. Academy of Management Journal, 49(3), 544-560.

Davis, S. M., & Lawrence, P. R. (1977). Matrix. London: Addison-Wesley.

Diamond, M. (1992). Hobbesian and Rousseauian identities: The psychodynamics of organizational leadership and change. Administration & Society, 24(3), 267-289.

Drucker, P. F. (1988). The coming of the new organization. Harvard Business Review, 66(1), 45-53.

Gabriel, Y., Fineman, S., & Sims, D. (2000). Organizing and organizations (2nd ed.). Thousand Oaks, CA: Sage.

Kerr, S., & Jermier, J. M. (1978). Substitutes for leadership: Their meaning and measurement. Organizational Behavior and Human Performance, 22, 375-403.

Kingdon, D. R. (1973). Matrix organization: Managing information technologies. London: Tavistock.

Kouzmin, A., & Korac-Boisvert, N. (1995). Soft-core disasters: A multiple realities crisis perspective on IT development failures. In H. Hill & H. Klages (Eds.), Trends in public sector renewal: Recent developments and concepts of awarding excellence (pp. 89-132). Berlin, Germany: Peter Lang.
Korac-Boisvert, N., & Kouzmin, A. (1994). The darkside of info-age social networks in public organizations and creeping crisis. Administrative Theory & Praxis, 16(1), 57-82.

Laudon, K. C., & Laudon, J. P. (2003). Essentials of management information systems: Managing the digital firm (5th ed.). Upper Saddle River, NJ: Prentice Hall.

Lynn, L. E. (1998). The new public management: How to transform a theme into a legacy. Public Administration Review, 58(3), 231-237.

March, J. G., & Simon, H. A. (1958). Organizations. New York: John Wiley.

Morgan, G. (1997). Images of organization. Thousand Oaks, CA: Sage.

Northouse, P. G. (2004). Leadership: Theory and practice (3rd ed.). Thousand Oaks, CA: Sage.

Nykodym, N., Miners, I., Simonetti, J. L., & Christen, J. C. (1989). Computerphobia. Personnel Journal, 68, 54-56.

Schermerhorn, J. R., Jr., Hunt, J. G., & Osborn, R. N. (2000). Organizational behavior (7th ed.). New York: John Wiley & Sons.

Srivastava, A., Bartol, K. M., & Locke, E. A. (2006). Empowering leadership in management teams: Effects on knowledge sharing, efficacy, and performance. Academy of Management Journal, 49(6), 1239-1251.

Terry, L. D. (2003). Leadership of public bureaucracies: The administrator as conservator (2nd ed.). Armonk, NY: M. E. Sharpe.

Thompson, J. D. (1967). Organizations in action: Social science bases of administrative theory. New York: McGraw-Hill.

Van Wart, M. (2005). Dynamics of leadership in public service: Theory and practice. Armonk, NY: M. E. Sharpe.
Further Reading

Agervold, M. (1987). New technology in the office: Attitudes and consequences. Work and Stress, 1(2), 143-153.

Bakos, J. Y., & Treacy, M. E. (1986). Information technology and corporate strategy: A research perspective. MIS Quarterly.

Bartol, K. M., & Srivastava, A. (2002). Encouraging knowledge sharing: The role of organizational reward systems. Journal of Leadership and Organizational Studies, 9(1), 64-77.

Castells, M. (2002). The Internet galaxy: Reflections on the Internet, business and society. London: Oxford University Press.

Chesbrough, H. (2001). Assembling the elephant: A review of empirical studies on the impact of technological change upon incumbent firms. Comparative Studies of Technological Evolution, 7, 1-36.

Danzinger, J. N., & Kraemer, K. L. (1986). People and computers. New York: Columbia University Press.

DiMaggio, P., Hargittai, E., Neuman, W. R., & Robinson, J. P. (2001). The Internet’s effect on society. Annual Review of Sociology, 27, 306-327.

Foxall, G. R. (1988). Marketing new technology: Markets, hierarchies, and user-initiated innovation. Managerial and Decision Economics, 9(3), 237-250.

Fountain, J. (2001). Building the virtual state: Information technology and institutional change. Washington, DC: Brookings Institution.

Gold, A., Malhotra, A., & Segars, A. H. (2001). Knowledge management: An organizational capabilities perspective. Journal of Management Information Systems, 18(1), 185-214.
Managing People and Information in Complex Organizations
Graen, G. B., Hui, C., & Taylor, E. A. (2004). A new approach to team leadership: Upward, downward and horizontal differentiation. In G. B. Graen (Ed.), New frontiers of leadership, LMX leadership: The Series (Vol. 2, pp. 33-66). Greenwich, CT: Information Age Publishing. Gronlund, A. (2000). Managing electronic services: A public sector perspective. Berlin, Germany: Springer-Verlag.
Markus, M. L., & Robey, D. (1988). Information technology and organizational change: Causal structure in theory and research. Management Science, 34(5), 583-598.
Orlikowski, W. J., & Barley, S. R. (2001). Technology and institutions: What can research on information technology and research on organizations learn from each other? MIS Quarterly, 25(2).
Haddad, C. J. (2001). Managing technological change: A strategic partnership approach. Thousand Oaks, CA: Sage.
Orlikowski, W. J., & Baroudi, J. J. (1991). Studying information technology in organizations: Research approaches and assumptions. Information Systems Research, 2(1), 1-28.
Huber, G. P. (1990). A theory of the effects of advanced information technologies on organizational design, intelligence, and decision making. Academy of Management Review, 15(1), 47-71.
Perry, J. L., & Kraemer, K. L. (1999). The implications of changing technology. In F. S. Lane (Ed.), Current issues in public administration (6th ed., pp. 182-200). New York: Bedford/St. Martin's.
Kalu, K. N. (2001). Leadership and discretionary decision-making in a technocratic administration: Confronting a new praxis. Administrative Theory & Praxis, 23(4), 311-336.
Pinsonneault, A., & Kraemer, K. L. (1989). The impact of technological support on groups: An assessment of the empirical research. Decision Support Systems, 5(2), 197-216.
Keen, P. G. W. (1991). Shaping the future: Business design through information technology. Cambridge, MA: Harvard Business School Press.
Powell, W. W., & Snellman, K. (2004). The knowledge economy. Annual Review of Sociology, 30, 199-220.
Kendall, K. E. (Ed.). (1999). Emerging information technologies: Improving decisions, cooperation, and infrastructure. Thousand Oaks, CA: Sage.
Pradhan, J. (2002). Information technology in Nepal: What role for the government? The Electronic Journal of Information Systems in Developing Countries, 8(3), 1-11.
Kouzmin, A., & Korac-Kakabadse, N. (2000). Mapping institutional impacts of lean communication in lean agencies. Administration & Society, 32, 29-69.
Landsbergen, D., & Wolken, G. (2001). Realizing the promise: Government information systems and the fourth generation of information technology. Public Administration Review, 61(2), 206-220.
Markus, M. L. (1983). Power, politics, and MIS implementation. Communications of the ACM, 26(6), 430-444.
Seo, D., & La Paz, A. I. (in press). Exploring the dark side of information systems in achieving organizational agility. Communications of the ACM.
Taylor, J. R., Groleau, C., Heaton, L., & Van Every, E. (2000). The computerization of work: A communication perspective. Thousand Oaks, CA: Sage.
Tripathi, M. (2006). Transforming India into a knowledge economy through information communications technologies: Current developments. The International Information and Library Review, 38, 139-146.
Wilhelm, A. (2000). Democracy in the digital age: Challenges to political life in cyberspace. New York: Routledge.
Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.
Terms and Definitions

Epistemology: Epistemology is the nature and scope of knowledge and understanding relative to a specific idea, field, or concept.

Gatekeeper Leadership: The gatekeeper plays a mediating or moderating role between internal organizational processes and their adaptation to external turbulence and complexity that could affect organizational effectiveness and success.

Information-Based Organization: This is an organization that relies on a progressively intensive application of information technology in the functional relationships between and within organizational units.

Leadership: Leadership is a formal organizational activity involving individual managerial and decision-making functions, employee motivation, and incentives directed at achieving organizational goals and missions.

Learning Organization: Learning is a process by which an organization adapts to new challenges in its operational environments. Members do this first by unlearning or "unfreezing" old habits, and second by accepting new methods and innovations, and incorporating these into a new process of learning (refreezing) so as to survive in an increasingly uncertain and complex environment.

Networked Leadership: This is based on organizational systems in which the operational units and control mechanisms are extensively linked and facilitated by the exchange of information through both centralized and dispersed technology-based hubs.

Neutralizer of Leadership: Neutralizers constrain a formal leader's ability to behave in certain ways, and in most cases can nullify the immediate effect of his or her actions or directives. Examples could be situations where workers have alternate resources, a leader lacks the power to impose sanctions and/or rewards, subordinates have operational autonomy, or where subordinates are distant from the leader.

Organic Leadership: This model is drawn from a biological analogy depicting contingent leadership scenarios that would be needed to accommodate an increasing level of uncertainty, nonroutine functional roles, and technological and information diversity in the new work environment.

Substitute for Leadership: The substitute-for-leadership theory posits that sometimes hierarchical authority makes no difference in organizational relationships because leaders have come to rely extensively on the expert knowledge of subordinates below them.
Chapter LIX
Human-Factors Design for Public Information Technology

Vincent E. Lasnik
Independent Knowledge Architect, USA
Introduction

This chapter examines the realm of human-factors design for public information technology in the rapidly evolving postmodern knowledge age of the 21st century, with special focus on how new research and development into human cognition, perception, and performance capabilities is changing the design function for IT systems and products. Many "one size fits all" IT designs are neither adaptive nor adaptable—promulgating a top-down technological imperialism penetrating every aspect of their use. The communication, collaboration, and interaction infrastructure of IT organizations thus remains acutely challenged with enduring problems of usability, learnability, accessibility, and adaptability. As the function and form of products undergo increasingly rigorous scrutiny, one important design goal is emerging as a paramount priority: improving the usability of products, tools, and systems for all stakeholders across the enterprise. It is therefore important to briefly describe emerging human-factor design knowledge and practices applicable to organizations that invent, incubate, innovate, prototype,
and drive the creation and application of public IT. The findings here suggest the most effective strategies to manage and augment user-centered design (UCD) endeavors across a wide array of public IT products and organizations.
Background

In the context of 21st century industrial and information product design theory and practice, user-centered design is an iterative, systematic process that focuses on constructing a user experience and responsive environment with physical and virtual affordances that are identifiable, manipulable, controllable, customizable, and adaptable from the intrinsic, subjective perspective of the conceptual model of the user (Mayhew, 1999; Moggridge, 2006; Preece, Rogers, & Sharp, 2002). This means carefully and systematically taking into account both (a) the user's subjective metamodel of his or her own experiences, actions, and information-seeking cognitive and perceptual processes, and (b) the designer's ostensibly more objective model of the user and alignment mapping to the physical
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
affordances of the product or tool environment. Thus, in UCD, the conceptual model of the user becomes the superordinate principle guiding the IT design process for software user interfaces (UIs; Saffer, 2006; Stone, Jarrett, Woodroffe, & Minocha, 2005). Usage-centered design is different: It focuses primarily on (a) the empirical (i.e., observable and measurable) goal-based behavior and task-driven performance requirements of users' activities, procedures, and processes, and (b) the corresponding information architecture required to optimize the effectiveness of the user-system dynamics to efficiently accomplish those functional goals (Constantine & Lockwood, 1999, 2003; Morrogh, 2003). By integrating human-factors engineering systems approaches, it may be possible to optimize the beneficent design quality of products and services from the perspective of the user's operational and instrumental task-oriented needs (Constantine, 2001; Constantine & Lockwood, 2002; Kuniavsky, 2003; Wickens, Lee, Liu, & Becker, 2004). From the technology management perspective of the new science of user-experience design (UXD), intensive, hands-on project management is vital to the training of knowledge-age designers (Pinto, 2006). The development team needs an original design philosophy that is pliant and change-embracing, and that facilitates collaborative techniques, models, and skills that respect the proclivities of individual and group human behaviors (Henderson, 2000; Henderson & Harris, 2000; Kumar, 2006). To achieve this new paradigm of organizational dynamics, designers must become leaders in promoting and advocating approaches such as moving away from common functional units and rigid roles (i.e., using the agrarian-age container metaphor of silos) to cross-disciplinary, synergistic, intra-organizational open channels.
According to Hughes (2003), running an organization as a collection of separate silos “can cause duplicate efforts, discourage cooperation, and stifle cross-pollination of ideas” (p. 9). For a
channel model to work, however, the IT design team must first embrace user-centered human-factor methods and the collateral reorganization of the IT design process and product.
User-Centered IT Design Processes in Human-Factors Engineering

The cross-disciplinary field of human factors recognizes that every new IT media product is ostensibly a dynamic form of person-person and person-technology interaction that expands our communication capabilities by reframing what we know and how we act (i.e., human knowledge and behavior). The underlying three core subdomains—information design, interactivity design, and media design—each need to become integrated within a unifying architecture derived from an object-oriented, modular infrastructure organized within an evolving taxonomy of digital affordances. Human-factors engineering recognizes (a) the holistic, ecological, and cross-disciplinary nature of human-technology system design, (b) the sociocultural, economic, and geopolitical importance of information utilization and knowledge generation, and (c) the dynamically hybrid, human-centered, user-centered, and usage-centered nature of the product form factor and interface design (Burns & Hajdukiewicz, 2004; Hofstede, 1991). Human-factor approaches strive to deeply integrate existing multidisciplinary domains including (a) systems theory, change management, and computer and information science, (b) cognitive informatics, learning, and performance, (c) philosophy, law, and ethics, (d) human physiological psychology, cognition, and perception, and (e) usability testing, industrial design, and ergonomics (Lehto & Buck, 2007; Rubin, 1994; Salvendy, 2006; Stanton & Young, 1999; Ware, 2004; Wickens et al., 2004).
In addition, the optimal human-factors design process entails an adaptive, applied, creative, and highly sophisticated cross-disciplinary problem-solving repertoire requiring (a) a basic literacy with effective, parsimonious information design, interactivity design, and media design principles, (b) competent technology and human communication skills, (c) core knowledge of the case-based research literature regarding human information processing, behavior psychology, user-object interaction and manipulation, and usability and accessibility, and (d) a basic appreciation of aesthetics and ergonomics (Guastello, 2006; Stanton, Hedge, Brookhuis, Salas, & Hendrick, 2004; UsabilityNet, 2006). In 1997, a working group of architects, product designers, engineers, and environmental designers (Connell et al., 1997) identified seven high-level precepts to evaluate existing product designs, refocus the future design process of new products, and educate both designers and consumers about the characteristics of more usable products and environments. They are particularly appropriate for application to all public information technology. These "Principles of Universal Design" are equitable use, flexible use, simple and intuitive use, perceptible information, tolerance for error, low physical effort, and size and space for approach and use. Mapping these guidelines to our three-way human-factor design triarchy, Principles 1 and 2 are related to interaction design, 3 and 4 are associated with optimal information design, and 5, 6, and 7 are related to ergonomics, physical product form factors, and effective media design. However, a reasonable caveat is that not every principle may be relevant to every set of IT requirements, content domains, and enterprises. Each component (adapted and modified from Connell et al.) is briefly described below.

1. Principle 1: Equitable use: The design is useful and marketable to people with diverse abilities and capabilities.
2. Principle 2: Flexible use: The design accommodates a wide range of individual preferences and abilities.
3. Principle 3: Simple and intuitive use: The design is easy to understand and learn regardless of the user's experience, knowledge, language literacy, technical skills, or current concentration level.
4. Principle 4: Perceptible information: The design effectively communicates necessary and sufficient information, regardless of ambient conditions or the user's cognitive and perceptual abilities.
5. Principle 5: Tolerance for error: The design is functionally robust and minimizes the occurrence of problems, errors, or the adverse consequences of any accidental or unintended actions.
6. Principle 6: Low physical effort: The design can be used efficiently and comfortably and with a minimum of physical, sensory, or cognitive fatigue.
7. Principle 7: Size and space for approach and use: The design affords appropriate size and space for approach, manipulation, and reach regardless of the user's physical (body/hand) size, posture, or mobility.
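As an illustrative aside (not part of the chapter's method), the seven principles above can be operationalized as a lightweight heuristic-review checklist for an existing IT product; the function name, rating scale, and threshold below are all hypothetical.

```python
# Hypothetical sketch: the seven Principles of Universal Design as a
# heuristic-review checklist. A reviewer rates each principle 0-4; any
# principle scoring below an (arbitrary) threshold is flagged for redesign.

PRINCIPLES = [
    "Equitable use",
    "Flexible use",
    "Simple and intuitive use",
    "Perceptible information",
    "Tolerance for error",
    "Low physical effort",
    "Size and space for approach and use",
]

def review(ratings, threshold=3):
    """ratings: dict mapping principle name -> 0-4 score.
    Returns the principles that fall below the threshold."""
    return [p for p in PRINCIPLES if ratings.get(p, 0) < threshold]

flagged = review({
    "Equitable use": 4, "Flexible use": 3, "Simple and intuitive use": 2,
    "Perceptible information": 4, "Tolerance for error": 1,
    "Low physical effort": 3, "Size and space for approach and use": 3,
})
print(flagged)  # principles needing design attention
```

Consistent with the chapter's caveat, a real review would first drop any principles irrelevant to the particular IT requirements and content domain.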
The International Standards Organization (ISO 13407, 1999) has outlined an iterative high-level process (Human-Centered Design Processes for Interactive System Teams) for effective human-factor design protocols. According to this widely adopted global ISO model, the four user-centered design activities are to (a) understand and specify the context of use, (b) specify the user and organizational requirements, (c) produce design solutions, and (d) evaluate designs against the requirements specified in Activity b. However, it is a formidable challenge to fully integrate human factors within the IT organization. This can only be achieved when (a) every phase of the product life cycle follows the principles of professional user-centered design theory and praxis, (b) the public IT development team is cross-trained with the proper usability skills, tools, and knowledge, (c) the IT design team is thoroughly supported by the commitment from upper management and a dedicated human-centered infrastructure, and (d) awareness of this culture and ethos is properly disseminated within and throughout the organization (Dumas & Redish, 1999; Jokela, 2002; Keates, 2007; Knutson, 2001; Project Management Institute, 2004; Rubin, 1994; Skelton & Thamhain, 2005; Thamhain, 2005; UsabilityNet, 2006; Venturi & Troost, 2004). While examining ways to improve the signage, sense-of-space, self-directed way-finding, and user-centered design of interactive public information environments like museums, science centers, and shopping malls, C. G. Screven (1999) synthesized the insightful core metric of design efficiency: "the average time it takes viewers to find and process message-related information" (p. 147). Efficient IT designs facilitate users' abilities to process and absorb new information. High design efficiency is thus characterized by a high ratio of needed and salient information to unneeded and superfluous information to ease working memory load and reduce affective stressors (i.e., engineers commonly use the adage "less noise, more signal"). Intelligently designing human-centered, goal-directed IT products reduces the aversive stimuli associated with the disconnects of failure, confusion, avoidance, task time and effort, media overload, product complexity, and the frustrating emotions engendered by inefficient product design solutions (Burns & Hajdukiewicz, 2004; Moggridge, 2006).
Specifically, efficient designs (a) conserve both mental and physical effort (i.e., working memory and time on task) needed by users to find, access, retrieve, quickly understand, and respond to messages and content, (b) reduce fatigue and maturation effects by providing intuitive, accessible affordances with low cognitive loads to quickly
engage and involve the user in self-directed behavior that enhances their interactivity with the tool or system information, and (c) generally improve all aspects of the information designs themselves by systematically employing larger, easy-to-read fonts; less dense and less difficult textual content (e.g., using familiar, jargon-free vocabulary); good contrast and consistent layouts (i.e., with good chunking, white space, use of colors, clear mnemonic cues, and reduced visual clutter); and precise, unambiguous headings and labels (Mijksenaar, 1997; Tidwell, 2005). In their insightful book Designing from Both Sides of the Screen: Don’t Impose—Respect Mental Effort, Isaacs and Walendowski (2001) analyzed screen-based user interfaces. Designers should leave their product users as much mental energy as possible so each user can focus on his or her task goal and forget about the transparent technology entirely. Isaacs and Walendowski noted that every user choice, in fact, every click, increases working memory load, and that each additional screen object (i.e., button, field, hyperlink, media element) increments functional over-choice and visual complexity. There is also an emotional, affective technostress factor that burdens users caught in overly complex interfaces. This dilemma is particularly apparent in public information technology applications that depend upon sophisticated, interactive synchronous and asynchronous networked systems and multi-user tools. User-experience designers need to be sharply focused on developing devices and systems that reduce (not increase) working memory loads, eliminate unnecessary features, and buttress the confidence and competence of basic user task behavior (Cooper & Reimann, 2003; Hackos & Redish, 1998). 
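The design-efficiency idea discussed above (average time-to-find plus a high ratio of salient to superfluous information) can be made concrete in a small sketch. This is my illustration, not the chapter's or Screven's formalism; the function names, field names, and numbers are invented.

```python
# Hypothetical sketch of two design-efficiency indicators:
#   1. mean time participants took to find and process the target message
#   2. a crude "signal ratio": salient on-screen elements / all elements
# ("less noise, more signal" corresponds to a ratio near 1.0).

def mean_time_to_find(session_times_sec):
    """Average time across usability sessions to locate the target message."""
    return sum(session_times_sec) / len(session_times_sec)

def signal_ratio(salient_elements, total_elements):
    """Fraction of on-screen elements carrying needed, task-relevant info."""
    return salient_elements / total_elements

# Invented data from three test sessions and one screen inventory:
print(mean_time_to_find([12.0, 9.5, 15.5]))            # seconds
print(signal_ratio(salient_elements=18, total_elements=60))
```

In practice, a team would track both indicators across design iterations, expecting time-to-find to fall and the signal ratio to rise as clutter is removed.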
Collectively, human-factor design guidelines recommend that IT user interfaces (a) reduce way-finding memory by using visual elements like buttons, menus, icons, and legends sparingly, (b) make common tasks clearly visible and easy to find while hiding infrequent tasks (i.e., offer
experienced users quick access to the common, high-priority tasks and offer less experienced users easy system start-ups and quick learnability), (c) give users feedback by combining audible and visual cues, simultaneously displaying signs of task progress and user location within the system, and finally, (d) build in good error recovery mechanisms so users can easily undo an operation, and if a system command cannot be carried out quickly, allow users to interrupt the process and return to a previously stable state. These IT system design approaches are well corroborated by other experts in the field (Cooper & Reimann, 2003; Guastello, 2006; Kalbach, 2004; Kuniavsky, 2003; Pugh, 2006; Rasmussen, Pejtersen, & Goodstein, 1994; Shneiderman, 2004; Tidwell, 2005; Vicente, 2006). While the serious, effective practice of an iterative design process involves much more than mere consideration for usability requirements per se (Nelson & Stolterman, 2003), a judicious application of the guidelines described in this chapter will support the pragmatic functional requirements of any product’s user-experience portfolio and effectively accommodate the needs of as many users as possible.
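Guideline (d) above, letting users easily undo an operation and return to a previously stable state, is commonly implemented as a history of stable states that operations push onto and undo pops from. A minimal sketch under that assumption (the class and method names are hypothetical, not from any cited system):

```python
# Minimal undo-stack sketch for error recovery: every completed operation
# records a stable state snapshot; undo reverts to the previous one, and
# the initial state can never be popped away.

class UndoStack:
    def __init__(self, initial_state):
        self._history = [initial_state]   # snapshots of stable states

    @property
    def state(self):
        return self._history[-1]

    def apply(self, new_state):
        """Record a completed operation as a new stable state."""
        self._history.append(new_state)

    def undo(self):
        """Revert the last operation; a no-op at the initial state."""
        if len(self._history) > 1:
            self._history.pop()
        return self.state

doc = UndoStack("draft v1")
doc.apply("draft v2")
doc.undo()
print(doc.state)  # back to "draft v1"
```

The same pattern supports the guideline's interrupt case: a long-running command can be cancelled simply by discarding its in-progress work and leaving the last recorded stable state untouched.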
Future Trends

Today's human-factors, user-centered IT design and development teams are concerned with the mismatch between the rigidity of current conventional project management formalisms and the actual cognitive, perceptual, and intrinsically motivational richness of human lives and practices. The intention is to create pliant systems that are flexible and responsive, and that foster more effective, intuitive, and comfortable use because they better align with the tactical requirements of our work and the psychological, physiological, sensorial, and behavioral nature of human beings (ACM, 2005; Henderson, 2000; Henderson &
Harris, 2000). Managers and producers of public information technology understand the need to innovate but often struggle to pragmatically support creative processes within demanding, requirements-driven business environments. Information architects, knowledge developers, product designers, and usability specialists often face similarly demanding contexts, but must resourcefully find ways to support creative projects and innovative processes reliably and successfully. In a nutshell, the optimal vision statement for public information technology should codify that people shall come first—both from the perspective within the organization functionally and operationally as well as on the bottom-line business end of the equation as clients and end users of the products and services. These forward-thinking approaches to IT product usability are also linking the nature of our working and nonworking lives. The interesting implication is that human-factors engineering represents an ostensibly profound political and societal paradigm shift in the locus of control toward the proclivities of the person and away from the mechanics of the machine: the essence of the user-centered philosophy. In other words, human-factor models and praxis not only support, foster, and augment user-centered public IT design, but in fact embody and entail optimal IT form and function (Vicente, 2006). The emerging applied profession of human-factors engineering may make a positive, significant, and possibly decisive change in the way we will design and deliver our public information technology environments in the 21st century (Lehto & Buck, 2007). The human factors and allied usability professions are in the process of improving user-experience designs for more accessible, more truly universal design solutions (Guastello, 2006; Hassenzahl, Beu, & Burmester, 2001; Jokela, 2002; Keates, 2007; Lidwell, Holden, & Butler, 2003; Preece, Rogers, & Sharp, 2002).
Conclusion

Our daily lives have become increasingly belabored by a vast array of complex, intricate, and immensely powerful public information technology systems. Learning to use many of these computer-based tools, far from being transparently easy, requires a steep learning curve punctuated by psycho-emotional intimidation, trial and error, frustration, and a personal sense of belittlement in the face of the impersonal, ubiquitous, and seemingly insurmountable dominance of inscrutable man-machine systems. Lowering the cognitive workload by simplifying the computer-human interface will contribute to a less stressful, more confident task performance and a more effective and efficient user experience. This is particularly salient for members of public information technology organizations who rely so critically upon their tools, software, and user interfaces to support their work environments and enterprise missions and to foster competitive, creative, and innovative knowledge workers. By (a) simplifying and standardizing interface design affordances, metaphors, and mnemonics, (b) reducing seldom-needed collateral features, and incorporating more intuitive, familiar, parsimonious, and easily grasped mental models to reduce working memory loads while improving simple functionality, and (c) facilitating a responsive, participatory inclusion of the person or user in all IT design solutions, we can potentially empower all users to take control over their information technologies and public IT organizations (Burns & Hajdukiewicz, 2004; Henderson & Harris, 2000; Isaacs & Walendowski, 2001; Keates, 2007; Rowland, 2004). In conclusion, superior human-factor engineering requires a deep understanding of the theory and practice of collaborative team leadership and participative stakeholder buy-in, as well as systems theory and the complete, concurrent, iterative public IT development life-cycle approaches from needs assessment, task analysis, and conceptual
design to production, quality assurance, usability design and testing, and courseware product deliverables and evaluation (Vicente, 1999). Postmodern public information technology enterprises become successful as their communication systems give their high-tech workers confidence, trust, a true sense of stakeholdership, commitment, and collaboration, and as they value each high-performing team member. Superb management of people and technology within these enterprises needs to support the special cohesive chemistry together with good communication that promotes openness within the organizational idea-generating infrastructure to everyone at every level. Then human-factors engineering, usability testing, and user-, usage-, and person-oriented design praxis may become an active paradigm shift for administrators, clients, public officials, workers, and designers, providing a rich array of adaptable strategies for change, growth, innovation, problem solving, and competitive product invention for the public IT transformations of the new millennium (Henderson, 2000; Henderson & Harris, 2000; Hughes, 2003; Rowland, 2004; Skelton & Thamhain, 2005; Thamhain, 2005).
Future Research Directions

The following two high-priority areas of formal scientific research should be pursued in the interests of expanding the current pragmatic knowledge base pertaining to improving the human-factors design of public information technology systems. Future Research Focus One. There is an imperative need to develop an adaptive, research-based prescriptive design and development taxonomy for human-factors and ergonomics practitioners in the field. Much has already been done on the descriptive analysis in these areas—little work has been accomplished on determining which design strategies, techniques, approaches, and methods need to be applied for which particular public IT
product (usage) requirements, environments, user populations, and networked distributed-media systems. To accomplish this agenda, new studies should be designed to integrate the three primary subdesign layers scaffolded by the research-based foundation of human-factors engineering (i.e., a three plus one design methodology) as follows.

1. Create prototype IT products for testing.
2. The affordances of each design layer comprise the independent variable(s); the user experience as measured by usability heuristics, performance rubrics, and other empirical metrics comprise the dependent variable(s).
3. These products must map, define, and specify each of the three subdesign components (information design, interactivity design, and media design; see basic definitions of these in the "Key Terms" section of this chapter).
4. Methodologically, each of these three design domains will overlap in production sequence as layers in a manner similar to Venn diagrams; the foundation layer is information, the middle layer is interactivity, and the top layer is media. At the center core of the combined and synthesized three-way design layer overlay is the user interface.
5. In-depth, comprehensive research analysis should entail user reaction and usability performance impacted by each design layer individually (the direct, one-way interaction); by each design layer dyad (information-interactivity, information-media, and interactivity-media); and by the confluence of all three design layers—information-interactivity-media in a single, three-way interaction.
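The factorial analysis plan sketched above (each design layer alone, each dyad, and the three-way combination) can be enumerated mechanically. The following is purely illustrative of that combinatorial structure, not an implementation proposed by the chapter:

```python
# Enumerate every non-empty combination of the three subdesign layers as an
# experimental condition: 3 one-way effects + 3 dyads + 1 three-way = 7.
from itertools import combinations

LAYERS = ["information", "interactivity", "media"]

conditions = [
    c for r in range(1, len(LAYERS) + 1)
    for c in combinations(LAYERS, r)
]

for c in conditions:
    print("-".join(c))   # e.g., "information-interactivity"
```

Listing the conditions this way makes explicit that a full crossing of the three layers requires seven cells before any "plus one" user-interface synthesis is analyzed.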
Future Research Focus Two. A second vital set of issues to research should integrate human-factors, ergonomics, and universal-accessibility core principles and standards of practice for audio-visual-tactile user interfaces to augment the cognitive and perceptual user experience. Research into applying these standards should address how to integrate the emerging specifications of the World Wide Web Consortium (W3C) and ISO, and should also be applicable to globalization and localization of the user interface and user experience. Prototype IT products designed in conformance and compliance with these technical standards (e.g., enabling the Sharable Content Object Reference Model, SCORM, system interoperability) should be engineered to human-factors and ergonomics guidelines as well. Subsequent, advanced large-sample empirical usability testing and robust heuristic evaluations should be conducted to add more data and explanatory and inferential statistical knowledge to our growing understanding of how to promulgate and deploy optimally efficient and effective public IT systems.
References

ACM. (2005). Ambient intelligence: Exploring our living environment. Interactions, 12(4), 20-58.
Burns, C. M., & Hajdukiewicz, J. R. (2004). Ecological interface design. Boca Raton, FL: CRC Press.
Connell, B. R., Jones, M., Mace, R., Mueller, J., Mullick, A., Ostroff, E., et al. (1997). The principles of universal design (Version 2.0). North Carolina State University, the Center for Universal Design. Retrieved November 3, 2006, from http://www.design.ncsu.edu:8120/cud/newweb/about_ud/udprinciples.htm
Constantine, L. L. (2001). forUse: The Electronic Newsletter of Usage-Centered Design (Vol. 12). Retrieved January 5, 2007, from http://www.foruse.com/newsletter/foruse12.htm#2
Constantine, L. L., & Lockwood, L. A. D. (1999). Software for use: A practical guide to the models and methods of usage-centered design. Boston: Addison-Wesley Professional (ACM Press).
Constantine, L. L., & Lockwood, L. A. D. (2002). Usage-centered engineering for Web applications. IEEE Software, 19(2), 42-50.
Constantine, L. L., & Lockwood, L. A. D. (2003). Usage-centered software engineering: An agile approach to integrating users, user interfaces, and usability into software engineering practice. Proceedings of the 25th International Conference on Software Engineering (pp. 746-747).
Cooper, A., & Reimann, R. M. (2003). About Face 2.0: The essentials of interaction design. Indianapolis, IN: Wiley Publishing.
Dumas, J., & Redish, J. C. (1999). A practical guide to usability testing (Rev. ed.). London: Intellect Books.
Guastello, S. J. (2006). Human factors engineering and ergonomics: A systems approach. Mahwah, NJ: Lawrence Erlbaum Associates.
Hackos, J. T., & Redish, J. C. (1998). User and task analysis for interface design. New York: John Wiley & Sons.
Hassenzahl, M., Beu, A., & Burmester, M. (2001). Engineering joy. IEEE Software, pp. 70-76.
Henderson, A. (2000). Pliant research. Retrieved June 1, 2006, from http://www.pliant.org/
Henderson, A., & Harris, J. (2000). Beyond formalisms: The art and science of designing pliant systems. A talk with Austin Henderson and Jed Harris. In K. Kaasgaard (Ed.), Software design and usability (pp. 107-133). Copenhagen, Denmark: Copenhagen Business School Press.
Hofstede, G. (1991). Cultures and organizations: Software of the mind. Berkshire, England: McGraw-Hill International.
Hughes, M. A. (2003). Managers: Move from silos to channels. Intercom, pp. 9-11.
International Standards Organization (ISO) 13407. (1999). Human-centered design processes for interactive system teams. Retrieved from http://www.iso.org/ Isaacs, E., & Walendowski, A. (2001). Designing from both sides of the screen: Don’t impose—Respect mental effort. Indianapolis, IN: Pearson Education (SAMS Publishing). Jokela, T. (2002). Making user-centered design common sense: Striving for an unambiguous and communicative UCD process model. ACM International Conference Proceeding Series: Vol. 31. Proceedings of the Second Nordic Conference on Human-Computer Interaction, Aarhus, Denmark (pp. 19-26). Keates, S. (2007). Designing for accessibility: A business guide to countering design exclusion. Mahwah, NJ: Lawrence Erlbaum Associates. Knutson, J. (2001). Project management for business professionals: A comprehensive guide. New York: John Wiley & Sons. Kumar, J. M. (2006). Working as a designer in a global team. Interactions, 13(2), 25-27. Kuniavsky, M. (2003). Observing the user experience: A practitioner’s guide to user research. San Francisco: Morgan Kaufman. Lehto, M. R., & Buck, J. R. (2007). Introduction to human factors and ergonomics for engineers. Mahwah, NJ: Lawrence Erlbaum Associates. Lidwell, W., Holden, K., & Butler, J. (2003). Universal principles of design: A cross-disciplinary reference. Gloucester, MA: Rockport Publishers. Mayhew, D. J. (1999). The usability engineering lifecycle: A practitioner’s handbook for user interface design. San Diego, CA: Academic Press. Mijksenaar, P. (1997). Visual function: An introduction to information design. New York: Princeton Architectural Press.
Human-Factors Design for Public Information Technology
Moggridge, B. (2006). Designing interactions. Cambridge, MA: MIT Press. Morrogh, E. (2003). Information architecture: An emerging 21st century profession. Upper Saddle River, NJ: Prentice Hall (Pearson). Nelson, H. G., & Stolterman, E. (2003). The design way: Intentional change in an unpredictable world. Foundations and fundamentals of design competence. Englewood Cliffs, NJ: Educational Technology Publications. Pinto, J. K. (2007). Project management: Achieving competitive advantage. Upper Saddle River, NJ: Pearson/Prentice Hall. Preece, J., Rogers, Y., & Sharp, H. (2002). Interaction design: Beyond human-computer interaction. New York: John Wiley & Sons. Project Management Institute. (2004). A guide to the project management body of knowledge (3rd ed.). Newtown Square, PA: Project Management Institute. Pugh, K. (2006). Interface oriented design: With patterns. Cambridge, MA: O’Reilly Media. Rasmussen, J., Pejtersen, A. M., & Goodstein, L. P. (1994). Cognitive systems engineering. New York: John Wiley & Sons. Rowland, G. (2004). Shall we dance: A design epistemology for organizational learning and performance. Educational Technology Research and Development, 52(1), 33-48. Rubin, J. (1994). Handbook of usability testing: How to plan, design, and conduct effective tests. New York: John Wiley & Sons. Saffer, D. (2006). Designing for interaction: Creating smart applications and clever devices. Berkeley, CA: Peachpit Press (Pearson Education). Salvendy, G. (2006). Handbook of human factors and ergonomics. Hoboken, NJ: John Wiley & Sons.
Screven, C. G. (1999). Information design in informal settings: Museums and other public spaces. In R. Jacobson (Ed.), Information design (pp. 131-192). Cambridge, MA: MIT Press. Skelton, T. M., & Thamhain, H. J. (2005). Managing risk in new product development projects: Beyond analytical methods. Proceedings of the 19th World Congress on Project Management, New Delhi, India. Stanton, N., Hedge, A., Brookhuis, K., Salas, E., & Hendrick, H. W. (2004). Handbook of human factors and ergonomics methods. Boca Raton, FL: CRC Press. Stanton, N. A., & Young, M. S. (1999). Guide to methodology in ergonomics: Designing for human use. New York: Routledge. Stone, D., Jarrett, C., Woodroffe, M., & Minocha, S. (2005). User interface design and evaluation. San Francisco: Morgan Kaufman. Thamhain, H. (2005). Management of technology: Managing effectively in technology-intensive organizations. Hoboken, NJ: John Wiley & Sons. Tidwell, J. (2005). Designing interfaces. Cambridge, MA: O’Reilly Media. UsabilityNet. (2006). International standards for HCI and usability. Retrieved March 2, 2007, from http://www.usabilitynet.org/tools/r_international.htm Venturi, G., & Troost, J. (2004). Survey on the UCD integration in the industry. Proceedings of the Third Nordic Conference on Human-Computer Interaction (pp. 449-452). Vicente, K. J. (1999). Cognitive work analysis: Toward safe, productive, and healthy computerbased work. Mahwah, NJ: Lawrence Erlbaum Associates. Vicente, K. J. (2006). The human factor: Revolutionizing the way people live with technology. New York: Routledge.
Human-Factors Design for Public Information Technology
Ware, C. (2004). Information visualization: Perception for design. San Francisco: Morgan Kaufman (Elsevier). Wickens, C. D., Lee, J. D., Liu, Y., & Becker, S. G. (2004). An introduction to human factors engineering (2nd ed.). Upper Saddle River, NJ: Pearson Education.
furthEr rEading Anders, P. (1999). Envisioning cyberspace: Designing 3D electronic spaces. New York: McGraw-Hill. Baudisch, P., & Rosenholtz, R. (2003). Halo: A technique for visualizing off-screen objects. Proceedings of SIGCHI Conference on Human Factors in Computing Systems, CHI, 5(1), 481488. Bennet, A., & Bennet, D. (2004). Organizational survival in the new world: The Intelligent Complex Adaptive System. A new theory of the firm. Boston: Knowledge Management Consortium International Press, Butterworth-Heinemann (Elsevier). Bias, R. G., & Mayhew, D. J. (Eds.). (2005). Costjustifying usability: An update for the Internet age (2nd ed.). San Francisco: Morgan Kaufmann (Elsevier). Carté, P., & Fox, C. (2004). Bridging the culture gap: A practical guide to international business communication. London: Canning (Kogan Page Ltd.). Catani, M. B., & Biers, D. W. (1998). Usability evaluation and prototype fidelity: Users and usability professionals. Proceedings of the Human Factors and Ergonomics Society Annual Meeting (pp. 1331-1335). Chauhan, V. (2006). Usability professionals: You’ve come a long way, baby! Interactions, 13(2), 14-17.
Desurvire, H. (1994). Faster, cheaper! Are usability inspection methods as effective as empirical testing? In J. Nielsen & R. L. Mack (Eds.), Usability inspection methods (pp. 173-202). New York: John Wiley & Sons. Esselink, B. (2000). A practical guide to localization. Philadelphia: John Benjamins. Gray, W. D., & Salzman, M. C. (1998). Damaged merchandise? A review of experiments that compare usability evaluation methods. Human Computer Interaction, 13(3), 203-261. Hertzum, M., & Jacobsen, N. E. (2001). The evaluator effect: A chilling fact about usability evaluation. International Journal of HumanComputer Interaction, 13(4), 421-444. Hertzum, M., Jacobsen, N. E., & Molich, R. (2002). Usability inspections by groups of specialists: Perceived agreement in spite of disparate observations. Proceedings of the Association for Computing Machinery Special Interest Group for Computer-Human Interaction 2002 (pp. 662-663). Jacobsen, N. E., Hertzum, M., & John, B. E. (1998). The evaluator effect in usability studies: Problem detection and severity judgments. Proceedings of the Human Factors and Ergonomics Society (pp. 1336-1340). Kessner, M., Wood, J., Dillon, R. F., & West, R. L. (2001). On the reliability of usability testing. Proceedings of the Association for Computing Machinery Special Interest Group for ComputerHuman Interaction 2001 (pp. 97-98). Kristof, R., & Satran, A. (1995). Interactivity by design: Creating and communicating with new media. Mountain View, CA: Adobe Press. Krug, G. (2005). Communication, technology and cultural change. London: Sage Publications Ltd.
Human-Factors Design for Public Information Technology
Landay, J. A., & Myers, B. A. (2001). Sketching interfaces: Toward more human interface design. Computer, 34(3), 56-64. Law, E. L.-C., & Hvannberg, E. T. (2004). Analysis of combinatorial user effect in international usability tests. Proceedings of the Association for Computing Machinery Special Interest Group for Computer-Human Interaction 2004 (pp. 9-16). Massey, A. P., Yu-Ting, C. H., Montoya-Weiss, M., & Ramesh, V. (2001). When culture and style aren’t about clothes: Perceptions of task-technology “fit” in global virtual teams. Proceedings of the ACM Conference on Group Computing (pp. 207-213). Molich, R., Bevan, N., Curson, I., Butler, S., Kindlund, E., Miller, D., et al. (1998). Comparative evaluation of usability tests. Proceedings of the Usability Professionals’ Association. Molich, R., Meghan, R. E., Kaasgaard, K., & Karyukin, B. (2004). Comparative usability evaluation (CUE-2). Behaviour & Information Technology, 23(1), 65-74. Moore, S., & Seymour, M. (2005). Global technology and corporate crisis: Strategies, planning and communication in the information age. New York: Routledge (Taylor & Francis Group). Myers, B. A., Nichols, J., Wobbrock, J. O., & Miller, R. C. (2004). Taking handheld devices to the next level. Computer, 37(12), 36-43. Nielsen, J. (1994). Heuristic evaluation. In J. Nielsen & R. L. Mack (Eds.), Usability inspection methods. New York: John Wiley & Sons. Nielsen, J. (2000). The unbearable lightness of Web design: A talk with Jakob Nielsen. In K. Kaasgaard (Ed.), Software design and usability (pp. 45-76). Copenhagen, Denmark: Copenhagen Business School Press. Norman, D. A. (2002). The design of everyday things. New York: Basic Books.
0
Singh, N., & Pereira, A. (2005). The culturally customized Web site. Burlington, MA: Elsevier. Snyder, C. (2003). Paper prototyping: The fast and easy way to design and refine user interfaces. San Francisco: Morgan Kaufmann. Thamhain, H. J. (1996). Best practices for controlling technology-based products. Project Management Journal, 27(4), 37-48. Vogel, D., & Balakrishnan, R. (2004, October). Interactive public ambient displays: Transitioning from implicit to explicit, public to personal. Proceedings of Interaction with Multiple Users, UIST ’04, Santa Fe, NM. Weiss, S. (2002). Handheld usability. West Sussex, England: John Wiley & Sons, Ltd. Whitehouse, R. (1999). The uniqueness of individual perception. In R. Jacobson (Ed.), Information design. Cambridge, MA: MIT Press. Wilson, C. (1999). Severity scale for classifying usability problems. Usability Interface, 5(4).
Terms and Definitions

Computer-Human Interface Design (CHI): Computer-human interface design consists of the effective functional synthesis and operational integration of three overlapping subdesign domains: information design, interaction design, and media design (see collateral definitions in this "Key Terms" section). Each of these three layers must be clearly defined, represented, and manifested in the user interface design solution. These three subdesign realms and components are synthesized holistically by the conceptual usability glue of the principles of universal design. User interface affordances can be evaluated through usability heuristics and the degree to which these principles have been effectively applied and realized in the user experience, providing a coherent, logically consistent, and robust (i.e., intuitive, accessible, easy to learn and use) systems design.
Human-Factors Design for Public Information Technology
Human-Factors Engineering: Human factors is an emerging applied design field that entails a synthesis of cross-disciplines from classical ergonomics (i.e., the science of work, system-person interaction, and functional and operational performance needs) and newer research gleaned from the allied fields of cognitive science, human physiological psychology, perception, learning, memory and brain-behavior science, interaction and interactivity design, product design, media design, communication design, and information design.

Information Design: Information design means understanding and clearly mapping the full scope, sequence, concepts, principles, examples, and underlying inheritance structure of the declarative and procedural facets of the content domain (i.e., the infosphere). Information design is essentially concerned with understanding the purpose, organization, context, and interrelationships within a knowledge domain: the envisioning of information to be communicated.

Interactivity Design: Interactivity design means understanding the affordances of the computer-human interrelationship from the user's perceptual and cognitive experience, and identifying and specifying the dynamic time-space transactions between the user, information, and media elements, creating a graphical user-environment interface to navigate within that is intuitive, comprehensible, robust, and engaging. The core artifact of interactivity design is the user interface. Principal ingredients of the interface design are signage, cueing, mnemonics, style and layout conventions, and the ambient conceptual metaphor.

Media Design: Media design involves the physical, functional, and operational manifestation of human-factors design. Media design is the tangible, concrete, tactile-audio-visual-sensory-motor experiential front end of the human-technology system. It requires an understanding of the multisensory nature of the user's experience and the application of human learning, memory, messaging, perception, and cognition to produce effective, aesthetic multiple media that provide cognitive, perceptual, and physical affordances to improve human-machine system communications for specific audiences and organizational requirements.

Usability: The degree to which human-technological systems, artifacts, and products are appropriately and efficiently designed for the user (i.e., ease of use) is the indication of that product's usability. Various heuristics and criteria can provide an objective, empirical basis against which to measure and evaluate the level and degree of design efficiency corresponding to the key construct of usability.

Usage-Centered Design: Usage-centered design focuses primarily on the functional, goal-based behavior of users and on the case-based structuring of activities, procedures, processes, operational needs, tools, and corresponding affordances to optimize the effectiveness of the user in efficiently accomplishing those work goals and requirements.

User-Centered or User-Experience Design (UCD or UXD): User-centered design focuses on constructing a user experience and environment with physical and virtual affordances that are manipulable, controllable, customizable, and adaptable from the essential perspective of the conceptual model of the user. This means both (a) the user's own internal metamodel of their goal-directed processes, activities, and contextual (i.e., sociopsychological and physical) environment, and (b) the designer's representational model of the user-activity-environmental experience, with the former driving and superseding the latter in the design solution. Thus, the conceptual model of the user becomes the superordinate principle guiding the design process.
Chapter LX
An Overview of IT Outsourcing in Public-Sector Agencies Anne C. Rouse Deakin Business School, Deakin University, Australia
Introduction

For the past 15 years, governments in the developed Western world have been contracting out, or outsourcing, services as a key part of public-sector reforms. Outsourcing has been argued to lead to cost savings, improved discipline, better services, access to scarce skills, and the capacity for managers to focus more time on the core business of their organizations (Domberger, 1998). Government outsourcing initiatives have encompassed a range of services, but given the large sums of money invested in IT assets, the outsourcing of IT services (IT outsourcing, or ITO) has been a major initiative for many agencies. Lacity and Willcocks (1998, p. 3) defined ITO as "handing over to a third party [the] management of IS/IT assets, resources and/or activities for required results." For public-sector outsourcing, this handover is usually made by way of a competitive tender. Case studies have reported ITO successes and failures (e.g., Currie & Willcocks, 1998; Lacity & Willcocks, 2001; Rouse & Corbitt, 2003; Willcocks & Currie, 1997; Willcocks & Kern, 1998), but much of the evidence presented
to public-sector decision makers to justify this reform is anecdotal and unsystematic, and when investigated in depth, does not necessarily support widespread conclusions.
Background

The policy promises associated with contracting out government services are part of a broader movement toward the privatization of public-sector services. Osborne and Gaebler (1993) in their influential book Reinventing Government argued that governments should "steer, not row the boat" (p. 25); in other words, they should ensure that services are provided to the public, not necessarily provide the services themselves. This suggestion resonated with governments around the globe that were anxious to reduce public expenditure and risks. The move to outsource was also a response to arguments that through economies of scale, scope, and specialization, private-sector vendors could deliver outsourced services at a lower cost than governments themselves (Domberger, 1998). Initial forays into
public-sector outsourcing involved relatively simple and straightforward services (like garbage collection or hospital cleaning), which were easy to specify and to measure. However, a growing vendor market, improvements in communications technologies, and emerging skills shortages that raised the cost of IT labor (and threatened salary relativities within public-sector agencies) were drivers for the move to outsource IT (Hodge & Rouse, 2006). Both project-based services (like the development of a new system) and routine support services (such as desktop and mainframe support, and the maintenance of legacy systems) became candidates for outsourcing. It is important to distinguish outsourcing from another key public-sector reform: privatization. While both may involve handing over public assets to the private sector, privatization is a once-off, irreversible sale of a state-owned asset. Governments generally retain some regulatory control over the provision of the privatized service, but no governance control and no operating risk (Jensen & Stonecash, 2005). In contrast, outsourcing is contracted for a specific period, after which governments might select an alternative vendor, or even return to providing the service in-house. The latter is theoretically possible, though in practice it is rarely contemplated because of the financial costs and organizational disruption associated with re-insourcing. With outsourcing, governments retain responsibility for governance of the outsourced function and for specifying what is required, while allowing the vendor to decide how to provide it. This means that, in practice, governments retain most of the risk associated with the outsourced function.
Empirical Evidence About Government Outsourcing Outcomes

It is not yet clear whether the outsourcing of government services (including ITO) has delivered on
the theoretical promise. Anecdotal case studies of success have been reported (e.g., Savas, 2000), but many of these considered only preliminary experiences. They may also be atypical. Before the success of outsourcing can be established, the nature of success needs to be defined. This depends on what was expected from the strategy in the first place (Parasuraman & Grewal, 2000). Expectations for complex services with a substantial impact on organizational performance are multifaceted and go beyond simple cost comparisons. Because of this multifaceted nature, outsourcing usually results in mixed outcomes, as is illustrated by the cases cited in the introduction. Rouse's (2006) survey of 240 public- and private-sector purchasers found that while IT outsourcing provided access to scarce skills and high levels of technical service quality, it failed to provide substantial cost savings; according to marketing theory (e.g., Parasuraman & Grewal, 2000), this failure to meet stated or unstated expectations produced generally low levels of overall satisfaction. Another problem with determining success is that while outsourcing is known to be risky (Aubert, Patry, & Rivard, 2002; Gewald, Wüllenweber, & Weitzel, 2006), until potential downsides are encountered, decision makers may fail to recognize the level of risk. They may then perceive the arrangement to be more successful than it really is. According to Lacity and Willcocks (1998), perceptions of outsourcing success diminish the longer the arrangement lasts, as many costs do not become apparent for some time. Purchasers risk vendor lock-in (Wikipedia, 2006), where they find that they have no choice but to continue with an existing vendor because of high switching costs, or because few (or no) alternative bidders can be found.
Another important risk, heightened for public-sector outsourcing because of public expectations, involves threats to the privacy and confidentiality of citizens’ records that are handed over to vendors. Increasingly in
Europe, Australasia, and some U.S. states, privacy legislation is raising the stakes in relation to such records, and recent data protection failures by outsourced vendors (such as CardSystems in the United States) have increased community concerns about this issue. Outsourcing success is often defined in terms of whether costs have been saved compared to those of in-house delivery. Lacity and Willcocks (1998) reported that this reason for outsourcing IT appeared in 80% of the cases they studied. However, establishing whether costs are reduced when complex services (like IT) are outsourced is a technically complex and expensive exercise (Rouse & Corbitt, 2003). Transaction cost theory (Williamson, 1979) notes that two forms of costs are involved: production and transaction costs. Production costs (sometimes labeled transformation costs) are the direct costs of production, that is, of transforming inputs into outputs. Transaction costs are the costs of dealing with the marketplace, and include the costs of finding, contracting with, monitoring, and controlling the activities of the supplier. In general, with outsourcing, production costs are reduced because of competition, while transaction costs are often increased (Ang & Straub, 1998), as are the notional costs of bearing the risks of the strategy: risk exposure costs. Very few studies of outsourcing have considered transaction costs as well as production costs when exploring savings (Hodge, 2000). Added to these costs must be the costs of absorbing risks (risk exposure), which are normally calculated as the probability of each downside occurring multiplied by the cost of that downside (Aubert, Dussault, Patry, & Rivard, 1999). According to Rouse and Corbitt (2003), the risk exposures for complex outsourcing (like ITO) are much higher than generally recognized.
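The cost framework described above can be made concrete with a simple expected-value calculation. The following Python sketch compares notional in-house and outsourced delivery costs; all figures, risk probabilities, and downside costs are hypothetical, chosen only to show how apparent production-cost savings can be eroded once transaction costs and risk exposure are included.

```python
# Illustrative sketch of the transaction-cost comparison described above.
# All dollar figures and probabilities are hypothetical.

def risk_exposure(downsides):
    """Expected cost of risk: sum of (probability x cost) for each downside."""
    return sum(p * cost for p, cost in downsides)

def total_cost(production, transaction, downsides):
    """Total comparable cost = production + transaction + risk exposure."""
    return production + transaction + risk_exposure(downsides)

# In-house delivery: higher production cost, minimal transaction cost.
in_house = total_cost(
    production=10_000_000,
    transaction=200_000,
    downsides=[(0.10, 1_000_000)],  # e.g., key-staff turnover
)

# Outsourced delivery: lower production cost through competition, but
# higher transaction costs (tendering, monitoring) and extra downsides.
outsourced = total_cost(
    production=8_500_000,
    transaction=1_200_000,
    downsides=[
        (0.20, 2_000_000),  # vendor lock-in / switching costs
        (0.05, 5_000_000),  # data-protection failure
    ],
)

print(f"In-house:   ${in_house:,.0f}")    # $10,300,000
print(f"Outsourced: ${outsourced:,.0f}")  # $10,350,000
```

With these illustrative numbers, a 15% production-cost saving is fully offset by transaction costs and risk exposure, which is consistent with the chapter's point that business cases that compare production costs alone can badly misstate the likely outcome.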
Empirical Evidence for Outsourcing Success

Public-Sector Outsourcing

Even when only production cost savings are considered, there are mixed findings about government outsourcing (of IT and other services). The Australian Industry Commission (1996) reviewed 203 international studies of government outsourcing and concluded that the extent of savings varied widely, and that there was evidence of cost increases following outsourcing in some cases. When reviewing quantitative studies of government outsourcing using the technique of meta-analysis, Hodge (2000) established that some types of services achieved much better cost savings than others. He calculated that average savings of between 6% and 12% were obtained (this contrasts strongly with the 20% to 30% anecdotally quoted by outsourcing proponents). Hodge found evidence for substantial production cost savings (of between 19% and 30%) in the areas of garbage collection, and cleaning and maintenance services. Such simple services are easy to define and measure, and involve relatively unskilled staff with little bargaining power. However, Hodge found limited research into the cost outcomes for complex services (like IT), and in the few studies he found, average cost estimates ranged from an 8% saving to a 24% increase.
IT Outsourcing

Despite the rapid growth of ITO in the public and private sectors, there is still no consensus in the academic literature on the extent of cost savings from contracting out IT services. There is also limited literature on whether government IT outsourcing achieves better or worse outcomes than those achieved by the private sector. In contrast to the relatively easy-to-measure services reported above, for IT services the technical complexity of the tasks carried out by staff is high, the professional language of solutions is obscure, and the rate of change in development is substantial. Establishing cost savings in these circumstances is difficult. Possibly because of these difficulties, most research into ITO has involved single cases studied at one point in time, which may not be representative of wider experiences. There are few survey-based studies of ITO outcomes, and of these, even fewer have targeted public-sector agencies. Little research has differentiated between the once-off, bespoke development of a new system on the one hand, and the ongoing delivery of IT services on the other. The notion of savings in the former case is fuzzy because, since the move to post-Internet systems architectures, few public-sector agencies have the in-house skills to develop applications at the level of complexity required for government administration. Thus, comparisons of in-house vs. outsourced delivery become impossible. Furthermore, the bespoke nature of systems development means that each system's costs are unique. When the costs of ongoing IT services are compared, the rate of technological change in IT means it is rare to be able to compare in-house and external delivery costs with any precision, even when only production costs are considered. Rouse and Corbitt's (2003) longitudinal study of the Australian government's IT outsourcing established that it is relatively easy to get the business-case cost projections wrong; they reported that inaccurate cost projections (particularly the costs of risks) were a key reason the Australian government was forced to abandon its IT outsourcing initiative. As with other forms of outsourcing, case study research into ITO has revealed widely varying outcomes. Willcocks, Lacity, and Fitzgerald (1997) reported that of 41 cases of outsourcing, 23 (56%) achieved some cost savings, with savings being mixed in five instances (12%) and not being achieved in 13 cases (31%).
Savings could not be determined in eight further cases. In a study of
7,500 government outsourcing contracts published by CTC Consultants (1999), Simon Domberger, a strong proponent of government outsourcing, reported an average increase in costs of 8.6% when IT services were outsourced. Aubert, Patry, and Rivard (1999), in their longitudinal study of Canadian IT outsourcing, found that costs rose in 49% of firms. Willcocks and Currie (1997) also noted that in some cases government outsourcing costs rose, in part because the private sector often paid higher salaries than the public sector. This was also the explanation Domberger gave (as cited in CTC Consultants, 1999) for the generally poor ITO outcomes he observed. The study by Rouse and Hodge (in press) statistically contrasted public-sector and private-sector agencies' ITO experiences. These authors analyzed a survey of 240 IT directors and CIOs (chief information officers) of Australia's largest public- and private-sector firms. Just over a third of respondents reported unequivocal satisfaction; however, only a minority reported cost savings, and only a small minority (7%) reported substantial cost savings. A sizeable minority (22%) reported cost increases from ITO. Rouse and Hodge investigated 27 individual outcome measures in that survey (including cost savings) and found that for the large majority of outsourcing outcomes, no statistical difference existed in perceived levels of performance between public- and private-sector firms. There were no statistically significant differences for the key success measures: technical benefits, strategic benefits, technical service quality, cost savings, business flexibility, economies of scale, and access to skilled personnel. Nor were there statistical differences in outsourcing satisfaction or perceptions of overall value: Government respondents reported the same relatively poor outcomes as did those in nongovernment organizations.
Future Trends
Future Research Directions
In view of the relatively dismal outcomes of large-scale outsourcing of IT by governments, many large, single-vendor contracts have not been renewed. Typically, governments now break apart the contract and deal with different suppliers for different IT services, so the prevailing outsourcing arrangement involves a series of selective or best-of-breed contracts. This strategy appears to be a reaction to academic advice that selective outsourcing is less risky (e.g., Lacity & Willcocks, 2001), though empirical studies have failed to confirm this proposition. A larger number of selective contracts is argued to increase transaction costs (Currie & Willcocks, 1998), and future research should include such costs in cost analyses. Future research should also explore the comparisons that can be used to evaluate cost savings, because once an agency has outsourced, it soon becomes impossible to compare in-house and outsourced costs. Research should also continue to explore the reasons many agencies fail to reap the theoretical benefits of outsourcing. There is an increasing trend in the literature to consider the risks of IT outsourcing (e.g., Aubert et al., 2002; Gewald et al., 2006; Rouse & Corbitt, 2003) and to look at the way perceptions of risk are related to whether decision makers are proponents of outsourcing or not. At this stage, discussions are largely theoretical, but in the future, more quantitative evaluation of risks should be carried out. Evidence from Ang and Straub (1998) suggests that organizations that choose to outsource tend to de-emphasize transaction costs, while those that consider but reject outsourcing tend to focus on them. A similar effect may occur with perceptions of risk, and this too provides a fruitful avenue for future research.
In the 15 years or so since IT outsourcing emerged as an academic topic, new variations have been taken up by the private sector. Two important variations are business process outsourcing (BPO) and offshore outsourcing (offshoring). In practice, because of problems achieving substantial cost savings with onshore outsourcing, there is pressure on firms to consider offshore delivery of both IT and business processes. While growth in the IT outsourcing market had, by 2004, slowed (Gartner, 2005), growth in the new outsourcing forms (offshoring and BPO) is reportedly strong. Consequently, public-sector agencies, too, will be encouraged to explore these options in the continued search for reduced business costs. Offshore outsourcing in particular, while benefiting developing countries and offering the promise of substantial savings, involves significant additional risks for purchasers, so it is likely to be of particular interest to researchers in the future. For public-sector purchasers, the issue of managing the privacy of citizens' records sent to other legal and cultural jurisdictions will be particularly critical. Future studies of outsourcing risk will need to establish typical risk exposures for onshore and offshore outsourcing, and the effectiveness of risk-minimizing strategies. It is also important that research into offshore outsourcing be undertaken by independent academic researchers as well as vendors. A detailed review by Dibbern, Goles, Hirschheim, and Jayatilaka (2004) of the academic outsourcing literature to 2001 illustrated how little systematic quantitative research into outsourcing had been undertaken, and how much of it had concentrated on the reasons behind sourcing decisions rather than on the outcomes. The predominant research methodology continues to be case studies; there have been few theory-testing studies in the literature and hardly any that have statistically tested propositions related to outsourcing practices. At the same time,
An Overview of IT Outsourcing in Public-Sector Agencies
a large number of trade-based books have now been published on how to successfully outsource. These are based on the personal experience of their authors rather than on any empirical research. Given the emphasis on individual and largely exploratory case studies and on practitioner opinion, there are substantial opportunities for future outsourcing research that shifts the emphasis to theory testing. There is a need for research that uses outsourcing outcomes as the dependent variables and uses alternative research methods (including quantitative methods). Such research will allow for disconfirming propositions. In particular, future research should begin to explore which of the many prescriptions offered by researchers and consultants for ensuring successful outsourcing generalize widely to public- and private-sector firms in the community. Researchers are also encouraged to investigate the measurable effects managerial decisions and behaviors have on outsourcing outcomes. There is opportunity for research that systematically examines differences in outcomes and management practices across different types of outsourcing (e.g., BPO and ITO; offshore and onshore outsourcing). There is also opportunity for research that explores experiences in both the private and public sectors, as the latter has not received the same level of attention, despite its widespread use of outsourcing.
Conclusion

IT costs form a large component of government expenditure, and governments around the world have sought to rein in these costs by outsourcing. From the evidence to date, though, achieving cost savings by outsourcing IT is relatively unlikely. Some of the other strategic benefits of outsourcing (such as being able to redirect attention to core business) also appear difficult to achieve in real life (Rouse & Corbitt, 2003). Earlier researchers (e.g., Lacity & Hirschheim, 1995) argued that
it is substantially more challenging to manage outsourcing arrangements than is generally recognized, and this observation is supported by the research cited above. It seems that rapidly changing business requirements resulting from dynamic community demands, together with unexpectedly high transaction costs and risks, substantially reduce the theoretical likelihood of success. Given the evidence to date, public-sector agencies need to proceed more cautiously down the road of IT outsourcing, ensuring that their cost projections include both transaction and production costs, and making financial provisions for the high levels of risks that appear to be involved in contracting out complex services.
References

Ang, S., & Straub, D. W. (1998). Production and transaction economies and IS outsourcing: A study of the US banking industry. MIS Quarterly, 22(4), 535-552.

Aubert, B., Dussault, S., Patry, M., & Rivard, M. (1999). Managing the risks of IT outsourcing. Proceedings of the 32nd Hawaii International Conference on System Sciences, HI.

Aubert, B., Patry, M., & Rivard, S. (1999). L'impartation des services informatique au Canada: Une comparaison 1993-1997. In M. Poitevin (Ed.), Impartition: Fondements et analyses (pp. 202-220). Montreal, Canada: University of Laval Press.

Aubert, B., Patry, M., & Rivard, S. (2002). Managing IT outsourcing risk: Lessons learned. In R. Hirschheim, A. Heinzl, & J. Dibbern (Eds.), Information systems outsourcing: Enduring themes, emergent patterns and future directions (pp. 155-176). Berlin, Germany: Springer.

Australian Industry Commission. (1996). Competitive tendering and contracting by public sector agencies: Report No. 48. Melbourne, Australia: AGPS.
CTC Consultants. (1999). Government outsourcing: What has been learnt? Sydney, Australia: CTC Consultants.

Currie, W., & Willcocks, L. (1998). Analysing four types of IT sourcing decisions in the context of scale, client/supplier interdependency and risk mitigation. Information Systems Journal, 8(2), 119-144.

Dibbern, J., Goles, T., Hirschheim, R., & Jayatilaka, B. (2004). Information systems outsourcing: A survey and analysis of the literature. ACM SIGMIS Database, 35(4), 6-102.

Domberger, S. (1998). The contracting organization: A strategic guide to outsourcing. Oxford: Oxford University Press.

Gartner. (2005). Outsourcing drives IT services growth. Retrieved from http://www.gartner.com/5_about/press_releases/pr2004.jsp

Gewald, H., Wüllenweber, K., & Weitzel, T. (2006). The influence of perceived risks on banking managers' intention to outsource business processes: A study of the German banking and finance industry. Journal of Electronic Commerce Research, 7(2), 78-96.

Hodge, G. A. (2000). Privatization: An international review of performance. Boulder, CO: Westview Press.

Hodge, G. A., & Rouse, A. C. (2006). Outsourcing government information technology services: An Australian case study. In G. Boyne, K. Meier, L. O'Toole, Jr., & R. Walker (Eds.), Public service performance: Perspectives on measurement and management. London: Palgrave Macmillan.

Jensen, P. H., & Stonecash, R. E. (2005). Incentives and the efficiency of public sector outsourcing contracts. Journal of Economic Surveys, 19(5), 767-787.

Lacity, M. C., & Hirschheim, R. (1995). Beyond the information systems outsourcing bandwagon: The insourcing response. New York: Wiley.
Lacity, M. C., & Willcocks, L. (1998). An empirical investigation of information technology sourcing practices: Lessons from experience. MIS Quarterly, 22(3), 363-408.

Lacity, M., & Willcocks, L. (2001). Inside mega contracts: South Australian and Dupont. In M. Lacity & L. Willcocks (Eds.), Global IT outsourcing: In search of business advantage (pp. 40-88). New York: Wiley.

Osborne, D., & Gaebler, T. (1993). Reinventing government: How the entrepreneurial spirit is transforming the public sector. Reading, MA: Addison-Wesley Publishing.

Parasuraman, A., & Grewal, D. (2000). Serving customers and consumers effectively in the 21st century: A conceptual framework and overview. Journal of the Academy of Marketing Science, 28(1), 9-16.

Rouse, A. C. (2006). Explaining I.T. outsourcing purchasers' dissatisfaction. Proceedings of the 10th Pacific Asia Conference on Information Systems (PACIS), Kuala Lumpur, Malaysia.

Rouse, A. C., & Corbitt, B. (2003). The Australian government's abandoned infrastructure outsourcing program: What can be learned? Australian Journal of Information Systems, 10(2), 81-90.

Savas, E. S. (2000). Privatization and public-private partnerships. NJ: Chatham House Press.

Wikipedia. (2006). Vendor lock in. Retrieved from http://en.wikipedia.org/wiki/Lock_in

Willcocks, L., & Currie, W. L. (1997). Information technology in the public services: Towards the contractual organization? British Journal of Management, 107-120.

Willcocks, L., & Lacity, M. C. (1998). Strategic sourcing of information systems: Perspectives and practices. Chichester, United Kingdom: Wiley.

Willcocks, L., Lacity, M. C., & Fitzgerald, G. (1995). Information technology outsourcing in Europe and the USA: Assessment issues. International Journal of Information Management, 15(5), 333-351.

Willcocks, L., Lacity, M., & Fitzgerald, D. (1997). IT outsourcing in Europe and the USA: Assessment issues. In L. Willcocks, D. Feeny, & G. Islei (Eds.), Managing IT as a strategic resource (pp. 306-358). New York: McGraw Hill.

Willcocks, L. P., & Kern, T. (1998). IT outsourcing as strategic partnering: The case of the UK Inland Revenue. European Journal of Information Systems, 7(1), 29-45.

Williamson, O. E. (1979). Transaction cost economics: The governance of contractual relationships. Journal of Law and Economics, 22, 233-261.
Further Reading

Bahli, B., & Rivard, S. (2005). Validating measures of information technology outsourcing risk factors. Omega, 33(2), 175-187.

Carmel, E., & Tjia, P. (2005). Offshoring information technology: Sourcing and outsourcing to a global workforce. New York: Cambridge University Press.

Earl, M. J. (1996). The risks of outsourcing. Sloan Management Review, 37(3), 26-32.

Grover, V., Cheon, M. J., & Teng, J. T. C. (1996). The effect of service quality and partnership on the outsourcing of information systems functions. Journal of Management Information Systems, 12(4), 89-116.

Halvey, J. K., & Melby, B. M. (2000). Business process outsourcing: Processes, strategies and risks. Hoboken, NJ: Wiley.

Halvey, J. K., & Melby, B. M. (2005). Information technology outsourcing transactions: Process, strategies, and contracts. Hoboken, NJ: Wiley.

Hirschheim, R., Heinzl, A., & Dibbern, J. (2002). Information systems outsourcing in the new economy: Enduring themes, emergent patterns and global challenges. Berlin, Germany: Springer-Verlag.

Hirschheim, R., Heinzl, A., & Dibbern, J. (2006). Information systems outsourcing: Enduring themes, new perspectives and global challenges (2nd ed.). Berlin, Germany: Springer-Verlag.

Kern, T., & Willcocks, L. P. (2000). Contract, control and presentiation in IT outsourcing: Research in thirteen UK organizations. Journal of Global Information Management, 8(4), 15-29.

Kern, T., & Willcocks, L. P. (2002). The relationship advantage: Information technologies, sourcing, and management. Oxford: Oxford University Press.

Lacity, M. C., & Hirschheim, R. (1993). Information systems outsourcing: Myths, metaphors and realities. Chichester, England: Wiley.

Lacity, M. C., Willcocks, L., & Feeny, D. F. (1995). Information technology outsourcing: Maximising flexibility and control. Harvard Business Review, 84-93.

Lacity, M., Feeny, D., & Willcocks, L. (2004). Commercializing the back office at Lloyd's of London: Outsourcing and strategic partnerships revisited. European Management Journal, 22(2), 127-140.

Lee, J. N., Huynh, M. A., Kwok, R. C., & Pi, S. M. (2003). IT outsourcing evolution: Past, present, and future. Communications of the ACM, 46(5), 84-89.

Lee, J.-N., & Kim, Y.-G. (1999). Effect of partnership quality on IS outsourcing: Conceptual framework and empirical validation. Journal of Management Information Systems, 15(4), 29-61.

Lee, J.-N., Miranda, S. M., & Kim, Y.-M. (2004). IT outsourcing strategies: Universalistic, contingency and configurational explanations of success. Information Systems Research, 15(2), 110-131.

Looney, J. A. (1998). Outsourcing state and local government services: Decision-making strategies and management methods. Westport, CT: Quorum Books.

Rouse, A. C., & Corbitt, B. J. (2003). Minimising risks in IT outsourcing: Choosing target services. Proceedings of the Seventh Pacific Asia Conference on Information Systems, Adelaide, South Australia.

Rouse, A. C., & Corbitt, B. J. (2006a). Analysis of a large-scale IT outsourcing failure: What lessons can we learn? In M. Khosrow-Pour (Ed.), Outsourcing and offshoring in the 21st century: A socio-economic perspective. Idea Group Publishing.

Rouse, A. C., & Corbitt, B. J. (2006b). Business process outsourcing. In R. Hirschheim, A. Heinzl, & J. Dibbern (Eds.), Information systems outsourcing: Enduring themes, emergent patterns and global challenges (2nd ed.). Berlin, Germany: Springer-Verlag.

Rouse, A. C., & Hodge, G. A. (2006). Rethinking risk: A strategy for improving public sector sourcing performance. Paper presented at the Determinants of Performance in Public Organization International Conference, Hong Kong, China.

Saunders, C., Gebelt, M., & Hu, Q. A. (1997). Achieving success in information systems outsourcing. California Management Review, 39(2), 63-79.

Timbrell, G., Hirschheim, R., Gable, G. G., & Underwood, A. (1998). Government IT and T insourcing/outsourcing: A model and guidelines. Proceedings of the 9th Australasian Conference on Information Systems. Retrieved from http://eprints.qut.edu.au/archive/00004491/01/4491.pdf

Walker, B., & Walker, B. C. (2000). Privatisation: Sell off or sell out? Sydney, Australia: ABC Books.

Williamson, O. E., & Masten, S. E. (1999). The economics of transaction costs: Elgar critical writings reader. London: Edward Elgar Publishing.
Terms and Definitions

Business Process: A business process is a set of interrelated organizational activities performed by a number of individuals with the goal of generating customer value.

Business Process Outsourcing (BPO): BPO is the outsourcing of relatively complex business processes or activities that are supported by information technologies.

IT Outsourcing (ITO): ITO is the outsourcing of IT services. The term is usually used in contrast to business process outsourcing, where the business function or process (including the IT that supports it) is outsourced. ITO includes the outsourcing of unique, once-off systems development projects, as well as the outsourcing of ongoing IT services, such as mainframe hosting, desktop support, telecommunications installation and maintenance, or maintenance and support of legacy systems.

Offshoring: This is offshore outsourcing or cross-national outsourcing, where the vendor and client operate in different countries.

Outsourcing: Outsourcing is the provision, at an agreed price, of specified services by an external vendor that is contracted to manage the day-to-day activities (and related assets and resources) so as to meet agreed performance and quality standards. Outsourcing involves specifying what will be done rather than how it will be done.
Privatization: Privatization is the conversion of a government-owned enterprise to private ownership and operation. The theory behind privatization is that private enterprises run more effectively and offer better service.

Production Costs: Production costs are the costs of the processes involved in creating and distributing goods or services.

Risk: Risk is the potential harmful or undesirable consequences (downsides) that might arise in the future from a decision or course of action. Examples in the context of ITO include the release of confidential data, loss of organizational knowledge, and reduced business flexibility.
Risk Exposure: Risk exposure is the likelihood or probability of an undesirable future event, multiplied by the magnitude (e.g., costs) of the consequences of the event:

RE = Probability(event) × Consequences(event)

Transaction Costs: Transaction costs are the costs of contracting with a vendor through the marketplace in contrast to coordinating and managing service provision in-house (i.e., through the hierarchy). Key costs include finding, choosing, contracting with, monitoring, and controlling the work of the vendor, as well as coordinating the vendor's activities with others being carried out by the purchaser.
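To make the risk-exposure formula above concrete, the calculation can be sketched as follows. This is a hypothetical illustration; the probabilities and dollar figures are invented, and the function name is not drawn from the chapter.

```python
# Risk exposure as defined above: RE = Probability(event) x Consequences(event).
# The two scenarios below use invented figures for illustration only.

def risk_exposure(probability: float, consequence_cost: float) -> float:
    """Expected loss from an undesirable event: its probability times its cost."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must lie between 0 and 1")
    return probability * consequence_cost

# A rare but severe event can carry the same exposure as a likely, cheaper one.
breach = risk_exposure(0.02, 5_000_000)  # 2% chance of a $5M confidentiality breach
overrun = risk_exposure(0.50, 200_000)   # 50% chance of a $200K schedule overrun

print(breach, overrun)  # 100000.0 100000.0
```

The point of the comparison is that a low-probability, high-consequence event (such as the release of citizens' records offshore) can carry the same exposure as a routine, high-probability risk, which is why both terms of the formula matter in sourcing decisions.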
Chapter LXI
E-Health, Local Governance, and Public-Private Partnering in Ontario

Jeffrey Roy
Dalhousie University, Canada
Abstract

The purpose of this chapter is to undertake a critical examination of the emergence of e-health in the Canadian Province of Ontario. More than solely a technological challenge, the emergence and pursuit of e-health denote a complex governance transformation both within the province's public sector and in terms of public-private partnering. The Ontario challenge here is complicated by the absence of formal regional mechanisms devoted to health care, a deficiency that has precipitated the creation of Local Health Integration Networks (LHINs) to foster e-health strategies on a subprovincial basis, as well as ongoing difficulties in managing public information technologies. With respect to public-private partnering, a greater regionalization of decision-making and spending authorities, within transparent and locally accountable governance forums, could provide incentives for the private sector to work more directly subprovincially, enjoying greater degrees
of freedom for collaboration via more manageable contracting arrangements.
Introduction

The purpose of this chapter is to undertake a critical examination of the emergence of electronic health (e-health) in the Province of Ontario from two interrelated dimensions: first, the emergence of Local Health Integration Networks as the regionalization vehicle for governance reform within a province-wide health care system, and second, the usage of public-private partnering to pursue the realization of e-health mechanisms. There is no more profoundly consequential and complex example of public information technologies being deployed than in the realm of health care organization and delivery. In many respects, the two concepts of e-health and e-government are interrelated and coevolving layers of both organizational and institutional governance processes. As the single largest funding envelope of public-sector activity, particularly at the provincial level, the digital transformation of health care is also highly consequential for the digital transformation of the public sector as a whole (Roy, 2005, 2006). The chapter is organized as follows. Following this introduction, the second section reviews the main contours of e-health. Then we examine the conceptual challenges of public-private-sector partnerships. The fourth section provides an assessment of the e-health experience to date in Ontario from the two aforementioned dimensions of governance change. The fifth section then provides some conclusions as to the lessons learned from this case study.
The Emergence of E-Health¹

More than a mere technical apparatus for providing information, the Internet has also become an associational infrastructure, enabling knowledge and power to be more widely distributed and contested (Courchene, 2005; Paquet, 1997). One specific result is a lessening of tolerance for secrecy as individuals and new forms of associational movements mobilize around specific issues and interests (Dwyer, 2004; Evans, 2002). Governments themselves have not been immune or ignorant to these pressures for reform, responding increasingly with calls for more public participation and citizen engagement (Coleman & Norris, 2005; Oates, 2003; Oliver & Sanders, 2004). The application of a digital and interoperable information infrastructure carries the potential to enable faster and more integrated forms of care for the patient on a scale that could profoundly transform structures and performance (Hurley, Baum, & van Eyk, 2004; Prisma, 2004). Indeed, a centerpiece of e-health is the electronic health record—a basis for revolutionary improvements in information management, patient responsiveness, and service delivery capacities:
An electronic health record (EHR) provides each individual in Canada with a secure and private lifetime record of their key health history and care within the health system. The record is available electronically to authorized health care providers and the individual anywhere, anytime in support of high quality care. The Electronic Health Record Solution is a combination of people, organizational entities, business processes, systems, technology and standards that interact and exchange clinical data to provide high quality and effective healthcare.² This passage usefully underscores the manner by which, as a basis for such change, the introduction of online mechanisms externally and new forms of digital interoperability internally require more than technological design, as benefit realization is dependent on complex and multidimensional reforms (Fountain, 2001; Scholl, 2005). In terms of e-health and the EHR specifically (like many other areas of e-government, but more acutely than most given the sensitivity of information and privacy), offsetting concerns about technical flaws and security glitches, leading to inappropriate access to personal information, are prevalent and of great sensitivity politically (Gath, 2004; Mundy, 2004). With regards to health care organization and service delivery, it is the combination of the EHR, greater interoperability across all segments of the health care system, and telemedicine that comprises the parameters of system-wide transformation. Such transformation includes not only how services are delivered through new organizational channels, but also how power is organized and deployed across the major stakeholder groups of health care, notably governments providing oversight and regulatory direction, delivery bodies such as hospitals and clinics, professional groups such as physicians and nurses, and communities at large (Eng & Beauchamp, 2004). The EHR enables more efficient and citizen-centric decision-making mechanisms through a
much greater ability to share and access information about a patient that is the basis of any diagnosis and treatment. In turn, telemedicine offers the potential to deliver service outcomes in locations, particularly in remote jurisdictions, where actual medical facilities and staff may not be readily available (Demiris, 2004). Permeating both of these components is the need for a digital and organizational infrastructure capable of quickly and securely managing information flows across a complex system to facilitate client-centric outcomes (Patton, 2005). This need for system-wide coordination to introduce such standards underlines a major governance challenge common to e-health and e-government more generally, namely, the need for central coordination in a manner that respects the autonomy and need for flexibility across different components of the health care system. The degree to which those responsible for health care planning and technological design (typically a CIO- [chief information officer] type role) can thus realize interoperability through collaborative means (i.e., orchestrating change as opposed to imposing it in a more hierarchical fashion) is a major determinant of system-wide transformation, and it underscores one reason why the existence and degree of high-level political leadership is so central in this regard (Culbertson, 2005; Dutil, Langford, & Roy, 2005; Langford & Roy, 2006). The expansion of information availability and oversight capacities facilitated by an online world is a momentous force in altering traditional roles and responsibilities in health care. Patients now often engage in self-diagnosis prior to visiting a physician, and while there is undoubtedly a potential benefit stream in more informed and aware clients and users of the health care system, offsetting concerns lie in the accuracy and reliability of the information circulating, particularly in cyberspace (Randeree & Rao, 2004).
There is widening evidence, however, that medical experts face a less deferential and more inquisitive public than in the past (Allsop, 2003).
The same is true for governments holistically in terms of the public’s role as both a voter (in the representational mindset of democracy) and a stakeholder to be engaged in democratic governance more actively. As health care costs continually rise in tandem with growing demands for more sophisticated care, public-sector investments are under more scrutiny as performance reporting becomes more prevalent (Rosenbord, 2003). Similarly, difficult political and policy questions about the rules and reasonable expectations sought in health care are increasingly viewed as areas requiring a more direct form of engagement on the part of the public than merely deferring to the decisions of government managers and political leaders (reflecting the pressures for citizen engagement from a more informed public in a manner not unlike the changing power relations and balance between physicians and patients). More transparency also means that stakeholders working with governments in the organizing of e-health systems will face more scrutiny and pressures for openness. Given e-health’s reliance on a digital infrastructure, and by extension a significant role for the private sector in creating and maintaining this infrastructure, the widening scope and heightened complexity of public-private partnerships are a case in point. The decline in deferential trust accorded to both government and industry is thus a major challenge, one requiring a more direct effort to both inform and engage the citizenry in deliberations about choices, means, and outcomes (Eggers, 2005). A corresponding rise in those determinants of trust that are based more on direct experience and encounters than expertise and deferential authority, coupled with the rise of online connectivity, are also facilitating the creation of new relationships between citizens, activists, and health care users. 
The formation of such communities can be driven by a desire to influence government decision making, or it can be more peer focused, such as support communities for individuals encountering similar health conditions (Josefsson,
2005). Moreover, the level of engagement between citizens as users of health care and the governance processes underpinning planning and decision making is very much intertwined with political legitimacy. A forceful case has been made that the public has not been well utilized as a stakeholder, being more often than not the recipient of communication strategies by health care bodies and political leaders as opposed to genuine engagement and contribution (Abelson & Eyles, 2002). Here there is some convergence between schools of thought from management and governance generally and e-health specifically with respect to the need to widen the involvement and participation of key stakeholder groups in order to effectively align human, organizational, institutional, and technological variables in a coherent manner (Borins, 2004; Fountain, 2004; Woodward, 2003).
Public-Private Partnering

There are two major sets of reasons underpinning the growing prominence of public-private partnerships for the creation and maintenance of various forms of infrastructure, both new and old. They may be summarized as financing and investment on the one hand, and innovation and performance on the other. Both dimensions of seeking to combine and align public and private interests may also be interlinked within the confines of a particular governance undertaking. Regarding the financing of new infrastructure, the attractiveness of leveraging private pools of capital investment is rooted in the context of the new public management that arose during the 1980s, encouraging governments to look to industry for managerial techniques and governance practices. A key factor in this trend had been the view that excessive government bureaucracy leads to an inefficient allocation of resources for the jurisdiction and its governance system as a whole. Two alternative paths of recourse thus include either reforming government internally
or privatizing specific government functions (thereby shifting them from the public to the private realm). What both paths shared was an emphasis on efficiency and greater sensitivity to bottom-line measures of government spending, including accumulated debt and annual deficits. Throughout the 1980s (and in many OECD [Organization for Economic Cooperation and Development] countries continuing into the 1990s), this growing fiscal sensitivity resulted in constraints on public investment perhaps more severe than at any time since well prior to the post-WWII reconstruction period largely financed by direct government taxation and investment. The example of the United Kingdom is illustrative: by 1997, the British government reported the lowest levels of public infrastructure spending since the 1970s, the result being deterioration of facilities, equipment, and service standards across schools, hospitals, and other facets of traditionally public-sector assets such as roadways and public transportation. In Canada, estimates of the current infrastructure gap range anywhere from $50 billion to $125 billion, with at the very least a consensus that current levels of public-sector spending cannot keep pace with the refurbishment needs of the country's aging infrastructure (TD Economics, 2006).³ The inability of the marketplace to respond to such decline is indicative of what economists term a market failure. Many such aspects of local and national infrastructure are collective in nature, of use by everyone (and all or many companies) for functions of critical importance that nonetheless lack competitive structures and pricing mechanisms to warrant private investment (as there would be little hope of generating an economic return). Yet, at the same time, markets and communities suffer collectively if reparation is not forthcoming, a function invariably demanding public-sector action.
The question that emerges in such a context is whether or not a jurisdiction can find ways to leverage the potential benefits of private investment and competition for the
pursuit of aims that are essentially collective by nature, and thus more characteristic of traditional forms of public goods warranting government investment and control. Inspired by the warnings of Jane Jacobs on the potentially corrupting and ill-performing hybrids of mixing the two sectors (as many, but not all, traditional forms of public infrastructure are of a more guardian orientation than that of the commercial syndrome), traditionalists defending separation and clarity would respond to such a question in the negative: Unless a particular aspect of infrastructure can be shown to be well suited to market pricing and investment (in which case the government should relinquish its involvement), the resulting public good demands clear government action and financing that is direct and unambiguous. Conversely, defenders of integrative models of public-private activity point to the possibility of generating value from aligning complementary perspectives in a transparent manner. The potential benefits are well summarized in a recent report by TD Canada Trust, and they include greater flexibility for governments in planning infrastructure development, freeing up governments to focus on what they do best, improving the care of public assets, maintaining service quality through innovation, shifting risk from taxpayers to the private sector, and the potential for synergies by combining project components through a single contracting party (then empowered to coordinate the project more freely than would otherwise be the case if government managed each separate project variable). While the concept of a P3 typically denotes private-sector involvement in the construction and/or maintenance of a new capital asset to be used for public-interest purposes, such arrangements can take many forms.
P3s fall short of privatization, which implies a relinquishing of government involvement in favor of market-based ownership and private-sector actors (thereby removing the conditions for partnership). Most P3s typically fall
in the middle ground of the diagram, with various financing schemes involving long-term leasing arrangements and, in some cases, provisions as to whether the asset will revert to private or public ownership at the end of the project term. In the United Kingdom, the government introduced a scheme known as the Private Finance Initiative (PFI). Started by the Conservatives but maintained and expanded under the purview of Tony Blair and his Labour Government, the purpose of this program has been to create infrastructure of a public purpose through mechanisms leveraging private-sector involvement in financing, construction, and maintenance. Seeking a middle ground between direct government control and outright privatization, PFI seeks opportunities to share responsibilities and risks, creating new assets via payment schemes and commitments underwritten by the stable involvement of government authorities. PFI has not replaced direct public-sector provision, accounting for only a modest portion of overall infrastructure spending, but it has become an increasingly prominent, and controversial, aspect of new infrastructure development across the United Kingdom. Defenders of the initiative point to the flourishing of new infrastructure projects (meeting public-interest needs) that has been enabled by leveraging private-sector capital investment and project management competencies while shifting much of the financial risk associated with these new ventures to industry.
Can Risk Be Shared?

Under PFI, if a company in the United Kingdom faces cost overruns in completing the construction of a new facility within the agreed-to timeframe, it must absorb those costs, reducing its rate of return (and thereby providing a strong performance incentive to remain on time and on budget). In comparison to those projects administered directly by government authorities (i.e., situating public-private governance in the lower-left quadrant
E-Health, Local Governance, and Public-Private Partnering in Ontario
of the diagram, perhaps contracting out limited aspects of construction to private companies but nonetheless retaining full ownership and control), studies demonstrate that PFI initiatives enjoy stronger performance and fewer failures, and the program itself has been exported to many other countries in Europe and elsewhere (including Canada, a point returned to below). Yet, PFI is not without critics and controversy. The first PFI initiative led to the construction of the Skye Bridge in Scotland, connecting mainland Scotland to the island of Skye (and shutting down local ferry services as a result). The arrangement was controversial from its inception, with governments incurring significant cost overruns as private-sector operators proved unable to sustain profits on toll charges that were politically unpalatable. While the initiative began under the British national government (then under Conservative rule), the since-created Scottish government eventually terminated the partnership and removed the tolls, reverting the bridge to a more traditional model of government control at a much greater expense than initially envisioned. While such spectacular failures have been rare, the financial implications of PFI remain complex and contested. From the government's perspective, in particular, an advantage of PFI or a similar public-private variant is the avoidance of upfront costs, a politically attractive prospect to any government facing short-term scrutiny over spending (usually gauged by immediate budget projections and results, for which annual metrics of deficits and/or surpluses represent the most closely scrutinized performance indicator, not unlike the private sector's reliance on quarterly profits). The offsetting cost of such upfront off-loading is a long-term commitment to fixed payments of one sort or another (usually specified by leasing arrangements).
Many observers contend that the cost-benefit analysis with respect to financing and return over the lifetime of the asset in question is proving to be more art than science, and it may well be that a final analysis of such quantitative results may not be forthcoming for some time to come. There are also ongoing questions about the total costs of borrowing when investments are required for new asset construction. A key PFI rationale, as noted, has been the heightened constraints on public-sector borrowing following the dramatic increases in public debt that occurred in many countries throughout the 1970s and 1980s (continuing in some jurisdictions to this day). However, when individual companies are left to raise capital, borrowing costs may rise since such companies are generally unable to secure the most advantageous interest rates enjoyed by government borrowers. In essence, then, according to some observers, government may merely be avoiding upfront costs on its balance sheet while contributing to higher overall financing costs for the infrastructure initiative over its total lifetime. Skeptics of these new governance models, both within and outside of the public sector, will continue to scrutinize such arrangements, and such scrutiny is important and not entirely unwelcome. Rather than ultimately leading to a stark endorsement of public or private means of new infrastructure development, the analyses and counterarguments put forth by various parties (with varying motivations and degrees of objectivity) are indicative of the sorts of governance pressures for transparency and accountability that are becoming more commonplace across all sectors (a theme discussed previously). Along with uncertainties pertaining to cost and financing, however, are equally important questions of a more strategic nature pertaining to innovation and performance.
Innovation and Performance

As infrastructure becomes more strategic, complex, and technologically sophisticated, questions pertaining to specialized skills and capacities and the sorts of performance outcomes being sought
rise in importance. It is for such a reason that a program such as PFI specifically targets those areas where risk is high. If an asset is straightforward and relatively inexpensive to build, a traditional contracting approach between public owners and private companies may well suffice; if uncertainty is high, so too is the need for leading-edge expertise and innovative solutions. The example of highways and road tolls personifies the complexities of governance and the range of both financing and performance variables at play. In Ontario, for instance, the construction of a new toll highway in the Greater Toronto Area that began in 1993 involved an innovative partnership scheme largely because of the desire to create a more technologically sophisticated solution to driver tolls than the traditional ticketing-booth method. At the same time, the Government of Ontario sought to fix its costs when tendering the project, thereby shifting risk to the private-sector developers, although initial financing for the project would be arranged by a provincial crown corporation on the grounds of securing lower costs of borrowing (Borins, 2004). Later, in 1999, the Government of Ontario would sell its stake in the 407 system, moving closer to full privatization. Such a move would prove politically contentious, and while few doubt the technical success of the 407 experience, debates continue as to the financing and the politics of the scheme. Indeed, the specter of privatization would shadow the Conservative government's efforts in Ontario to embrace public-private partnering for infrastructure development, extending such efforts into the realms of education and health care.
Such debate and tension would come to a head in the 2003 election campaign when a proposed PFI-type initiative for a new hospital in Ottawa drew much criticism: The long-term lease agreement stipulated that despite guaranteed public-sector usage of the facility, ownership would ultimately revert to private owners at the end of the 30-year deal.
The Liberals criticized such an arrangement as unacceptable privatization (in the particularly sensitive health care realm), promising change. Once in power, their solution was a highly nuanced modification of the leasing agreement that would see public ownership ultimately retained in an otherwise unchanged partnership arrangement of private financing and upfront risk in exchange for a guaranteed, long-term annual payment stream. Since then, the Liberals have, not unlike the Labour Party in the United Kingdom, sought to entrench PFI-type arrangements in a more mainstream, politically palatable template: the Infrastructure Planning, Financing and Procurement Framework (IPFP). While Ontario has proven to be one of North America's most politically volatile jurisdictions over the past two decades (in terms of political leaders shifting dramatically across the ideological spectrum), it has also become indicative of the mainstream expansion of private-sector activity across the realm of public infrastructure. New hospitals in Ontario are now being constructed under collaborative, P3-type arrangements with the private sector, and the determination of whether such deals ultimately generate net benefits for both companies and taxpayers must await a final account some 25 to 30 years hence. The main point, however, is less the ideology surrounding such partnerships (which are clearly becoming mainstream vehicles in jurisdictions with governments of every political stripe) and more the governance complexities of making such projects work, particularly with respect to complex IT-based e-health systems integrating public- and private-sector processes.
An early e-government report by the OECD (2001) identified the poor management of IT projects in large organizational settings as a hidden threat to e-government generally, and follow-up work explored the compounding difficulties, many of them discussed in this section, of collaborative endeavors involving multiple sectors (Allen, Paquet, Juillet, & Roy, 2005). Ontario's performance in this regard is examined further below.
Assessing E-Health in Ontario

The Government of Ontario (2005) defines e-health as "achieving better health outcomes by transforming health systems and business practices through the investment in and more comprehensive use of information and information technology." In terms of the realms of service and security, Ontario's e-health efforts may be summarized (and somewhat simplified) in three major directions: (a) efforts to create new e-health competencies and capacities within the core bodies of the provincial government with health care responsibilities; (b) efforts to realize interoperability and an EHR-type infrastructure to better serve public users at the community level and, in doing so, balance province-wide systemic reform with the localized dimension of health care organization, accountability, and delivery; and (c) efforts to address the particular circumstances of remote parts of the province, notably northern regions and communities (despite the importance of this third direction, it is the first two that are the focus of this chapter). Regarding the first direction, the provincial Office of E-Health was formally established in 2004. This new office is akin to a central policy unit guiding the overall evolution of e-health within the core provincial public service and across the extended health care system. In this latter realm, the office works closely with its predecessor organization that has since become an autonomous public agency, Smart Systems for Health (SSH), a provincial body focused on developing and introducing IT-based technological solutions in health care (and in doing so, working more directly with health care practitioners and their communities4). The purpose of the e-health office today, then, is essentially to provide a CIO capacity for the health care system as a whole in order to guide system-wide reforms pertaining to new technologies. The head of this office reports directly to
the deputy minister responsible for the provincial Ministry of Health and Long-Term Care (the appointed department head who, in turn, reports directly to the political minister). The e-health office also serves as the secretariat and coordinating body for the Ontario E-Health Council, a forum that is actually a collection of four separate councils with specific areas of focus (continuing care, laboratories, physicians, and hospitals). Four additional councils have also been proposed and their status is under consideration by the government (pharmacies, public health, regional integration, and program integration). It is these councils that comprise the extended and comprehensive network of sectors, professions, and organizational bodies that make up Ontario’s health care system. The council mechanisms are meant to reflect the separate needs of the groups such as physicians and hospitals, notwithstanding the danger of fragmentation: While the e-Health Council format provides valuable support for projects within each sector, it may also cause the creation of virtual silos that may impede or restrict the sharing of important information. To prevent this, the region must be aware of what is going on within each sector and work cooperatively to share projects. (Health Care Network of Southeastern Ontario, 2005, p. 4) The latter segment of this quote—the invocation of a regional concerted approach—underscores one of the most central quandaries facing e-health in Ontario, namely, the geographic alignment of health care operations and delivery across central (provincial), regional, and local dimensions. Identified as the second major direction of provincial efforts at the outset of this section, the problem is a familiar one to CIOs and students of e-government alike: striking the balance between system-wide coordination that is demanded if interoperability is to be realized and a requisite level of flexibility and autonomy that
permits individual organizations and subnetworks of health care providers to innovate and act in a client-centric manner. In the case of Ontario, this challenge is somewhat distinctive in comparison to other provinces since there are no regional health care authorities with formal decision-making autonomy between the province as a whole and community-level care providers and facilities such as physicians, clinics, hospitals, and the like.5 The provincial e-health office and the provincial CIO structures represent the province-wide perspective on health care matters, whereas locally and regionally a variety of largely informal, advisory bodies (District Health Councils) promote coordination and dialogue for specific subprovincial zones while also providing input to the provincial government. Whereas the formation of such movements represents a bottom-up emergence of governance mechanisms to address shared externalities, both positive and negative, the province has also responded with a formalized strategy to instill more local coordination through the creation of a province-wide set of local health integration networks (LHINs). The purpose of these bodies is to facilitate interoperability and integrated health care delivery in a collaborative manner without imposing a new layer of centralized, regional authority on the system. Even those engaged in e-health acknowledge that "the impact of the LHINS on Ontario's e-Health agenda has not yet been defined" (Health Care Network of Southeastern Ontario, 2005, p. 7). Indeed, an important governance design question is the shape of relations between the new LHIN network and the proposed e-Health Council on Regional Integration. There is an expectation at this point in time that the LHINs will become a vehicle to promote a shared-services approach to a more common IT infrastructure that, in turn, can facilitate system-wide perspectives on resource planning and delivery within a given jurisdiction (Government of Ontario, 2005).
In doing so, however, it is clear that the provincial government is walking on eggshells
in attempting to facilitate provincial guidance and stronger local coordination in a manner that is not interpreted as a threat to key stakeholders with their own territorial and operational autonomy. Hospitals are one key stakeholder group. At first glance, their views suggest strong support for the LHIN model, but this support is conditional on the LHINs not undermining their own governance structures and authority: The hospitals of Ontario and the OHA (Ontario Hospital Association) are strong supporters of increased integration of the health care system and believe that efforts to improve integration should build on current system strengths and successes. Hospitals also support the establishment of LHINs that will focus on engaging communities in health system transformation by enhancing and supporting local capacity to plan, coordinate and integrate the delivery of health services at the community level. In addition, the OHA fully endorses the government's commitment to maintaining local independent governance including the voluntary role of hospital trustees. (Ontario Hospital Association, 2005, p. i) The backdrop shaping this position is the contentious relationship between hospitals, health care groups generally, and the previous Conservative-led governments in power from 1995 to 2003 that preferred a heavy-handed approach to government restructuring (driven less by an interest in e-health and more by an overarching agenda of tax cuts and spending reductions). In health care specifically, a number of hospital consolidations were imposed across the province, leading some hospitals to fight such moves (i.e., smaller hospitals fused into larger ones) while others positioned themselves as larger health care centers with more responsibility and autonomy. Importantly, the absence of regional authorities in Ontario meant that power throughout this process and since has been shared between the province centrally and individual hospitals locally.
With respect to e-health, the hospitals have positioned themselves not only as supporters of the LHIN model as a complement to their own authority, but also as critics of the province's lack of more forceful political interest in IT-led transformation of the health care system as a whole (Ontario Hospital Association, 2005). Part of this interest may no doubt be ascribed to a genuine attachment to the potential of e-health to improve medical systems, but it should also be viewed as a position that complements the hospitals' own agenda in seeking more direct funding from the province to invest in their own operations through their own autonomous governance structures. There is evidence to suggest that the foundational e-health work being undertaken by the province is not translating into wider public learning and support. Public opinion research in Canada suggests that over the past three years it is in Ontario that the decline in confidence accorded to the provincial government has been steepest.6 Elected on a campaign of democratic renewal, promising more openness and direct public engagement in governing, the provincial Liberals have acted on a number of their pledges. In the realm of health care, however, the government has sought a more directly interventionist approach in crafting a message of reassurance and refurbishment through more taxation and improved public-sector delivery (dismissing, for example, more flexibility through private-sector care). While such an approach has prioritized health care spending, it raises the issue of whether the Government of Ontario is adequately preparing the populace for the significance of e-health in terms of the health care organization and delivery associated with online connectivity and digital technologies. This question, in turn, needs to be examined from two complementary planes.
At the macro level of the system as a whole, some groups such as the Ontario Hospital Association and the private sector have conveyed the view that the province
has not accorded sufficient attention politically to e-health. Notwithstanding the self-interest motive at play, there is some merit to this view when one considers the evolution of the current government's approach to health care management during its first mandate. First, the centerpiece of its strategy has been a new health care levy (or tax) designed to expand funding for the existing system, albeit in ways that achieve specific improvement targets laid out by the province. These targets, however, are focused on user outcomes and the communication of service improvements, with the overall tone of the political message being one of reassurance as opposed to systemic change. While the LHIN represents a notable governance innovation, the e-health agenda generally has not been accorded a high degree of political visibility or new public funding (Ontario Hospital Association, 2005). This tone mirrors, by and large, the government's approach toward e-government and service transformation, which has become more cautious on large-scale, IT-led transformations due to a view that such ventures in the past have yielded more problems and failures than clear improvements.7 Here the question of whether the Government of Ontario is acting transparently with respect to health care reform is an important one. At one level, for those stakeholders and informed observers engaged in e-health areas, there is ample information about the strategies and mechanisms deployed (and the rationale for doing so). Yet, at the same time, the absence of more political attention devoted to e-health and both the opportunities and risks associated with new digital technologies suggests less a desire to withhold information or mislead the public than a preference for a form of positively laden clarity (i.e., the message of government succeeding) over a willingness to embrace uncertainty and complexity.
The risk of such a direction, however, is a further erosion of public legitimacy and trust as such efforts are viewed as manipulative and politically deceitful
at worst, partial and incomplete at best (Paquet, 2004; Reed, 2004; Reid, 2004). At the community level, then, the creation of the LHIN raises the issue of whether such a mechanism will be sufficiently empowered to orchestrate a level of systemic change on a subprovincial level (i.e., the 14 regions, each denoted with an LHIN). While the OHA and the province view the LHIN as an important vehicle for community mobilization and engagement in health care transformation (a view supported by the increasingly participative determinants of governance legitimacy and positive change), it is also the case that the public must be convinced of the usefulness of doing so. A weak regional mechanism with little authority to instill change may well undermine the credibility of the system, viewed once again as a vehicle for provincial intervention and communication as opposed to genuine consultation and engagement (Woodward, 2003). The government provides a counterargument rooted in incrementalism: the LHIN as a starting point for dialogue and consultation across a diverse province where the needs of different regions are varied. Accordingly, the difficult context faced by the LHIN, if it is to serve as a vehicle for community involvement and dialogue, has been well summarized by Abelson and Eyles (2002) in their assessment of public participation in the governance of health care in Canada (put forth in their original form as four concluding statements):

• Citizen domination by powerful groups interested in involving the public when it suits their purpose;
• Policy-makers touting citizen governance as a critical element to achieve more responsive decision making while using these structures as instruments of cost-cutting and restructuring;
• The ability for only the most educated and sophisticated and arguably the most unrepresentative and biased "publics" to participate as citizen governors; and
• An increasingly cynical public, weary of the pre-determined illegitimate public consultation processes…seeking more accountable consultation. (p. vi)
It is precisely this mood that was captured in the present Ontario government's electoral platform of 2003, which emphasized the need for more meaningful citizen engagement. Since taking power, however, the government has carefully orchestrated the creation of the 14 LHIN bodies and the appointment of initial chairs and board members (nine in each region), a process that was completed by June 2006. For instance, all board chairpersons were selected and appointed directly by the province, as were an initial set of members. The province then used a process of local selection to form a pool of candidates for the final selection while retaining its authority to render the final decision in each case. Such an approach raises questions about the degree of local engagement enjoyed by the networks at the outset of their mandates.
Private-Sector Partnering

From the private-sector vantage point, the creation of the regional LHINs has done little to alter private-sector involvement in e-health architecture and prospective technology solutions. There are two reasons. First, the largely centralized nature of the design process used to date has not dramatically altered the decision-making structures of health care spending in Ontario and, as a result (the second reason), major procurement and policy decisions pertaining to IT usage and e-health remain at the provincial level. It bears noting once again that many of the most sophisticated health care systems are highly regionalized, including countries across Scandinavia that in many ways can be compared to a province such as Ontario (as they
are smaller in both population and territory). Accordingly, the use of a private-sector-run health authority in Stockholm was not a national decision but rather a regional one. In the United Kingdom, a much larger country, there has been a federated effort to empower separate regional health authorities with their own distinct partnering arrangements (coordinated by an overarching national policy framework). In Ontario, by contrast, the government has itself recognized its own weaknesses in managing large IT projects both in-house and via private partners, albeit doing so early in its mandate by way of an external review commission (which not only sought improvements but could be cast as a retroactive look at the previous government's failures). Nonetheless, the review commission, headed by a former national auditor general, had very little by way of specific improvements to offer with respect to public-private partnering beyond the need to consult industry and improve the relationship going forward (Dutil et al., 2005). Some reluctance on the part of the province to turn to a greater role for the private sector is understandable in light of trends elsewhere, including in Canada. In British Columbia, for instance, where the provincial government has been perhaps the most aggressive in embracing public-private partnering across both traditional infrastructure and IT and organizational systems, a recent outsourcing initiative aimed at improving the service delivery apparatus of the health insurance system has created some controversy as to whether performance is being improved or not (Williams, 2006). Even one industry-friendly, generally pro-technology publication offered indirect warnings by virtue of its April 2006 cover story on the State of Maine's failed IT partnership in the realm of health care.
Yet, despite such risks, it remains apparent that most governments are ill-equipped to proceed entirely through in-house solutions in the realms of e-government, e-health, and IT architecture
generally. A key issue in the Ontario experience is whether the risks of greater partnering, if and when they are pursued, are likely to be compounded in light of the centralized nature of IT management and health care organization within the ministry and its affiliate bodies.
Conclusion and Future Research Directions

The Province of Ontario is hardly alone in facing a variety of technological, organizational, and political challenges as it seeks to orchestrate health care reform. E-health in particular creates tensions in the immediate term between the need for stability and incremental progress on the one hand (as governments in power look to reassure the public and ensure voter support at the next electoral encounter), and the potential for system-wide transformation on the other. There is a body of evidence in e-government and IT management demonstrating that in the absence of significant pressure to modify the existing structures of power and authority, the application of new technologies will result by and large in a strengthening of existing arrangements (Kraemer & King, 2003). With respect to the adaptation of large-scale public-sector health care systems, this view of technological incrementalism seems consistent with the manner in which the provincial government appears reluctant to share power locally and regionally on the one hand, and through more innovative governance models involving the private sector on the other. Clearly consequential for Ontario's fledgling LHINs, such incrementalism can only weaken bottom-up innovation and experimentation. As digital technologies permeate wider segments of public-sector operations, and as further democratic innovation is pursued (in light of the signs of eroding trust at these levels), the deployment of new technology and its alignment with organizational and political structures will grow in scope and complexity. The risk of federal and provincial technocracies is real since administratively larger governments have typically adapted more slowly than subnational ones (Goldsmith & Eggers, 2004). The enterprise architecture of a public sector in a federal environment is complicated by the need to sort out roles and assign resources and authority in a manner that either respects or explicitly seeks agreement to overcome the limits of sovereign autonomy at each political and organizational level enjoined within the system as a whole. While the province is beginning to create many aspects of a solid technical and organizational infrastructure for e-health, what remains problematic is the political landscape. What is therefore required is a much stronger exploration of the means by which public and community engagement can be both expanded and leveraged as a means toward creating the conditions for (a) a higher level of trust among the province's citizenry as a whole (which at present is highly skeptical of the government's message of reassurance and incremental improvement) and (b) stronger forms of public engagement at the local and regional levels to orchestrate the sorts of shared processes required to underpin complex change with mechanisms for socializing learning, shared risk, and collective change. In terms of public-private partnering, a greater regionalization of decision-making and spending authorities, within transparent and locally accountable governance forums, could provide incentives for the private sector to work more directly subprovincially, enjoying greater degrees of freedom for collaboration via more manageable contracting arrangements. Such an approach does not need to be fragmented or to displace a provincial role; rather, it calls for provincial coordination to provide an overall policy and interoperability framework within which these new partnership models can flourish.
Finally, with respect to future research directions, what emerges as an imperative is the undertaking of interdisciplinary efforts to enjoin what have until now typically been separate realms: the technology of electronic health on the one hand, and the governance of health care delivery on the other. The success of e-health is very much intertwined with public-sector governance and, as this chapter has demonstrated, the capacities of local and regional communities to engage the public that is at once a recipient of health care services, a partner in the formation of health care strategies, and a voter overseeing the usage of public resources and the stewardship of managers and elected officials. Researchers must therefore delve into the nexus between technological adaptation and civic engagement in order to better understand the new governance landscape of health care systems in an increasingly digital and networked world.
References

Abelson, J., & Eyles, J. (2002). Public participation and citizen governance in the Canadian health system (Discussion Paper No. 7). Commission on the Future of Health Care in Canada. Retrieved from http://www.hc-sc.gc.ca/english/pdf/romanow/pdfs/Abelson_E.pdf

Allen, B. A., Paquet, G., Juillet, L., & Roy, J. (2005). E-government as collaborative governance: Structural, accountability and cultural reform. In M. Khosrow-Pour (Ed.), Practising e-government: A global perspective (pp. 1-15). Hershey, PA: Idea Group Publishing.

Allsop, J. (2003). Evaluating user involvement in primary healthcare. International Journal of Healthcare Technology and Management, 5(1/2), 34-44.

Bajkowski, J. (2005). Ill wind blows through Australia's e-health adoption. IT World Canada. Retrieved from http://www.itworldcanada.com
E-Health, Local Governance, and Public-Private Partnering in Ontario
Batini, C., Cappadozzi, E., Mecella, M., & Talamo, M. (2002). Cooperative architectures. In W. J. McIver & A. K. Elmagarmid (Eds.), Advances in digital government: Technology, human factors and policy. Boston: Kluwer Academic Publishers.

Bliemel, M., & Hassanein, K. (2004). E-health: Applying business process reengineering principles to healthcare in Canada. International Journal of Electronic Business, 2(6), 625-643.

Borins, S. (2004). A holistic view of public sector information technology. Journal of E-Government, 1(2), 3-29.

Cairncross, F. (2002). The company of the future. Cambridge: Harvard Business School Press.

Charih, M., & Robert, J. (2004). Government online in the Federal Government of Canada: The organizational issues. International Review of Administrative Sciences, 70(2), 373-384.

Coe, A. (2004). Government online in Canada: Innovation and accountability in 21st century government. Unpublished master's thesis, Kennedy School of Government, Cambridge, MA.

Coleman, S. (2003). The future of the Internet and democracy beyond metaphors, towards policy. In Promise and problems of e-democracy: Challenges on online citizen engagement. Paris: E-Government Project.

Coleman, S., & Norris, D. (2005). A new agenda for e-democracy. International Journal of Electronic Government Research, 1(3), 69-82.

Courchene, T. J. (2005). "E-the-people": Reflections on citizen power in the information age. Policy Options, 26(3), 43-50.

Culbertson, S. (2005). E-government and organizational change. In M. Khosrow-Pour (Ed.), Practising e-government: A global perspective. Idea Group Publishing.
Demiris, G. (2004). Electronic home healthcare: Concepts and challenges. International Journal of Healthcare, 1(1), 4-16.

Dutil, P., Langford, J., & Roy, J. (2005). E-government and service transformation relationships between government and industry: Developing best practises. Toronto, Canada: Institute of Public Administration.

Dwyer, P. (2004). The rise of transparency networks: A new dynamic for inclusive government. In J. Halligan & T. Moore (Eds.), Future challenges for e-government. Canberra, Australia: Government of Australia.

Eggers, W. (2005). Government 2.0: Using technology to improve education, cut red tape, reduce gridlock and enhance democracy. New York: Rowman and Littlefield Publishers.

Eng, T., & Beauchamp, N. (2004). Striving for critical mass. Proceedings of the Fourth Annual eHealth Developer's Summit, Seattle, WA.

Evans, K. G. (2002). Virtual dialogue and democratic community. The Transformative Power of Dialogue, 12, 157-177.

Fountain, J. E. (2001). Building the virtual state: Information technology and institutional change. Washington, DC: Brookings Institution Press.

Fountain, J. E. (2004). Digital government and public health. Public Health Research, Practise, and Policy, 1(4), 1-5.

Gath, S. (2004). Electronic health records for Australia: Some legal and policy issues. In J. Halligan & T. Moore (Eds.), Future challenges for e-government. Canberra, Australia: Government of Australia.

Gibbons, R. (2004). Federalism and the challenge of electronic portals. In L. Oliver & L. Sanders (Eds.), E-government reconsidered: Renewal of governance for the knowledge age. Regina, Canada: Canadian Plains Research Center.
Goldsmith, S., & Eggers, W. D. (2004). Governing by networks: The new shape of the public sector. Washington, DC: Brookings Institution Press.

Government of Ontario. (2005). Local health integration networks (Bulletin No. 13). Ministry of Health. Retrieved from http://www.health.gov.on.ca

Health Care Network of Southeastern Ontario. (2005). Southeastern Ontario information technology & communications strategy, 2005-09. Retrieved from http://www.seohealthnet.com

Hurley, C., Baum, F., & van Eyk, H. (2004). Designing better health care in the South: A case study of unsuccessful transformational change in public sector health reform. Australian Journal of Public Administration, 63(2), 31-41.

Josefsson, U. (2005). Coping with illness online: The case of patients' online communities. The Information Society, 21, 143-153.

Koch, C. (2005, March 1). A new blueprint for the enterprise. CIO Magazine.

Kraemer, K., & King, J. L. (2003). Information technology and administrative reform: Will the time after e-government be different? Irvine, CA: Center for Research on Information Technology and Organizations.

Langford, J., & Roy, J. (2006). E-government and public-private partnerships in Canada: When failure is no longer an option. International Journal of Electronic Business, 4(2), 118-135.

McMillan, M. L. (2004). Financial relationships between regional and municipal authorities: Insights from the examination of five OECD countries (IIGR Working Paper No. 3). Kingston: Queen's University.

Mieczkowska, S., & Hinton, M. (2004). Barriers to e-health business processes. International Journal of Healthcare, 1(1), 47-59.

Mundy, D. (2004). Electronic transmission of prescriptions: Towards realizing the dream. International Journal of Healthcare, 1(1), 112-125.

Oates, B. J. (2003). The potential contribution of ICTs to the political process. Electronic Journal of E-Government, 1(1).

Oliver, L., & Sanders, L. (Eds.). (2004). E-government reconsidered: Renewal of governance for the knowledge age. Regina, Canada: Canadian Plains Research Center.

Ontario Hospital Association. (2005). Transformation agenda risks and opportunities report: E-health as the strategic enabler. Retrieved from http://www.oha.com

Ontario Hospital eHealth Council. (2002). An update on data and technology standards in Ontario. Ontario Hospital Association. Retrieved from http://www.oha.com

Ontario Hospital eHealth Council. (2004). Home telehealth in Ontario. Ontario Hospital Association. Retrieved from http://www.oha.com

Organization for Economic Cooperation and Development (OECD). (2001). The hidden threat to e-government: Avoiding large government IT failures (PUMA Policy Brief 8).

Paquet, G. (1997). States, communities and markets: The distributed governance scenario. In T. J. Courchene (Ed.), The nation-state in a global information era: Policy challenges. Kingston: John Deutsch Institute for the Study of Economic Policy.

Paquet, G. (2004). There is more to governance than public candelabras: E-governance and Canada's public service. In L. Oliver & L. Sanders (Eds.), E-government reconsidered: Renewal of governance for the knowledge age. Regina, Canada: Canadian Plains Research Center.

Patton, S. (2005, March 1). Sharing data, saving lives. CIO Magazine.

Pavlichev, A., & Garson, G. D. (Eds.). (2004). Digital government: Principles and best practises. Hershey, PA: Idea Group Publishing.
Prisma. (2004). Implications of Pan-European best practise in eHealth service delivery. European Union. Retrieved from http://wwwiprismaeu.net

Randeree, E., & Rao, H. R. (2004). E-health and assurance: Curing hospital Websites. International Journal of Healthcare, 1(1), 33-46.

Reed, B. (2004). Accountability in a shared services world. In J. Halligan & T. Moore (Eds.), Future challenges for e-government. Canberra, Australia: Government of Australia.

Reid, J. (2004). Holding governments accountable by strengthening access to information laws and information management practices. In L. Oliver & L. Sanders (Eds.), E-government reconsidered: Renewal of governance for the knowledge age. Regina, Canada: Canadian Plains Research Center.

Rosenbord, L. D. (2003). A facilitated approach to developing collaborative action in primary healthcare. International Journal of Healthcare Technology and Management, 5(1/2), 63-80.

Roy, J. (2005). Services, security, transparency and trust: Government online or governance renewal in Canada? International Journal of E-Government Research, 1(1), 48-58.

Roy, J. (2006). E-government in Canada: Transformation for a digital age. Ottawa, Canada: University of Ottawa Press.

Roy, J. (2007). E-health in Ontario: A multi-dimensional governance transformation. International Journal of Health Care Technology and Management, 8(1/2), 66-84.

Scholl, H. (2005). Motives, strategic approach, objectives and focal points in e-government-induced change. International Journal of E-Government Research, 1(1), 59-78.

TD Economics. (2006). Creating the winning conditions for public-private partnerships in Canada. Retrieved from http://www.td.com/economics/special/db0606_p3s.pdf

Wickramasinghe, N., & Misra, S. K. (2004). A wireless trust model for health care. International Journal of Healthcare, 1(1), 60-77.

Williams, L. (2006). Tech to the rescue? CIO Government Review.

Woodward, V. (2003). Participation the community work way. International Journal of Healthcare Technology and Management, 5(1/2), 3-19.
Terms and Definitions

E-Government: E-government is the use of new policy tools and organizational processes involving digital technologies in order to improve public-sector capacities.

E-Health: E-health is the use of new technologies to improve health care organization and service delivery.

Electronic Health Record (EHR): An EHR provides each individual with a secure and private lifetime record of their key health history and care within the health system.

Governance: Governance is the mechanism of coordinating resources, making decisions, and structuring accountability.

Identity Management: This involves the policies and technologies that enable authenticated identities to be verified in order to confirm and enable access or transactions between individuals and organizations.

Interoperability: Interoperability is the ability of different information and computer systems to exchange data and enable communication and coordination across different organizations, or across subsystems within an organization, so that they can participate in shared governance systems.
Local Health Integration Networks (LHIN): LHINs are flexible bodies created to facilitate interoperability and integrated health care delivery in a collaborative manner without imposing a new layer of centralized, regional authority on the system.
Chapter LXII
Implementing a Sound Public Information Security Program Stephen K. Aikins University of South Florida, USA
Introduction

The evolving nature of information security threats such as cybercrime, together with the need to ensure the confidentiality and privacy of citizen information and to protect critical infrastructure, calls for effective information security management in the public sector. According to Evers (2006), the FBI (Federal Bureau of Investigation) estimates that cybercrime costs businesses $67.2 billion per year. Citizens' privacy and the security of their personal information have become issues of increasing concern as headlines of data security breaches and identity thefts abound in the mainstream media. For example, in 2005, 9.3 million U.S. citizens, about 4.25% of the population, were victims of identity theft and fraud, costing approximately $54.4 billion (Council of Better Business & Javelin Strategy & Research, 2006). E-government applications have made it easier for citizens to conduct business online with government agencies, although their trust in the ability of governments to keep that information private is low. Considering the amount of
citizen information held by governments at all levels and the steps needed to address potential homeland-security and IT-related threats to critical infrastructure, the need for effective means of safeguarding public agency data has become an issue of paramount importance. In addition, the need to ensure integrity and availability of public information resources is crucial to many government operations. As a result, several states are recognizing the importance of information security and privacy in their state IT strategic plans (National Association of State Chief Information Security Officers [NASCIO], 2006).
Background

Almost two decades after the Computer Security Act was signed into law, federal IT security reviews indicated continuing risks to federal operations (U.S. General Accounting Office [GAO], 2000, 2001, 2002). This appears to be happening despite the escalating cost of federal IT security spending, which is expected to increase from $4.2 billion in 2003 to $6 billion in 2008 (Walker,
2003). A key requirement for effective planning and management of public organizations' information security is the implementation of a public information security program. Several public agencies, such as the National Institute of Standards and Technology (NIST), the Office of Management and Budget (OMB), the National Security Agency (NSA), the GAO, and NASCIO, have published numerous security documents that serve as sources of reference for public organizations in managing the security of their information resources. However, a recent survey by NASCIO (2006) revealed that about 30% of state chief information security officers (CISOs) were unfamiliar with major cybersecurity-related documents. This chapter discusses the elements of an effective information security program and how it could be implemented by state and local governments to mitigate their security vulnerabilities, threats, and exploits.
Elements of an Information Security Program

The implementation of an effective information security program begins with a risk assessment and the development of an enterprise-wide information security plan. State and local governments should understand their environment and conduct IT risk assessment. This is done to determine their security needs and formulate plans that include strategic goals to protect critical infrastructure and citizen privacy. Once the plan is in place, an information security program that embodies security management structure and comprehensive policy, related standards, and procedural guidelines should be developed. GAO (2001) outlines the following elements of an information security program as critical:

1. Periodic risk assessment
2. Documented entity-wide security program plan
3. Security management structure with clearly assigned security responsibilities
4. Effective security-related personnel policies
5. Security program evaluation
Perform Periodic Risk Assessment

Understanding the risks of the public organization is crucial in determining the proper security policies, procedures, guidelines, and standards to put in place to ensure adequate information security controls. The IBM Foundation for the Business of Government (2002) argues that a risk assessment should include a complete inventory of critical systems and assets as well as a gap analysis between the actual and ideal levels of IT security. Therefore, the risk assessment should include a review of such broad areas as employee management and training; information systems, including network and software design and information processing, storage, transmission, and disposal; and detection, prevention, and response in the case of attacks, intrusions, and failures (NSA, 2002). Effective risk assessment has three major components: threat assessment, vulnerability assessment, and asset identification. In a survey of state CISOs, NASCIO (2006) found that although 73% of the respondents reported they have conducted risk assessments on systems that are homeland-security-critical assets, 76 to 84% reported they have inadequate or no information regarding threats from "internal ineptitude" and "internal maliciousness," which are potentially the most dangerous aspects of security breaches. Assessing the threats posed by a malicious insider (e.g., disgruntled employee), accidental insider (e.g., poorly trained or curious employee), malicious outsider (e.g., hacker, industrial espionage), and nature (e.g., fire, flood) will be useful in identifying and assessing the government agency's vulnerabilities (Hurd, 2001; NIST, 2002).

Vulnerability assessment should focus on key areas of information security, including identification and authentication, account management, auditing, virus protection, configuration management, physical environments, personnel security, incident handling, and security awareness training (NIST, 2002). Once the public entity has assessed its vulnerability, the importance of each asset to the organization should be defined, and a rating system for confidentiality, integrity, and availability should be developed. After completing these three steps, the threats to the public entity's systems can be compared to the vulnerabilities identified and balanced against the need and cost to protect confidentiality, integrity, and availability (NIST, 1998). A good risk assessment clarifies the agency's security posture and needs, and can serve as a justification for increased funding (Gartner, 2004).
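The balancing step described above can be reduced to simple arithmetic: for each identified asset, an estimated threat likelihood is multiplied by the impact of a compromise to yield an annualized exposure, which is then compared with the cost of protecting the asset. The following sketch illustrates that comparison; the asset names, likelihoods, and dollar figures are hypothetical examples, not drawn from the chapter or from any NIST document.

```python
# Illustrative risk-prioritization sketch: exposure = likelihood x impact,
# compared against the annual cost of the proposed safeguard.
# All assets, ratings, and figures below are hypothetical.

assets = [
    # (name, threat likelihood 0-1, impact if compromised ($), safeguard cost ($/yr))
    ("citizen tax records", 0.30, 2_000_000, 150_000),
    ("public web portal",   0.60,   250_000,  40_000),
    ("internal newsletter", 0.10,     5_000,  20_000),
]

def annualized_exposure(likelihood, impact):
    """Expected annual loss for one asset."""
    return likelihood * impact

# Rank assets by exposure and decide whether the safeguard is worth its cost.
for name, likelihood, impact, cost in sorted(
        assets, key=lambda a: annualized_exposure(a[1], a[2]), reverse=True):
    exposure = annualized_exposure(likelihood, impact)
    decision = "mitigate" if exposure > cost else "accept"
    print(f"{name}: exposure ${exposure:,.0f}/yr vs. cost ${cost:,.0f}/yr -> {decision}")
```

In this toy run the newsletter's exposure falls below the safeguard cost, so the risk would be accepted rather than mitigated, which mirrors the chapter's point that protection must be balanced against need and cost.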
Document an Entity-Wide Security Program

NASCIO (2006) found that only 47% of responding state CISOs in its recent survey had good documented information security guidelines in place. With pressures from legislatures for government agencies to be strictly accountable for IT expenditures, as well as private and public data security breaches remaining in the headlines, clearly defined programs can clarify the agencies' security posture and help make the case for funding. A good entity-wide security program requires documentation of the security policy and related procedures, guidelines, and standards that will equip the program managers and their information security officers with the tools needed to implement the policy (U.S. GAO, 2001). A good program includes guidelines and standards for authorization and authentication, logging, auditing and monitoring, vulnerability and patch management, and virus management. Aligning security policy with management controls, operational controls, and technical controls is useful in ensuring effective vulnerability assessment
mapping (NIST, 1998; NIST, 2000). The State of Minnesota, for example, has initiated an aggressive enterprise-wide information security program, enabling it to launch IT standards and a resource management program and to set hardware and software standards. Providing visible and regular security awareness training, including e-mail updates and newsletters, is critical to effective enforcement of the program. To ensure adequate understanding of the entity's security posture, the security program should provide for central management of incident handling. The program should also provide for compliance reviews and enforcement of the policies, procedures, and standards (OMB, 2000). This could be accomplished through self-assessment and reporting, vulnerability and penetration testing, site security reviews, and a tool that queries agents on every desktop to report the security configuration status (NIST, 2000, 2001; NSA, 2002; Wood, 2001).
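Self-assessment of the kind NIST SP 800-26 describes rates each control area on an ascending maturity scale (documented policy, documented procedures, implemented, tested, integrated). A small sketch of how an agency might tabulate such results and flag areas needing review follows; the control areas and scores are hypothetical illustrations, and the target level is an assumed policy choice, not a NIST requirement.

```python
# Sketch of a NIST SP 800-26-style self-assessment tally.
# Levels: 1=policy, 2=procedures, 3=implemented, 4=tested, 5=integrated.
# The control areas, scores, and target below are hypothetical examples.

TARGET_LEVEL = 3  # assumed policy: controls should at least be implemented

self_assessment = {
    "identification and authentication": 4,
    "account management":                3,
    "virus protection":                  5,
    "incident handling":                 2,
    "security awareness training":       1,
}

def areas_below_target(scores, target=TARGET_LEVEL):
    """Return control areas whose maturity level is below the target."""
    return sorted(area for area, level in scores.items() if level < target)

for area in areas_below_target(self_assessment):
    print(f"review needed: {area} (level {self_assessment[area]})")
```

A tally like this gives the compliance-review step a concrete output: the flagged areas become the prioritized list that program managers and security officers take into remediation planning.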
Establish a Security Management Structure with Clearly Defined Responsibilities

Clearly defined management structure and security roles and responsibilities as part of the security program could enhance enforcement (NIST, 1995; U.S. GAO, 2001). The management structure could vary depending on the nature of the government and its agencies, as well as its IT governance model. For example, in the State of Ohio, information technology is managed from two perspectives: from the perspective of the state as an enterprise covering all agencies, and from the perspective of the Office of Information Technology (OIT) as the primary IT service and infrastructure provider to the state agencies. This management structure enabled the OIT to issue a state IT security policy that requires agencies to perform security vulnerability assessments of the agencies' major computing and network systems (NASCIO, 2006). In Michigan, the Office of Enterprise Security is accountable to the director of the Michigan Department of Information Technology for identifying, managing, and mitigating the state's IT security risks and vulnerabilities.

Depending on the size of the public entity, there could be as many as six key players in the management of information security. These include the CISO, who is responsible for the overall management, implementation, and enforcement of the IT security program, and the system program manager, who is responsible for life cycle planning of the system. There is also the information system security officer, who is responsible for administrative and operational aspects of the system, and the system administrator, who is responsible for the day-to-day care and feeding of the system. There could also be agency security personnel responsible for the management of physical access to all organizational facilities, including the IT data center and all other buildings hosting the IT department. Another key player could be a designated approval authority, a senior executive with the ultimate responsibility for funding, configuration, and operation of the system (OMB, 2000). Finally, there are the business owners and functional managers who help to set the requirements for the level of protection needed for their processes or data. Effective information security management depends on clear coordination and understanding among these stakeholders (Wood, 2001).
Develop and Implement Effective Security-Related Policies

In general, department managers and supervisors are charged with keeping employees informed about policies and programs that pertain to their work, including those that govern information security and privacy. Policies, related standards, and procedures should spell out clearly the actions required to secure the agencies' information assets, the enforcement procedures, and the consequences
of noncompliance. In addition, employees must be trained to be reasonably well aware of the precautions they should take and the ethics they should uphold in using information systems, e-mail, and the Internet. Well-developed and effectively implemented information technology policies and standards could lead to savings. As part of its enterprise-wide information security program, Minnesota successfully implemented the state's first-ever desktop standards through a public-private partnership, with expected savings of $24 million per year. The CISO should work with various departments within the entity, such as human resources, legal, and contracting, to help put in place controls such as background checks, nondisclosure agreements, termination and transfer procedures (e.g., return of equipment and ID, termination of user ID), regular vacations and shift rotations, and so forth. Background checks and termination and transfer procedures, if properly implemented, are critical in minimizing insider risks (NIST, 1996).
Monitor the Effectiveness of the Security Program and Make Necessary Changes

Once the public entity has the information security program in place, there is a need to monitor the effectiveness of the program to determine whether it is operating as intended. This can be done through a combination of self-reporting and hands-on review, such as a tailored version of NIST 800-26 reviews with supporting documentation, vulnerability and system scanning of technical controls, random on-site checks of operational and technical controls, specific security policy compliance reviews, and audit finding reviews. In Nebraska, the CIO's office has partnered with the state auditor's office to provide IT risk assessment and application testing. As a result of tying the results of IT assessments to
the financial audits, a higher level of attention is paid to the outcome of the IT assessment within each state agency. Once the program monitoring is completed, it is important to coordinate with system administrators, security officers, business and system program managers, and upper level executives to determine a viable solution to the compliance problems identified (Wood, 2001). This calls for prioritization and initial tackling of the critical issues that require immediate management attention.
Future Trends

The information technology threat environment is constantly changing, but the efforts of many governments at all levels to adapt to these changes and implement adequate countermeasures have been constrained by inadequate budgetary resources. As the need for change management to deal with these threats has become increasingly glaring, a number of states, such as Kansas, Minnesota, and Michigan, have initiated enterprise-wide information security programs that include risk assessment, policy guidelines and standards, and countermeasures that address known threats. However, as stated earlier, the National Association of State Chief Information Officers found in its 2006 strategic cybersecurity survey that roughly 30% of state CISOs were unfamiliar with existing cybersecurity-related documents and strategies. Promotion of these documents, especially at the local government level, will be important in helping to develop effective information security programs in the future. Considering the even more limited resources of many local governments, an effective way of addressing information security threats in the future will be resource alignments in which local governments take advantage of existing national and state government resources in order to enhance economies of scale. Recognizing the importance
of information security initiatives in today's highly threatening environment, the National Association of State Chief Information Officers has conducted studies and developed guidelines and recommendations to assist members in taking actionable steps and making the business case for increased information security funding. It is expected that more and more state and local governments will take advantage of these resources as information security breaches continue to make headlines and push security concerns to the forefront. In order to compete with other service areas for funding, state information security officers will have to align information security with government business needs. For example, they will have to demonstrate how improved security can enhance the protection and reliability of information in critical government functions like tax assessment, revenue collection, budgeting, public safety, health, and education. In addition to understanding security needs, they will also have to demonstrate expertise in taking proactive steps to prevent breaches.
Conclusion

The growing threat of information security breaches is an issue of grave concern to both governments and citizens. With finite government resources and the drain of entitlements and other significant discretionary spending, information security will continue to compete with other services and programs for much-needed taxpayer dollars. This calls for innovativeness and efficiency on the part of CISOs in managing risks. The keys to successful outcomes are well-documented and effectively implemented enterprise-wide information security programs that align information security with the business needs of governments. State and local governments and related agencies should avail themselves of the plethora of existing information security
documents and strategies to help understand their environments, perform risk assessments, and put in place information security programs to manage risks effectively. This will happen if the implemented programs are enforced and their effectiveness evaluated on a periodic basis.
Future Research Directions

With the number of resources being made available by NIST, NASCIO, and other agencies, future research will be needed to determine the extent of utilization of these resources at the state and local government levels, as well as the impact of such utilization on the adoption of information security programs; the prevention, detection, and reduction of security breaches; and changes in information security funding. For those state governments that are pioneers in adopting information security programs and action steps to reduce threats, case studies will be beneficial to ascertain whether the intended savings and benefits from information security investments have been realized and whether their practices could serve as models and benchmarks for future information security investment projects. Such studies could include analysis of what went well and lessons learned for the purpose of developing a reservoir of documented information security best practices and experiences to help guide future investment decisions and adoption practices. Considering the enormity and constant nature of information security vulnerabilities, threats, and exploits, an information security management consortium consisting of federal, state, and local government agencies, private-sector experts, and university professors is needed to effectively address the risks facing the United States and the rest of the world. The consortium could have a research wing whose purpose should include empirical investigations into potential and emerging threats and how best to
leverage its resources to improve problem solving. Another area of research could be operational alignment of the existing agencies to enhance efficiency. Properly constituted, such a consortium will bring to bear the expertise of all members on pertinent issues and provide an avenue for cross-fertilization of ideas, effective problem solving, and management. The consortium can also partner with international organizations in the field of information security to research potential global antidotes to cybersecurity threats and exploits.
References

Council of Better Business & Javelin Strategy & Research. (2006). New research shows identity fraud growth is contained and consumers have more control than they think. Retrieved February 16, 2007, from http://www.bbbonline.org/IDTheft/safetyQuiz.asp

Evers, J. (2006). FBI wants businesses' help in fighting cyber crime. Retrieved February 16, 2007, from http://news.com.com/FBI+wants+businesses+help+to+fight+cybercrime/2100-7348_3-6040521.html

Gartner, Inc. (2004). Summary report: Assessing of agency compliance with enterprise security standards (Prepared for the North Carolina State CIO). Retrieved January 31, 2007, from http://www.iso.scio.nc.gov/pdf/SummaryRportFV02.pdf

Hurd, B. E. (2001). The digital economy and the evolution of information assurance. Retrieved October 16, 2005, from http://www.itoc.usma.edu/Workship/2001/Authors/Submitted_Abstracts/paperWIC3(20).pdf

National Association of State Chief Information Security Officers. (2006). The IT security business case: Sustainable funding to manage the risks. Retrieved from http://www.nascio.org/nascioCommittees/securityprivacy/members#publications
National Institute of Standards and Technology. (1995). An introduction to computer security: The NIST handbook (NIST SP 800-12). Retrieved January 21, 2006, from http://csrc.nist.gov/publications/nistpubs/800-12/handbook.pdf

National Institute of Standards and Technology. (1996). Generally accepted principles and practices for securing information technology systems. Retrieved October 14, 2005, from http://csrc.nist.gov/publications/nistpubs/800-14.pdf

National Institute of Standards and Technology. (1998). Guide for developing information security plans for information technology systems. Retrieved November 26, 2006, from http://csrc.nist.gov/publications/nistpubs/800-18/Planguide.PDF

National Institute of Standards and Technology. (2000). Federal information technology security assessment framework. Retrieved November 26, 2005, from http://www.cio.gov/Documents/federal%5Fit%5Fsecurity%5Fassessment%5Fframework%5F112800%2Ehtml

National Institute of Standards and Technology. (2001). Security self-assessment guide for information technology systems (NIST SP 800-26). Retrieved January 2006 from http://csrc.nist.gov/publications/nistpubs/800-26/sp800-26.pdf

National Institute of Standards and Technology. (2002). Risk management guide for information technology systems. Retrieved November 26, 2005, from http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf

National Security Agency. (2002). Infosec assessment methodology. Retrieved January 26, 2007, from http://www.iatrp.com/

Office of Management and Budget. (2000). Management of federal information resources (OMB Circular A-130, November 28, 2000). Retrieved December 15, 2005, from http://www.whitehouse.gov/omb/circulars/a130/a130trans4.html

SANS Institute. (2002). The twenty most critical Internet security vulnerabilities (Version 2.504.2). Retrieved March 7, 2006, from http://www.sans.org/top20.html

United States General Accounting Office. (2000). Computer security: Critical federal operations and assets remain at risk (T-AIMD-00-314).

United States General Accounting Office. (2001a). Computer security: Improvements needed to reduce risk to critical federal operations and assets (GAO-02-231T).

United States General Accounting Office. (2001b). Federal information system control audit manual (GAO/AIMD-12.19.6). Retrieved March 7, 2006, from http://www.gao.gov/special.pubs/ail2.19.6.pdf

United States General Accounting Office. (2002). Computer security: Progress made but critical federal operations and assets remain at risk (GAO-03-303T).

Walker, R. (2003). As threats rise, feds shelter their IT. Government Computer News.

Wood, C. C. (2001). Information security policies made easy. Houston, TX: Pentasafe Security Technologies, Inc.
Terms and Definitions

Authentication: Authentication is the process of identifying an individual, usually based on a user name and password. In security systems, authentication is distinct from authorization, which is the process of giving individuals access to system objects based on their identity. Authentication merely ensures that the individual is who he or she claims to be; it says nothing about the access rights of the individual.

Authorization: Authorization is the privilege granted to an individual by management to access information based upon the individual's clearance and the need-to-know principle. It is the granting to a user, program, or process of the right of access.

Configuration: Configuration is the relative or functional arrangement of components in a system. It is the way a system is set up, or the assortment of components that make up the system. Configuration can refer to hardware, software, or a combination of both.

Configuration Management: This is the management of security features and assurances through the control of changes made to a system. It is a procedure for applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of an item or system, control any changes to such characteristics, and record and report the change, process, and implementation status.

Identification: Identification is the process that enables the recognition of an entity (subject or object) by a computer system, generally by the use of unique machine-readable user names.

Information Security Plan: This is a document that provides a road map for the goals to be accomplished in securing the information assets of an organization. The information security plan provides the basic blueprint for documenting an information security program.
Information Security Policy: This is a document that outlines the rules, laws, and practices that regulate how an organization will manage, protect, and distribute its sensitive information (both corporate and client information). It lays out the framework for the computer-network-oriented security of an organization.

Information Security Program: This is a documented set of information security policies, procedures, guidelines, and standards implemented to provide the road map for effective information security management practices and controls.

Information Security Standards: This is a set of documents that outline the criteria and specific levels of performance for the actions needed to secure the information assets of an organization.

Management Controls: Management controls are actions taken to manage the development, maintenance, and use of the system, including system-specific policies, procedures, and rules of behavior; individual roles, responsibilities, and accountability; and personnel security decisions.

Operational Controls: These are day-to-day procedures and mechanisms used to protect operational systems and applications. They address security methods that focus on mechanisms implemented and executed primarily by people (as opposed to systems). Operational controls affect the system and operational environment.

Patch: A patch is a section of software code that is inserted into a program to correct mistakes or to alter the program.
Security Management: This is the process of monitoring and controlling access to network resources. It includes monitoring the usage of network resources, recording information about that usage, detecting attempted or successful violations, and reporting such violations.

Technical Controls: These are hardware and software controls used to provide automated protection to the information technology system or applications. Technical controls operate within the technical system and applications.

Threat: A threat is any circumstance or event with the potential to cause harm to a system in the form of destruction, disclosure, or modification of data, and/or denial of service. It is the potential for the exploitation of a vulnerability. Threats arise from internal failures, human errors, attacks, and natural catastrophes. The examination of all actions and events that might adversely affect a system or operation is known as threat analysis.

Vulnerability: A vulnerability is a weakness in a networked computer system's security procedures, administrative controls, system design, implementation, internal controls, and so forth that could be exploited by a threat to gain unauthorized access to information, to disrupt critical processing, or to violate a system security policy. It is a flaw that may allow harm to occur to an information technology activity. A measurement of vulnerability, which includes the susceptibility of a particular system to a specific attack and the opportunities available to a threat agent to mount that attack, is known as vulnerability assessment.
Chapter LXIII
Evaluation of E-Government Web Sites Michael Middleton Queensland University of Technology, Faculty of Information Technology, Australia
Introduction

In recent times, the popularity of the Internet has led to e-government practices being widely recognized as an important option for service to the general public. In response, various tiers of government, from national to local, have sought opportunities to engage the public through Web sites. Many governments now provide some level of access to government through Web interfaces, for example, access to resources such as publications and government data. In some cases, services are provided that may be executed online: for example, users may provide personal information for licensing or undertake payments. There continues to be diversity in the quality and level of implementation of such services. The facilitation of e-government has been characterized in various ways. For example, the European Union has seen it in terms of four main tasks: the development of Internet-based services to improve access to public information and services, the improvement of the transparency of public administration by using the Internet, the
full exploitation of information technology within public administration, and the establishment of e-procurement (Strejcek & Theilb, 2003). More recently, the United Nations (UN), noting that ICTs may be used to transform a government's internal and external relationships, has also identified four similar but distinct areas: internal processes such as record keeping, electronic service delivery, virtual communities for digital democracy, and e-business opportunities such as procurement (United Nations Department of Economic and Social Affairs, 2005). Legislation that supports e-government may address many aspects of its implementation. For example, a regulatory environment for electronic contracts, the regulation of telecommunications, digital signatures, and consumer protection is being developed by different states. In terms of access to public information and services, the USA has tackled the issue with an E-Government Act that provided for an office of electronic government within its Office of Management and Budget, along with minimum standards of information for the public on federal agency sites (Jaeger, 2004; Matthews, 2002).
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
The UN has also produced participation and readiness indexes in order to indicate the extent of participation in, and development toward, e-government (United Nations Department of Economic and Social Affairs, 2004). These indexes are among a number of approaches to measuring e-government performance. A comparison of methods for assessing implementation has been made by Janssen, Rotthier, and Snijkers (2004). They have used the term "supply oriented eGovernment measurements" for evaluations that focus on delivery, typically through the Internet. This is closest to the second of the four UN areas, that of service delivery, and is the focus of what follows. The intent here is to provide an overview of different approaches to Web site evaluation in order to suggest further application and development of evaluation instruments.
Background

Evaluation instruments are useful to governments at all levels in order to provide for benchmarking and for detailed assessment and comparison between Web sites. Such instruments may also help governments in developing nations to direct their online strategies by reference to analytical tools for evaluating Web sites. It has been noted by Sharma (2004) that a number of e-government benchmarking studies have been limited by a focus solely upon the supply side. Likewise, Holland, Bongers, Vandeberg, Keller, and te Velde (2005) have pointed out that the measurement of e-government must take account of more than the supply side. Beyond the provision of services, there should also be consideration of policy, including the regulatory environment; prerequisites, such as Internet penetration; and internal government functioning, such as intranet development. However, as the focus here is on services as presented via the Web, two main types of evaluation are relevant for consideration.
These are, first, approaches to Web site design and evaluation in general, and second, approaches to examining the performance of e-government as delivered via the Internet. In the case of Web site design and evaluation, there are many examples of guidance. These may take the form of online checklists (Ciolek & Goltz, 1996-2006; World Wide Web Consortium [W3C], 1999). There are also many general texts on the subject that provide direction in matters of information architecture and design, style, and information quality in Web site development (Benyon, Turner, & Turner, 2005; Lazar, 2006; Rosenfeld & Morville, 2002). A great deal has been written on e-government online delivery practices; here, however, the focus is on evaluations of delivery that make some reference to methods for testing that delivery. For example, a need for the effective benchmarking of e-government implementations was suggested by Kaylor, Deshazo, and van Eck (2001). They conducted studies on local governments in the USA and focused on the functions and services that cities typically provide. The model they used contains detailed questions on services delivered online. It also used a four-point scale to measure the presence and the degree of implementation of online services. Korsten and Bothma (2005a, 2005b) have evaluated South African government Web sites for content and usability. They also studied the portal South African Government Online, which provides a gateway to government information. Usability was differentiated for first-time users and frequent users, with a focus on efficiency and satisfaction. Web sites were assessed against site-level criteria that included the home page and site-wide design, information architecture, navigation, search capability, linking strategy, overall writing style, page templates, and layout. Specific issues relating to individual pages, including downloading time, coding problems, and error messages, were considered to be outside the scope, as were accessibility and downtime.
The criteria used have been detailed by Korsten (2003). She considers content that is well written, comprehensive, current, of high quality and authoritative, to be a fundamental element of an effective Web presence. Additionally, Web sites must cater for a wide range of audiences and discharge the host institution's objectives relating to communication and information dissemination through publishing. Another Web site evaluation instrument whose use has been reported in the literature is one used to support the e-Qual (formerly known as WebQual) method. This has been utilized to evaluate a number of UK government sites (Barnes & Vidgen, 2002, 2004). The associated questionnaire seeks Likert-scale responses to 23 questions on usability, information quality, and service interaction. The e-Qual approach has been refined by the use of comment analysis in association with the traditional survey data. This has been found to provide a useful approach to triangulation, thereby strengthening Web quality assessment (Barnes & Vidgen, 2005). Choudrie and Ghinea (2005) have also adopted a dual evaluative approach using what they termed an integrated sociotechnical perspective by utilizing participant evaluation along with use of Web diagnostic tools for the sites of four countries. The Center for Public Policy at Brown University has conducted ongoing research that provides annual reports on comparative performance of e-government sites. These have been national (West, 2006b) and international (West, 2006a), using a set of criteria relating to the presence of various features dealing with information availability, service delivery, and public access.
Among the criteria questions are those that seek the presence of online publications, databases, audio or video clips, alternative languages or translation capability, commercial advertising, premium fees, user payments, disability access, privacy policy, security features, digital signatures, credit card payments facility, feedback options, automatic
e-mail updates, Web site personalization, and personal digital assistant (PDA) access. Similarly, Accenture (2004) has assessed the advances that different countries have made in their e-government capacity and compares the approaches different countries have taken. Accenture also publishes reports on innovations in e-governance that the surveyed countries have undertaken to provide better service for their citizens. One of the more detailed instruments reported is that of Melitski, Holzer, Kim, Kim, and Rho (2005). They have used five main categories for evaluation of e-government sites: security and privacy, usability, content, service, and citizen participation. Within these five categories, there are a total of 92 questions, 47 of which use a four-point scale. This approach has been extended by Henriksson, Yi, Frost, and Middleton (2006). It is instruments such as these whose elements have been consolidated to form the basis for the evaluation criteria that are included in the following section.
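Instruments of this kind, mixing presence/absence checks with four-point scales grouped into categories, can be represented as structured data so that evaluators' responses are aggregated mechanically. The following Python sketch is illustrative only: the category names follow Melitski et al. (2005), but the sample questions, the 0-3 scoring convention, and the percentage aggregation are assumptions made for demonstration, not the published instrument.

```python
# Illustrative sketch of a category-based e-government evaluation instrument.
# Category names follow Melitski et al. (2005); the questions, weights, and
# scoring convention below are hypothetical.

CATEGORIES = ["security and privacy", "usability", "content",
              "service", "citizen participation"]

# Each question: (category, text, max_score).
# max_score 1 = simple presence/absence; 3 = four-point scale (0-3).
QUESTIONS = [
    ("security and privacy", "Is a privacy policy posted?", 1),
    ("security and privacy", "Extent of secure (HTTPS) transactions", 3),
    ("usability", "Extent of consistent site navigation", 3),
    ("content", "Are official publications available online?", 1),
    ("service", "Extent of online payment services", 3),
    ("citizen participation", "Are online opinion polls offered?", 1),
]

def category_scores(responses):
    """Aggregate raw responses into a 0-100 score per category.

    `responses` is a list of integers aligned with QUESTIONS, each
    between 0 and that question's max_score.
    """
    earned = {c: 0 for c in CATEGORIES}
    possible = {c: 0 for c in CATEGORIES}
    for (category, text, max_score), score in zip(QUESTIONS, responses):
        if not 0 <= score <= max_score:
            raise ValueError(f"score {score} out of range for {text!r}")
        earned[category] += score
        possible[category] += max_score
    return {c: round(100 * earned[c] / possible[c], 1)
            for c in CATEGORIES if possible[c]}

if __name__ == "__main__":
    # One evaluator's responses for a hypothetical site.
    print(category_scores([1, 2, 3, 0, 1, 1]))
```

A full instrument would carry many more questions per category (Melitski et al. report 92 in total), but the aggregation logic stays the same, which is what makes cross-site benchmarking with such instruments tractable.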
Web Site Evaluation

The generic approach and the specific e-government approach to evaluation naturally overlap to some extent, but they are considered separately here: the generic approach is summarized briefly in order to introduce the specific one.

Generic Web Site Evaluation

Methods for Web site evaluation based upon the design and architecture principles referenced above use a variety of characterizations. For example, Edwards (1998) sees evaluation in terms of access, quality, and ease of use. Alexander and Tate (2003-2006) use the categories authority, accuracy, objectivity, currency, and coverage, for which they elaborate different subcriteria for different Web site types such as informational sites. Many of the evaluation categories are consolidated by Middleton (2002), and an extract drawn from these is shown in Table 1.

Table 1. A summary of generic Web site evaluation categories

| Criterion | Example of factor | Examples of check |
|---|---|---|
| Functionality | Site maps | Is there a summary of site organization showing broad categories of pages? |
| Authority | Affiliation | Do the authors indicate who their employer is? Is an organization responsible for governance of the site? |
| Validity | Referring links | How many other sites provide links to this one? |
| Obtainability | Format support | Does the site display all its aspects on the browser that is in use? Does the site require plug-ins for full functionality? |
| Relevance | Currency | Is there an indication of when it was last updated? Is there an indication of how frequently it is updated? |
| Substance | Evidence | Are statements supported by illustrations, quoted sources, and linked Web sites? |

The categories are briefly outlined as follows.

- Functionality refers to how effectively a site is designed in order to help you navigate around it. It is concerned with such matters as layout, link activity, error-free markup, and user assistance facilities such as a help function, site map, and search facility. It is also concerned with support such as the provision of text alternatives for images for users who are restricted in their capabilities.
- Authority refers to the trustworthiness of the information carried at the site, and how reliable it is thought to be based upon what is known of the individual or organization responsible for its production. Evaluation considers such factors as the creators, their credentials and affiliations, whether there is editorial oversight, whether the site makes intellectual property claims, and what its funding and viability are.
- Validity is an indication of the extent to which a site is considered useful by other parties. It takes into account such factors as ratings, the extent of links referring to the site (for example, listing in sites that provide gateways to the site in question), reviews of the site, and the amount of usage.
- Obtainability refers to the ease with which a site may be recalled and displayed. It takes into account factors such as format support, load aspects, metadata, naming, and speed. It also takes into consideration security matters and any costs that may be involved.
- Relevance accounts for the information requirements of a user and how pertinent a site is to them. It involves the audience to which a site is directed, as well as such matters as balance, currency, and depth of content.
- Substance assesses the significance of the site for producing unbiased and reliable content. It is therefore concerned with accuracy, extent and detail of coverage, readability, supporting evidence such as references, and the extent of explanation of content.

A complete explanation of these criteria, accompanied by examples of checks to be performed for each factor, is provided on a Web site called FAVORS (Queensland University of Technology, 1999-2006).
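Some checks of this kind lend themselves to automation. As a hypothetical illustration, the following Python sketch uses the standard library's `html.parser` to flag images that lack text alternatives, one small, automatable instance of the functionality criterion above. It is run here against an inline sample page rather than a live site; a real audit tool would fetch pages over HTTP and apply many more rules.

```python
from html.parser import HTMLParser

class AccessibilityCheck(HTMLParser):
    """Counts <img> tags and flags those lacking an alt attribute --
    one small, automatable instance of the functionality criterion."""

    def __init__(self):
        super().__init__()
        self.images = 0
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag.
        if tag == "img":
            self.images += 1
            if "alt" not in dict(attrs):
                self.missing_alt += 1

# Hypothetical sample page standing in for a fetched government home page.
SAMPLE_PAGE = """
<html><body>
  <img src="crest.png" alt="Agency crest">
  <img src="spacer.gif">
</body></html>
"""

checker = AccessibilityCheck()
checker.feed(SAMPLE_PAGE)
print(checker.images, checker.missing_alt)  # images found, images missing alt text
```

Checks such as last-updated indications (Relevance) or referring links (Validity) can be automated in a similar spirit, whereas criteria such as Authority and Substance still require human judgment.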
E-Government Service Evaluation

The factors that influence the quality of government Web sites can be placed into six categories: (a) security and privacy, (b) usability, (c) content, (d) services, (e) citizen participation, and (f) features. The first five of these categories, and a number of the security and privacy questions, are based upon factors outlined by Melitski et al. (2005). These have been extended by applying the generic material outlined in the section above to the government site context, by including an increased number of factors in the categories, and by adding an additional features category. Table 2 gives an example of an element in each category and the type of question that may assist in checking for the element's application.

Table 2. Summary of e-government evaluation categories

| Category | Example of element | Examples of check |
|---|---|---|
| Security/privacy | Privacy policy | Does the Web site policy explicitly state the use of personal data in relation to third parties? |
| Usability | Disability access | Is the Web site W3C Standard Priority 1 compliant for vision-impaired users? |
| Content | Public information | Does the Web site offer access to databases containing regularly updated public information? |
| Services | Business | Does the Web site offer online registration? |
| Citizen participation | Business | Does the Web site offer online surveys or opinion polls? |
| Features | Personal pages | Can these be customized based upon characteristics and preferences of individual users? |

- Security and privacy are concerned with the existence and quality of any privacy policy presented on a government Web site. This category takes into account the security of data transmission to the site and the site's servers. Furthermore, it is concerned with the use of cookies in identifying and monitoring users, and whether the Web site is still usable when cookies are disabled. The internal security measures taken within the government department itself are also of interest.
- Usability is the broadest of the six categories and derives mainly from the generic functionality category. The areas examined range from the readability of a Web site's text fields to whether the site employs a consistent style through the use of cascading style sheets. The ease of using the Web site's navigation system is estimated, and the robustness of the forms encountered is assessed. As government sites are intended for general public use, overall accessibility is embraced within this category and takes into account disability access and backward compatibility with older systems. The user friendliness of the Web site is observed by looking at factors such as the presence of help pages and whether the site is available in more than one language.
- Content is judged on the amount of public information available on the Web site. The amount of horizontal integration between various government agencies is also assessed, along with an estimate of the amount of information available about the dealings of these agencies. The logical grouping of information for easy access by diverse groups within society is also observed.
- Services comprise two subcategories: services for citizens and services for businesses. In each of these subcategories, the availability of payment, registration, and application services is observed. For businesses, the presence of online tendering is examined, and for citizens, the availability of online recruitment is considered.
- Citizen participation examines the extent to which citizens are able to communicate both with the government agency and with each other through the Web site. The availability of opinion polls, bulletin boards, and satisfaction surveys is observed. The existence of a government strategy to educate citizens about the online channel and the presence of any government incentives to drive the usage of e-government Web sites are also included.
- Features include assessment of the availability of personal pages and the degree to which the government agency allows each citizen to create his or her own space on the Web site, the time taken for the agency to answer questions made online and the nature of such answers, and the presence of commercial advertising, external links, and advanced search capabilities.
Future Trends

Although generic Web site evaluation instruments are now mature, their adaptation to e-government, and in particular to government Web sites, requires development in line with the increasing capabilities of those sites and innovation in public-sector facilities. Instruments such as the one developed by Henriksson et al. (2006) take account of best practices in Web site design and have been tuned especially to assess the areas of importance to public administration. However, they must be continually refined in order to provide better tools for evaluation. For example, they need to take account of the following:

- Specific criteria for Web site development standards developed by groups such as W3C, which should be embodied in evaluation checklists as the standards are promulgated.
- Refining approaches to the scalability of questions, for example, by making use of readability measures that have been developed for other contexts, and by developing gradations of executability for site functionality.
- Improving approaches to developing checklists so that an appropriate balance is maintained between those factors that can be determined online by visiting a Web site and those, such as security factors, that require internal access for insight into the workings of a government agency.
- Adapting evaluation instruments so that they take account of the legal environment to which a particular government Web site is subject.
- Adapting evaluation to deal with services at some tiers of government that implement a decentralized approach across multiple agencies and sites.
- Incorporating the use of automatic analysis tools to deal with appropriate factors in order to reduce the manual time spent on analysis.
conclusion As the number of government Web sites proliferates, so must the ability to evaluate those sites be refined. Typically, the criteria that are developed should test if quality content is being made more amenable by developing a Web site that provides straightforward navigability and an information architecture that facilitates browsing or searching. Future development of the Web sites must take into account regulatory constraints, and make certain of transaction security. Yet, this development must also ensure that a digital divide is not perpetuated by providing information visibility
Evaluation of E-Government Web Sites
to users who have a wide range of computer and information literacy. General tools for Web site evaluation need to be associated with specific approaches to measuring the electronic delivery of government services. These measurements have initially focused on the functionality of sites allied with the quantity and quality of information provided. However, increasing attention is being paid to services, either for citizens through participation and personalization, or for enterprises (government or business) through support of business processes. In addition to evaluating design, typography, and consistent approaches to page layout that contribute to the look and feel of Web sites, criteria must be developed to consider the following:

•	The extent of citizen access, sometimes termed universal access, including availability to those without personal equipment
•	The extent of coordinated dissemination of information through a whole-of-government approach
•	The extent of metadata support underlying the organization of pages so that they may be identified in focused search engines
•	The extent to which the security of the environment reassures citizens of privacy and data protection
•	The extent to which interfaces may be personalized
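The metadata criterion above lends itself to automated checking. As a hedged sketch (the sample HTML and the Dublin Core test are invented for illustration, not taken from any real government page), a checklist tool might extract a page's `<meta>` elements and test whether descriptive metadata is present:

```python
# Sketch of an automated check for the "metadata support" criterion,
# using only the standard library's HTML parser.

from html.parser import HTMLParser

class MetaCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        # Collect <meta name=... content=...> (or property=...) pairs.
        if tag == "meta":
            d = dict(attrs)
            key = d.get("name") or d.get("property")
            if key:
                self.meta[key] = d.get("content", "")

html = """<html><head>
<meta name="DC.Title" content="Renew a Driver Licence">
<meta name="description" content="Online licence renewal service">
</head><body></body></html>"""

parser = MetaCollector()
parser.feed(html)

# A checklist item might then test for Dublin Core or description metadata:
has_dc = any(k.startswith("DC.") for k in parser.meta)
print(has_dc, "description" in parser.meta)  # True True
```

Checks of this kind are exactly the "automatic analysis tools" suggested earlier for reducing manual analysis time.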
The examples provided in this chapter offer an introduction to a structured approach to Web site evaluation that derives from generic site evaluation and then establishes a framework for government site evaluation. The examples, together with the references provided, give insight into the development of instruments that may be used to measure government Web sites and lead to their optimization for the user environment that they are intended to serve.
Future Research Directions

Further research in the area can be considered from two viewpoints. First, the area of service delivery requires more analysis of delivery modes, accompanied by study of the factors influencing effective service provision. This should lead to refinements in the instruments used for evaluation and improved approaches to the ways in which they can be implemented. Associated with this, there should be more concentration on user analysis, so that perceptions of the effectiveness of e-government facilities are sought from those who make use of them as well as from those who provide them. Second, there should be more connection between the analysis of service aspects and the other sustaining areas of digital e-government, namely, those supporting internal processes such as digital record keeping, the development of virtual communities for exercising citizens' input, and the advancement of government-to-government and government-to-business opportunities, including administration and procurement. It is to be anticipated that such investigation will take the form of case studies, social and technical analysis, and the development of guiding models for different levels of jurisdiction.
References

Accenture. (2004). eGovernment leadership: High performance, maximum value. Retrieved August 23, 2005, from http://www.accenture.com/xdoc/en/industries/government/gove_egov_value.pdf

Alexander, J., & Tate, M. A. (2003-2006). Evaluate Web pages. Retrieved June 26, 2006, from http://www.widener.edu/Tools_Resources/Libraries/Wolfgram_Memorial_Library/Evaluate_Web_Pages/659/

Barnes, S., & Vidgen, R. (2002). An integrative approach to the assessment of e-commerce quality. Journal of Electronic Commerce Research, 3(3), 114-127.

Barnes, S., & Vidgen, R. (2004). Interactive e-government: Evaluating the Web site of the UK Inland Revenue. Journal of Electronic Commerce in Organizations, 2(1), 42-46.

Barnes, S., & Vidgen, R. (2005). Data triangulation in action: Using comment analysis to refine Web quality metrics. Proceedings of the 13th European Conference on Information Systems, Regensburg, Germany. Retrieved September 26, 2006, from http://www.webqual.co.uk/papers/comment.pdf

Benyon, D., Turner, P., & Turner, S. (2005). Designing interactive systems: People, activities, contexts, technologies. Harlow, United Kingdom: Addison-Wesley.

Choudrie, J., & Ghinea, G. (2005). Integrated views of e-government Website usability: Perspectives from users and Web diagnostic tools. Electronic Government, An International Journal, 2(3), 318-333.

Ciolek, T. M., & Goltz, I. M. (1996-2006). Information quality WWW virtual library. Retrieved January 24, 2006, from http://www.ciolek.com/WWWVL-InfoQuality.html

Edwards, J. (1998). The good, the bad and the useless: Evaluating Internet resources. Retrieved June 28, 2006, from http://www.ariadne.ac.uk/issue16/digital/

Henriksson, A., Yi, Y., Frost, B., & Middleton, M. (2006). Evaluation instrument for e-government Websites. Paper presented at Internet Research 7.0: Internet Convergences, Brisbane, Australia. Retrieved June 26, 2006, from http://eprints.qut.edu.au/archive/00003113

Holland, C., Bongers, F., Vandeberg, R., Keller, W., & te Velde, R. (2005). Measuring and evaluating e-government: Building blocks and recommendations for a standardized measuring tool. In M.
Khosrow-Pour (Ed.), Practicing e-government: A global perspective (pp. 179-198). Hershey, PA: Idea Group/OECD.

Jaeger, P. T. (2004). Beyond Section 508: The spectrum of legal requirements for accessible e-government Web sites in the United States. Journal of Government Information, 30(4), 518-533.

Janssen, D., Rotthier, S., & Snijkers, K. (2004). If you measure it they will score: An assessment of international eGovernment benchmarking. Information Polity, 9(3/4), 121.

Kaylor, C. R., Deshazo, R., & van Eck, D. (2001). Gauging e-government: A report on implementing services among American cities. Government Information Quarterly, 18(4), 293-307.

Korsten, H. (2003). An evaluation of and a model for South African government Websites. Unpublished doctoral dissertation, University of Pretoria, Pretoria, South Africa.

Korsten, H., & Bothma, T. J. D. (2005a). Evaluating South African government Web sites: Methods, findings and recommendations (Part 1). South African Journal of Information Management, 7(2).

Korsten, H., & Bothma, T. J. D. (2005b). Evaluating South African government Web sites: Methods, findings and recommendations (Part 2). South African Journal of Information Management, 7(3).

Lazar, J. (2006). Web usability: A user-centered design approach. Boston: Pearson Addison Wesley.

Matthews, W. (2002). House passes compromise e-government bill. Federal Computer Week, 16(41), 13.

Melitski, J., Holzer, M., Kim, S.-T., Kim, C.-G., & Rho, S.-Y. (2005). Digital government worldwide: An e-government assessment of municipal Web sites. International Journal of Electronic Government Research, 1(1), 1-19.

Middleton, M. (2002). Information management: A consolidation of operations, analysis and strategy. Wagga Wagga, Australia: CSU Centre for Information Studies.

Queensland University of Technology. (1999-2006). FAVORS. Retrieved June 28, 2006, from http://www.favors.fit.qut.edu.au/

Rosenfeld, L., & Morville, P. (2002). Information architecture for the World Wide Web (2nd ed.). Cambridge, MA: O'Reilly.

Sharma, S. K. (2004). Assessing e-government implementations. Electronic Government, An International Journal, 1(2), 198-212.

Strejcek, G., & Theil, M. (2003). Technology push, legislation pull? E-government in the European Union. Decision Support Systems, 34(3), 305-313.

United Nations Department of Economic and Social Affairs. (2004). Global e-government readiness report 2004: Towards access for opportunity. Retrieved January 24, 2006, from http://unpan1.un.org/intradoc/groups/public/documents/un/unpan019207.pdf

United Nations Department of Economic and Social Affairs. (2005). World public sector report 2005: Unlocking the human potential for public sector performance. Retrieved January 24, 2006, from http://unpan1.un.org/intradoc/groups/public/documents/un/unpan021616.pdf

West, D. (2006a). Global e-government 2006. Retrieved August 20, 2006, from http://www.insidepolitics.org/egovt06int.pdf

West, D. (2006b). State and federal e-government in the United States, 2006. Retrieved August 20, 2006, from http://www.insidepolitics.org/egovt06us.pdf
World Wide Web Consortium (W3C). (1999). Checklist of checkpoints for Web content accessibility guidelines. Retrieved January 24, 2006, from http://www.w3.org/TR/WCAG10/full-checklist.html
Further Reading

Al-Hakim, L. (Ed.). (2007). Global e-government: Theory, applications and benchmarking. Hershey, PA: Idea Group Publishing.

Asgarkhani, M. (2005). Digital government and its effectiveness in public management reform: A local government perspective. Public Management Review, 7(3), 465-487.

Australian Government Information Management Office. (2006). Responsive government: A new service agenda. Retrieved April 11, 2006, from http://www.agimo.gov.au/publications/2006/march/introduction_to_responsive_government

Bekkers, V., & Homburg, V. (Eds.). (2005). The information ecology of e-government: E-government as institutional and technological innovation in public administration. Amsterdam: IOS.

Bertelsmann Foundation. (2002). Balanced e-government. Retrieved September 30, 2005, from http://www.begix.de/en/studie/studie.pdf

Böhlen, M., Gamper, J., Polasek, W., & Wimmer, M. A. (Eds.). (2005). E-government: Towards electronic democracy: International Conference, TCGOV 2005, Bolzano, Italy, March 2-4, 2005. Proceedings.

Chadwick, A. (2006). Internet politics: States, citizens, and new communication technologies. New York: Oxford University Press.

Curtin, G. G., Sommer, M. H., & Vis-Sommer, V. (Eds.). (2003). The world of e-government. New York: Haworth.
De, R. (2005). E-government systems in developing countries: Stakeholders and conflict. In Electronic Government, Proceedings (Vol. 3591, pp. 26-37).

Deakins, E., & Dillon, S. M. (2002). E-government in New Zealand: The local authority perspective. The International Journal of Public Sector Management, 15(4/5), 375.

Drüke, H. (Ed.). (2005). Local electronic government: A comparative study. London: Routledge.

Edmiston, K. D. (2003). State and local e-government: Prospects and challenges. American Review of Public Administration, 33(1), 20-45.

Garcia, A. C. B., Maciel, C., & Pinto, F. B. (2005). A quality inspection method to evaluate e-government sites. In M. A. Wimmer, R. Traunmüller, Å. Grönlund, & K. V. Andersen (Eds.), Lecture notes in computer science: Vol. 3591. Electronic government: 4th International Conference, EGOV 2005 (pp. 198-209). Berlin, Germany: Springer-Verlag.

Gil-Garcia, J. R., & Pardo, T. A. (2005). E-government success factors: Mapping practical tools to theoretical foundations. Government Information Quarterly, 22(2), 187-216.

Guangwei, H., & Weijun, Z. (2004). Research on the survey and evaluation method of e-government Websites. Journal of the China Society for Scientific and Technical Information, 23(4), 495-501.

Guijarro, L. (2005). Policy and practice in standards selection for e-government interoperability frameworks. In Electronic Government, Proceedings (Vol. 3591, pp. 163-173).

Gupta, M. P., & Jana, D. (2003). E-government evaluation: A framework and case study. Government Information Quarterly, 20(4), 365-387.
Hadzilias, E. A. (2005). A methodology framework for calculating the cost of e-government services. In E-Government: Towards Electronic Democracy, Proceedings (Vol. 3416, pp. 247-256).

Henriksson, A., Yi, Y., Frost, B., & Middleton, M. (in press). Evaluation instrument for e-government Websites. Electronic Government, An International Journal.

Ho, A. (2002). Reinventing local government and the e-government initiative. Public Administration Review, 62(4), 434-444.

Huang, W., Siau, K., & Wei, K. K. (Eds.). (2005). Electronic government strategies and implementation. Hershey, PA: Idea Group.

Irani, Z., Love, P. E. D., Elliman, T., Jones, S., & Themistocleous, M. (2005). Evaluating e-government: Learning from the experiences of two UK local authorities. Information Systems Journal, 15(1), 61-82.

Iyer, L. S., Singh, R., Salam, A. F., & D'Aubeterre, F. (2006). Knowledge management for government-to-government (G2G) process coordination. Electronic Government, An International Journal, 3(1), 18-35.

Jaeger, P. T. (2006). Assessing Section 508 compliance on federal e-government Web sites: A multi-method, user-centered evaluation of accessibility for persons with disabilities. Government Information Quarterly, 23(2), 169-190.

Kampen, J. K., Snijkers, K., & Bouckaert, G. (2005). Public priorities concerning the development of e-government in Flanders. Social Science Computer Review, 23(1), 136-139.

Khosrow-Pour, M. (Ed.). (2005). Practicing e-government: A global perspective. Hershey, PA: Idea Group.
Kim, T. H., Im, K. H., & Park, S. C. (2005). Intelligent measuring and improving model for customer satisfaction level in e-government. In Electronic Government, Proceedings (Vol. 3591, pp. 38-48).
Sun, S.-Y., Ju, T. L., & Chen, P.-Y. (2006). E-government impacts on effectiveness: A survey study of an e-official-document system. Electronic Government, An International Journal, 3(2), 174-189.
Koh, C. E., & Prybutok, V. R. (2003). The three-ring model and development of an instrument for measuring dimensions of e-government functions. The Journal of Computer Information Systems, 43(3), 34.
Wood, F. B., Siegel, E. R., LaCroix, E.-M., Lyon, B. J., Benson, D. A., Cid, V., et al. (2003). A practical approach to e-government Web evaluation. IT Professional, 5(3), 22-28.
Kunstelj, M., & Vintar, M. (2004). Evaluating the progress of e-government development: A critical analysis. Information Polity, 9(3/4), 131.

La Porte, T. M., Demchak, C. C., & de Jong, M. (2002). Democracy and bureaucracy in the age of the Web: Empirical findings and theoretical speculations. Administration & Society, 34(4), 411-446.

Larsen, B., & Milakovich, M. (2005). Citizen relationship management and e-government. In Electronic Government, Proceedings (Vol. 3591, pp. 57-68).

McGovern, G., Norton, R., & O'Dowd, C. (2002). The Web content style guide: An essential reference for online writers, editors, and managers. Harlow: Financial Times Prentice Hall.

McMahon, R. (2005). E-government: Effects on the netologically disadvantaged. Electronic Government, An International Journal, 2(4), 460-471.

McMahon, R., & Bressler, R. (2005). E-government: Do audits aid the netologically disadvantaged? Electronic Government, An International Journal, 2(4), 413-425.

Shackleton, P., Fisher, J., & Dawson, L. (2006). E-government services in the local government context: An Australian case study. Business Process Management Journal, 12(1), 88-100.
Zhu, X., & Gauch, S. (2000). Incorporating quality metrics in centralized/distributed information retrieval on the World Wide Web. Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 288-295).
Terms and Definitions

Cookie: A facility that enables a Web server to send a Web browser a packet of data that the browser returns each time it accesses the same server. It may include any data the server chooses, and is often used to authenticate a registered user of a Web site without requiring that user to sign in again on every visit.

Encryption: Any procedure used for converting plaintext into ciphertext (an encrypted message) in order to prevent anyone but the intended recipient from reading the data. Encrypted data may be carried, for example, by secure sockets layer (SSL), a protocol that uses a secure hypertext transmission method designed to provide encrypted communications on the Internet.

Executability: The extent to which a user may interact with a service function via a Web site. This may range from nonexistent, through information display, information download, and partial executability (in which a user may submit digital material but not receive responses), to fully executable, in which mutual citizen-agency interaction takes place with validation of the transaction provided.

Information Architecture: The arrangement of information in a structured way for Web site interface presentation, based upon the technical architecture in databases and static files.

Navigation: The process of following a path in a database to find desired information. It is usually applied to the exploration of a hypertext system, or of a database that has a graphic representation via a Web site.
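The executability gradations just defined form an ordered scale, which is what makes them usable for scoring and comparing sites. A small sketch (the numeric encoding is an illustrative choice, not part of the definition):

```python
# The executability gradations as an ordered scale. IntEnum gives the
# levels a natural ordering, so evaluations can be compared directly.

from enum import IntEnum

class Executability(IntEnum):
    NONEXISTENT = 0          # no service function offered
    INFORMATION_DISPLAY = 1  # static information only
    INFORMATION_DOWNLOAD = 2 # forms or documents can be downloaded
    PARTIAL = 3              # user may submit material but gets no response
    FULL = 4                 # mutual interaction with transaction validation

def more_executable(a: Executability, b: Executability) -> Executability:
    """Because the gradations are ordered, two sites can be compared directly."""
    return max(a, b)

print(more_executable(Executability.PARTIAL,
                      Executability.INFORMATION_DOWNLOAD).name)  # PARTIAL
```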
Privacy Policy: A Web site's official statement on the type of information collected on the site, how the information will be used, how a person can access this data, and the steps for having the data removed. A privacy statement will also usually include information regarding the systems in place to protect the information of visitors.

Readability Measures: Analytical measures, based upon algorithms such as the Fog Scale and the Flesch-Kincaid grade level, that estimate the extent to which text is comprehensible.
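The Flesch-Kincaid grade level mentioned above is simple enough to sketch. This is a rough illustration only: the vowel-group syllable counter is a crude heuristic, and production readability tools count syllables far more carefully.

```python
# Rough sketch of a readability measure: the Flesch-Kincaid grade level,
#   0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59,
# with a crude vowel-group syllable heuristic.

import re

def syllables(word: str) -> int:
    # Count runs of vowels as syllables; every word has at least one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syl = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syl / len(words)) - 15.59

sample = "Renew your licence online. Payment is confirmed immediately."
print(round(fk_grade(sample), 1))
```

Applied to a page of government text, a score like this feeds directly into the "scalability of questions" refinement proposed earlier: it grades how comprehensible the content is likely to be across a range of literacy levels.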
Chapter LXIV
IT Evaluation Issues in Australian Public-Sector Organizations Chad Lin Curtin University of Technology, Australia
Introduction

Public-sector organizations are among the top spenders on information technology (IBM, 2006). According to an IDC report, global public-sector IT spending will exceed $138 billion in 2006, representing 12.2% of overall IT spending (IBM, 2006). In the United States, public-sector IT spending is likely to grow from $71 billion in 2005 to $92 billion in 2010 (Pulliam, 2005). Despite the huge and growing IT spending by public-sector organizations, the resulting benefits are still not clearly understood (Gunasekaran, 2005). This is often due to the poor IT investment evaluation processes implemented by these organizations (Hall, 1998). In other words, there is a lack of understanding of the impact of proper IT investment evaluation processes on IT projects in public-sector organizations. IT investment evaluation is an
ongoing process that seeks to identify best practice and use it as a basis for evaluating public-sector IT project performance, in order to set clear goals and identify areas for improvement (Gunasekaran, 2005). Without proper IT investment evaluation processes, for example, organizations are at risk of failing to establish clear IT project goals and design. Research into how public-sector organizations evaluate their IT projects and ensure that the expected benefits are eventually delivered is therefore becoming critical. The main objective of this chapter is to identify evaluation issues that are critical in the implementation of IT projects by public-sector organizations. A key contribution of the chapter is to identify and examine evaluation issues and other key factors faced by public-sector organizations undertaking IT projects. The key issues presented are of interest to senior public-sector
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
executives concerned with making decisions about IT investments and realizing IT benefits.
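As a back-of-envelope check on the U.S. figures cited in the introduction, growth from $71 billion in 2005 to $92 billion in 2010 implies a compound annual growth rate of roughly 5.3%:

```python
# Implied compound annual growth rate (CAGR) of the cited spending figures.

def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

print(f"{cagr(71, 92, 5):.1%}")  # 5.3%
```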
Background

IT Investment Evaluation

While organizations continue to invest heavily in IT, research studies and practitioner surveys report contradictory findings on the effect of this expenditure on organizational productivity (Osei-Bryson & Ko, 2004; Thatcher & Pingry, 2004). It is therefore not surprising that the measurement of the business value of IT investment has been the subject of considerable debate among academics and practitioners (Sugumaran & Arogyaswamy, 2004). Although some IT productivity studies have produced inconclusive or negative results, and the interpretation of the results may depend on many factors (e.g., Zhu, 2004), many research studies have indicated that IT spending is directly related to organizational performance (e.g., Hu & Quan, 2005). In addition, the complex role and scope of IT investment decision-making processes are often major constraints and difficulties in IT investment evaluation and benefits realization processes (Lin, Lin, Huang, & Kuo, 2006; Sugumaran & Arogyaswamy, 2004; Tsao, Lin, & Lin, 2004). Many private-sector IT projects fail to deliver what is expected of them because most organizations focus on implementing the technology rather than adopting the tools necessary to help track and measure the IT projects (Hillam & Edwards, 2001). For example, a study by Sohal and Ng (1998) found that large Australian organizations had not utilized the potential of IT to meet competitive challenges because of inadequate and inappropriate evaluation of the proposed IT projects. Moreover, they reported that 59% of the responding organizations did not determine whether expected benefits were being realized.
There have also been many reports of public-sector IT project failures. One of the major reasons for IT project failure is that most organizations fail to properly monitor and evaluate IT projects (Ballantine & Stray, 1998; Domberger, Fernandez, & Fiebig, 2000; Perrin & Pervan, 2004). IT investment evaluation in the public sector is highly complex, due in part to the legal requirements that govern organizational processes (Khalfan, 2003), but also because it is a politically sensitive process with many stakeholders holding very different and often conflicting perspectives (Allen, Kern, & Mattison, 2002; Heeks, 1999). While IT investment evaluation processes are generally seen as routine in the private sector, special characteristics of the public sector can make such evaluation inappropriate or extremely difficult (Bannister, 2001; Kouzmin, Loffler, Klages, & Korac-Kakabadse, 1999). Sullivan and Ngwenyama (2005) found that some public-sector guidelines do not effectively address IT investment performance monitoring and evaluation. According to Jones and Hughes (2001), IT investment evaluation techniques are not widely used in public-sector organizations. Indeed, according to Forrester Research, only 55% of public-sector organizations intend to increase their efforts in evaluating their IT investments (IBM, 2006). The inability of many organizations to measure and apply IT both inter- and intra-organizationally is thus resulting in missed opportunities and a lack of business value (van Grembergen & van Bruggen, 1998).
IT Benefits Realization While IT investment evaluation processes are important, they are insufficient in terms of ensuring that the benefits identified and expected by organizations are eventually realized and delivered (Lin, Pervan, & McDermid, 2005). The essence of benefits realization is to organize and manage so that the potential benefits arising from the use of IT can actually be realized (Ward, Taylor, & Bond, 1996).
Seddon, Graeser, and Willcocks (2002) indicate that the identification and measurement of benefits is the most difficult issue in evaluating IT. According to Ward et al. (1996), very few organizations have a benefits realization approach. Much attention is paid to ways of justifying investments, with little effort expended on ensuring that the expected benefits are realized (Ballantine & Stray, 1998). For example, a survey by Forrester Research indicated that only 51% of public-sector organizations had considered making serious efforts to realize the expected benefits of their IT investments (IBM, 2006). As a result, there is a massive imbalance between IT investment and the benefits derived from that investment (Love, Irani, Standing, Lin, & Burn, 2005). While benefit identification can contribute to the success of an IT investment, organizations have often found benefits difficult to evaluate and consequently tend to assess them with notional, arbitrary values (Lin & Pervan, 2003; Standing & Lin, 2007). The use of a formal benefits realization methodology (e.g., the Cranfield Process Model of Benefits Management; Ward et al., 1996) is important in assisting organizations to ensure that their expected benefits are delivered (Changchit, Joshi, & Lederer, 1998; Lin et al., 2005).
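One concrete artifact that formal benefits realization methodologies maintain is a benefits register: each expected benefit gets an owner, a measure, a target, and a post-implementation review of what was actually realized. The sketch below is a generic illustration of that idea, not a rendering of the Cranfield model itself; all field names and figures are invented.

```python
# Minimal sketch of a benefits register with a post-implementation review step.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Benefit:
    description: str
    owner: str
    measure: str
    target: float
    realized: Optional[float] = None  # filled in at post-implementation review

    def delivered(self) -> bool:
        # For this sketch, higher realized values are better.
        return self.realized is not None and self.realized >= self.target

register = [
    Benefit("Online lodgement uptake", "Service Lead",
            "% of transactions online", target=40, realized=45),
    Benefit("Reduced call-centre load", "Ops Manager",
            "% reduction in calls", target=20, realized=12),
]

# The review step surfaces benefits that were expected but not realized:
shortfalls = [b.description for b in register if not b.delivered()]
print(shortfalls)  # ['Reduced call-centre load']
```

It is exactly this explicit target-versus-realized comparison, with a named owner accountable for each benefit, that justification-only appraisal omits.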
Research Methodology and Findings

Research Objectives and Methodology

Before the case study was conducted, a pilot survey was carried out to obtain an overview of current Australian service industry practices and norms in managing IT benefits and evaluation. Questionnaires were sent to IT managers in Australian public-sector organizations. The pilot survey results show some similarity to much of
the non-Australian studies (e.g., Ward et al., 1996; Willcocks & Lester, 1997). In summary, a variety of informal IT investment evaluation methodologies were used, and most users were not involved in IT investment evaluation and implementation. Respondents had differing reasons for, and differing success measurements or benchmarks for, IT investments. Moreover, while about two thirds of the survey respondents indicated that they had used an evaluation methodology, only about one third had utilized a benefits realization methodology to ensure that their expectations were eventually realized. While the survey was useful in obtaining an overview of the IS and IT evaluation practices of large public-sector organizations, case studies were needed to investigate detailed issues such as the use of IT investment evaluation and benefits realization methodologies and their effects on IT project implementation. An in-depth case study was conducted of two public-sector organizations in Australia. The case-study method was chosen because it enables the researcher to examine the context of the evaluation processes and to better understand the responses given in the interviews through observation (Silverman, 2001). The first organization (Case A) had more than 10 major IT projects running concurrently, many of them contracted out to external IT vendors. The second organization (Case B) had only three major IT projects, of which only one was contracted out. Case A had adopted an informal IT investment evaluation methodology without any IT benefits realization methodology; Case B also used an informal IT investment evaluation methodology, but with a formal IT benefits realization methodology. In total, 20 key participants (CEOs [chief executive officers], CIOs [chief information officers], IT managers, and project managers) were interviewed, including some from the organizations' major external IT contractors.
The interviews focused on both organizations' major IT projects, the IT investment evaluation methodology deployed, and the benefits realization process used. All interviews
were taped and the transcripts were sent to the interviewees for validation. Other data collected included some of the actual contract documents, planning documents, and minutes of relevant meetings. More than 200 pages of transcripts were analyzed. Qualitative content analysis was used to analyze the data from the case studies. The analysis of the case-study results was conducted in a cyclical manner, and the results were checked by other experts in the field.
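At its simplest, a content-analysis coding pass tags transcript passages against a codebook of themes. The toy sketch below illustrates that mechanic only; the themes, cue words, and excerpt are invented, not drawn from the study's actual codebook.

```python
# Toy illustration of a qualitative content-analysis coding pass:
# a passage is tagged with every theme whose cue words it contains.

codebook = {
    "informal_evaluation": ["cost-benefit", "sla", "monthly report"],
    "user_involvement": ["user", "staff consulted"],
    "vendor_commitment": ["contractor", "vendor", "profit"],
}

def code_passage(text: str) -> list:
    t = text.lower()
    return [theme for theme, cues in codebook.items()
            if any(cue in t for cue in cues)]

excerpt = ("We relied on the SLA and the vendor's monthly report "
           "rather than any formal method.")
print(sorted(code_passage(excerpt)))
```

In practice such keyword matching would only be a first pass; the cyclical analysis and expert checking described above are what give the coding its validity.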
Case-Study Findings

A number of issues emerged from the analysis of the text data, and some of the key issues surrounding the use of evaluation methodologies are presented below in some detail.

Lack of Use of a Formal IT Investment Evaluation Methodology

Most of the participants from the case-study organizations claimed that formal IT investment evaluation methodologies or processes were in place for evaluating their IT projects. However, closer examination of the participants' responses and project documents revealed otherwise. Documents such as service-level agreements (SLAs), monthly reports, standard contract management documents, and public-sector guidelines were cited by most participants as the IT investment evaluation methodology or process used. Most of these measurements related to the contract conditions specified in the SLAs within each project, and no formal IT investment evaluation methodology, process, or technique (e.g., information economics) was mentioned. In reality, only informal IT investment evaluation processes (e.g., cost-benefit analysis) were used, and several participants mistakenly took the project control and evaluation mechanisms specified within the SLAs to be their IT investment evaluation methodology or technique. This may also reflect the fact that both case-study organizations were required to follow public-sector IT project contract guidelines and so were unable to introduce a formal evaluation methodology.

Conflict between Motivations and Success Criteria for IT Projects
There appeared to be a conflict between both organizations' motivations for investing in IT projects and the criteria used for determining the success of those projects. Access to required technical expertise was cited by most participants as one of the motivations for investing in these projects. One would therefore expect access to the required expertise, and cost savings, to be mentioned by at least half of the participants among their criteria for determining the success of these IT projects. However, this was not the case. None of the participants mentioned acquiring the required technical expertise as a criterion for project success, and cost savings were not explicitly mentioned by any participant. It appears that the participants had different expectations of the IT projects. Alternatively, the participants may have felt that the IT projects had already brought in the required technical expertise and that it should therefore not be used as a success criterion.
Lack of User Involvement in IT Project Implementation and Evaluation

User involvement has a positive influence on the successful outcome of system implementation. This implies that getting users involved in the
project implementation and evaluation processes may improve their attitudes toward the system, and enhance the importance and relevance users perceive about the system. However, none of the participants was involved with any of the original IT project justification and negotiation processes. It appeared that the IT project justification process was handled by other units within both organizations. There appeared to be an “organizational memory gap” where units within both organizations possessed knowledge of different sorts (i.e., investment evaluation and benefits realization) of the entire IT system’s development cycle. However, the knowledge did not seem to be shared by all units because different units participated in different stages of the IT project development cycle. It is arguable that both organizations’ project implementation and evaluation processes would be even more successful if the participants were involved in the original IT project justification and negotiation processes as well as the benefits realization process.
General Lack of Commitment by Contractors

There was a general lack of commitment by external IT project vendors. Most external vendors' criterion for determining the success of the external contracts appeared to be maximization of profit and revenue. The contractors' lack of commitment was also demonstrated by the fact that they did not know (or care?) why the organizations had selected external IT vendors. This was despite the fact that all of the external contracts were partnership-style arrangements. This result appears to confirm studies by several researchers indicating not only that many organizations in both the private and public sectors are skeptical about partnerships (Hancox & Hackney, 2000), but also that the partnership type of contract is not the most successful because the profit motive is not shared (Lacity & Willcocks, 1998).
Future Trends

IT evaluation must be treated as a continuous activity, as IT itself evolves and adapts over time. The more successful organizations in the future are likely to be those that persistently evaluate their new IT initiatives in order to realize the expected benefits. This includes the use of formal IT investment evaluation and benefits realization methodologies and the formulation of effective strategies to assess IT initiatives from technical, social, and economic perspectives. These organizations will invest only in those IT projects that are likely to deliver productivity gains.
Discussion

As mentioned earlier, the effective deployment of appropriate IT investment evaluation and benefits realization methodologies is critical to successful outcomes for IT projects. The results from this study indicate that most organizations have suffered from poor IT investment evaluation practices. A number of issues emerged from the data, and some key issues have been presented in this chapter. These include the lack of formal IT project evaluation processes, conflicting motivations and differing perceptions among stakeholders of what constitutes IT project success, lack of user involvement in IT project evaluation, and a general lack of commitment by external IT vendors.
Future Research Directions

According to Carr (2003), IT has become a commodity because it has become widespread, as happened with other innovations such as engines and telephones. IT has become an infrastructural technology and is therefore often subject to overinvestment, causing economic troubles such as the Internet bubble (Carr). Carr's views on IT are not shared by many IT practitioners and academics, who argue that IT still has much to offer and can deliver competitive advantages to organizations (Evans, 2003). More recent evidence suggests that many organizations simply got carried away with IS and IT and spent money unwisely in the late 1990s. According to a study by the McKinsey Global Institute, more successful organizations analyzed their economics carefully and spent on only those IS and IT applications that would deliver productivity gains, sequencing their investments through a disciplined approach with innovative management practices (Farrell, 2003). The need for better methods of IS and IT investment evaluation has arisen from problems such as the productivity paradox, in which existing measures fail to reveal the gains made from these investments. IS and IT investment evaluation may also be needed for project justification, project comparison, control, learning, and competitive advantage. Similarly, different approaches to benefits realization are needed. The benefits of IS and IT can be fully realized only when systems and available technology are applied to specific and relevant tasks that are in turn aligned with the business strategy (Andresen et al., 2000). Moreover, recent research indicates that formal approaches are not often used (Lin & Pervan, 2003; Lin, Huang, & Tseng, 2007) and that applying structure and discipline to the process will improve the measurement of IS and IT benefits in organizations and lead to more effective investment of organizations' scarce resources in key elements of information systems and technology. Finally, the study took place at a particular point in time. Further research could capture the opinions of respondents at various stages of the IS and IT project development process. Alternatively, our study could be replicated in a few years' time to examine how IS and IT benefits realization and investment evaluation have changed and are being managed in light of emerging technologies such as e-commerce. Nevertheless, some key issues have been presented by the research, and these may be helpful to practitioners and researchers in this field.
References

Allen, D., Kern, T., & Mattison, D. (2002). Culture, power and politics in ICT outsourcing in higher education institutions. European Journal of Information Systems, 11, 159-173.

Andresen, J., Baldwin, A., Betts, M., Carter, C., Hamilton, A., Stokes, E., et al. (2000). A framework for measuring IT innovation benefits. Electronic Journal of Information Technology in Construction, 5, 57-72.

Ballantine, J., & Stray, S. (1998). Financial appraisal and the IS/IT investment decision making process. Journal of Information Technology, 13, 3-14.

Bannister, F. (2001). Dismantling the silos: Extracting new value from IT investments in public administration. Information Systems Journal, 11, 65-84.

Carr, N. G. (2003). IT doesn't matter. Harvard Business Review, 81(5), 41-49.

Changchit, C., Joshi, K. D., & Lederer, A. L. (1998). Process and reality in information systems benefit analysis. Information Systems Journal, 8, 145-162.
Domberger, S., Fernandez, P., & Fiebig, D. G. (2000). Modelling the price, performance and contract characteristics of IT outsourcing. Journal of Information Technology, 15, 107-118.

Evans, B. (2003). Business technology: IT doesn't matter? InformationWeek. Retrieved from http://www.informationweek.com/story/showArticle.jhtml?articleID=9800088

Farrell, D. (2003). The real new economy. Harvard Business Review, 81(10), 104.

Gunasekaran, A. (2005). Benchmarking in public sector organizations. Benchmarking: An International Journal, 12(4).

Hall, R. (1998). New electronic communication from local government: Marginal or revolutionary? Local Government Studies, 24, 19-33.

Heeks, R. (1999). Reinventing government in the information age. London: Routledge.

Hillam, C. E., & Edwards, H. M. (2001). A case study approach to evaluation of information technology/information systems (IT/IS) investment evaluation processes within SMEs. The Electronic Journal of Information Systems Evaluation, 4(2).

Hu, Q., & Quan, J. J. (2005). Evaluating the impact of IT investments on productivity: A causal analysis at industry level. International Journal of Information Management, 25(1), 39-53.

IBM. (2006). Government in an era of transformational opportunity. IBM Investor Relationships. Retrieved from http://www.ibm.com/investor/viewpoint/features/2006/09-05-06-1.phtml

Jones, S., & Hughes, J. (2001). Understanding IS evaluation as a complex social process: A case study of a UK local authority. European Journal of Information Systems, 10, 189-203.

Khalfan, A. (2003). A case analysis of business process outsourcing project failure profile and implementation problems in a large organisation of a developing nation. Business Process Management Journal, 9(6), 745-759.

Kouzmin, A., Loffler, E., Klages, H., & Korac-Kakabadse, N. (1999). Benchmarking and performance measurement in public sectors: Towards learning for agency effectiveness. The International Journal of Public Sector Management, 12(2), 121-144.

Lacity, M. C., & Willcocks, L. P. (1998). An empirical investigation of information technology sourcing practices: Lessons from experience. MIS Quarterly, 22(3), 363-408.

Lin, C., Huang, Y., & Tseng, S. (2007). A study of planning and implementation stages in electronic commerce adoption and evaluation: The case of Australian SMEs. Contemporary Management Research, 3(1), 83-100.

Lin, C., Lin, K., Huang, Y., & Kuo, W. (2006). Evaluation of electronic customer relationship management: The critical success factors. The Business Review, 6(2), 206-212.

Lin, C., & Pervan, G. (2003). The practice of IS/IT benefits management in large Australian organizations. Information and Management, 41(1), 13-24.

Lin, C., Pervan, G., & McDermid, D. (2005). IS/IT investment evaluation and benefits realization issues in Australia. Journal of Research and Practices in Information Technology, 37(3), 235-251.

Love, P. E. D., Irani, Z., Standing, C., Lin, C., & Burn, J. (2005). The enigma of evaluation: Benefits, costs and risks of IT in small-medium sized enterprises. Information and Management, 42(7), 947-964.

Osei-Bryson, K., & Ko, M. (2004). Exploring the relationship between information technology investments and firm performance using regression splines analysis. Information and Management, 42(1), 1-13.
Perrin, B., & Pervan, G. (2004, May). Performance monitoring systems for public sector IT outsourcing contracts. Proceedings of the 15th International Conference of the Information Resources Management Association (IRMA 2004), New Orleans, LA.

Pulliam, D. (2005). Pace of information technology spending to slow, report predicts. National Journal Group Inc. Retrieved from http://www.govexec.com/dailyfed/0505/050405p1.htm

Seddon, P., Graeser, V., & Willcocks, L. (2002). Measuring organizational IS effectiveness: An overview and update of senior management perspectives. The Database for Advances in Information Systems, 33(2), 11-28.

Silverman, D. (2001). Interpreting qualitative data (2nd ed.). London: Sage.

Sohal, A. S., & Ng, L. (1998). The role and impact of information technology in Australian business. Journal of Information Technology, 13, 201-217.

Standing, C., & Lin, C. (2007). Organizational evaluation of the benefits, constraints and satisfaction with business-to-business electronic commerce. International Journal of Electronic Commerce, 11(3), 107-153.

Sugumaran, V., & Arogyaswamy, B. (2004). Measuring IT performance: "Contingency" variables and value modes. The Journal of Computer Information Systems, 44(2), 79-86.

Sullivan, E. W., & Ngwenyama, O. K. (2005). How are public sector organizations managing IS outsourcing risks? An analysis of outsourcing guidelines from three jurisdictions. Journal of Computer Information Systems, 45(3), 73-87.

Thatcher, M. D., & Pingry, D. E. (2004). Understanding the business value of information technology investments: Theoretical evidence from alternative market and cost structure. Journal of Management Information Systems, 21(2), 61-85.

Tsao, H., Lin, K. H., & Lin, C. (2004). An investigation of critical success factors in the adoption of B2BEC by Taiwanese companies. The Journal of American Academy of Business, 5(1/2), 198-202.

van Grembergen, W., & van Bruggen, R. (1998). Measuring and improving corporate information technology through the balanced scorecard. Electronic Journal of Information Systems Evaluation, 1(1). Retrieved from http://is.twi.tudelft.nl/ejise/indpap.html

Ward, J., Taylor, P., & Bond, P. (1996). Evaluation and realization of IT benefits: An empirical study of current practice. European Journal of Information Systems, 4, 214-225.

Willcocks, L., & Lester, S. (1997). Assessing IT productivity: Any way out of the labyrinth? In L. Willcocks, D. F. Feeny, & G. Islei (Eds.), Managing IT as a strategic resource (chap. 4, pp. 64-93). London: The McGraw-Hill Company.

Zhu, K. (2004). The complementarity of information technology infrastructure and e-commerce capability: A resource-based assessment of their business value. Journal of Management Information Systems, 21(1), 167-202.
Further Reading

Anderson, J., & Van Crowder, L. (2000). The present and future of public sector extension in Africa: Contracting out or contracting in? Public Administration and Development, 20, 373-384.

Apte, U. M., Sobol, M. G., Hanaoka, S., Shimada, T., Saarinen, T., Salmela, T., et al. (1997). IS outsourcing practices in the USA, Japan and Finland: A comparative study. Journal of Information Technology, 12, 289-304.

Barthelemy, J. (2003). The hard and soft sides of IT outsourcing management. European Management Journal, 21(5), 539-548.
Barthelemy, J., & Geyer, D. (2004). The determinants of total IT outsourcing: An empirical investigation of French and German firms. The Journal of Computer Information Systems, 44(3), 91-97.

Burnes, B., & Anastasiadis, A. (2003). Outsourcing: A public-private sector comparison. Supply Chain Management: An International Journal, 8(4), 355-366.

Cilek, P., Fanko, W., Koch, S., Mild, A., & Taudes, A. (2004). A hedonic wage model-based methodology for evaluating the benefits of IT investments in public-sector organisations. The Journal of Enterprise Information Management, 17(4), 269-275.

Currie, W. (1996). Outsourcing in the private and public sectors: An unpredictable IT strategy. European Journal of Information Systems, 4, 226-236.

Dean, A. M., & Kiu, C. (2002). Performance monitoring and quality outcomes in contracted services. International Journal of Quality & Reliability Management, 19(4), 396-413.

Dos Santos, B. L. (1994). Assessing the value of strategic information technology investments. In L. Willcocks (Ed.), Information management: The evaluation of information systems investments. London: Chapman & Hall.

Douglas, V. (1999, May 10). Fahey under fire over fat fees and dubious savings. Informationweek, pp. 20-25.

Fink, D., & Shoeib, A. (2003). Action: The most critical phase in outsourcing information technology. Logistics Information Management, 16(5), 302-311.

Gordon, M. L., & Walsh, T. P. (1997). Outsourcing technology in government: Owned, controlled, or regulated institutions. Journal of Government Information, 24(4), 267-283.
Hirschheim, R., & Lacity, M. (2000). The myths and realities of information technology insourcing. Communications of the ACM, 43(2), 99-107.

Huang, Y., Lin, C., & Lin, H. (2005). Techno-economic effect of R&D outsourcing strategy for small and medium-sized enterprises: A resource-based viewpoint. International Journal of Innovation and Incubation, 2(1), 1-22.

Kakabadse, A., & Kakabadse, N. (2001). Outsourcing in the public services: A comparative analysis of practice, capability and impact. Public Administration and Development, 21, 401-413.

Kuo, W., Lin, C., Hsu, G., & Huang, Y. (2006). An empirical study of resource contribution in SMEs alliance. Journal of Global Business Management, 2(2), 103-111.

Lacity, M. C., & Willcocks, L. (1997). Information systems sourcing: Examining the privatization option in USA public administration. Information Systems Journal, 7, 85-108.

Laitinen, E. M. (2002). A dynamic performance system: Evidence from small Finnish technology companies. Scandinavian Journal of Management, 18, 65-99.

Lee, B., & Barua, A. (1999). An integrated assessment of productivity and efficiency impacts of information technology investments: Old data, new analysis and evidence. Journal of Productivity Analysis, 12(1), 21-43.

Lewis, B. R., & Byrd, T. A. (2003). Development of a measure for the information technology infrastructure construct. European Journal of Information Systems, 12, 93-109.

Lin, C., & Huang, Y. (2007). An integrated framework for managing eCRM evaluation process. International Journal of Electronic Business, 5(3).

Lin, C., Huang, Y., Cheng, M., & Lin, W. (2007). Effects of information technology maturity on the adoption of investment evaluation methodologies: A survey of large Australian organizations. International Journal of Management, 24(4).

Lin, K., & Lin, C. (2007). Evaluating the decision to adopt RFID systems using analytic hierarchy process. The Journal of American Academy of Business, 11(1), 72-78.

Lin, K., Lin, C., & Tsao, H. (2005). IS/IT investment evaluation and benefit realization practices in Taiwanese SMEs. Journal of Information Science and Technology, 2(4), 44-71.

McIvor, R. (2000). A practical framework for understanding the outsourcing process. Supply Chain Management, 5(1), 22.

Misra, R. B. (2004). Global IT outsourcing: Metrics for success of all parties. Journal of Information Technology Cases and Applications, 6(3), 21-34.

O'Looney, J. (1998). Outsourcing the city: State and local government outsourcing. New York: Quorum Books.

Parker, M. M., Benson, R. J., & Trainor, H. E. (1988). Information economics. London: Prentice Hall.

Remenyi, D., Sherwood-Smith, M., & White, T. (1997). Achieving maximum value from information systems: A process approach. Chichester, England: John Wiley & Sons.

Standing, C., Burn, J., & Lin, C. (2006). Case study: Information systems in Western Australian universities. Australasian Journal of Information Systems, 14(1), 251-260.

Standing, C., Guilfoyle, A., Lin, C., & Love, P. E. D. (2006). The attribution of success and failure in IT projects. Industrial Management and Data Systems, 106(8), 1148-1165.

Strassman, P. A. (1990). The business value of computers. New Canaan: The Information Economics Press.
Tallon, P. P., Kraemer, K. L., & Gurbaxani, V. (2000). Executives' perceptions of the business value of information technology: A process-oriented approach. Journal of Management Information Systems, 16(4), 145-173.

Willcocks, L., & Currie, W. (1997). Contracting-out information technology in the public sector context: Research and critique. Journal of the Australian and New Zealand Academy of Management, 2(2), 34-49.

Willcocks, L., Fitzgerald, G., & Lacity, M. (1996). To outsource IT or not? Recent research on economics and evaluation practice. European Journal of Information Systems, 5, 143-160.

Willcocks, L. P., Lacity, M. C., & Kern, T. (1999). Risk mitigation in IT outsourcing strategy revisited: Longitudinal case research at LISA. Journal of Strategic Information Systems, 8, 285-314.

Young, S. (2005). Outsourcing in the Australian health sector: The interplay of economics and politics. International Journal of Public Sector Management, 18(1), 25-36.
Terms and Definitions

Benchmarking: The identification of historical data against which a data set can be compared now and in the future.

Cost-Benefit Analysis: A technique or approach used to compare the various costs of an initiative with its expected benefits.

IT Benefits Realization: A managed and controlled process of benchmarking, implementing, and adjusting expected results, continuously adjusting the path leading from IT investments to expected business benefits.

IT Project: An organizational initiative that employs or produces IT or IT-related assets.
Organizational Memory: Also called corporate knowledge, this refers to the repository where hard data and soft information are stored for future use. The soft information can take the form of tacit know-how, expertise, biases, experiences, and anecdotes.
Preinvestment Justification: Approaches or techniques used to assess an organization's potential IT investments before the decision to invest is made.
Performance Measurement: The process of determining which data will be gathered, analyzed, reported, and used to make business decisions. It is also used to justify business spending, report progress toward established business objectives, and identify areas for improvement.
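As a rough numerical illustration of the cost-benefit analysis defined above, the following Python sketch discounts a hypothetical IT project's yearly costs and benefits to present value and derives a benefit-cost ratio and net present value. All figures, the time horizon, and the discount rate are invented for illustration; they are not drawn from the study reported in this chapter.

```python
# Hypothetical cost-benefit sketch for an IT investment decision.
# Every number below is an illustrative assumption.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: up-front investment; years 1-4: ongoing outlays and expected gains.
costs = [500_000, 50_000, 50_000, 50_000, 50_000]    # outlays per year
benefits = [0, 180_000, 220_000, 240_000, 240_000]   # expected gains per year
rate = 0.07                                          # assumed discount rate

pv_costs = npv(costs, rate)
pv_benefits = npv(benefits, rate)

print(f"PV of costs:        {pv_costs:,.0f}")
print(f"PV of benefits:     {pv_benefits:,.0f}")
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")
print(f"Net present value:  {pv_benefits - pv_costs:,.0f}")
```

A ratio above 1 (equivalently, a positive net present value) indicates that, under the assumed discount rate, the project's expected benefits outweigh its costs; in practice such figures would come from the preinvestment justification techniques defined above.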
Chapter LXV
Performance and Accountability in E-Budgeting Projects Gabriel Puron-Cid Rockefeller College of Public Affairs and Policy, University at Albany, SUNY, USA J. Ramon Gil-Garcia Centro de Investigación y Docencia Económicas, Mexico
Introduction

An influential theoretical tradition in information systems research suggests that information and communication technology (ICT) has the power to transform organizational structures and individual behaviors. This approach has been called "technological determinism." In contrast, recent studies have found evidence of more complex relationships between information technologies and the organizational and institutional contexts in which those technologies are embedded (Fountain, 2001; Kling & Lamb, 2000; Orlikowski & Baroudi, 1991). The theories that Orlikowski and Iacono (2001) have categorized as the "ensemble view" explain that information technologies should not be conceptualized as physical artifacts only, but
that the social relations around those artifacts should also be considered. In addition, the relationship between information technologies and social structures is at least bidirectional, and therefore organizational characteristics and institutional arrangements also have an impact on government ICT projects (Fountain; García, 2005; Kraemer, King, Dunkle, & Lane, 1989). As a result of this embedment of ICT in government settings, certain characteristics of the information technologies are expected to reflect important aspects of the institutional and organizational environment and, therefore, help preserve the status quo instead of promoting change (Fountain; Kraemer et al.). In the last decade, the Mexican federal government has attempted to significantly transform its administrative processes and improve the quality
of the services it provides through the use of ICT (Organisation for Economic Co-operation and Development [OECD], 2003; Puron-Cid & Gil-García, 2004). This trend toward service quality and results (performance) has been reflected in numerous organizational transformations and in the implementation of performance-oriented budgeting (Arellano-Gault & Gil-García, 2004; Arellano-Gault & Puron-Cid, 2004; Petrei, 1997; World Bank, 1997). Accountability and control are, however, perennial concerns for government agencies, particularly for the Ministry of Finance, and therefore some tensions between performance and accountability will exist within budgetary reform. Based on the analysis of three federal initiatives, this chapter argues that, because of the embedment of ICT in government institutional and organizational environments, the tensions between performance and accountability are also reflected in the goals, features, and functionality of e-budgeting projects (see "Key Terms"). Furthermore, the prevalence of accountability for finance and fairness (accountability bias) already identified in the literature (Behn, 2001) is also reflected in the formal goals, general characteristics, and technical capabilities of the e-budgeting systems. The cases thus support the general hypothesis that information technologies do not necessarily have the power to transform government radically, at least not in the case of e-budgeting initiatives. This chapter is organized into six sections, including this introduction. The second section provides an overview of the importance of accountability and its tension with performance; it also explains the accountability-bias argument. The third section describes the research method and design. Next, the chapter presents the analysis of three e-budgeting projects in Mexico, highlighting specific aspects of these projects related to performance and accountability.
The fifth section identifies certain future trends in relation to this topic, and finally, the sixth section provides final comments and suggests areas for additional research.
Background: Accountability and Performance

Scholars and practitioners in government have emphasized the tensions between accountability and performance, where accountability usually prevails over performance, producing what is called accountability bias (Behn, 2001). The current characteristics of ICT applications in government reflect multiple power struggles and negotiations among the actors involved in making decisions and creating rules regarding the design, implementation, and use of these systems (Bovens & Zouridis, 2002; Fountain, 2001; Heeks, 1998; Kraemer & King, 2003; Newcomer & Caudle, 1991; Puron-Cid & Gil-García, 2004). The prevalence of accountability is thus expected to be reflected in some characteristics of government IT projects as well. In the following sections, we define accountability and its bias. Because performance is a multidimensional concept, we consider accountability bias to be any kind of impact on performance that results from privileging accountability in the design, development, and use of ICT applications in government.
Defining Accountability

Accountability has been considered one of the most important features of democratic governments (Behn, 2001). In recent reviews of the history of different governments, Light (1993) and Rosenbloom (2001) considered accountability one of the main democratic values. In fact, the key actors in this democratic scene, such as Congress, the Executive, and the Bureaucracy, must interpret and practice accountability, and they usually do so in different ways (Aberbach & Rockman, 2000; Wood & Waterman, 1991). Scholars have offered several distinct definitions and classifications of accountability (Behn; Fesler & Kettl, 1991; Shafritz, 1988). Based on three e-budgeting initiatives, this chapter demonstrates how government information systems reflect the tensions between
what Behn calls different forms of accountability: finance, fairness, use of power, and performance (see “Key Terms”).
Tensions between Accountability and Performance

Traditionally, political and organizational systems have established rules, procedures, and standards to accomplish different types of accountability (Aberbach & Rockman, 2000; Anechiarico & Jacobs, 1996; Behn, 2001; Derthick, 1990; Kearns, 1996, 1998; Light, 1993). In the case of the budgetary process and the management of public resources, holding people accountable for performance while also holding them accountable for finance and fairness creates a dilemma: Accountability rules for finance and fairness can hinder performance (Behn). This tension is considered one of the classic dilemmas in public administration and has been described in different ways by different researchers (Self, 1972). Mosher (1980) depicted the problem as a conflict between accountability and the values of experimentation, inventiveness, and risk taking. Moe (1984) explained that bureaucrats are constrained by multiple rules and control mechanisms that have a negative impact on their work, activities, responsibilities, and overall performance. Romzek and Dubnick (1987) noted that "the essence of this dilemma was the inability of entities to be accountable [to] many masters and manage the government's business under conditions of multiple accountability relationships and systems." March and Olsen (1995) observed this tension as risk aversion, in the sense of paying more attention to finance and fairness in the short term than to performance in the long term. Anechiarico and Jacobs (1996) suggested a trade-off between maintaining the integrity of political decisions (accountability) and attaining public administration efficiency (performance). Finally, this tension is also present in the tide of reforms sponsored by international organizations, reforms that can bring potential conflicts between traditional notions of public administration and accountability and the emerging forms of public management that strongly stress performance (Arellano-Gault, 1999; Behn, 2001; Light, 1997; Lynn, 1998; OECD, 1995; Riccucci, 2001; World Bank, 1997).
Accountability Bias

The accountability environment of government can be described as a constellation of forces—legal, political, sociocultural, and economic—that place pressure on organizations and the people working in them to engage in certain activities and refrain from others (Kearns, 1996). Behn (2001) describes these tensions as accountability bias: It is easier and clearer to codify accountability for finance or fairness in explicit rules than accountability for performance, which presents the challenge of defining multidimensional aspects of public organizations and requires performance measurements to be created or defined before they can be incorporated into the systems. It can therefore be expected that accountability holders concentrate their efforts and resources on developing and using ICT infrastructure and tools to monitor finance and fairness, giving much less attention to performance.
Performance and Accountability in E-Budgeting Initiatives

Three e-budgeting projects are included in our analysis: (a) the Digital Signature for Budgetary Transactions and Procedures Project, (b) the Integral Process for Programming and Budgeting Project, and (c) the Government Strategic Planning Project. The following sections describe each of these initiatives, emphasizing aspects related to performance and accountability as reflected in the formal documents, general characteristics,
and technical features of these systems. Some background information about the budgetary reform in Mexico is also provided.
Budgetary Reform and E-Budgeting Projects

During the last decade, many governments have attempted to transform their organizational structures and improve the quality of their services by using ICT as an instrument for administrative reform (Kraemer & King, 2003; OECD, 2002, 2003). In Mexico, the budget process has been part of this wave of changes since 1998, through an initiative called Budgetary Reform (Giugale, Lafourcade, & Nguyen, 2001; e-Strategia Consulting Group, 2002a; World Bank, 1997). These changes included new budgetary rules (performance-oriented budgeting) and ICT tools for budgetary control and decision making (Arellano-Gault & Gil-García, 2004; Arellano-Gault & Puron-Cid, 2004; Petrei, 1997; Puron-Cid & Gil-García, 2004; World Bank). The new e-budgeting systems in Mexico were designed to endow public managers with more rational tools to calculate their annual budgets based on functional categories, performance metrics, and expected results. ICT applications were expected to play an important role in the implementation of this new model for the budget cycle (OECD, 1995; e-Strategia Consulting Group, 2002a).
Digital Signature for Budgetary Transactions and Procedures Project

During 2002, the Ministry of Finance (SHCP) created a project team to modernize budgetary transactions and procedures. This team discovered several problems: (a) slow response to budgetary procedures, (b) impractical budgeting procedures for users, (c) discretionary application of budgetary rules, and (d) lack of analysis of the budgetary impact
on government goals and objectives. In sum, there were too many document-oriented official procedures, causing inefficiencies and low performance (SHCP, 2003f). The team proposed using ICT applications to improve government performance while still maintaining budgetary control according to the existing regulations and responsibilities of the Ministry of Finance. The Digital Signature for Budgetary Transactions and Procedures Project (DSBT) was designed and implemented in fiscal year 2003, and it is still operating with few adjustments. Today, most budgetary transactions are processed through this system. By analyzing the system manual and some budgetary regulations (SHCP, 2002a; SHCP, 2003e), it was possible to identify the following specific features. The goal of the DSBT is to facilitate the management of budgetary appropriations (SHCP, 2003f). Specifically, this e-budgeting project attempts to improve communication and the workflow between the Ministry of Finance and all other involved public organizations. This ICT tool also seeks delegation and flexibility of budgetary operations (BO) without losing data control, transaction security, or process transparency. Digital signatures are used to ensure security standards during transaction transmissions, auditing requirements, and supervising protocols. We argue that accountability was the most important criterion in defining the system requirements and priorities for the DSBT in terms of rapid response, multiuser utility, deregulation, security, and a paperless orientation. Based on an analysis of the SHCP official information, namely, manuals, regulations, procedures, memorandums, and presentations, we were able to identify the following system features that match the accountability-bias description.
Performance and Accountability in E-Budgeting Projects
According to the diagnostic report (SHCP, 2003f) and the budgetary regulations (SHCP, 2002a), the system was designed to improve administrative efficiency for budgetary procedures and the analysis of the impact of budgetary adjustments on government goals. The manual suggests that the system does improve administrative efficiency by translating and programming the budgetary regulations into the system's rules (SHCP, 2002b). However, the system does not provide any application to evaluate the impact of these transactions on government performance. The system is mostly finance oriented and of little use to agencies for evaluating their own performance. It is almost exclusively useful for the SHCP and its focus on budgetary control and regulations. The diagnostic report (SHCP, 2003f) and the rules for the Ministry of Finance (SHCP, 2003e) advocate the need to provide tools for external users, such as Congress, citizens, and other agencies. For technical reasons offered in the manual (SHCP, 2002b), however, the final features of the system mainly reinforce the dominant legal authority and the roles of the executives of the Ministry of Finance over the public agencies for many budgetary procedures. Therefore, the system is limited to the Ministry's procedures and administrative needs (mostly related to accountability and control) and does not include appropriate features to evaluate the performance of the agencies. In none of the manuals and regulations is there evidence of any application for long-term budget analysis, nor for assessing the budgetary effects on the performance of programs or projects. The budgetary information is limited to the current fiscal year. With the present tools in the system, it is not possible to evaluate previous budgets or budgeting trends, or to perform comparative analyses for detecting problems, identifying alternatives, or proposing solutions. The system focuses mainly on the short-term activities of the Ministry of Finance and does not contain, as initially planned, tools for other agencies to evaluate their activities in relation to their budgets.
Integral Process for Programming and Budgeting Project
The Ministry of Finance analyzes programming and budgeting procedures annually (SHCP, 2003d). The main problems identified during the 2004 budget process were (a) the lack of internal coordination and collaboration among SHCP subunits and (b) the large number and diversity of public organizations that had to align their budget projects (public resources) with proper programming structures (performance goals). Among the results of these problems were (a) useless planning exercises by the agencies that were incompatible with their programming structures, (b) conflicting and disjointed budget project integration, and (c) budgets that were unrealistic relative to real public financial needs (SHCP, 2003b). The Integral Process for Programming and Budgeting Project (IPPB) attempts to improve budgetary management as a sequential and integral process among the different levels of authority in the Ministry of Finance and the public agencies. The IPPB includes three modules: (a) programming, (b) integration and authorization, and (c) authorized appropriation and accountability reports (SHCP, 2003b). Given the complexity of the analysis the IPPB involves (Puron-Cid & Gil-García, 2004), the present section focuses only on the first module, programming, in which the budget project is integrated for Congress through the Integration Budget Project (IBP) procedure. The IBP consists of budgetary allocation according to a strategic plan, programming structure, and financial needs. The system is an interactive Web application available to public administrators and Ministry executives with specific profiles and levels of access. Every IBP document must be registered in the system to comply with the procedure described in the manual (SHCP, 2003d).
After examining the manual and the regulations (SHCP, 2003a, 2003b, 2003c, 2003d), we found evidence of the accountability-bias effect in the IPPB system. Previous analysis established the need for coordination in integrating any budget project for Congress, in terms of congruency between strategic planning, programming structure, and performance goals, in order to allocate resources. However, the final design and use of the system highlight the operational needs of the Ministry of Finance more than the needs of Congress and the agencies to bridge their budgets with their plans, programming structures, and financial needs. The Ministry of Finance reinterpreted Congress's information requirements in terms of more budgetary controls that favor the Ministry over the agencies. This interpretation of the role of the Ministry of Finance in budgetary operations made the IPPB much more a finance-oriented tool for the Ministry than a performance and planning tool for other agencies. In addition, the original design proposed an integral and interactive application to support federal agencies in the integration process. However, the position and responsibilities of the Ministry of Finance over public funds prevailed in the final features of the IPPB. In the manuals and guides (SHCP, 2003a, 2003c, 2003d), it is common to find applications for monitoring the integration process across the agencies and the annual controls for budgetary balance, rather than reporting tools that offer information or assistance for programming or performance evaluation. The IPPB was significantly influenced by the predominant administrative role of the Ministry of Finance, which is shaped by accountability regulations over the course of the whole budgetary cycle.
Government Strategic Planning Project
After the first budget experience of the present administration in 2000, the President's Office (Oficina de la Presidencia de la República, OPR) and the Ministry of Finance perceived that executive policies were dispersed and disjointed throughout the federal government structure (e-Strategia Consulting Group, 2002b). The budgetary process therefore became a strategic factor for the success of the President's strategies and plans (e-Strategia Consulting Group, 2002a). The objective of the Government Strategic Planning Project (GSP) is to facilitate the strategic planning process among the executive and operative levels of government through the budgetary process (e-Strategia Consulting Group, 2002a). The GSP is supported by an external provider who collaborates directly with the OPR and the Ministry of Finance. The GSP provides ICT tools for planning across agencies in order to define standard strategic information. The role of the executive, through the OPR, is to establish a group of projects for each agency to be included in the next programmatic structure and budget project (e-Strategia Consulting Group, 2002b). The OPR is responsible for defining the methodology, formats, and norms. The Ministry of Finance is responsible for integrating these elements into the programming process that eventually appears in the budget project chapter. The GSP was designed to assist public executives with project-management tools and the monitoring of strategic planning and budget projects, including a panel of indicators for strategic and administrative performance. After examining the manuals and guides (e-Strategia Consulting Group, 2002a, 2002b), we found evidence of an accountability-bias impact on the GSP.
The GSP design prioritizes the articulation of efforts among the OPR, the Ministry of Finance, and the agencies to bridge the planning process to the budgetary cycle. However, the dominant role of the Ministry of Finance, through its reinterpretation of information requirements and regulations and its constant emphasis on budgetary control, significantly influenced the final OPR decisions toward more finance-related tools in the system. Similarly, the fact that the agencies' budgetary operations run through Ministry of Finance procedures helped confirm the Ministry's domination of the final features of the GSP. The prevalence of control over the agencies' activities, the need for administrative effectiveness in these processes, and the annual, short-term integration of the budget ultimately reinforced the existing role of the Ministry of Finance, leaving less room for new initiatives from the OPR. The GSP was finally shaped by the dominant purposes of the Ministry of Finance's agenda and its operational needs. The finance-oriented accountability in the regulations and the role of the Ministry of Finance were incorporated into the system more clearly than the President's need to oversee the performance of the agencies.
Future Trends
There is increasing investment in technology for government operations and services in both developed and developing countries. Many of these projects attempt to improve government performance and efficiency. The budget cycle has often been part of these new waves of reform, which include more extensive use of ICT. This trend is expected to continue in the near future, leading to more investments and more complex e-budgeting initiatives. However, some institutional and organizational features of government have imposed important restrictions on e-budgeting projects, leaving them neither useful for evaluating the performance of agencies nor able to strengthen congressional or presidential political control. In our cases, the systems were useful only to the Ministry of Finance and were mainly designed for accountability and financial control, even when performance assessment was part of the initial goals and objectives. It is therefore a key issue to identify precisely which factors cause the prevalence of accountability over performance, in order to better understand this phenomenon and increase the value of ICT investments. Despite great expectations about the transformational power of ICTs, it seems that organizational and institutional constraints will continue to have an important influence on the design, implementation, and use of e-government initiatives in general and e-budgeting projects in particular.
Conclusion
Across the case studies examined here, the role of the Ministry of Finance in the budgetary cycle was predominant in the e-budgeting projects, even when the original objectives of the systems included important performance aspects. The prevalence of accountability for finance, with much less attention to performance, shaped systems that consistently serve as administrative and control tools for the executives of the Ministry of Finance rather than providing useful applications for agency managers to support their daily budgetary tasks and evaluate their organizations' performance. In addition, the use of these systems favored the Ministry of Finance's operations and helped enforce the existing budgetary regulations instead of improving political control by Congress or the Presidency. Therefore, the systems were useful in terms of accountability and control only for the relationship between the Ministry of Finance (as principal) and other agencies (as agents), but not for the relationship
between the Ministry of Finance (as an agent) and its principals. The focus on accountability for finance is also a common characteristic of e-budgeting systems and reinforces the continuity of existing models of administration and the centralization of rules, procedures, and processes. The existing technical and organizational features suggest that the designers of the systems were not able to change most of the existing rules or to incorporate creativity, experimentation, inventiveness, and risk taking into the characteristics of the e-budgeting systems. Instead, the systems concentrated mostly on administrative control and accountability for finance, which were precisely the most important values for the Ministry of Finance. There was no evidence of appropriate tools to evaluate the consistency of public spending relative to the political decisions of Congress or the Executive. In all the cases, the systems were designed to manage the short-term projects and low-risk activities involved in the budgetary cycle rather than long-term and high-risk performance evaluation, which is normally more difficult to measure.
Future Research Directions
A common practice in the literature has been to treat information technology as a homogeneous theoretical construct with uniform and unidirectional effects on organizational and social settings. One challenge in this area is to find adequate ways to classify projects, operationalize measures, collect data, and test results in order to draw comparable generalizations across diverse types of ICT projects and different governmental and managerial needs, including budgeting and financial management. Future studies should develop such classifications and test how the effects of institutional and organizational factors are similar or different in various organizational contexts and among organizations dealing with different problems. Similarly, the accountability bias was a useful framework for the analysis of e-budgeting projects. Future research should explore the usefulness of this theoretical framework for studying other types of e-government projects, including interorganizational initiatives at different levels of government. This chapter used document analysis as its primary research method; documents such as system manuals and budgetary regulations were examined. Using this method, we were able to identify the specific features, system requirements, and priorities of each project and to evaluate the prevalence of accountability over efficiency (response time, functionality, deregulation, security, and paperless orientation). However, there are limitations to document analysis in particular and to any single research method in general. There is a great opportunity for multimethod approaches combining qualitative and quantitative methods. Future research should analyze, through such multimethod approaches, the factors that produce the accountability bias in e-budgeting projects (Gil-García & Pardo, 2006). Finally, conceptual frameworks in disciplines such as sociology, psychology, communications, and economics may help explain how public managers and executives define formal goals, make decisions about general characteristics, and define the technical capabilities of e-budgeting systems in government. A multidisciplinary discussion would enrich the field with complementary and contrasting concepts and approaches to the study of information and communication technologies in organizational settings. Future studies should explore these opportunities.
Acknowledgment
This work was partially supported by the National Science Foundation under Grant No. 0131923. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
References
Aberbach, J. D., & Rockman, B. A. (2000). In the Web of politics: Three decades of the U.S. federal executive. Washington, DC: Brookings Institution Press. Anechiarico, F., & Jacobs, J. B. (1996). The pursuit of absolute integrity: How corruption control makes government ineffective. Chicago: University of Chicago Press. Arellano-Gault, D. (1999). De la administracion publica a la nueva gestion publica: Cinco dilemas. Revista Conmemorativa del Colegio Nacional de Ciencias Politicas y Administracion Publica, 1, 35-47. Arellano-Gault, D., & Gil-García, J. R. (2004). Public management policy and accountability in Latin America: Performance-oriented budgeting in Colombia, Mexico, and Venezuela (1994-2000). International Public Management Journal, 7(1), 49-71. Arellano-Gault, D., & Puron-Cid, G. (2004). Mexico. Reforma al sistema presupuestal: Una reforma atrapada por las inercias. In Mas alla de la reinvencion del gobierno: Fundamentos de la nueva gestion publica y presupuestos por resultados en America Latina. Mexico: Editorial Porrua and CIDE. Behn, R. (2001). Rethinking democratic accountability. Washington, DC: Brookings Institution Press. Bovens, M., & Zouridis, S. (2002). From street-level to system-level bureaucracies: How ICT is transforming administrative discretion and constitutional control. Public Administration Review, 62(2), 174-183.
Derthick, M. (1990). Agency under stress: The Social Security Administration in American government. Washington, DC: Brookings Institution Press. e-Strategia Consulting Group. (2002a). Guía de referencia para la administración de proyectos presidenciales en el spe. Author. e-Strategia Consulting Group. (2002b). Manual de orientación del sistema de planeación estratégica y das-g. Author. Fesler, J. W., & Kettl, D. F. (1991). The politics of the administrative process. Fountain, J. E. (2001). Building the virtual state: Information technology and institutional change. Washington, DC: Brookings Institution Press. Gil-García, J. R. (2005). Enacting state Websites: A mixed method study exploring e-government success in multi-organizational settings. Unpublished doctoral dissertation, University at Albany, State University of New York, New York. Gil-García, J. R., & Pardo, T. A. (2006, January). Multi-method approaches to digital government research: Value lessons and implementation challenges. Paper presented at the 39th Hawaii International Conference on System Sciences (HICSS), HI. Giugale, M. M., Lafourcade, O., & Nguyen, V. H. (2001). Mexico: A comprehensive development agenda for the new era. Washington, DC: The World Bank. Heeks, R. (1998). Information systems and public sector accountability. Manchester, United Kingdom: Institute for Development Policy and Management, University of Manchester. Kearns, K. P. (1996). Managing for accountability. Jossey-Bass. Kearns, K. P. (1998). Institutional accountability in higher education: A strategic approach. Public Productivity and Management Review, 22(2), 141.
Performance and Accountability in E-Budgeting Projects
Kling, R., & Lamb, R. (2000). IT and organizational change in digital economies: A sociotechnical approach. In E. Brynjolfsson & B. Kahin (Eds.), Understanding the digital economy: Data, tools, and research. Cambridge, MA: The MIT Press. Kraemer, K. L., & King, J. L. (2003, September). Information technology and administrative reform: Will the time after e-government be different? Paper presented at the Heinrich Reinermann Schriftfest, Post Graduate School of Administration, Speyer, Germany. Kraemer, K. L., King, J. L., Dunkle, D. E., & Lane, J. P. (1989). Managing information systems: Change and control in organizational computing. San Francisco: Jossey-Bass. Light, P. (1993). Monitoring government: Inspectors general and the search for accountability. Washington, DC: Brookings Institution Press. Light, P. (1997). The tides of reform: Making government work, 1945-1995. New Haven, CT: Yale University Press. Lynn, L. E. (1998). A critical analysis of the new public management. International Public Management Journal, 1(1), 107-123. March, J. G., & Olsen, J. P. (1995). Democratic governance. New York: Free Press. Moe, T. M. (1984). The new economics of organization. American Journal of Political Science, 28, 739-777. Mosher, F. C. (1980). The changing responsibilities and tactics of the federal government. Public Administration Review, 40(6). Newcomer, K. E., & Caudle, S. L. (1991). Evaluating public sector information systems: More than meets the eye. Public Administration Review, 51, 377-384. Organisation for Economic Co-Operation and Development (OECD). (1995). Budgeting for results:
Perspectives on public expenditure management. Paris: Author. Organisation for Economic Co-Operation and Development (OECD). (2002). OECD e-government project. Paris: Author. Organisation for Economic Co-Operation and Development (OECD). (2003). The e-government imperative. Paris: Author. Orlikowski, W. J., & Baroudi, J. J. (1991). Studying information technology in organizations: Research approaches and assumptions. Information Systems Research, 1-29. Orlikowski, W. J., & Iacono, C. S. (2001). Research commentary: Desperately seeking the "IT" in IT research: A call to theorizing the IT artifact. Information Systems Research, 12, 121-134. Petrei, H. (1997). Presupuesto y control: Pautas de reforma en América Latina. Washington, DC: Banco Interamericano de Desarrollo. Puron-Cid, G., & Gil-García, J. R. (2004). Enacting e-budgeting in Mexico. Public Finance and Management, 4(2), 182-217. Riccucci, N. (2001). The old public management versus the new public management: Where does public administration fit in? Public Administration Review, 61, 172-175. Romzek, B. S., & Dubnick, M. J. (1987). Accountability in the public sector: Lessons from the Challenger tragedy. Public Administration Review, 47(3), 228. Rosenbloom, D. H. (2001). History lessons for reinventors. Public Administration Review, 61, 161-165. Self, P. (1972). Administrative theories and politics. Shafritz, J. M. (1988). The Dorsey dictionary of American government and politics. Dorsey Press.
SHCP. (2002a). Lineamientos normativos y de operación para la transferencia en línea de las adecuaciones presupuestarias internas vía Internet (Internal Document No. VIII-2002). Author. SHCP. (2002b). Manual de normas presupuestarias para la administración pública federal (D.O.F. Publication No. 3-IX-2002). Author. SHCP. (2003a). Guia de acceso al proceso integral de programacíon y presupuesto (Internal Document No. 31-VII-2003). Author. SHCP. (2003b). Guía funcional del proceso integral de programación y presupuesto (Internal Document No. VII-2003). Author. SHCP. (2003c). Lineamientos operativos de concertación de estructuras programáticas 2004 (Internal Document No. 04-VII-2003). Author. SHCP. (2003d). Manual de programación y presupuesto (Internal Document No. X-2003). Author. SHCP. (2003e). Oficio circular por el que se establecen los lineamientos para la operación de los sistemas electrónicos de la subsecretaría de egresos, mediante la utilización de firma electrónica (D.O.F. Publication No. 05-III-2003). Author. SHCP. (2003f). Presentación para capacitación del sistema sicp-sicgp con firma electrónica (Internal Document No. 31-III-2003). Author. Wood, B. D., & Waterman, R. W. (1991). The dynamics of political control of the bureaucracy. American Political Science Review, 85(3), 801-828. World Bank. (1997). World report. Washington, DC: Author.
Further Reading
Atkinson, R. D., & Leigh, A. (2003). Customer-oriented e-government: Can we ever get there? In
G. G. Curtin, M. H. Sommer, & V. Vis-Sommer (Eds.), The world of e-government (pp. 159-181). New York: Haworth Press. Attewell, P. (2001). The first and second digital divides. Sociology of Education, 74(3), 252-259. Brown, M. M., & Brudney, J. L. (1998). Public sector information technology initiatives: Implications for programs of public administration. Administration & Society, 30(4), 421-442. Carvin, A. (2004). E-government for all: Ensuring equitable access to online government services. Newton, MA: EDC Center for Media & Community & NYS Forum, Rockefeller Institute of Government. Chengalur-Smith, I., & Duchessi, P. (1999). The initiation and adoption of client-server technology in organizations. Information & Management, 35, 77-88. Clark, D. D., Gillett, S. E., Lehr, W., Sirbu, M., & Fountain, J. (2003). Local government stimulation of broadband: Effectiveness, e-government, and economic development. KSG. Cresswell, A. M. (2004). Return on investment in information technology: A guide for managers. Albany, NY: Center for Technology in Government, University at Albany, SUNY. Cresswell, A. M., & Pardo, T. A. (2001). Implications of legal and organizational issues for urban digital government development. Government Information Quarterly, 18, 269-278. Dawes, S. S., Gregg, V., & Agouris, P. (2004). Digital government research: Investigations at the crossroads of social and information science. Social Science Computer Review, 22(1), 5-10. Dawes, S. S., Pardo, T., & DiCaterino, A. (1999). Crossing the threshold: Practical foundations for government services on the World Wide Web. Journal of the American Society for Information Science, 50(4), 346-353.
Dawes, S. S., Pardo, T. A., Simon, S., Cresswell, A. M., LaVigne, M., Andersen, D., et al. (2004). Making smart IT choices: Understanding value and risk in government IT investments. Albany, NY: Center for Technology in Government. Fletcher, P. D., Holden, S. H., & Norris, D. F. (2001). E-government: Planning, funding, and outsourcing. Washington, DC: International City/County Management Association. Garson, G. D. (Ed.). (2003). Public information technology: Policy and management issues. Hershey, PA: Idea Group Publishing. Gil-Garcia, J. R., & Helbig, N. (2006). Exploring e-government benefits and success factors. In A.V. Anttiroiko & M. Malkia (Eds.), Encyclopedia of digital government. Hershey, PA: Idea Group Inc. Gil-Garcia, J. R., & Luna-Reyes, L. F. (2006). Integrating conceptual approaches to e-government. In M. Khosrow-Pour (Ed.), Encyclopedia of e-commerce, e-government and mobile commerce. Hershey, PA: Idea Group Inc. Gil-Garcia, J. R., & Pardo, T. A. (2005). E-government success factors: Mapping practical tools to theoretical foundations. Government Information Quarterly, 22(2), 187-216. Gubbins, M. (2004, April 8). Global IT spending by sector. Computing, p. 28. Heeks, R. B. (2006). Implementing and managing eGovernment: An international text. London: Sage Publications. Heeks, R. B., & Bhatnagar, S. C. (2001). Understanding success and failure in information age reform. In R. B. Heeks (Ed.), Reinventing government in the information age (pp. 49-74). London: Routledge. Holden, S. H. (2003). The evolution of information technology management at the federal level: Implications for public administration. In G. D. Garson
(Ed.), Public information technology: Policy and management issues (pp. 53-73). Hershey, PA: Idea Group Publishing. Jennings, E. T. (2002). E-government and public affairs education. Chinese Public Administration Review, 1(3/4). Lazer, D., & Binz-Scharf, M. C. (2004). Information sharing in e-government projects. Boston: National Center for Digital Government, Harvard University. Lee, G., & Perry, K. L. (2002). Are computers boosting productivity? A test of the paradox in state governments. Journal of Public Administration Research and Theory, 12(1), 77-102. Luna-Reyes, L. F., Zhang, J., Gil-Garcia, J. R., & Cresswell, A. M. (2005). Information systems development as emergent socio-technical change: A practice approach. European Journal of Information Systems, 14(1), 93-105. McDaniel, E. A. (2003). Facilitating cross-boundary leadership in emerging e-government leaders. InSITE: “Where Parallels Intersect.” Pardo, T. A., Cresswell, A. M., Thompson, F., & Zhang, J. (2006). Knowledge sharing in crossboundary information system development in the public sector. Information Technology and Management, 7(4), 293-313. Pinsonneault, A., & Kraemer, K. L. (2002). Information technology and middle management downsizing: A tale of two cities. Organization Science, 13(2), 191-208. Rocheleau, B. (2000). Prescriptions for public-sector information management: A review, analysis, and critique. American Review of Public Administration, 30(4), 414-435. West, D. M. (2005). Digital government: Technology and public sector performance. Princeton, NJ: Princeton University Press.
Zhang, J., Cresswell, A. M., & Thompson, F. (2002). Participants' expectations and the success of knowledge networking in the public sector. Paper presented at the AMCIS Conference, TX.
Terms and Definitions

Accountability: According to Robert Behn (2001), individuals exercise accountability in four ways: for finance, when we establish detailed expectations about how public officials will handle public resources; for fairness or equity, when we create values, principles, and ethical standards to ensure that government and its employees treat citizens fairly; for use (or abuse) of power, when we create rules to limit the discretion of public officials and to prevent those officials from abusing their power and discretion in finances or fairness; and finally, for performance, when we define the expectations for the actual outcomes that public officials will achieve using public resources and their invested power.

Accountability Bias: The expected partiality of accountability holders who are accustomed to concentrating their efforts and resources on building ICT infrastructure for monitoring finance and fairness, thus giving much less attention to performance. Codifying accountability for finance or fairness in explicit rules is normally easier than codifying accountability for performance in government.

Administrative Reform: Any initiative to improve the management performance of a public organization or institution in government by using managerial techniques, best practices, and recommendations.

Budgetary Cycle: The process of managing funds and resources during a period of time; it involves planning, programming, integrating, presenting a project, authorizing appropriations, calendaring funds, and controlling, evaluating, and auditing according to a set of rules, systems, procedures, transactions, standards, manuals, policies, and regulations.

Digital Signature: Any encryption procedure intended to secure information or transactions and to serve as the equivalent of a handwritten signature.

E-Budgeting: Any ICT application or tool for budgetary functions, procedures, or services across the budgetary cycle (planning, programming, budgeting, appropriations, control, and evaluation of financial resources).

E-Project: Any initiative that involves the design, implementation, and use of ICT applications for institutional or organizational purposes.

ICT Embedment: The term describes how information technologies do not operate in a vacuum or in isolation from their context. Instead, information technologies affect and are affected by organizational structures and processes, as well as institutional arrangements.

Performance: A set of measurements that indicate the extent to which an individual, group, organization, or institution is achieving a goal or set of goals.
Chapter LXVI
A Model for Reengineering IT Job Classes in State Government Craig P. Orgeron Mississippi Department of Information Technology Services, USA
introduction For public-sector administrators burdened with the task of recruiting and retaining information technology (IT) professionals, these are difficult times. A shortage of IT personnel combined with intense demand for new technology skills has made recruiting and retaining staff harried pursuits; additionally, the demand for technical workers in corporate America keeps rising (Pawlowski, Datta, & Houston, 2005). This demand for workers with leading-edge IT skills is exacerbated by the looming retirement of seasoned government workers, estimated at a 30% reduction in public-sector workforce across state governments by 2006 (Council of State Government [CSG], 2002). Despite the mounting demands on IT to be leveraged in state governments as a cost-reducing and efficiency-increasing tool (Levinson, 2003), even with economic recovery (Information
Technology Association of America [ITAA], 2004), many scholars and industry analysts fear a widening shortage of available IT professionals (Pawlowski et al.). Pawlowski et al. suggest that for state governments, more so than private-sector firms, the problem has become acute, heightened not only by recruitment and retention barriers (CSG, 2000) but also by an ideological backlash against contracting for IT services with offshore firms (Hira, 2004). In recent research, barriers to recruitment (low base salaries, lack of qualified candidates, a poor image of civil service, and limited advancement opportunities) and retention (inability to compete with the private sector, low base salaries, insufficient reward systems, and lack of advancement opportunities) were documented through a survey of 400 IT professionals working in state agencies and universities (Pawlowski et al.). The ubiquitous nature of information technology at all levels of government and the
core requirement to recruit and retain qualified technology professionals call for an expansion of the body of research; such research can provide invaluable insight into success and failure in public-sector information technology human resource practices. The intent of this research is to apply DeMers's (2002) seven-pronged approach to critically examine Mississippi state government agencies, with the expected result of assessing the effectiveness and efficiency of the IT personnel classification system. This leading-edge and highly effective IT personnel classification system, designed specifically to improve IT recruitment and retention, was implemented by the State of Mississippi in partnership with the Hay Group, an internationally known human resource consultancy.
Background

Information technology has become crucial to the innovation of government service delivery in the day-to-day operations of many public-sector agencies at the federal, state, and local levels (Heiman, 2002; Modesitt, 2002). Indeed, public-sector agencies at all levels depend on information technology in order to accomplish their varied missions (Babcock, Bush, & Lan, 1995; Fletcher, 2000). State governments in particular have invested heavily in information technology: at least 420,000 IT personnel, representing more than 20% of executive-branch state workers, are employed throughout the 50 states (Brown & Brudney, 1998). Various authors have noted the tension between private- and public-sector organizations in recruiting and retaining experienced IT professionals (Agarwal & Ferratt, 1999). Many scholars have discussed and agreed that examination of existing public and private IT recruitment and retention strategies can aid in the formulation of a more effective, forward-thinking human resource
policy (DeMers, 2002; Lan, Riley, & Cayer, 2005; Pawlowski et al., 2005). Indeed, DeMers sets forth a seven-pronged approach to IT recruitment and retention, created from an assimilation of “realistic government strategies” (DeMers, p. 28). DeMers's best-practice approach gives public-sector agencies the ability to build human resource policies that compete with the private sector in recruiting and retaining the best and brightest IT professionals. According to a United States Department of Commerce report (1998) entitled America's New Deficit: The Shortage of Information Technology Workers, 1,134,000 new IT positions will be created between 1996 and 2006, and an additional 240,000 already existing positions will have to be filled due to retirements. According to the America's New Deficit study, the reasons for the shortage are varied, including a general lack of interest in the field and a belief that there is an excess of IT workers already. According to a related United States Department of Commerce report entitled The Digital Work Force: Building Information Technology Skills at the Speed of Innovation (1999), information technologies represent more than 25% of the growth in the current U.S. economy, and the entire information industry employs almost 7.4 million people with an average salary of $64,000. The Digital Work Force study also reports that job growth in the IT field will increase at a rate of 137,800 jobs per year; an industry that grows so fast needs workers, yet fewer people are entering the IT field. Inherent differences between managing public and private IT organizations have long been recognized (Cats-Baril & Thompson, 1995). Businesses, individual citizens, and nongovernmental organizations empowered by the Internet are creating new challenges that public agencies will not be able to meet unless they become much more technologically savvy (Naim, 2000).
State governments are faced not only with a shortage of information technology workers, but also with the prospect of losing out to corporations in the
competition to hire these workers. Government agencies simply do not have the budget, flexibility, or compensation infrastructure to offer company cars and huge bonuses, and stock options are nonexistent (Goodridge, 2000; Pawlowski et al., 2005). In recent years, a wealth of research has emerged identifying the primary reasons information technology employees leave their employers. These factors fall into five broad categories: (a) poor quality of work life, (b) compensation below market level, (c) lack of training and development opportunities, (d) bad management, and (e) an inability to advance one's career without going into management (Longenecker & Scazzero, 2003). Additionally, researchers have found that the “civil service system emphasizing rules and regulations, control systems, political context, and limited autonomy and flexibility can serve as a barrier to creating outcome-oriented human resource management in the public sector” (Kim, 2005, p. 138). To that end, as described by Kim, many public-sector agencies are leveraging their first-rate perks, including “job stability and security, flexibility, and their social- and civic-service orientation” (p. 138) to retain experienced IT professionals. Yet, according to Kim, even with these retention-based efforts many public-sector organizations are not maintaining adequate levels of senior-level IT professionals and have thus, in many cases, turned to alternative strategies for diminishing IT employee turnover. Hence, to stem the tide of IT employee turnover, many states have invested significant effort in understanding the specific job characteristics affecting retention (Bruce & Blackburn, 1992; Pawlowski et al., 2005; Rainey, 1997). The recruitment of IT staff by states is often made difficult for a variety of reasons (CSG, 2000).
Success in the hiring of IT personnel must be based upon a clear understanding of the marketplace and the needs and desires of the IT candidates (Murray, 1999). Restrictive merit systems, noncompetitive salaries, and a negative
perception of public service impede recruiting efforts. Centralized civil service systems have been criticized for impeding the ability of agencies or departments with special skill requirements to respond quickly to labor market opportunities (Dawes, 1994). Civil service systems often become more focused on procedural correctness than on end results. To be successful, public agencies must have selection systems in place that enable swift selection from among the best-qualified candidates and allow agencies to shape their workforces to meet their changing missions (Pynes & Bartels, 1996). Dawes puts forth specific recommendations, which include developing new approaches to testing, exploring alternatives to written tests for some occupations, and recognizing professional or technical credentials and educational degrees as qualifiers for selected positions.
Information Technology Personnel in Mississippi

The research surveyed indicates that government agencies have struggled with the recruitment and retention of IT professionals and have thus been unable to benefit from the efficiencies of a highly effective IT staff. As documented by DeMers (2002), utilizing pragmatic public-sector strategies affords state governments the capacity to compete with the private sector. Clearly, the body of knowledge regarding the application of creative hiring and retention practices for IT personnel in the public sector argues that flexible hiring practices, advanced hiring techniques, increased employee recognition, and the availability of professional development training provide a means of attracting and retaining top-notch IT professionals (CSG, 2000; Pawlowski et al., 2005). Table 1 shows the seven-step approach developed by DeMers from experience with a myriad of public- and private-sector strategies for IT personnel recruitment and retention.
Table 1. Seven-step approach for IT recruitment and retention (DeMers, 2002)

Approach 1: Allow flexibility of hiring practices.
Approach 2: Employ technologically advanced hiring techniques such as Web-based applications and database skill tracking.
Approach 3: Enable an organization-wide commitment to hiring qualified IT staff.
Approach 4: Advocate an increase in employee recognition through the use of bonuses and innovative awards.
Approach 5: Fund professional development training.
Approach 6: Examine the significance of environment and innovation.
Approach 7: Stress the importance of publicity for both an area and its government.
In 1997, the Mississippi Department of Information Technology Services (ITS) contracted the Hay Group, an internationally known human resource consultancy, to conduct a detailed review of the classification structure, role definition, compensation levels, and IT organizational structure for the Mississippi state government. ITS, along with many other state government agencies, recognized that the current job classifications were out of date and did not reflect the true nature of the work being done, and that the existing compensation structure and processes did not enable the state to attract, retain, and reward the quality and caliber of IT employees needed to meet the business needs of the Mississippi state government. For the State of Mississippi, the primary strategic objective was the creation of a leading-edge job classification structure, compensation plan, and organizational framework that would allow the state to attract, retain, and reward high-performing IT professionals. These IT professionals are necessary to sustain the information technology business requirements of the state and to provide first-class, customer-oriented service. It is critical to appreciate that the DeMers model focuses sharply on an assimilation of “realistic government strategies” (DeMers, 2002, p. 28). Thus, no single magic bullet will repair a recruitment
and retention system that is culturally entrenched in a government agency, or in the government as a whole. Rather, the DeMers best-practice approach allows government agencies to combine multiple, realistic human resource policies to compete with the private sector and to leverage their existing strengths in recruiting and retaining the best and brightest IT professionals. As noted by DeMers (2002), flexibility is a term used to describe an organization's ability to provide flexible recruiting, as well as to retain qualified, existing employees. Specifically, however, DeMers notes that flexibility is focused on the availability of variable pay in response to real-time market conditions. As a result of the IT review undertaken, the Mississippi State Personnel Board established a Special Compensation Plan for the purpose of attracting, retaining, and developing competent information technology professionals. This Special Compensation Plan, designed by ITS and the State Personnel Board, has become the focal point in producing change in the recruitment and retention of the best and brightest IT professionals for Mississippi state government. The Special Compensation Plan provides methods for employment, promotion, and reassignment that are responsive to organizational
needs and offers flexibility in the negotiation and adjustment of salaries. In this regard, the State of Mississippi has implemented a model to reward individual achievement. Salary increases for in-class movement under the Special Compensation Plan are awarded for achievement of educational objectives, the development of additional technical skill sets, or documented evidence of increasing complexity of work goals (Mississippi State Personnel Board [MSPB], 2006). DeMers's (2002) description of new technology includes the design, development, and implementation of Web-based employment applications: “utilization of web-based job boards, and database skill tracking” (p. 30). The automation of the recruitment process allows human resource departments in state government to gain access to a wider array of applicants, track their skills, and hire them faster. The Special Compensation Plan provides methods for recruitment and appointment. The Plan states that the hiring of new employees into information technology positions is exempted from the formal Certificate of Eligibles process established by the State Personnel Board, provided that the applicant meets the minimum qualifications of the classification of the position (MSPB, 2006). As described in DeMers's second approach, the State of Mississippi employs Web-based hiring techniques. The State of Mississippi, via the State Personnel Board, implemented an e-application that enables the collection of a full range of relevant information on a prospective applicant, such as educational attainment, professional experience, and familiarity with leading-edge information technologies. Similarly, Web-based job board advertisements have provided increased visibility for information technology jobs open in Mississippi state government.
As noted by DeMers, the implementation of Web-based employment applications, utilization of Web-based job boards, and database skill tracking can make the process “easy and productive for both the recruiter and the recruit” (p. 32).
The fourth and fifth steps in the DeMers (2002) approach are employee recognition and skill acquisition and education, respectively. According to DeMers, employee recognition and skill training are crucial factors in decreasing the turnover rate among existing IT employees. However, DeMers notes that potential recruits often show apprehension that their expertise with leading-edge technology will fade if they accept state government employment. As noted by DeMers, a report published by the ITAA suggested that “training after the employee is hired is rated significantly more effective than pre-hire methods of training” (p. 34). Indeed, “84% of managers rated on the job training as effective or very effective compared to 41% rating pre-hire training as high” (p. 34). DeMers cites a program in the State of Kansas whereby established bonuses reward not only specific skills but also the completion of successful projects; furthermore, the program “encourages motivated IT professionals to acquire new mission critical skills” (p. 33). In the State of Mississippi, under the direction of the State Personnel Board, an Information Technology Professional Development Committee (ITPDC) has been instituted. The ITPDC provides recommendations to the State Personnel Board on specific personnel actions within the field of information technology. The ITPDC, composed of 10 members, has specific responsibility to review and recommend the level of information technology positions required within a state agency, as well as to review and recommend the appropriateness of educational requirements associated with information technology job classifications. The ITPDC has enjoyed much success with its interagency approach to addressing employee recognition and skill acquisition and education in the Mississippi state government.
Specifically, the ITPDC has been deemed successful in rewarding employees who have attained additional education and skill sets that directly add value to their ability to perform the duties and tasks of their positions (MSPB, 2006).
According to DeMers (2002), the work environment component of employment for IT professionals has largely been ignored by public administration researchers. In fact, relatively little research simultaneously examines the effects on IT worker turnover of both work environment indicators, such as job satisfaction or perceived job characteristics, and market conditions, such as perceived job alternatives or pay competitiveness (Thatcher, Stepina, & Boyle, 2003). Of course, many public-sector employees desire a dynamic work environment, though that is often a difficult concept to describe precisely. For the purpose of this research, work environment is defined as “a catchall for innovative programs that make employment more enjoyable” (DeMers, p. 35). As DeMers describes, telecommuting remains one of the work environment perks most sought after by public-sector information technology personnel. Though DeMers cited numerous examples from the private sector, the research did not reveal widespread adoption of telecommuting or other work environment perks offered by state governments; this is true for the Mississippi state government as well: very few work environment perks are offered to information technology personnel. Lastly, DeMers (2002) notes that it is necessary for public agencies to make information available to recruits both actively and passively, beyond simply posting job information on Web pages and third-party job boards. Interestingly, DeMers advocates that public agencies recruit individuals not actively seeking employment in the government arena. DeMers cites an example from the City of San Jose; the city used banners pulled by planes at local sporting events as a means of advertising the benefits of city employment. In addition, the city also “purchased advertising space at local movie theaters in an attempt to create interest among passive citizens” (DeMers, p. 37).
The State of Mississippi has not attempted these kinds of techniques, nor was any evidence uncovered to suggest that the state would use them in the future. DeMers notes that
“government’s lower pay, slower response time, inflexibility, lack of performance rewards, and sometimes stifling environment hurt its ability to recruit qualified employees of any kind” (p. 38). However, when marketing, DeMers states that the public sector should accentuate its work environment assets: stability, comparatively low time demands, good retirement benefits, and a low-stress environment.
Future Trends

Numerous recent studies have recounted approaches for public-sector agencies to consider in the methods used to recruit, retain, appraise, and compensate government workers, including IT professionals (Lan et al., 2005; Lipiec, 2001; Pawlowski et al., 2005; Perry, Wise, & Martin, 1994; Roberts, 2002). Still, in addition to exposure to Internet-based technology currently leveraged in the development of electronic government applications, results of a recent survey “show that work exhaustion, an emphasis on participatory management, and opportunities for advancement were statistically significant variables affecting state government IT employee turnover intentions, and that salary satisfaction was not a statistically significant factor” (Kim, 2005, p. 137). Given that, while efforts to boost public-sector salaries for IT professionals are warranted, “as states continue to improve their compensation programs, it is equally important to look beyond financial incentives as the primary long-term strategy for recruitment and retention” (Pawlowski et al., p. 90). Designing a thriving IT workforce in the public sector hinges on harnessing the motivation employees receive from managers and the resulting feeling that the employee is “an important part of the organization” (Pawlowski et al., p. 90). Thus, a work environment that fosters both creativity and autonomy, while instilling a sense of accomplishment, is a target for attracting top IT talent (Pawlowski et
al.; Smits, McLean, & Tanner, 1993). The capacity of a state government to “become an employer of choice will be dependent upon dual strategies of improved compensation practices and excellence in other human resource management practices” (Pawlowski et al., p. 91).
Conclusion

The uniquely crafted information technology personnel classification system in Mississippi state government was implemented to aid in the recruitment and retention of information technology professionals. For this research, the IT personnel classification system was appraised via DeMers's seven-step approach to IT recruitment and retention, created and defined from an assimilation of “realistic government strategies” (DeMers, 2002, p. 28). The State of Mississippi is one of several states that have recognized and responded to the IT labor crisis in a variety of ways. The work completed by the Mississippi Department of ITS and the Hay Group, as assessed by the DeMers seven-step approach, embodies a progressive relationship between the public and private sectors to bring contemporary human resource concepts and practices to the recruitment and retention of information technology personnel. Examination of the approaches of organization-wide commitment to recruiting, environment, innovation, and publicity, as outlined by DeMers, uncovered little, if any, innovation on the part of the Mississippi state government. In addition to the lack of innovation within several categories of the DeMers seven-step approach, the overhauled information technology personnel classification system, like traditional job descriptions, depicts an “artificially static view of work and organization” (Klingner & Nalbandian, 1998, p. 106). Given the effort in the State of Mississippi, limited innovation has been initiated since the originating endeavor, thus allowing the work
originally accomplished to become out of date, as is the case in many public-sector reengineering projects. Additionally, as is the case with traditional job descriptions, the updated classification system “promotes a hierarchical and control-oriented relationship between the organization and its employees which works against employee involvement and ‘ownership’ of the organization or its mission” (Klingner & Nalbandian, p. 106). This fact, while arguably unavoidable in the public sector, has dampened the enthusiasm for the overhauled information technology personnel classification system, considered, at least initially, to be a dramatic transformation in the management of IT job classes. Thus, the effort to review the classification structure, role definition, compensation levels, and IT organizational structure for the Mississippi state government ultimately resulted in more effective “position management, rather than management of work or employees” (Klingner & Nalbandian, p. 106). Yet, given these constraints, largely common across public-sector job classification systems, the effort to review the classification structure, role definition, compensation levels, and IT organizational structure for Mississippi, and the subsequent implementation of proposed recommendations, have been largely successful. Specifically, the newly overhauled program for the recruitment and retention of information technology personnel in the State of Mississippi fared well in the approaches of flexibility, new technology, employee recognition, and skill acquisition and education. There are, of course, no easy answers or simple solutions. Each approach examined by DeMers requires some resource allocation; taken in aggregate, the seven steps form a strategy for dealing with the difficult issue of information technology recruitment and retention.
This research shows that much progress has been made in enabling the state to attract, retain, and reward the quality and caliber of IT employees needed to accomplish the business of Mississippi.
Future Research Directions

As agency missions in the public sector become more dependent on job functions such as IT, government executives, as well as human resource managers, will be challenged to recruit and retain qualified and motivated civil servants. To that end, many personnel systems will require a functional overhaul. The single greatest impetus to collectively reform public personnel systems is rooted in concepts and best practices captured in the National Performance Review (Executive Office of the President, 1993). There is evidence that state governments are leveraging the reform recommendations contained in the National Performance Review, which include decentralized hiring practices, a reduction of procedural restrictions in the management of human resources, a renewed emphasis on merit-based performance, and increased management flexibility (Hays, 2004). Specifically, with regard to a future-oriented research agenda, taking the reinvention concepts as a foundation, scholars will need to devote attention to recruitment, infamously restrictive for public-sector managers vying for exceptional talent in the job market. Related to recruitment, the retention of high-quality employees is crucial; research topics including pay schedules, performance reviews, and work environments should be entertained with respect to employee retention. Regarding the position classification of public employees, scholars should focus on job descriptions that “promote cross-training, reassignments, and the ability of managers to use workers where they can be most effective” (Hays, p. 260). Scholars may also find emerging research opportunities in the arena of employee training and development, “perhaps the most neglected function in public agencies” (Hays, p. 261). Lastly, another research area that has not received adequate attention is succession planning for skilled, tenured public-sector managers.
Hays notes that the “current generation of public managers is nearing retirement, and many
are leaving as soon as they are eligible due to buy-outs, fiscal pressures, and generous pension plans that may soon fade into history” (p. 264). Public management scholars are encouraged to focus on the development of innovative leadership development programs; these programs “need to address special leadership concerns of public agency managers, including creative thinking, collaboration, cross-organizational team building, and leading for results” (Ingraham & Getha-Taylor, 2004).
References

Agarwal, R., & Ferratt, T. W. (1999). Coping with labor scarcity in information technology: Strategies and practices for recruitment and retention (Practice-driven research in IT management series). Cincinnati, OH: Pinnaflex Educational Resources.

Babcock, T., Bush, M., & Lan, Z. (1995). Executive use of information technology in the public sector: An empirical examination. Journal of Government Information, 22, 119-130.

Brown, M., & Brudney, J. (1998). Public sector information technology initiatives. Administration & Society, 30, 421-443.

Bruce, W. M., & Blackburn, J. W. (1992). Balancing job satisfaction and performance. Westport, CT: Quorum Books.

Cats-Baril, W., & Thompson, R. (1995). Managing information technology projects in the public sector. Public Administration Review, 55, 559-566.

Council of State Governments (CSG). (2000). Technical difficulties: Hiring and keeping IT employees in state government. Retrieved August 30, 2006, from http://www.csg.org

Council of State Governments (CSG). (2002). State employee worker shortage: The impending crisis. Retrieved September 2, 2006, from http://www.csg.org

Dawes, S. (1994). Human resource implications of information technology in state government. Public Personnel Management, 23, 31-47.

DeMers, A. (2002). Solutions and strategies for IT recruitment and retention: A manager's guide. Public Personnel Management, 31, 27-40.

Executive Office of the President. (1993). From red tape to results: Creating a government that works better and costs less. Washington, DC: U.S. Government Printing Office.

Fletcher, P. (2000). Governmental information systems and emerging computer technologies. In G. D. Garson (Ed.), Handbook of public information systems (pp. 577-596). New York: Marcel Dekker.

Goodridge, E. (2000). Uncle Sam wants you for IT. Information Week, 7, 160-164.

Hays, S. (2004). Trends and best practices in state and local human resource management. Review of Public Personnel Administration, 24, 256-275.

Heiman, D. (2002). Public-sector information security: A call to action for public-sector CIOs. Washington, DC: PricewaterhouseCoopers.

Hira, R. (2004). White collar jobs move overseas: Implications for states. Spectrum: The Journal of State Government, 77, 12-18.

Information Technology Association of America (ITAA). (2004). Adding value growing careers: The employment outlook in today's increasingly competitive IT job market. Retrieved September 2, 2006, from http://www.itaa.org/

Information Technology Association of America (ITAA) & Virginia Polytechnic Institute. (1998). Help wanted: A call for collaborative action for the new millennium. Retrieved August 30, 2006, from http://www.itaa.org/

Ingraham, P., & Getha-Taylor, H. (2004). Leadership in the public service: Models and assumptions for leadership development in the federal government. Review of Public Personnel Administration, 24, 95-112.

Kim, S. (2005). Factors affecting state government information technology employee turnover intentions. American Review of Public Administration, 35, 137-156.

Klingner, D. E., & Nalbandian, J. (1998). Public personnel management: Contexts and strategies (4th ed.). Upper Saddle River, NJ: Prentice Hall.

Lan, G., Riley, L., & Cayer, N. (2005). How can local government become an employer of choice for technical professionals? Review of Public Personnel Administration, 25, 225-242.

Lipiec, J. (2001). Human resources management perspective at the turn of the century. Public Personnel Management, 30, 137-146.

Longenecker, C., & Scazzero, J. (2003). The turnover and retention of IT managers in rapidly changing organizations. Information Systems Management, 20, 59-66.

Mississippi State Personnel Board (MSPB). (2006). Administrative policies and procedures for the special compensation plan for information technology classification. Retrieved August 30, 2006, from http://www.spb.state.ms.us/SPB Documents/SPB/SPB pubs.asp

Modesitt, C. (2002). Bridging the gap between citizens and local government with information technology: Concepts and case studies. Washington, DC: National Civic League.

Murray, J. (1999). Successfully hiring and retaining IT personnel. Information Systems Management, 16, 18-25.

Naim, M. (2000). The digital drain. Foreign Policy, 120, 120-122.
Pawlowski, S., Datta, P., & Houston, A. (2005). The (gradually) changing face of state IT jobs. Communications of the ACM, 48, 87-91.

Perry, J., Wise, L. R., & Martin, M. (1994). Breaking the civil service mold: The case of Indianapolis. Review of Public Personnel Administration, 14, 40-54.

Pynes, J., & Bartels, L. (1996). The times they are a changing. Public Productivity and Management Review, 20, 121-136.
furthEr rEading Ashbaugh, S., & Miranda, R. (2002). Technology for human resources management: Seven questions and answers. Public Personnel Management, 31, 7-21. Ban, C., Drahnak-Faller, A., & Towers, M. (2003). Human resource challenges in human service and community development organizations. Review of Public Personnel Administration, 23, 133-153.
Rainey, H. G. (1997). Understanding and managing public organizations. San Francisco: Jossey-Bass.
Bennan, E., Bowman, J., West, J., & Van Wart, M. (2001). Human resource management in public service: Paradoxes, processes, and problems. Thousand Oaks, CA: Sage Publications, Inc.
Roberts, G. (2002). Employee performance appraisal system participation: A technique that works. Public Personnel Management, 31, 317332.
Coursey, D. (2005). Human resource management challenges in government information technology. Review of Public Personnel Administration, 25, 203-206.
Smits, S., McLean, E., & Tanner, J. (1993). Managing high-achieving information system professionals. Journal of Management Information Systems, 9, 103-120.
Coursey, D., & McCreary, S. (2005). Using technology in the workplace. In S. Condrey (Ed.), Handbook of human resource management in government (2nd ed., pp. 189-214). San Francisco: Jossey-Bass.
Thatcher, J., Stepina, L., & Boyle, R. (2003). Turnover of information technology workers: Examining empirically the influence of attitudes, job characteristics, and external markets. Journal of Management Information Systems, 19, 231-261. United States Department of Commerce, Office of Technology Policy. (1998). America’s new deficit: The shortage of information technology workers. Retrieved August 30, 2006, from http://www. ta.doc.gov/Reports/itsw/itsw.pdf United States Department of Commerce, Office of Technology Policy. (1999). The digital work force: Building information technology skills at the speed of innovation. Retrieved August 30, 2006, from http://www.ta.doc.gov/Reports/itsw/ digital.pdf
Fountain, J. (2001). Building the virtual state: Information technology and institutional change. Washington, DC: Brookings Institution. Garson, G. (2003). Public information technology: Policy and management issues. Hershey, PA: Idea Group Publishing. Hays, S., & Kearney, R. (2001). Anticipated changes in human resource management: Views from the field. Public Administration Review, 61, 585-597. Heeks, R. (Ed.). (1999). Reinventing government in the information age: International practice in IT-enabled public sector reform. New York: Routledge.
A Model for Reengineering IT Job Classes in State Government
Helton, K., & Soubik, J. (2004). Pennsylvania’s changing workforce: Planning today with tomorrow’s vision. Public Personnel Management, 33, 459-473.
Schuh, A. (2006). Institutional values: The foundation for civil service change. Public Personnel Management, 35, 49-69.
Ito, J. (2003). Career mobility and branding in the civil service: An empirical study. Public Personnel Management, 32, 1-22.
Thompson, F., & Miller, H. (2003). New public management and bureaucracy versus business values and bureaucracy. Review of Public Personnel Administration, 23, 328-343.
Johnson, G., & Brown, J. (2004). Workforce planning not a common practice, IPMA-HR study finds. Public Personnel Management, 33, 379-388.
Turban, E., Leidner, D., McLean, E., & Wetherbe, J. (2005). Information technology for management: Transforming organizations in the digital economy.
Kellough, J., & Selden, S. (2003). The reinvention of public personnel administration: An analysis of the diffusion of personnel management reform in the states. Public Administration Review, 63, 165-176.
West, J., & Berman, E. (2001). From traditional to virtual HR. Review of Public Personnel Administration, 21, 38-65.
Lewis, G., & Zhenhua, H. (2005). Information technology workers in the federal service: More than a quiet crisis? Review of Public Personnel Administration, 25, 207-224. Light, P. (1999). The new public service. Washington, DC: Brookings Institution. McEntee, G. (2006). The new crisis of public service employment. Public Personnel Management, 35, 343-346. Ott, J., & Dicke, L. (2001). Challenges facing public sector management in an era of downsizing, devolution, dispersion and empowerment: And accountability? Public Organization Review, 1, 321-339. Peled, A. (2001). Outsourcing and political power: Bureaucrats, consultants, vendors and public information technology. Public Personnel Management, 30, 495-515. Ressler, S. (2006). Recruiting and retaining young IT leaders. Public Manager, 35, 49-52. Riccucci, N. (2005). Public personnel management: Current concerns, future challenges (4th ed.). New York: Longman.
Wright, B., & Davis, B. (2003). Job satisfaction in the public sector: The role of the work environment. American Review of Public Administration, 33, 70-90.
Terms and Definitions

Civil Service System: A model for managing a bureaucracy in which individuals are selected for government employment on the basis of either competitive examinations or special qualifications, such as professional training.

Electronic Government: The transformation of internal and external business processes toward customer-centricity, based upon the service delivery opportunities offered by new communication technologies (such as Web-based technologies), to better fulfill the purposes of government: efficiency and effectiveness as well as fairness and equitability.

Flex Time: A variable work timetable that, in contrast to a traditional work arrangement, allows employees to set their own schedules.

Human Resource Management: The functions within an organization that
include the employment of suitable staff and the administration of employment relationships, as well as both the strategic and operational view of personnel requirements.

Perk: An incidental benefit awarded for certain types of employment.

Special Compensation Plan for Information Technology: A plan used in the State of Mississippi to provide methods for employment, promotion, and reassignment that are responsive to organizational needs and offer flexibility in the negotiation and adjustment of salaries (MSPB, 2006).

Telecommute: The human resource practice of working off site, often at home, and communicating with the primary office in a different location via a personal computer equipped with communications software.
Section VI
Selected Readings
Concluding the Handbook of Research on Public Information Technology is a "selected readings" collection of 10 refereed journal articles that offer additional insight into the realm of information technology in the public sector. These articles come highly recommended and introduce innovative applications, trends, and technologies within this fast-growing area of information science and technology.
Chapter LXVII
Developing a Generic Framework for E-Government

Gerald Grant, Carleton University, Canada
Derek Chau, Carleton University, Canada
Abstract

Electronic government (e-government) initiatives are pervasive and form a significant part of the government investment portfolio in almost all countries around the world. However, understanding of what is meant by e-government is still nascent and is complicated because the construct means different things to different people. Consequently, the conceptualization and implementation of e-government programs are diverse and are often difficult to assess and compare across different contexts of application. This paper addresses the following key question: Given the wide variety of visions, strategic agendas, and contexts of application, how may we assess, categorize, classify, compare, and discuss the e-government efforts of various government administrations? In answering this question, we propose a generic e-government framework that allows for the identification of e-government strategic agendas and key application initiatives that transcend country-specific requirements. In developing the framework, a number of requirements are first outlined. The framework is then proposed and described; it is illustrated using brief case studies from three countries. Finally, findings and limitations are discussed.
Introduction

E-government (electronic government) is increasingly a global phenomenon that is consuming the attention of politicians, policy makers, and even ordinary citizens. Governments around the world have made and continue to make massive financial and political commitments to establishing e-government (Accenture, 2004). A report by the United Nations (UN World Public Sector Report,
2003) indicates that by 2003, over 173 countries had developed government Web sites. Additionally, many countries (including Canada, Germany, Malaysia, Norway, the UK, and the U.S.) have embarked on ambitious multi-year programs to create more citizen-centered, effective, and efficient governments (Accenture, 2004). E-government is predicated on leveraging the capabilities and power of IT to deliver services provided by governments at local, municipal,
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
state, and national levels. While early conceptions of e-government have largely focused on electronic service delivery as the key feature of the phenomenon, a closer examination suggests a more complex set of circumstances. Beyond service delivery, e-government offers additional channels of interaction among governments, businesses, and citizens, separately or collectively. For example, individual citizens may interact with government electronically by filing their income tax documents online. Governments may deliver services directly, or indirectly through intermediaries such as banks and postal outlets located in private businesses. Consequently, any e-government effort must meet the needs of a diverse set of stakeholders that operate in the political, business, or civic spheres of influence. E-government, however, is more than a technological phenomenon. Whether through deliberate choice or passive acceptance, it is transformative in nature, affecting the management of human, technological, and organizational resources and processes. The implementation of e-government is therefore a monumental change effort. The drive to implement e-government has resulted in the adoption of many e-government visions and strategic agendas (Accenture, 2004). However, each vision is driven by its own unique set of social, political, and economic factors and requirements. Consequently, the missions and objectives that emanate from these e-government visions variously manifest a strong focus on one or two elements. For example, the United States has placed a major focus on service delivery and on increasing cross-functional efficiencies (OMB, 2003). The South African government's e-government program is heavily weighted towards service delivery, while e-government efforts in the United Kingdom have tended to balance several strategic objectives.
A key factor driving the achievement of any e-government program is the vision of e-government, articulated and adopted by a government administration. Coupled with actual developments undertaken by the administration, the articulated vision (expressed in documents, objectives, frameworks) greatly helps to describe the e-government
space. However, these efforts are not necessarily well articulated or well coordinated. In large part, this is due to how electronic government is conceptualized. With each administration articulating its own view, it becomes difficult to identify, assess, and understand what is being accomplished under the e-government aegis. While some e-government strategic agendas focus primarily on service delivery issues, others may focus more on creating internally efficient systems and processes. Still others may adopt a more comprehensive view, incorporating issues such as constituent relationship management and e-democracy. Understanding what is meant by e-government becomes complicated because the construct means different things to different people. Although each of these views of e-government may be legitimate, there is a need for some common understanding to allow for assessment, comparison, and explanation of current efforts vis-à-vis past and future investment in the e-government enterprise. This article addresses the following key question: Given the wide variety of visions, strategic agendas, and contexts of application, how may we categorize, classify, assess, compare, and discuss the e-government efforts of various government administrations? In answering this question, we see the need for a mechanism that will facilitate the articulation and discussion of current issues and concepts related to e-government. We believe this instrument would ideally transcend country-specific requirements and identify experiences and elements that could be transferred across contexts of application. For example, we should be able to describe and discuss the e-government efforts in a country such as Malaysia and make some broad comparisons with similar efforts in the United Kingdom. Therefore, we propose the development of a generic framework that can be used to categorize, classify, and compare electronic government visions, strategic agendas, and application initiatives.
Such a framework, rather than seeking to rigidly constrain or categorize e-government activities, should act as a lens to focus attention and awareness on underlying issues and elements that could be debated, discussed, and
further developed. Clearly, our main objective is to find a way to make sense of the plethora of perspectives and developmental agendas populating the e-government space. Our proposed framework should be viewed as a first step in this process. It represents another way to consider e-government efforts and provides a starting point for integrating current experiences and knowledge. This article is organized as follows. Following this introduction, section one (Literature Review) focuses on developing a working operational definition for e-government that will underpin the development of a generic e-government framework. Key requirements for the development of such a framework are identified and discussed. A framework is proposed. Section two (Methodology) provides insight into the development of the framework. We illustrate, in section three, the application of the framework. Here, we use it to categorize, classify, and discuss the e-government visions and implementation efforts of three nations: the United States, the United Kingdom, and Malaysia. Section four presents the findings from the case studies. In section five, we discuss the practical and theoretical implications of applying the framework. Possible future roles and directions will also be considered.
Literature Review

Defining E-Government

Definitions of e-government abound in the literature. Some definitions narrowly focus on using ICTs, particularly the Internet, to deliver more efficient and effective government services, while others view e-government as a broad-based effort to transform government and governance. In the examples below, e-government is characterized as:
•	the use of technology to enhance the access to and delivery of government services to benefit citizens, business partners, and employees (Deloitte Research, 2000, p. 4);
•	electronic information-based services for citizens (e-administration) with reinforcement of participatory elements (e-democracy) to achieve objectives of balanced e-government (Bertelsmann Foundation, 2001, p. 4);
•	the use of information and communication technologies, particularly the Internet, as a tool to achieve better government (OECD, 2003, p. 63); and
•	the use of information and communication technologies in all facets of the operations of a government organization (Koh & Prybutok, 2003, p. 34).
These definitions, while useful in describing e-government in a broad-based manner, offer little insight into deeper issues and considerations relating to the construct. On the one hand, definitions that focus exclusively on the service delivery component of e-government efforts fail to capture the more complex aspects of transforming government or to acknowledge the role of the information and information technology elements. Such a one-sided focus tends to skew the e-government development and deployment agenda. Consequently, most implementation activities center around service delivery concerns, with little emphasis on real transformation of the services themselves or the processes associated with their delivery (Poostchi, 2002). On the other hand, definitions that are too broad make it difficult to determine what really constitutes e-government and, as a consequence, may confuse the treatment of the issue. For e-government to be properly understood and applied, we believe that it needs to be more comprehensively conceptualized (Caldow, 2001). However, the nascence of the e-government phenomenon, coupled with the complexities associated with the public sector context, contributes to the multiple interpretations and confusion surrounding the concept. Our review of academic, practitioner, and a variety of government publications suggests that any conceptualization of e-government needs to address a variety of concerns beyond the service delivery elements (Chadwick & May, 2003; Marche & McNiven, 2003; OECD,
Table 1. E-government operational definition

Characteristic: Strong service delivery and information provision component
Description: Electronic services and information provision provide the chief mode of interaction; represent multiple levels of engagement; transaction and feedback effected via services and information provision components.
References: Deloitte Research, 2000; Bertelsmann Foundation, 2001; World Markets Research Centre, 2001; Accenture, 2002, 2004; Koh & Prybutok, 2003; Layne & Lee, 2001; Reddick, 2004; OECD, 2003.

Characteristic: E-government is a transformation effort
Description: Cuts across functional and organizational boundaries; digital-age public sector reform; demands new forms of interaction between citizens and government.
References: Osborne & Gaebler, 1992; Rais Abdul Karim, 1999; OECD, 2003; Accenture, 2004.

Characteristic: Diverse number of solutions and patterns of development
Description: Country specific; implementation differs across contexts of application (political, social, economic); multiple patterns of development prevail.
References: UK Cabinet Office, 2000a; Corrocher & Ordanini, 2002; OMB, 2001; OMB, 2003; Accenture, 2002, 2004; OECD, 2003.

Characteristic: IS/IT-based development but not limited to IS/IT
Description: IS/IT infrastructure essential in deploying an e-government program; leverages IS/IT capabilities to deliver systems and services; overlapping functionality and knowledge; added complexities from public sector context; IS/IT knowledge insufficient to explain and predict future trends.
References: Box, 1999; Guy, 2001; Heeks, 1999; UK Cabinet Office, 2000a; Chadwick & May, 2003.

Characteristic: Convergence of integration, sophistication, and maturity
Description: Extends beyond service automation and efficiencies to an integrated service offering; integrative efforts and requirements increasing with added functionality and citizen-centric design; increasing complexity and functionality requires commensurate development of understanding and knowledge of the relationships between e-government and other functional areas and organizational concepts, including IS/IT contributions; asymptoting towards higher levels of service interaction and maturity.
References: Deloitte Research, 2000; Bertelsmann Foundation, 2001; Accenture, 2002, 2004; Koh & Prybutok, 2003; Layne & Lee, 2001; Working Group on E-Government, 2002.

Characteristic: International phenomenon
Description: Diversity of e-government realizations; crosses geographical boundaries; adaptable to country-specific requirements; growing number of implementations and developments worldwide.
References: Accenture, 2002, 2004; Deloitte Research, 2000; Basu, 2004; Ke & Wei, 2004.
2003). E-government is a detailed and complex development that is difficult to conceptualize. What is known and understood is mostly of a descriptive and anecdotal nature. The end result is that e-government implementations have yet to realize the upper stages of maturity and that understanding and knowledge of the area are still in the process of formation.
Developing an Operational Definition for E-Government

Our review of the e-government literature (frameworks, models, white papers, government documents, etc.) suggests several characteristics that should be taken into consideration when
defining e-government. These are identified and listed in Table 1.
Strong Service Delivery and Information Component

Most e-government programs emphasize a strong service delivery and information provision component, particularly in the initial stages (Accenture, 2004; Bertelsmann Foundation, 2001; Deloitte Research, 2000; World Markets Research Centre, 2001). Citizen experiences with customer-centric information and service offerings via the Internet have resulted in an increased demand on government administrations to organize information, services, and government functions around the citizen (Deloitte & Touche, 2000). The breadth and maturity of information and service offerings are also increasing, and a significant number of nations have been measured at high levels of maturity (Accenture, 2002, 2004). For example, transaction-level services have been observed, categorized, and represented in several measurement frameworks (Accenture, 2002; Koh & Prybutok, 2003) and models (Layne & Lee, 2001). These include services such as tax filing and payment, postal agency bill payments, and license renewal and registration. Portals such as the U.S. www.consumer.gov and the Canadian www.canada.gc.ca provide access to a broad range of information and services, reflecting a "cradle to grave" philosophy that supports citizens' information and service needs throughout their lives.
E-Government as a Transformational Endeavor

Most operational definitions of e-government betray an underlying transformational or reformation theme that ranges from more effective service delivery to greater participation through relationship building with stakeholders. They embrace many aspects of the transformational agenda promoted under the rubric of the "new public management," which calls for the reinvention of government (Osborne & Gaebler, 1992). The transformation agenda focuses on the need for governments to
more effectively manage inputs, processes, and outputs of the public administration organization, and envisions broad classes of institutional reform (Osborne & Gaebler, 1992; Rais Abdul Karim, 1999). These include:
•	Increased efficiencies in government operations;
•	Decentralization of services and administration;
•	Increased accountability;
•	Improved resource management; and
•	Marketization and leveraging of market forces to enhance public sector and private sector relationships.
Research by the OECD (2003b) suggests that e-government can be an important catalyst to public sector reform agendas, whether as a tool of reform, a catalyst for change initiatives, or an instrument for improving processes and governance. The recent report by Accenture (2004) implies that to get more value from e-government investments, governments will need to embrace a more ambitious transformation agenda.
Diverse Number of Solutions and Contexts of Application

Changes and transformational efforts can be undertaken across many dimensions and are often a reflection of the unique political, social, and economic needs and capacities of the hosting nation or government administration. We refer to this as the context of application. Corrocher and Ordanini (2002) suggest that different patterns (asymmetric and symmetric) of development exist, depending on the particular economic and administrative situation of the nations in question. We anticipate that e-government, as a public sector and technology-enabled initiative, will be subject to contextual factors similar to those experienced by other public sector initiatives, and also will exhibit multiple patterns of development. We see examples of this in the differing foci and patterns of development in nations such as the U.S., the UK, Malaysia, and South Africa. In
some nations, e-government developments have a market and efficiencies emphasis (OMB, 2001, 2003). In others, increasing citizen awareness and access to services are given priority. Still others take a balanced approach to e-government development (UK Cabinet Office, 2000).
Electronic Government, Information, and IT

There is a special relationship among e-government, information, and IT. Information permeates all organizational activities and is used by all members of the organization, from the front line to the back room, impacting every organizational function (Lacity & Willcocks, 1998; Ward & Peppard, 2002). This is particularly true for government organizations that are charged with managing (in the public interest) multiple stakeholders across the social, political, and economic domains. Information is a key resource for the operation of government and generates key inputs for producing the outputs of policy and public action (Box, 1999; Guy, 2001). Accurate and timely information about the external environment and stakeholder requirements is at the heart of effective decision-making, policy development, and administration. A strong relationship also exists between e-government and the use of information technology and systems. E-government, like e-business, would be impossible without the technological platform provided by modern information and communication technologies. Perhaps the most significant capability that is enabled by IT is the creation of a new interaction and communication channel. The connectivity element of IT provides another conduit for information and feedback between government agencies, departments, and stakeholders through the application of standardized Web browsers, workflow, project management, and customer relationship management technologies, among others (Koh & Prybutok, 2003). The use of real-time communication and data processing technology also allows for almost instantaneous exchange and feedback that cuts across geography, time, and organizational boundaries. The IT infrastructure effectively enables the rapid
propagation of information and data throughout the e-government network to all connected parties, increasing both the quality and quantity of information received. This unique combination of increased efficiency, information quality and quantity, and organization-wide distribution and connectivity creates an effect that goes beyond the support of government operations. The use of IT creates the potential for change and reinforces the transformation elements of e-government, providing a medium for the realization of electronic ideas, goals, and objectives (Heeks, 1999). However, e-government is not principally driven by technology concerns, but rather must reflect the operational context and obligations of the public sector (Chadwick & May, 2003; UK Cabinet Office, 2000). IT systems are enablers of e-government. The inclusion of these systems and technologies within e-government is not the final objective.
Integration, Service Sophistication, and Maturity

Electronic service delivery is undoubtedly a key component of any electronic government effort. However, governments have begun to recognize the need to move beyond front-end efficiencies to a more sophisticated model where saving "money should not be the broad vision that motivates e-government" and where the implementation is more than just automated service delivery and a series of Web presences (Pacific Council on International Policy, 2002). The ultimate goal of many government efforts is to present one view of government, regardless of the point of access, through an integrated and secure service and interaction environment. This represents an evolution of the service contract between citizens and government and is also driven by constituent relationship management (CRM) efforts aimed at serving citizens and other stakeholders better (Accenture, 2002). Consequently, e-government is expected to impact every area of the organization (Bertelsmann Foundation, 2001; Deloitte Research, 2000) and crosses political and functional boundaries. The effort is beyond the scope of any
one agency, and ultimately, the degree of integration an administration achieves will strongly determine how much value is brought to itself and its citizens (Bertelsmann Foundation, 2001). Integration of applications and services across governments is relatively more complex and problematic than similar integration in private businesses. Integration efforts in government require cross-agency cooperation, which is not necessarily forthcoming or legally permitted (Marche & McNiven, 2003). In many governments, individual ministers and agencies are charged with executing the responsibilities assigned to them through legislation and are, therefore, not eager or able to expend resources on cross-agency arrangements that were not anticipated or provided for in budgetary allocations and mandates (OECD, 2003). The difficulty encountered in deploying an integrated services platform is partly responsible for the slowdown in e-government advances in many countries (Accenture, 2004). Any definition of e-government, therefore, needs to encompass a whole-of-government view that envisions an integrated network of applications and services providing a seamless service delivery and transaction environment to which there is "no wrong door." The maturity and breadth of e-government services are on the rise (Accenture, 2002, 2004). The reach and range of e-government applications and other developments are increasing in terms of both scope and functionality. Implementations have begun to move towards full transaction-level experiences between citizens and government, where communication can occur in duplex mode (i.e., feedback is given bi-directionally by both government and citizens). Transaction-level services, such as tax filing and payment and postal-agency-based bill payments, have been observed, categorized, and represented in measurement frameworks (Accenture, 2002; Koh & Prybutok, 2003) and models (Layne & Lee, 2001).
Service sophistication and maturity in the e-government area parallel developments in the IS/IT sphere (Nolan & Gibson, 1974; Somogyi & Galliers, 1987). There has also been a shift from an internal to an external focus. This includes the
creation of citizen-centric services in response to citizen demands that stem from Internet-based encounters (Deloitte Research, 2000). This shift is evidenced in the growing focus on CRM efforts, which seek to organize services and government functions around the citizen. Emerging initiatives in service personalization extend this trend (Accenture, 2004). E-government service maturity will continue to change and evolve in keeping with changing stakeholder requirements. This maturation process suggests that conceptualizations of e-government should not be limited to the services and applications that currently exist. Any definition or framework that seeks to describe or explain e-government phenomena must be sufficiently robust to accommodate the evolution of activities and applications without requiring a total reconstitution of the frameworks or models previously developed.
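The staged view of service maturity discussed above can be sketched as a simple classifier. The stage names follow the four-stage growth model of Layne and Lee (2001) cited in the text; the boolean capability flags and the mapping logic are illustrative assumptions, not an instrument defined in this chapter.

```python
# Illustrative sketch only: Layne & Lee's (2001) four growth stages expressed
# as a classifier over observed service capabilities. The feature flags below
# are assumptions for illustration.

STAGES = ["cataloguing", "transaction", "vertical integration", "horizontal integration"]

def maturity_stage(publishes_info, supports_transactions,
                   integrated_within_function, integrated_across_functions):
    """Map observed service capabilities to the highest stage they satisfy."""
    if integrated_across_functions:
        return STAGES[3]
    if integrated_within_function:
        return STAGES[2]
    if supports_transactions:
        return STAGES[1]
    if publishes_info:
        return STAGES[0]
    return "no online presence"

# An online tax-filing service that is not yet integrated across agencies:
print(maturity_stage(True, True, False, False))  # → transaction
```

A classifier like this makes the point in the text concrete: a service's stage is determined by its most advanced capability, so maturity assessments can evolve as new capabilities appear without reconstituting the model.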
International Phenomenon

E-government is an international phenomenon that is undertaken by a diverse array of government administrations (Basu, 2004; Chadwick & May, 2003; Ke & Wei, 2004; OECD, 2003). Studies from independent consultants detail the efforts of both developed and developing nations to deploy e-government services and applications (Accenture, 2002, 2004; Deloitte Research, 2000). These initiatives have inherent benefits that apply to all administrations, regardless of economic and social background, although there is great variability in how implementation is undertaken. Governments around the world are convinced that e-government, because of its broad scope, can provide the engine for deep and valuable economic, social, political, technological, and strategic transformation (OECD, 2003). Consequently, we are now witnessing the proliferation of e-government implementations around the world, transcending geographical as well as cultural boundaries. As we conceptualize e-government, therefore, we need to be cognizant of its international dimension and of its implications for more substantive economic, social, political, and cultural impacts.
Developing a Generic Framework for E-Government
Based on the preceding discussion and our understanding of published material on the subject, we propose a working operational definition of e-government that expands the limited view present in most practitioner definitions. We define electronic government as: a broad-based transformation initiative, enabled by leveraging the capabilities of information and communication technology, (1) to develop and deliver high-quality, seamless, and integrated public services; (2) to enable effective constituent relationship management; and (3) to support the economic and social development goals of citizens, businesses, and civil society at local, state, national, and international levels.
Developing a Generic Framework for E-Government

The operational definition of electronic government implies a number of complex issues. From our research, we have observed that every government has its own vision of what constitutes the construct and its own approach to implementing that vision. However, we feel that there is a common need to characterize and identify the direction and dimensions of each of these approaches. Certain commonalities and challenges exist in every e-government undertaking. Our goal is to develop a generic e-government framework (GEF) that will help to restate and overcome these challenges. To do this, we first identify key requirements for developing the framework. We then propose a framework, identifying and discussing its various features and functionality.
Requirements for a Generic E-Government Framework

E-government deployment is influenced by diverse factors. Consequently, in developing a generic framework, we need to establish a set of criteria that should inform and shape what factors constitute the framework, its requirements and features, and how they are arranged and relate to each other. We believe a generic e-government framework should meet the following requirements.
a. Provide a "noise-free" representation of e-government

To respond to and represent the desires of a variety of stakeholder groups, an e-government program must embrace and satisfy the desires of citizens, businesses, civil society, and policymakers, while taking into consideration the requirements of existing political structures and operating processes. As a result, a significant amount of "noise" and interference generated by political, social, and technological factors and agendas is injected into the process of identifying, analyzing, and deciding what projects and processes to adopt in executing the e-government program. E-government efforts inherit problems similar to those found in the IS/IT field, where technical jargon and definitions are confusing and a large number of terms are used inconsistently for the same concepts (Ward & Peppard, 2002). Technology issues, including technology jargon, can also obscure the true situation (Luftman, 1996; Keen, 1991). As a result, there are often disconnects between the points of view of key participants (i.e., policymakers and technology experts). The resulting interference makes it difficult to determine the true situation surrounding the overall strategy and transformation effort (Osborne & Gaebler, 1992). A nonpartisan view would aid greatly in optimizing planning and implementation activities. Therefore, it is vital to have a high-level framework that can capture a more objective representation of e-government. Such a framework can then be shared and easily referenced across political, ideological, and technological boundaries, providing a common basis for informed discourse about issues relating to the e-government agenda.

b. Enable identification and articulation of e-government goals and objectives
Effective coordination and organization of an e-government program requires clear understanding of goals and objectives. The power of setting clear goals and a shared mission statement is potentially one of the most significant acts that
an administration can undertake in the reform process (Osborne & Gaebler, 1993). These are not always easy to ascertain, given the many vagaries involved in the vision development and realization process. In order to maximize support and success, the vision must be shared and communicated effectively (Cufaude, 2003). The articulation of goals and objectives also helps to meet the requirement of creating a shared vision and has the powerful effects of (1) communicating the fundamental purpose and mission of the effort; (2) creating a common understanding of and commitment to the overall effort; and (3) integrating and aligning different functional areas. Therefore, a framework that aids in the identification and articulation of common goals and objectives will have significant benefits.

c. Identify the gap between the present and future states of e-government deployment
Understanding and identifying the gap between current and future states are key elements in the strategic management process and provide an effective way of monitoring the progress of e-government initiatives. The distance between reform initiatives and the current realities of the public sector is often of vital importance in determining the success and failure of reform initiatives (Heeks, 1999). Mapping the progress from “the way things are now” to “the way things ought to be” also allows decision makers to weigh the potential progress of each area against the level of future development required by the e-government vision. It can also assist policymakers and administrators in analyzing the potential benefits and impacts of committing resources and implementation effort as they engage in the planning process. Beyond the determination of success and failure, the mapping of current reality against the mission statement of the future also generates the energy for change that is a requirement for any transformation process. In effect, the transformation process needs both a vision and a picture of the present to move forward. Therefore, we have another compelling argument for the creation of a new representation of e-government efforts that
allows planners and implementers to measure and map the current state of the implementation against its intended result.

d. Support prediction of future trends affecting e-government initiatives
Electronic government is a reform effort that is breaking new ground in many areas. For example, beyond the ability to access information and perform basic electronic transactions, citizens in some jurisdictions are demanding more customized products and services rather than acceding to the traditional public institution ‘one size fits all’ approaches (Osborne & Gaebler, 1992). The emphasis of technology-enabled service delivery has shifted towards providing the right systems and services that users need and want rather than purely emphasizing operational efficiencies (Friedman, 1994). Trends are towards increasing functionality, specialization, and integration (Bertelsmann, 2001; Nolan & Gibson, 1974). New funding arrangements and horizontal and vertical integration are some of the many new elements that are being implemented (Accenture, 2001; Layne & Lee, 2001). As a result, there are no preestablished roadmaps; an international standard or format does not exist (Bertelsmann, 2001). Beyond citizen-focused initiatives, electronic government also requires the balancing of diverse elements across strategic and operational domains. Changes in the technology field can significantly affect the direction and progress of applications development by either enhancing or limiting choices or functionality. Planners and decision makers must maintain a high level of awareness of required technology and applications change (Nolan & Gibson, 1974). Strategic foresight, coupled with an understanding of the technology and application changes required, is essential to managing internal and external operational and strategic constraints. Consequently, a framework for e-government, acting as a diagnostic tool, should be helpful in predicting the impact of future trends and requirements. In this way, the potential effect of future events can be anticipated and taken into account throughout the entire development life cycle.
e. Be transferable across different contexts of application
Every nation has its own functional, social, and administrative objectives to fulfill. As a result, each nation's vision will differ with respect to the strategic priorities of the policymakers and the jurisdiction they represent, giving a unique flavor and direction to each e-government endeavor. Therefore, every e-government program should be viewed and assessed with respect to its context of application (Corrocher & Ordanini, 2002). A greater understanding of motivations and resulting patterns of development in different settings can facilitate the comparison of approaches and provide a rational means of setting the reform of public administration on course for efficiency and transparency, with a clear orientation towards citizens (Bertelsmann, 2001). It is vitally important to distinguish patterns of development and motivations for e-government and to identify transferable elements. Common experiences and practices from one nation's realization could be incorporated into another's, resulting in the creation of a synergistic learning capability. This would be invaluable, given that electronic government is an evolving and pioneering effort in which all countries can be considered to be in the early stages of development (Pacific Council on International Policy, 2002). The ability to leverage the experiences and lessons learned by other administrations, and to selectively identify the applicable elements of other nations' efforts, would create a synergistic learning and knowledge network that reduces planning and strategizing effort. A generic framework that encapsulates these principles would go a long way toward improving the discourse about how e-government is being developed across different jurisdictions and contexts.

f. Support a system representation of strategic agendas and implementation efforts
A system representation of e-government is required to capture the systemic nature of the concept. Systems are one of the common units of analysis within the e-government endeavor. Systems must be balanced across the implementation and are also an area where mapping efforts can be applied and gaps can be identified. The impact of future needs and requirements also has a direct effect on e-government systems. Given that the operation of e-government depends significantly on the effective management of a variety of systems, a generic framework should capture details of electronic government at this level.

g. Provide a functional representation of e-government objectives

Another common unit of analysis is the functional level. Goals, objectives, and future needs and requirements are typically articulated in the form of functional capacities. In addition, gaps in functionality provide the first indicators that the realization of the e-government vision is not on course. Functional capacities are also used in the comparison of international efforts. To account for these realities, the framework must be able to identify functional capabilities at a high level in a way that reflects the primary motivations and objectives of the overall e-government program. Therefore, a functional representation of e-government is a necessary requirement, supporting several types of analytical efforts.

h. Support reusability and expandability of framework constructs

The analysis and consideration of electronic government also carries an implicit requirement for reusability and expandability. E-government is an evolving phenomenon that reflects the changing needs and requirements of a society. The mapping of current development against the future is an ongoing effort and must be periodically reviewed and revised. As a result, the generic framework must be able to capture new data and information about new developments on an ongoing basis. In brief, the framework must classify and describe ongoing activities at a high level and provide a functional representation that can aid in the planning and implementation process. With regard to the changing nature of e-government,
the framework should act as a “sliding window” that reveals the relationship between current initiatives and the overall vision. By meeting these requirements, we are seeking to create a new model for viewing e-government programs as a contiguous whole, while providing a starting point for discussing current knowledge and practices and extending the boundaries of these two areas.
A Generic E-Government Framework (GEF): A Proposal

In Figure 1, we present a proposal for a generic framework for e-government. The framework comprises a number of features that attempt to provide a comprehensive representation of e-government endeavors, meeting the requirements identified and discussed earlier. Following a discussion of some of the key features of the framework presented in Figure 1, we discuss how the framework was developed.
GEF Framework Features

The GEF model presented in Figure 1 displays a number of features that, we believe, enhance its usefulness.
Graphical Representation

A graphical representation was chosen because it provided the best balance between detail and abstraction of e-government visions and implementation activities. From our analysis of other models and frameworks in the IS/IT and e-government areas, a graphical or hybrid representation (text plus graphics) was often utilized to better illustrate key details and concepts, including:

• levels of abstraction;
• complex relationships and interactions;
• developments spanning multiple dimensions; and
• systems and functional perspectives.

Figure 1. A generic framework for electronic government

[Figure 1 depicts Government X's e-government transformation as four Strategic Focus Areas (SFAs): Service Delivery (Service Automation and Information Provision; Interaction-Based Services; CRM: Constituent Relationship Management), Citizen Empowerment (e-Participation and e-Democracy; Collaboration and Partnership; CRM), Market Enhancement & Development (Collaboration and Partnership; Global Business Development), and Exposure & Outreach (Global Business Development; Marketing Electronic Government). Each SFA comprises Key Functional Applications (KFAs), and all connected elements rest on a horizontal layer of Infrastructure Consolidation and Standardization: integrated applications, secure channel, data privacy, and common infrastructure and service centers.]
For example, the four-stage EDP model (Nolan & Gibson, 1974) uses a combination of text and graphical elements to provide varying levels of abstraction with respect to growth elements and management techniques over the model's four development stages. Graphical illustrations were also used to describe the dynamics of the IS function within user organizations (Friedman, 1994), as well as the interconnection between technology, human, and business elements (Alter, 2002). With respect to e-government developments, the graphical representation is more direct. Koh and Prybutok (2003) depict their three-ring model for measuring the dimensions of e-government functions in graphical form. Similarly, Layne and Lee (2001) provide a graphical representation of functional development mapped against overall integration and development complexity in their four-stage model of fully functional e-government. A pictorial view was also deemed the most effective way of communicating in a common language that provides a functional perspective free of technical jargon and document-based artifacts. Such a framework would be easier to share and disseminate across administrations and development contexts, and it reduces the number of implicit assumptions and uncertainties. Our hope is that such a representation would be subject to fewer interpretation errors and could be debated and discussed with greater facility than a traditional text-based document.
Modularity, Flexibility, and Scalability

The model illustrated in Figure 1 provides a system-level representation of a generic Government X's implementation or vision. Strategic focus areas (SFAs) and key functional applications (KFAs) are represented as stand-alone blocks and sub-blocks that capture the vertical and horizontal elements of the electronic government vision and implementation. The basic security and computing infrastructure platform was also designed to meet the same requirements. These are stand-alone elements that can be reused and interconnected in a variety of ways. This modularity enables reuse and ease of modification and also significantly supports scalability and flexibility. By virtue of its flexibility and modularity, the framework can be expanded and modified to represent a diverse array of e-government visions and developments through the selection of SFA clusters and by interconnecting these with relevant developmental areas. In this way, customized views can be built to provide a contingency capability that allows for the differentiation and representation of different development perspectives and patterns. This allows for multiple applications of the framework to different implementations and vision plans and also meets the requirement for transferability across contexts. Note also that the model classifies efforts by function or end objective and not by departmental or agency boundaries, and thus provides additional flexibility in considering electronic government developments. Application of the framework is not limited to the predefined SFAs and key application areas. New SFA and KFA blocks can be created or interchanged, and interconnections between elements can also be readily changed to reflect new relationships and developments as they arise over time. The framework is not a static construct with rigid classifications; the key application areas and SFAs given are only meant as starting points for future investigations. The framework is customizable, not only across contexts of application, but also across different development periods, where it can be reapplied to the same implementation periodically. This scalability and ease of reapplication support longitudinal analysis and continual monitoring and environmental scanning.
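The modular block structure just described can be sketched as a small data model. This is our own illustrative construction, not part of the GEF proposal itself: the dataclasses, the `GEFView` name, and the sample wiring are assumptions, although the SFA and KFA labels follow Figure 1.

```python
from dataclasses import dataclass, field

@dataclass
class KFA:
    """Key functional application: a reusable stand-alone block."""
    name: str

@dataclass
class SFA:
    """Strategic focus area: a cluster of interconnected KFA blocks."""
    name: str
    kfas: list = field(default_factory=list)

@dataclass
class GEFView:
    """A customized view of one administration's e-government program."""
    government: str
    sfas: list = field(default_factory=list)

# A stand-alone block can be reused across clusters (e.g., CRM appears
# under both Service Delivery and Citizen Empowerment in Figure 1).
crm = KFA("CRM: Constituent Relationship Management")

view = GEFView("Government X", sfas=[
    SFA("Service Delivery", [
        KFA("Service Automation & Information Provision"),
        KFA("Interactive Services"),
        crm,
    ]),
    SFA("Citizen Empowerment", [
        KFA("e-Participation & e-Democracy"),
        KFA("Collaboration & Partnership"),
        crm,  # the same block, interconnected into a second cluster
    ]),
])

# The framework is not static: new SFA/KFA blocks can be added or
# interchanged as the vision evolves.
view.sfas.append(SFA("Exposure & Outreach", [KFA("Marketing e-Government")]))
```

Because the blocks are plain values, a customized view for a different administration is simply a different selection and wiring of the same reusable pieces, which is the transferability property argued for above.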
Methodology

Formulating Key Application Areas and Identifying Strategic Agenda Clusters

The conceptualization of the generic framework was a multi-phased process and began with the study of the information age agendas proposed by
Heeks (1999). These were identified as increased efficiency, decentralization, increased accountability, improved resource management, and marketization. These agenda items are, in effect, the horizontal motivating elements that are shared across functional and departmental lines. We inferred from this that every electronic government initiative could be considered in some way to fulfill one or more of these agendas. To build upon these findings, we then identified prominent areas of electronic government development by analyzing the e-government vision and white paper documents of various nations, along with independent consulting reports and academic research. Initially, we focused on developed nations (US, Canada, France, Denmark), but we then expanded the analysis to include both developing and developed nations (Malaysia, Singapore, Malta, Mauritius, South Africa, United Kingdom). This level of analysis highlighted a number of key functional applications and represents a progression in the framework development process from high-level motivations to more concrete implementation aspects. Table 2 outlines the e-government applications initially identified from the literature investigated. The next step was the refinement of the application areas to include operational and infrastructure components, and human and collaborative elements, that were not explicitly identified in the reviewed material. These components act as bridging and enabling elements for many of the other areas and must be acknowledged as essential areas of e-government development. See Appendix 1 for a detailed description of the application areas. As shown in Table 3, some key refinements introduced into the conceptualization of the framework at this stage were:
• Separation of services into interaction-based and automation- and information-based efforts. This demarcation parallels the evolutionary progression of services towards greater maturity and sophistication and allows for the measurement of efforts with differing levels of each.
Table 2. Identification of e-government functional applications

E-Government Functional Applications (Initial) | Sources
Interaction-based services | Bertelsmann, 2001; Layne & Lee, 2001; OECD, 2003
Seamless service delivery and automation | Accenture, 2001, 2002; Working Group E-gov., 2002; World Markets Research, 2001
Integration of information, services, and agencies | Accenture, 2002; Government of Canada, 2002; Koh & Prybutok, 2003
Information organization and content aggregation | Layne & Lee, 2001; Koh & Prybutok, 2003; Working Group E-gov., 2002
CRM: Constituent Relationship Management | Accenture, 2001; Deloitte Research, 2000; Government of Canada, 2002
Democracy and Participation | UK Online Action Plan, 2001; Government of Canada, 2002
Transparency and Constituent Connectivity |
Exposure and outreach | Accenture, 2001; Heeks, 1999; Chadwick & May, 2003; OECD, 2003
Global Business Development | MAMPU Flagship Applications, 2002; CIMU White Paper, 2002; Accenture, 2002; Working Group E-gov., 2002
Data and security protection | Accenture, 2002; Working Group E-gov., 2002; Government of Canada, 2002; UK Online Action Plan, 2001
Table 3. E-government functional applications (refined)

E-Government Functional Applications (Initial): Interactive services; Seamless service delivery & automation; Integration of information, services, and agencies; Information organization & content aggregation; CRM: Constituent Relationship Management; Democracy & Participation; Transparency & Constituent Connectivity; Exposure & outreach; Global Business Development; Data & Security protection

E-Government Key Functional Applications (Refined): Interactive Services; Service Automation & Information Provision; Infrastructure Consolidation & Standardization; CRM Development; e-Democracy & e-Participation; Collaboration & Partnership Programs; Marketing e-Government; Global Business Development
• Recognition of a common infrastructure and security platform that provides a baseline for developing internal efficiency, reliability, and acceptance for IT and human systems. This platform addresses the need for a high degree of integration between technology, organizational, and human systems.
• Recognition of ICT as a determining factor in the participation and democracy developmental areas, by the inclusion of the "e" prefix.
• Recognition of collaborative elements involved in the planning, rollout, and adoption of e-government initiatives and services. This can also be seen as the acknowledgment of soft integration elements not defined in the basic infrastructure platform.
• Addition of marketing of e-government as another key area that supports the adoption and diffusion of initiatives but is not intended to replace the marketization agenda presented earlier.
At this stage, we have identified the generic areas of development that an administration can potentially undertake. However, the framework, as it stood, did not meet the requirements for transferability across different contexts of application. To meet this exigency, we created a classification scheme entitled Strategic Focus Areas (SFAs) that could be used to identify high-level e-government vision objectives that would be applicable
across different government administrations. Each group incorporates a number of relevant functional applications. The SFAs are identified in Figure 1 as:
• Service Delivery
• Citizen Empowerment
• Market Enhancement and Development
• Exposure and Outreach
Each SFA provides a suggestive mapping between KFAs and higher-level agendas. SFA clustering represents an initial interpretation of the strategies and directions employed in current e-government programs and is, at best, an informed estimate of current efforts. In addition to these four groupings, we introduce the horizontal dimension of infrastructure consolidation and standardization. Given the level of coupling between the groupings and an integrated and secure infrastructure, each SFA's associated developmental area is interconnected with the integrated platform. We note one caveat, however: the growing importance and impact of security and privacy concerns within e-government may soon merit their own strategic focus area.
SFA Analytical Capabilities

The model is designed to provide the flexibility to consider developments at different levels of abstraction. Figure 1 represents a high-level categorization of general trends and directions that could be used to generate awareness and discussion. Lower-level differentiation would consist of analyzing actual implementation initiatives and nesting them within the appropriate key functional application areas. This ability to decompose the elements of the functional application areas is useful for operational and strategic analyses. This decomposition technique can be applied for the following:

Table 4. Generic framework breakout capability

Service Delivery (Service Automation & Info; Interactive Services; CRM)
• Service Automation & Info: SA-PIT: public information terminals; SA-Transport: www.transport.gov.za (train schedules, road construction updates); UK: WAP wireless services (Land Registry)
• Interactive Services: SA-Education: www.education.pwv.gov.za, www.unisa.ac.za (online enrollment + fees); SA-Taxes: www.mytax.co.za (filing public/private, view correspondence, forms); US-Postal: www.moversguide.com (follow-up procedure); CAN: Revenue Netfile, private/citizen filing, paperless: www.netfile.gc.ca
• CRM: SA: multilanguage and literacy-level delivery mechanisms; US-Labor: www.ajb.org; MAL: Citizen Central, user-group-focused information (community, young adult, parent): http://www.myeg.com.my

Citizen Empowerment (e-Participation/Democracy; Collaboration/Partnership; CRM)
• e-Participation/Democracy: US: central government portal: www.FirstGov.gov (figures, account-driven/legislative document search); US: Census/Library of Congress: www.census.gov, www.thomas.loc.gov; UK-Democracy: Citizen Space; UK-Democracy: live debate coverage of Parliament with feedback encouraged, 12-month trial: www.parliamentative.tv; SA: voting and registration services, information, and results updates: www.elections.org.za
• Collaboration/Partnership: FIN: interactive public discussion forum hosted by ministry: www.otakantaa.fi/kaynnissa.cfm; UK: online media campaign coinciding with 2005 service launch

Market Enhancement & Development (Collaboration/Partnership; Global Business Development)
• Collaboration/Partnership: US: procurement portal for business and agencies: www.FedBizOpps.gov; MAL: government supplier and vendor commerce site: http://home.eperolehan.com.my/
• Global Business Development: MALTA: partnering with a global firm to develop common ICT infrastructure; UK: digital key infrastructure to enable e-commerce, one certification authority; SA-PIT: public information terminals, an example of reach focus, providing access to the masses

Exposure and Outreach (Marketing E-Gov.; Global Business Development)
• Marketing E-Gov.: SA: alternative technologies (iTV, WAP, kiosk); SA: multilanguage and literacy-level delivery mechanisms; SA: smartcard and population registry, an example of reach focus that brings e-government to the masses
• Global Business Development: MAL: Cyberjaya, technology showcase, integrated infrastructure, open invitation to international firms to share knowledge openly

Infrastructure Consolidation and Standardization
• Internal efficiencies and procurement: SA: consolidated buying power of the state, common procurement platform; US: 24-month high-payoff initiatives, internal efficiencies, and federal infrastructure; US: Internal Efficiency and Effectiveness Portfolio; MAL: Generic Office Environment (paperless); MAL: Electronic Procurement (internal supply chain initiative)

a. Verification of direction and strategy against actual implementation activities on an ongoing basis
This constitutes a top-down application of the GEF. An objective view of e-government is created when implementation activities are nested within SFAs. This view provides a measure of the progress of the implementation relative to strategic objectives and can be used in conjunction with predefined qualitative or quantitative criteria (e.g., usage rates or degree of operational readiness). If applied on a continuing basis, this application can serve as a dynamic check that reveals how far strategic objectives have been realized, based on the degree of completion within key application areas.
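As a minimal sketch of this top-down check, suppose each KFA has been scored for degree of completion against predefined criteria. The scores and the clustering dictionary below are invented for illustration; only the grouping of KFAs under SFAs follows Figure 1.

```python
# Hypothetical completion scores per KFA (0.0-1.0), e.g., derived from
# usage rates or operational-readiness assessments. Figures are invented.
completion = {
    "Service Automation & Information Provision": 0.8,
    "Interactive Services": 0.5,
    "CRM": 0.2,
    "e-Participation & e-Democracy": 0.1,
}

# Which KFAs make up each SFA (grouping follows Figure 1).
sfa_clusters = {
    "Service Delivery": [
        "Service Automation & Information Provision",
        "Interactive Services",
        "CRM",
    ],
    "Citizen Empowerment": ["e-Participation & e-Democracy", "CRM"],
}

def sfa_progress(kfa_names):
    """Average completion of a cluster's KFAs: a rough dynamic check of
    how far the corresponding strategic objective has been realized."""
    scores = [completion.get(name, 0.0) for name in kfa_names]
    return sum(scores) / len(scores)

for sfa, kfas in sfa_clusters.items():
    print(f"{sfa}: {sfa_progress(kfas):.0%} realized")
```

Reapplying the same check with updated scores over time yields the continuing, longitudinal view the framework calls for.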
b. Vision identification for emerging e-government efforts
This represents a bottom-up application. Key initiatives can be identified and used to form functional application areas. These can then be grouped into appropriate SFA clusters that define vision objectives and directions. Applying the GEF in such a manner is ideal for situations where e-government initiatives are ad hoc and an overall vision or strategy has not yet been articulated. In Table 4, we provide some examples of the decomposition of the GEF using e-government initiatives and activities from five nations. Various activities are identified and placed under the appropriate SFA and KFA headings. This mapping then gives the relative degree of activity in each SFA and KFA. As presented, the depiction gives an overall view of e-government initiatives internationally. However, each nation's relative activity level could also be determined by considering only its specific activities. For example, South African efforts focus mainly on the Service Delivery and Exposure and Outreach SFAs, with minor activity elsewhere.
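The bottom-up reading can be sketched as a simple tally over an initiative inventory. The list below is a small, hand-picked subset of the Table 4 entries, and the helper function is our own illustration rather than part of the GEF:

```python
from collections import Counter

# (nation, SFA) pairs for a few of the initiatives listed in Table 4.
initiatives = [
    ("SA", "Service Delivery"),       # www.transport.gov.za
    ("SA", "Service Delivery"),       # www.mytax.co.za
    ("SA", "Exposure and Outreach"),  # smartcard & population registry
    ("SA", "Citizen Empowerment"),    # www.elections.org.za
    ("US", "Citizen Empowerment"),    # www.FirstGov.gov
    ("UK", "Service Delivery"),       # WAP services (Land Registry)
]

def activity_profile(nation):
    """Relative activity per SFA for one nation: the bottom-up view used
    to infer vision objectives from ad hoc initiatives."""
    return Counter(sfa for n, sfa in initiatives if n == nation)

profile = activity_profile("SA")
# For this subset, South African activity concentrates in Service
# Delivery, echoing the emphasis noted in the text.
print(profile.most_common())
```

A complete inventory would simply extend the list; the tally then approximates the relative activity levels that Table 4 presents qualitatively.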
Table 5. GEF features and requirements

Graphical representation
• Key benefits: differing levels of abstraction; illustration of complex relationships and interactions; multi-dimensional view; common alignment medium
• Requirements supported: system perspective; noise-free representation; transferability across contexts of application; functional representation of e-government

Modularity, flexibility, and scalability
• Key benefits: stand-alone vertical and horizontal elements; flexible interconnection; customizability; capture capability (strategic developments); longitudinal analysis and ease of reapplication
• Requirements supported: system perspective; functional representation of e-government; reusability and expandability; mapability of the present against overall objectives; transferability across contexts of application

Breakout capability
• Key benefits: differing levels of abstraction; strategic and operational applications; diagnostic capabilities
• Requirements supported: reusability and expandability; mapability of the present against overall objectives
Consolidated GEF Features and Requirements

A mapping between the requirements and GEF features is given in Table 5 to conclude our discussion. E-government is an ongoing endeavor that is constantly evolving. We expect that the rate of change will continue to accelerate, given the potential for new technological advances and the complexity of government processes and citizen needs and requirements. To accommodate the fluid nature of the effort, future-oriented tools and methodologies must be scalable and capable of measuring progress on a continuing basis. The framework must act as a sliding window that can be used to update key stakeholders on the current state of the vision. Therefore, a framework or conceptual model must also have the freedom to classify and reclassify ongoing developments as the implementation and vision mature.
Case Studies

To further illustrate the application of the framework, we used it to study the e-government programs of three countries: the United States, the United Kingdom, and Malaysia. The case studies are presented in brief and should not be considered a complete representation of all the e-government efforts in the countries selected. Our aim is simply to illustrate the main features of the framework. In Table 6, we outline the key features of each of the cases. Information used in the table was gleaned from published government documents (white papers, reports, etc.) typically found on government Web sites and from other publicly available sources. Data presented here were gathered in 2002 and 2003. Note that some of the information might have been updated since then. We begin the table by first identifying the published title for each e-government vision and the principal agency responsible for executing the vision. We then briefly outline key e-government initiatives undertaken by each government
administration under the rubric of the SFAs. We should note that the visions and initiatives here represent activities at the national/federal level only. Once the initiatives have been listed, we provide a qualitative assessment of how much emphasis is placed on them by the government administration in question. We classify the emphases as major, balanced, or minor. As noted earlier, each government administration may place a different emphasis on an application area. The emphasis placed depends on the strategic objectives being pursued, the history and context influencing the choices being made, as well as the level of e-government implementation experience and maturity. Most government administrations give priority to service delivery efforts because they are able to deliver “quick wins” in this area (Accenture, 2002, 2004). However, as they grow in e-government deployment maturity and the service delivery application areas become more fully functional, they tend to move on to other areas of focus.
Findings

We now present some general observations and comments with respect to overall e-government development. The generic framework was applied to the three nations and provided a good initial mapping of their e-government development efforts. The modularity feature provided a high degree of freedom in classifying the different initiatives and strategic objectives of each vision, which mapped well to the predetermined SFA and KFA clusters. The case outlines also uncovered a number of potentially key developmental themes, including:

• Varying degrees of centralization
• Lack of balanced development
• Service Delivery SFA as the most prevalent development cluster
Similar to the measurement of digital divide dimensions by Corrocher and Ordanini (2002), there were significant differences in how ad-
Market Enhancement & Development
Citizen Empowerment
Service Delivery
Strategic Focus Area
Principal Driver:
Vision Title:
United States
United Kingdom
Malaysia
Collaboration and Partnership: Dedicated E-Commerce group to develop E-Commerce environment with public, private stakeholders
Core Process Identification and High Payoff Initiatives: 24 high payoff initiatives across 28 lines of business
Action plans to overcome regulatory/legal barriers and enhance connectivity to the global community
Balanced Citizen focused development: Extensive government wide CRM efforts to identify and segment user groups
UK Online Centers basic skills training and framework for bringing e-Government to a national audience.
Business centric collaboration & partnership
Major
Web access to Government legislation and initiative status
Service oriented based segmentation
Government to Business Portfolio: Streamlined support process, E-Business communication enhancement, one-stop business compliance provision.
PIU web based policymaking and parliament web TV –governance and public participation integration efforts
eVoting
Balanced
Automated service pilot programs: including interactive public service/citizen-based services (knowledge sharing, labour, licensing)
Alternate technology services: WAP wireless services
Minor
Centralized public service offerings: Human Resources, Electronic Procurement
Transaction based services: Postal & Education
Government to Business Portfolio: Expanded tax products for business, international trade licensing and regulation
Technology component of E-Government is significantly articulated.
Marketing of Malaysia as a technological R&D center for international development.
Minor
Articulated CRM efforts: Rough segmentation of targeted groups
KIOSK development principally to meet service delivery objectives
Minor
PMO Office—Paperless environment: Internally focused program to create electronic seat of Government.
UK Online Centers: 6000 online center gateway to introduce e-Government services by 2002
Government to Citizen Portfolio: Consumer portal with diverse free and pay based benefits & services (Education, tax, forms, postal)
Major
Malaysian Administrative Modernization and Management Planning Unit (MAMPU)
Electronic Government (1997-present)
Balanced
Office of the e-Envoy
Information age Government (1998-present)
Major
Office of Management and Budget
Information Super Highway, updated with Expanded Electronic Government 2001 (1993-present)
Table 6. Case study summaries
Overall Assessment
Infrastructure consolidation and standardization
Exposure & Outreach
A market driven vision with high pay off applications developed on a case by case basis to maximize internal efficiencies across 28 lines of business. Strong focus on support services and streamlining processes to develop the business environment. A diverse and large array of services offered with citizen engagement predominantly focused on linking users with product offerings.
Internal efficiency and effectiveness portfolio: Commercial best practices application
Government to government portfolio: Integrated federal architecture for IT efficiency (information, processes, resources)
Major
Consumer portal: A single point of access to federal information sources online.
Minor
A balanced and integrated approach supported by a wide variety of services for private citizens, business, and the public service, coupled with extensive segmentation and customization CRM efforts. A clearly articulated and integrated marketing strategy to maximize exposure and outreach at local and international levels, along with ongoing collaboration to create a conducive e-commerce framework.
E-government interoperability framework
Minor
Strategic marketing plan: Online media campaign for 2005 service launch, E-Government branding & high profile marketing of services to users at local and international arenas
Balanced
An outreach and exposure based vision that is also service oriented. Integrated with the Multimedia Super Corridor (MSC) transformation effort to build and create a knowledge-based society and technology utopia. Significant showcasing and marketing of technological areas of excellence and innovation including paperless environments, cyber smart cities at the international level. Centralized services and pilot applications serve both public service and private citizens.
Multimedia Super Corridor (MSC): 20-year transformation effort to build and create a knowledge-based society and technology utopia
PMO Office—Generic Office Environment: Showcase a fully integrated, distributed and scalable deployment of multimedia information technology
Cyberjaya Smart City: A fully wired multimedia oasis designed to attract innovative companies from around the world
Major
Table 7. Framework requirement evaluation

Framework Requirement: Successfully Demonstrated?
Noise-free representation: Partial
Transferability across contexts of application: Yes
Functional representation of e-government objectives: Yes
System representation: Partial
Reusability and expandability: Partial
Mapability of the present against overall objectives: No, untested
ministrations choose to develop their visions of electronic government. Development efforts were undertaken in both symmetric and asymmetric fashions. Certain nations, such as the United Kingdom, pursued more integrated and distributed efforts (symmetric) across all SFAs, while others focused on only a few SFAs to meet specific contexts of application (asymmetric). Patterns of development also reflected the degree of centralized direction that each nation followed. Most nations followed a long-term approach, with the exception of the United States, which drives its implementation through 24-month initiatives with clear and measurable returns. There is also a distinct lack of balanced development. SFA development was more prevalent in areas with well-defined and measurable impacts and benefits. Most nations did not pursue balanced development across all dimensions, but chose one or more key areas on which to concentrate their efforts. The overall US and UK visions for e-government represented the two endpoints of the centralization spectrum. Service Delivery SFA initiatives were present in all nations, however; the Exposure and Outreach and Market Enhancement SFA clusters were the next most prevalent. The weakest SFA was Citizen Empowerment. This supports the observation made by Chadwick and May (2003) that the consultative and participatory focus of e-government efforts is the least developed and often overlooked. We believe that the development and performance of the former are more measurable (usage rates, penetration levels, revenue generated) than the latter, and that
a relationship exists between measurability and the frequency of implementation. Additionally, there may be a relationship between the maturity of the vision (infrastructure, service sophistication and diversity, degree of articulated centralized direction) and the level of symmetric development. From our findings, therefore, we can confirm the following:
• There is a strong services component in e-government visions and realizations.
• There is a prevalence of a practitioner perspective that focuses on "quick wins."
• There are many different ways to conceptualize and implement e-government.
Discussion and Conclusion

Framework Capabilities and Requirements

We now discuss the implications of our findings for framework capabilities and requirements. Six initial requirements were suggested and incorporated into the framework design. Of these, five were fully or partially demonstrated by this first application of the generic framework, and one was not. From this application, we observed that the framework provided:
• a mapping of diverse electronic government elements to a common perspective;
• the ability to compare and differentiate underlying goals and themes between different implementations; and
• the ability to draw general conclusions and compare differences and similarities across implementations.
In our application of the framework, we were able to represent higher-level objectives as Strategic Focus Areas and Key Functional Applications. These provide the common elements that are flexible enough to classify the specific elements of each initiative, yet not introduce too great a level of detail. As a result, interpretation and jargon
“noise” effects were minimized. The profiles are highly customizable and allow for the creation of a common view of electronic government that can be easily shared and compared. For example, in the U.S. and Malaysian profiles, we were able to identify key areas of strategic focus. Both nations had a similarly strong focus on the service delivery component, but differed on the other dimensions. The U.S. e-government program has a strong market enhancement and infrastructure consolidation and standardization focus, whereas the Malaysian development's other area of concentration lay in the exposure and outreach area. Therefore, different areas of strategic focus and activities were captured, and different patterns of development were observed by comparing levels of strategic concentration (minor, major) and related key functional application activities. U.S. development tended to be more internalized and focused on providing maximum efficiencies and economic benefits over a shorter period (24-month high-payoff initiatives), while Malaysian efforts reflected a more externalized approach to raising global awareness of Malaysian society and technology offerings over an extended period (20 years). Some potential contextual factors were also highlighted during this process. Perhaps U.S. efforts reflect a more established base of applications and development, while Malaysian efforts are more focused on the growth and enhancement of emerging technical and social conditions. In a similar fashion, country profiles were created for all three nations and used to identify patterns of development that highlighted both similarities and differences in implementation activities and strategic areas of focus. We observed a number of general trends and developments.
• Differing patterns of development
• Varying degrees of centralization
• Lack of balanced development
• Service Delivery SFA as the most prevalent development cluster
We were able to identify these because the framework was expandable and reusable across the e-government programs studied. We successfully
used the breakout capabilities to analyze and classify the activities and articulated objectives. In this manner, we were able to perform a mixed analysis with some qualitative (articulated objectives) and quantitative (achievements, implementations) elements. The built-in modularity, scalability, and flexibility features then provided the means to identify patterns of development in a simplified yet inclusive manner. As a result, we were able to consider these patterns at a manageable level of abstraction and draw general conclusions that transcend the specific activities of each e-government endeavor and that could be shared across contexts of application.
Limitations

However, we also observed the following shortcomings of the framework:

• Lack of support for a full systems representation
• Potential noise effects could be present
• The case studies represent a static application
The generic framework only provides a partial systems representation of e-government. The interconnection of key application areas and SFAs provides a high-level system representation of overall objectives and some of the relationships between different areas of development. By its very nature, it is more functionally oriented and not optimized for representing system connections and interrelationships. Profiles are designed to provide a common perspective based on analyzing findings from material that tends to describe developments from a functional or capabilities viewpoint. Consequently, these common elements are abstracted from the actual systems and relationships employed by the administrations under study. For example, the relationships and connectivity of the IS/IT function with the executive and public service branches of government are not detailed. Only high-level objectives and relationships are depicted as interconnected SFAs and KFAs. This level of abstraction is required in order to maintain the common perspective and ability to transcend application contexts. Therefore, for an analysis of the actual systems, additional models and tools will be required to complement the application of the GEF. Another post-application discovery was the lack of filtering for potential bias and inconsistencies in the vision claims and achievements included in the country-specific profiles. The profiles did present a common view that filtered out technical jargon and multiple-interpretation artefacts, but they did not provide any checks on the validity of the data used in profile construction. Initially, our intention was that a “noise-free” representation would be achieved by virtue of reviewing several sources from both involved and independent parties; additionally, profiles were envisioned to be generated from a substantial body of achieved, as well as articulated, objectives. However, most developments were in their initial stages, and, therefore, profiles were based almost exclusively on the latter. Consequently, this application may contain political noise artefacts and inconsistencies, and it offers only a best attempt at a “noise-free” depiction based on the data available. The case studies represent a static application at one point in time. The expandability and reusability requirement has been confirmed only with respect to the application to different e-government efforts, not over the dimension of time. As a result, the measurability of e-government maturity requirement is untested, given that vision objectives and achievements were combined instead of measuring the current state against the intended vision. In conclusion, the framework was successful in providing a common perspective that transcends any one e-government offering, but it must be supported by additional analytical tools to provide a full systems perspective. It must also be reapplied to fully test its applicability over time. In addition to these shortcomings, a number of questions remain to be addressed.
For example, if service delivery and a practitioner perspective are the basis for e-government developments, what direction will future developments take? Is balanced development across all areas necessary for superior performance? Additionally,
what are the ultimate potential and benefits of application areas such as Citizen Empowerment? Currently, initiatives in this area are the weakest (Chadwick & May, 2003). We argue that, in the end, some form of balanced development will be necessary. There are only a limited number of “quick-win” initiatives available, and most of these have already been exploited in advanced countries (Accenture, 2004). Ultimately, electronic government development and maturity must reflect the changes in the political, social, and economic orientation of the hosting nation. Therefore, it must evolve beyond its current state. However, this will require additional knowledge, experience, and deliberation. New technology and organizational infrastructure will also need to be developed. We hope that the generic framework can be refined and reapplied to identify some of these upcoming trends and developments, and we look forward to the emergence of additional tools and models that will move the e-government consciousness closer to this point of maturity.
References

Accenture (2001). eGovernment leadership: Innovation delivered.

Accenture (2002). eGovernment leadership: Realizing the vision.

Accenture (2004, May). eGovernment leadership: High performance, maximum value. E-Government Executive Series.

Alter, S. (2002). Information systems: Foundations of e-business (4th ed.). Upper Saddle River, NJ: Prentice Hall.

Basu, S. (2004). E-government and developing countries: An overview. International Review of Law, Computers & Technology, 18(1), 109-132.

Bertelsmann Foundation (2001). Balanced e-government: E-government—Connecting efficient administration and responsive democracy.
Box, R. (1999). Running government like a business: Implications for public administration theory and practice. American Review of Public Administration, 29(1), 19-43.

Brodie, J. (1999). Critical concepts: An introduction to politics. Scarborough, Canada: Prentice Hall Allyn and Bacon.

Caldow, J. (2001). Seven e-government leadership milestones. Working paper, Institute of Electronic Government, IBM Corporation.

Chadwick, A., & May, C. (2003, April). Interaction between states and citizens in the age of the Internet: "E-government" in the United States, Britain, and the European Union. Governance, 16(2), 271-300.

Corrocher, N., & Ordanini, A. (2002). Measuring the digital divide: A framework for the analysis of cross-country differences. Journal of Information Technology, 17(1), 9-19.

Cufaude, J. (2003). Creating the future while managing the present. Association Management, 55(8), 28-34.

Davies, A., & Heeks, R. (1999). Different approaches to information age reform. In R. Heeks (Ed.), Reinventing government in the information age (pp. 9-12). London and New York: Routledge.

Deloitte Research – Public Sector Institute (2000). At the dawn of e-government: The citizen as customer.

Friedman, A. (1994). The stages model and the phases of the IS field. Journal of Information Technology, 9, 76-88.

Guy, J.J. (2001). People, politics and government: A Canadian perspective. Scarborough, Ontario: Prentice Hall.

Heeks, R. (1999). Reinventing government in the information age. In R. Heeks (Ed.), Reinventing government in the information age (pp. 9-12). London and New York: Routledge.

Ke, W., & Wei, K.K. (2004, June). Successful e-government in Singapore. Communications of the ACM, 47(6), 95-99.

Keen, P.G.W. (1991). Shaping the future: Business design through information technology. Boston: Harvard Business School Press.

Koh, C.E., & Prybutok, V.R. (2003). The three ring model and development of an instrument for measuring dimensions of e-government functions. Journal of Computer Information Systems, 43(3), 34-39.

Lacity, M.C., & Willcocks, L.P. (1998). Strategic sourcing of information systems: Perspectives and practices. Chichester, UK; New York: Wiley.

Layne, K., & Lee, J. (2001). Developing fully functional e-government: A four stage model. Government Information Quarterly, 18, 122-136.

Luftman, J.N. (1996). Competing in the information age: Strategic alignment in practice. New York: Oxford University Press.

Marche, S., & McNiven, J.D. (2003). E-government and e-governance: The future isn't what it used to be. Canadian Journal of Administrative Sciences, 20(1), 74-86.

Nolan, R., & Gibson, C. (1974). Managing the four stages of EDP growth. Harvard Business Review, 52(1), 76-88.

OECD (2003). The case for e-government: Excerpts from the OECD report "The e-government imperative." OECD Journal on Budgeting, 3(1), 62-96.

Office of Management and Budget (OMB) (2003, April). Implementing the president's management agenda for e-government: E-government strategy. Retrieved July 30, 2004, from http://www.whitehouse.gov/omb/egov/2003egov_strat.pdf

Osborne, D., & Gaebler, T. (1992). Reinventing government: How the entrepreneurial spirit is transforming the public sector. New York: Plume.

Pacific Council on International Policy (2002). Roadmap for e-government in the developing world: 10 questions e-government leaders should ask themselves. Report of the Working Group on E-Government in the Developing World.
Poostchi, M. (2002). Implementing e-government: Potential impact on organization structure, business processes, and costs. MBA thesis, Carleton University.

Rais Abdul Karim, M. (Ed.) (1999). Reengineering the public service: Leadership and change in an electronic age. Subang Jaya, Malaysia: Pelanduk Publications.

Reddick, C.G. (2004). A two-stage model of e-government growth: Theories and empirical evidence for U.S. cities. Government Information Quarterly, 21, 51-64.

Somogyi, E.K., & Galliers, R.D. (1987). Applied information technology: From data processing to strategic information systems. Journal of Information Technology, 2, 30-41.

UK Cabinet Office (2000, April). E-government: A strategic framework for public services in the information age. Central IT Unit.

United Nations (2003). World public sector report 2003: E-government at the crossroads. Retrieved April 2004, from http://unpan1.un.org/intradoc/groups/public/documents/un/unpan012733.pdf

Ward, J., & Peppard, J. (2002). Strategic planning for information systems. Wiley Series in Information Systems. West Sussex, UK: John Wiley & Sons.

World Markets Research Centre (2001). Global e-government survey. Providence, RI: Brown University.
Additional References

E-Government Vision Documents and White Papers

Malaysia

Electronic government flagship applications (2002). Retrieved June 2002: http://www.mampu.gov.my/mampueng/Ict/flagship.htm

ISP framework version 1.0. Retrieved December 2002: http://www.mampu.gov.my/mampueng/ICT/ISPlan/ISPlan.htm

ISP template version 1.0. Retrieved December 2002: http://www.mampu.gov.my/mampueng/ICT/ISPlan/ISPlan.htm

PMO office: Generic office environment. Retrieved July 2002: http://www.mampu.gov.my/mampueng/Corporat/profile/eg/goe.htm

Human resources management information system. Retrieved July 2002: http://www.mampu.gov.my/mampueng/Corporat/profile/eg/hrmis.htm

Electronic procurement (internal supply chain). Retrieved July 2002: http://www.mampu.gov.my/mampueng/Corporat/profile/eg/ep.htm

Malta

White paper on the vision and strategy for the attainment of e-government (n.d.). Retrieved July 2002: http://www.cimu.gov.mt/htdocs/content.asp?c=34

Maltese exchange. Retrieved July 2002: http://www.cimu.gov.mt/eGovWP_contents.asp

Government portal architecture. Retrieved July 2002: http://www.cimu.gov.mt/eGovWP_contents.asp

National email. Retrieved July 2002: http://www.cimu.gov.mt/eGovWP_contents.asp

South Africa

E-government policy, second draft 2001, version 3.2. Department of Public Service and Administration, Republic of South Africa. Retrieved July 2002: www.dpsa.gov.za/e-gov/2001docs/e-govpolicyFramework.htm

Smart card and population registry. Retrieved July 2002: http://www.dpsa.gov.za/e-gov/2001docs/e-govpolicyFramework.htm and http://www.dpsa.gov.za/e-gov/e-govindex.htm

PIT (Public information terminal). Retrieved July 2002: http://www.dpsa.gov.za/e-gov/2001docs/e-govpolicyFramework.htm and http://www.dpsa.gov.za/e-gov/e-govindex.htm

Transportation information services. Retrieved July 2002: www.transport.gov.za

United States

FirstGov: Your first click to the US Government (2002). Retrieved July 2002: www.firstgov.gov

E-government strategy: Simplified delivery of services to citizens (2002). Office of Management and Budget. Retrieved July 2002: www.whitehouse.gov/OMB

Federal business opportunities. Retrieved July 2002: www.FedBizOpps.gov

Core process identification and high payoff initiatives. Retrieved July 2002: http://www.firstgov.gov

Department of Labor Website. Retrieved July 2002: www.ajb.org

Consumer portal. Retrieved July 2002: www.consumer.gov

United Kingdom

UK online action plan (2001). Retrieved December 2002: http://www.e-envoy.gov.uk/ukonline/progress/actplan/table.htm

Department of Trade and Industry (1999). Building the knowledge driven economy. Retrieved June 2002: http://www.dti.gov.uk/comp/competitive/wh_int1.htm

Electronic government services for the 21st century (2000). Retrieved June 2002: http://www.cabinetoffice.gov.uk/innovation/2000/delivery/intro.htm

UK e-commerce services for the 21st century (2001). Retrieved June 2002: http://www.cabinetoffice.gov.uk/innovation/1999/ecomm.html

Postal services. Retrieved July 2002: www.consignia.com

Parliament Web TV. Retrieved July 2002: www.parliamentlive.tv
Appendix A: Key E-Government Application Areas

Service Automation and Information Provision

This refers to efforts that are primarily concerned with automating existing government services. Typically, existing services are duplicated to provide an alternate communication channel. Additionally, implementation programs are skewed towards the online aggregation and publishing of materials and information. The default emphasis is on information quantity and richness rather than on organization for usage.
Interactive Services

These constitute development efforts implemented to provide a broad range of services and a high level of functionality. Initiatives focus on providing information and content that is transactional in nature and offers a higher level of interaction. Communication is performed in duplex mode (i.e., both citizens and government have the capacity to provide feedback to one another).
CRM Development

Another refinement in service delivery is to add citizen centricity to the design and development effort. Constituent Relationship Management (CRM) techniques, such as focus groups, needs-analysis feedback, and field testing, are integral parts of the action plan. Constituent or user targeting and segmentation efforts are also prevalent, and sites are structured around the needs and requirements of citizens and business. Not only are services offered, but applications also offer value-added services and may provide proactive advisory capabilities as well. Service integration and user-friendly interfaces also characterize CRM service development.
Collaboration and Partnership Programs

The focus here is on advances in fostering cooperation and joint initiatives between internal agencies and external organizations. Partnerships are encouraged, and the risks and rewards are shared. Often, service delivery and maintenance are distributed between government and private industry. External stakeholders can also have a significant role in determining and plotting out vision strategies and objectives. Stakeholders also work cooperatively to implement and realize the vision.
Infrastructure Consolidation and Standardization

Internal transformation efforts relate to the growth and implementation of initiatives that aid in reorganizing internal processes to improve efficiency and decision-making effectiveness. Core processes are analyzed and optimized for online delivery. Organizational changes are also brought about to enable these efforts, including modifying reward programs, evaluation procedures, and procedures for resource allocation. Redundant processes and information are also reduced or eliminated.
E-Democracy and E-Participation

Citizens are informed and educated about the decision-making process, and programs increase the transparency and responsiveness of government as a whole. Citizen feedback and involvement in day-to-day governance are encouraged as much as possible. Additionally, access to legislation and the decision-making process is more prevalent, and efforts seek to bring government closer to the population and to as wide an audience as possible.
Marketing E-Government

This overlaps to some degree with e-democracy and citizen-centricity objectives. Citizens and business again are informed and educated. However, developments in this area are targeted at promoting and marketing the visions and services. Comprehensive plans to introduce and advocate electronic government services and technology are put into place. Consumer confidence and trust are key objectives, and dedicated resources are allocated to the overall strategy. The measurement of service and CRM performance is another key feature.
Global Business Development

The principal thrust of globalization is to increase global connectivity. A high level of partnership and the fostering of relationships internationally are central to these efforts. A main activity is the development and encouragement of e-commerce and business ventures. Programs also reflect a high degree of effort towards establishing links and contacts across physical and state boundaries.
This work was previously published in Journal of Global Information Management, Vol. 13, No. 1, edited by F. Tan, pp. 1-30, copyright 2005 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter LXVIII
A Web Query System for Heterogeneous Government Data Nancy Wiegand University of Wisconsin–Madison, USA Isabel F. Cruz University of Illinois at Chicago, USA Naijun Zhou University of Wisconsin–Madison, USA William Sunna University of Illinois at Chicago, USA
ABSTRACT This paper describes a Web-based query system for semantically heterogeneous government-produced data. Geospatial Web-based information systems and portals currently are being developed by various levels of government along with the GIS community. Typically, these sites provide data discovery and download capabilities but do not include the ability to pose DBMS-type queries. One of the main problems in querying distributed government data sources is the difference in semantics used by various jurisdictions. We extend work in schema integration by focusing on resolving semantics at the value level in addition to the schema or attribute level. We illustrate our method using land use data, but the method can be used to query across other heterogeneous sets of values. Our work starts from an XML Web-based DBMS and adds functionality to accommodate heterogeneous data between jurisdictions. Our ontology and query rewrite systems use mappings to enable querying across distributed heterogeneous data.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Introduction

Governmental data increasingly are being produced and distributed over the Web. Integrating and querying these data will greatly help improve governmental decision making. However, most current Web-based practices rely on keyword search engines and do not consider the heterogeneity problem in data sources. This paper presents a study of deploying powerful Database Management System (DBMS) technology to improve capabilities for data search and query and of using ontology approaches for data semantics. We give an overview of our system and address the problem of mediating between highly heterogeneous information developed independently by many different government units. We focus on geospatial data in general and on diverse land use coding systems in particular. To be able to map between heterogeneous values found in land use attributes, we extend work on semantic integration by resolving semantics at the value level of individual attributes in addition to the schema level. Web sites are currently being developed by many levels of government to serve geospatial data. A recent example from the federal government is Geospatial One-Stop (2003), which was initiated to have all geospatial data and information from federal agencies along with many state, local, tribal, and private agencies accessible from one portal. Other examples of geospatial clearinghouses and portals at the national level include the Alexandria Digital Library and the Federal Geographic Data Committee (FGDC) Clearinghouse. In addition, there are many geospatial Web sites being produced at the state, county, and local levels. State-level examples for Wisconsin include the Wisconsin Land Information Clearinghouse (WiscLinc) and the prototype Wisconsin Land Information System (WLIS). WLIS, for example, will allow central access to distributed data sets that remain under local government control and reside on local and county servers across the state (WLIS Project Team,
2000). The problem is that without state-mandated standards, data sets produced by local and county governments are highly heterogeneous. As one moves across jurisdictional boundaries, not only do database schemas vary, but also the definitions and acceptable values of attributes change significantly. In fact, most of the effort required to integrate diverse geospatial data lies with the nonspatial attributes. That is, integration problems caused by the use of different coordinate systems, for example, are more easily solved (WLIS Project Team, 2000). We began our work in the context of WLIS but are extending it to a larger scale such as needed for Geospatial One-Stop. The typical purpose of WLIS and other geospatial Web sites is to enable search and download capabilities for geospatial data and services. However, such sites are limited or lacking in their ability to allow full DBMS-type querying over the content of distributed government data sources. Instead, querying in these sites is restricted to selecting a few metadata fields for the purpose of locating data sources. Sites that do allow DBMS querying on source data content provide an explicit connection over the Web to a backend DBMS containing the data of a single data source. Otherwise, to query data source content, a user has to download the data into a local Geographic Information System (GIS) or DBMS. To enhance these government Web sites, we are working to support full-fledged DBMS-type querying over distributed data sources. This paper discusses our approach as embodied by a prototype Web-based query system in which we focus on resolving semantic heterogeneities between the attributes of related data sets. To design our system architecture, we first consider characteristics of government-produced geospatial data. Next, we discuss the semantic heterogeneity problem in general and then present our specific application problem concerning land use codes. 
We then discuss ontology integration methods including our semi-automatic solution. Our existing working system using land use data examples is described in the last section.
Web DBMS Design for Distributed Geospatial Government Data

Special considerations of data models and architectures are needed for designing a Web-based distributed query system for government-produced geospatial data. Although some characteristics are not unique to this application, traditional DBMS and current Web architectures do not accommodate these characteristics. We determined that to provide the necessary functionality, our Web query system needs to recognize the following characteristics: multiple inherent data set organizations, such as theme, jurisdiction, and spatial extent; existence of separate metadata files; spatial functionality requirements; and heterogeneity in schemas and data content. Geospatial data produced by government agencies have various inherent data set organizations. For example, GIS data sets tend to be based on a theme (also called a data category or channel) such as land use, transportation, hydrography, and so forth. In addition, data sets characterize a particular jurisdiction. Jurisdictions range from cities, towns, and villages to counties, regions of various types, the state, and the federal level. These base organizations (theme and jurisdiction) often are used in geospatial portals as main search criteria. A Web-based DBMS can take advantage of such information to determine data sets involved in a query, eliminating the need for the user to identify and specify exact data sources as is required in typical DBMSs. Geospatial data often have separate metadata files that describe the data source in many ways. Theme, jurisdiction information, type of data such as raster or vector, and temporal aspects are examples of metadata. This type of metadata provides more information than typical DBMS schemas and can be queried separately. A common format for metadata is the FGDC’s Content Standard for Digital Geospatial Metadata (CSDGM), often referred to as the FGDC standard.
A Web DBMS for geospatial data needs to be able to handle separate metadata files and their associations with the actual data. Challenges regarding
metadata have previously been recognized (Maier & Delcambre, 1999). Another characteristic of geospatial data is implicit or explicit spatial extent. Explicit spatial coordinates may exist to represent the data at the data set level or the record level. A full geospatial Web-based query system would include spatial query processing, spatial indexes, and functionality such as buffering and overlay. Because that is not the focus here, we design and implement a subset of spatial capabilities. Finally, a Web-based query system needs to resolve the heterogeneity between data sources created by different government jurisdictions. This is our main focus, and we are particularly concerned with semantic heterogeneity at the value level. For example, land use descriptions, which vary by jurisdiction, need to be integrated over multiple jurisdictions for comprehensive planning purposes.
Related Work on Heterogeneity

This section discusses selected prior work on mediating between heterogeneous data sources. We also further discuss the level of semantics at which we are currently working, which is the value level.
GIS Semantic Interoperability

Types of interoperability relevant to geospatial data and GISs previously have been identified, and the difficulties at the semantic level have been noted (Bishr, 1998). In particular, Bishr’s highest level of interoperability proposes seamless communication between remote GISs without prior knowledge of their semantics. To help realize GIS interoperability, the Open Geospatial Consortium (OGC) was formed in 1994. OGC has extended the eXtensible Markup Language (XML) (Bray, Paoli, Sperberg-McQueen, Maler, & Yergeau, 2004) to develop the Geography Markup Language (GML) (Cox, Daisey, Lake, Portele, & Whiteside, 2003) to further help achieve
geospatial interoperability. GML uses a standard markup for spatial representations, although variations exist. However, in GML, the representation and meaning of the nonspatial elements do not follow a standard and may be described in any manner. As a result, GML does not solve semantic problems for nonspatial properties (just as XML does not). In this regard, OGC recommends the formation of information communities that share a data dictionary and metadata schema (OGC Topic 14, 1999). Ontologies have been proposed as a solution for semantic integration (Fensel, 2001), and this is currently a very active area of research within geospatial and other communities. An ontology is a shared and machine-executable conceptual model in a specific domain of interest (Bussler, Fensel, & Maedche, 2002). Ontology engineering, which is the methodology for developing ontologies (Staab & Studer, 2004), includes automatic creation of ontologies, as explored by Malyankar (2002), for example. Ontology-driven GIS has been proposed for geographic information integration, especially between GIS and remote sensing systems (Fonseca, Egenhofer, & Agouris, 2002). Their proposal recognizes the difficulties of creating and imposing standards similar to the difficulties we found regarding a statewide land information system. Instead, they assume communities will commit to common ontologies, which will be used to characterize and locate information sources. An example of work being done by government agencies on national standards related to land parcel data is the National Integrated Land System (NILS). NILS is an effort between the Bureau of Land Management (BLM) and the USDA Forest Service to manage cadastral records and land parcel information specifically for use in geographic information systems. Also, a new XML standard related to land information exchange for land development and transportation professionals is LandXML. LandXML is being used for civil engineering applications, for example.
However, until additional standards are developed or information communities are formed,
many GIS data sets will continue to be independently developed and heterogeneous. Furthermore, legacy data sets may never be converted to a new standard.
DBMS Semantic Integration

Heterogeneity in Database Management Systems has received extensive attention (Bouguettaya, Benatallah, & Elmagarmid, 1998). Various types of heterogeneity have been identified. Syntactic heterogeneity can be solved by mapping between data models, such as between a relational model and an object-oriented model. Schematic heterogeneity is handled by schema integration and is important when there are semantically similar objects of interest. Semantic heterogeneity concerns discrepancies in the meaning, interpretation, and intended use of the same or related data (Sheth & Larson, 1990). Most work on semantic integration has focused on the schema level (Rahm & Bernstein, 2001). For example, Clio is a tool for managing schema integration (Hernandez, Miller, & Haas, 2001; Miller et al., 2001). Because of the difficulty of fully automating semantic decisions, Clio combines user intervention with automated techniques. An interesting feature of Clio is that it provides actual data values for review to help with mapping decisions. An XML query system using ontologies has been developed (Theobald & Weikum, 2000). In this system, if the user chooses a special similarity operator, query processing will include semantically similar terms obtained from a knowledge base developed using WordNet. Use of the similarity operator produces ranked results for the similar words. In our application, however, the land use code mappings that are needed to identify similar terms cannot be found in a general collection such as WordNet. Also, there is a greater need to present the user with precise semantic nuance information instead of retrieval relevance rankings. As a result, we needed to develop a precise ontology mapping method.
DBMS Heterogeneity at the Value Level

Prior research on schema integration has recognized semantic problems at the value level. By value level, we mean the values in the domain of an attribute. Bouguettaya et al. (1998) present three representational differences for value level conflicts: differences in the expression (e.g., 4.0 vs. A); differences in units (e.g., miles vs. kilometers); and differences in precision (e.g., cardinality differences such as low, medium, high vs. a range with five choices). These types of representational differences usually can be resolved using straightforward formulas or algorithms. Other types of heterogeneity at the value level are not easily resolved. The land use coding systems in our application include categories such as agriculture, commercial, and residential. The systems are often hierarchical with each category being subdivided into more specific land uses. As above, there are differences in the expression of land use codes (e.g., agriculture codes beginning with A vs. beginning with 9). And, units and precision vary (e.g., hectares vs. acres and six subcategories for agriculture vs. 11). But, in addition, the meanings of codes between different local data sources cannot be directly compared. For example, a coding scheme for one jurisdiction in which the commercial category is divided into commercial sales and commercial services cannot easily be compared to another code scheme divided into commercial intensive and commercial nonintensive.
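The three "easy" representational differences can indeed be resolved with lookup tables or formulas. The following sketch illustrates each case with made-up values and scales (not from any cited system):

```python
# Sketch of resolving the three representational value-level conflicts
# noted by Bouguettaya et al. (1998). All mappings and scales here are
# illustrative assumptions, not from the paper's implementation.

GRADE_MAP = {4.0: "A", 3.0: "B", 2.0: "C", 1.0: "D", 0.0: "F"}

def expression_conflict(gpa: float) -> str:
    """Differences in expression: numeric grade vs. letter grade."""
    return GRADE_MAP[gpa]

def unit_conflict(miles: float) -> float:
    """Differences in units: miles converted to kilometers."""
    return miles * 1.609344

def precision_conflict(level5: int) -> str:
    """Differences in precision: a 1-5 scale binned to low/medium/high."""
    return {1: "low", 2: "low", 3: "medium", 4: "high", 5: "high"}[level5]
```

The hard cases that follow (commercial sales vs. commercial intensive) have no such formula, which is what motivates the ontology mapping approach later in the paper.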
Value Level Semantic Problems for Land Use Data

One of the most immediate purposes of the Wisconsin Land Information System, for example, is to support comprehensive land use planning. Wisconsin, similar to other states, has passed smart growth legislation requiring local governments to develop land use plans. Of the many types of data sets needed to create a land use plan, such as transportation, environmental corridors, wetlands,
and so forth, current land use information is of prime importance. However, land use data is one of the most problematic types of data because of its heterogeneity. Land use codes are oftentimes found in GIS data sets. The data may be parcel based in which each polygon forms a parcel or may be such that each polygon covers an area having the same land use without regard for parcel boundaries. In either case, there is great variability in what other attributes exist in a data set containing a land use attribute. The largest problem, however, is that city and county governments and regional planning commissions develop their own coding systems for land use. Historically, a standard coding system was never imposed, and individual communities preferred to develop land use codes that more closely represented the particular land uses in their jurisdictional area. A multi-dimensional coding system called the Land Based Classification System (LBCS) (Everett & Ngo, 1999) was developed by the American Planning Association to help provide a standard. However, to date, it has not been widely adopted. Table 1 shows example land use codes in Wisconsin. The synonyms (Lucode, Tag, Lu1, and Lu_4_4) for the land use attribute are resolved more easily than determining whether the code descriptions share common definitions. Value level semantic resolution is needed because the descriptions are not exact matches; each description slightly varies from one another. For example, the 8110 code of the City of Madison makes no
Table 1. Heterogeneity in land use codes

Planning Authority       Attribute Identifier   Land Use Code   Description of Code
Dane County RPC          Lucode                 91              Cropland/Pasture
Racine County (SEWRPC)   Tag                    811, 815        Cropland; Pasture and Other Ag
Eau Claire County        Lu1                    AA              General Agriculture
City of Madison          Lu_4_4                 8110            Farms
distinction between cropland and farm buildings; whereas the Dane County Regional Planning Commission (RPC) has a separate code for farm buildings. Eau Claire County’s most specific code that would include cropland is at the general agriculture level, which also includes various other subcategories. Comprehensive planning between neighboring communities requires mapping between coding systems. An example query needed as part of a land use decision may be to find all lands with a particular land use over a watershed spanning several counties. The problem of resolving land use codes is a significant one. Not only are the ramifications important within a state, they also are important for planning efforts between states or even between countries, such as between the U.S. and Mexico (Ganster & Wright, 2000). One solution to resolving differences in meanings of land use codes is only to compare codes at a general level, such as commercial, residential, or agriculture, and not to try to resolve subcategories. Even at this general level, a system to automatically make comparisons between diverse coding systems is extremely valuable for comprehensive planning, because efforts to resolve codes are currently done by hand on a case-by-case basis. We developed a method, however, to keep comparisons between the meanings of subcategories for codes as precise as needed.
Figure 1a. Ontology DTD
Value Level Semantic Integration Using Ontology Methods
Figure 1b. Ontology for land use code values
Strategies for Semantic Integration: Global-As-View and Local-As-View

Data integration to solve schematic heterogeneity has taken two general approaches: Global-As-View (GAV) and Local-As-View (LAV). In GAV, each entity in a global schema is associated with a view over the data sources. Contrary to this, in LAV, a source is defined as a view over the global schema. Our work is more similar to the LAV approach
<!-- Reconstructed from the extracted fragment; the root element name is inferred. -->
<!ELEMENT ontology  (table+)>
<!ELEMENT table     (tuple+)>
<!ELEMENT tuple     (attrname+)>
<!ELEMENT attrname  (attrvalue*)>
<!ELEMENT attrvalue (attrvalue*)>
<!ATTLIST ontology  id CDATA #REQUIRED>
<!ATTLIST table     id CDATA #REQUIRED>
<!ATTLIST tuple     id CDATA #REQUIRED>
<!ATTLIST attrname  id CDATA #REQUIRED>
<!ATTLIST attrvalue id CDATA #REQUIRED>
in that we consider a global ontology of master attributes and values, which are then mapped to local attributes and values. A global ontology, available through the user interface, is beneficial for querying heterogeneous data sets because the user can select from provided ontology terms. To support the LAV approach, we use a table-based DTD to define the ontology (Figure 1a) (Cruz, Rajendran, Sunna, & Wiegand, 2002). A unique feature of our approach is that the ontology DTD includes an optional subelement—attrvalue—to hold ontology values for attributes with heterogeneous domains. The attrvalue element can have nested attrvalue elements to describe any level of subcategories, such as needed for land use coding systems. We present a subset of the values in the global ontology for the land use code attribute (Figure 1b):

<attrvalue id="Commercial-Function-Service"/> … … …

Our vocabulary consists of about 200 different terms obtained from a subset of around a dozen land use databases that we collected. Another unique feature of our approach, in addition to providing a method to allow value level representations, is that the global ontology does not specify a standard set of land use code values, which can be a contentious issue. Instead, it is a potential composite of all values, including synonyms and various overlapping categorical terms that might be used in a query. Including multiple categorizations in the ontology provides a solution to the problem of a domain being categorized differently in various coding systems. For example, in Figure 1b, the commercial code includes both the scale and function subcategories.
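The nested-attrvalue representation is straightforward to process with a standard XML parser. The fragment below is hypothetical (only the Commercial-Function-Service identifier appears in the paper's excerpt; the other ids and the nesting are assumed for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment in the spirit of Figure 1b: nested attrvalue
# elements model subcategories of the land use code domain.
FRAGMENT = """
<attrname id="LandUseCode">
  <attrvalue id="Commercial">
    <attrvalue id="Commercial-Function">
      <attrvalue id="Commercial-Function-Service"/>
      <attrvalue id="Commercial-Function-Sales"/>
    </attrvalue>
  </attrvalue>
  <attrvalue id="Agriculture">
    <attrvalue id="Cropland"/>
  </attrvalue>
</attrname>
"""

def flatten(elem, depth=0, out=None):
    """Walk nested attrvalue elements, recording (depth, id) pairs."""
    if out is None:
        out = []
    for child in elem.findall("attrvalue"):
        out.append((depth, child.get("id")))
        flatten(child, depth + 1, out)
    return out

root = ET.fromstring(FRAGMENT)
terms = flatten(root)
```

Flattening the hierarchy this way yields the term list that a user interface could present for query construction.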
Ontology Integration Methods

Given the semantic differences between the domains for land use coding systems, it is not possible to completely resolve the discrepancies at the level of precision that we need using fully automatic methods. For example, algorithms that involve string matching either will not work or will not capture the needed semantics. That is, cropland will not match agriculture, although cropland is a subset of agriculture. Other types of advanced automatic matching methods involve rankings of results using probabilities. For example, we experimented with modifications to the Naïve Bayesian classifier to automatically match categories by comparing the occurrence of terms in their subcategories (Zhou, 2003). Others have also used similarity measures to produce probabilities (Doan, Madhavan, Domingos, & Halevy, 2002). However, returning answers with attached probabilities for an application involving land use planning may not be sufficient or appropriate if, for example, a land use planner needs to know as precisely as possible where and how much cropland exists over a multi-jurisdictional area. We also experimented with Formal Concept Analysis (Ganter & Wille, 2004) to form a concept lattice as a means of relating broad categories in the various land use coding systems (Zhou & Wiegand, 2004). Although the results were interesting, they were
not precise enough to use for automatic resolutions between coding systems. We concluded that a semi-automatic approach would provide the most accurate results for ontology integration.
Semi-Automatic Ontology Alignment We developed a tool to map the global ontology to each local schema and the values of its land use coding system (Figure 2). Using this tool, a domain expert first performs initial mappings manually between concepts in the global ontology and concepts in the local schema. Based on these mappings and the user’s request, the tool can perform a semi-automatic deduction process to generate more mappings (Cruz & Rajendran, 2003; Cruz, Sunna, & Chaudhry, 2004). The user can commit the new mappings generated by the tool or override them. The user can resume the process of manual mapping and can invoke the semi-automatic deduction process anytime. Concepts are mapped using the following mapping types: exact—the connected concepts are equivalent in their meaning; approximate—the connected concepts have similar meanings but are not quite exact; no mapping—a concept in the global ontology does not have a candidate match in meaning in the local schema; superset—the concept in the global ontology is more general in meaning or contents than a concept or a group of concepts in the local schema; and subset—a concept or a group of concepts in the global ontology is less general in meaning or contents than a concept in the local ontology. The semi-automatic deduction process establishes mappings between concepts based on mappings of their children. For example, in Figure 3, vertices auto parts and construction and electrical supplies in a part of a global ontology on the left are mapped using mapping types exact and superset to vertices auto parts and electrical supplies in a part of a local schema on the right. The mapping type between their parents manufacturing and production can be deduced to be superset based on the mapping between the children, because we consider that the content of the parent is the aggregation of the contents of its children. All
Figure 2. Tool to create an agreement file
the children of manufacturing are mapped to all children of production. This is the Fully Mapped (FM) case. The Partially Mapped (PM) case occurs if there are some children in the global ontology that cannot be mapped to any of the children in the local schema. For example, in Figure 4, vertices auto parts and electrical on the left are mapped to vertices auto parts and electrical supplies on the right using mapping type exact for both mappings. But the vertex construction material in the global ontology on the left cannot be mapped to any concept in the local schema. As a result, vertex manufacturing is automatically mapped using mapping type superset to the vertex production. Detailed specifications of the deduction process for all possible combinations of vertex mappings, the resulting mappings for their parents, and the assumptions upon which the process works have previously been described (Cruz & Rajendran, 2004).
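The deduction step above can be sketched in a few lines. This is a simplified illustration of the idea that a parent's content is the aggregation of its children's content, not the full rule set of Cruz and Rajendran (2004):

```python
# Simplified sketch of parent-mapping deduction from child mappings.
# The full specification covers more combinations; this captures the
# FM and PM cases described in Figures 3 and 4.

def deduce_parent(child_mappings, unmapped_global_children=0):
    """child_mappings: mapping types for the global node's mapped
    children; unmapped_global_children: children with no counterpart
    in the local schema (the Partially Mapped case)."""
    if unmapped_global_children > 0:
        return "superset"          # PM: the global parent covers more
    if all(m == "exact" for m in child_mappings):
        return "exact"             # FM with all-exact children
    if all(m in ("exact", "superset") for m in child_mappings):
        return "superset"          # FM with a superset child (Figure 3)
    if all(m in ("exact", "subset") for m in child_mappings):
        return "subset"
    return "approximate"

# Figure 3 (fully mapped): exact + superset children -> superset parent.
fm = deduce_parent(["exact", "superset"])
# Figure 4 (partially mapped): two exact children, one unmapped child.
pm = deduce_parent(["exact", "exact"], unmapped_global_children=1)
```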
Preliminary experiments were performed to test the effectiveness of the semi-automatic deduction process. The automatic deduction process yielded 20% of the total mappings when two fairly large shallow ontological trees were mapped to each other. More experiments are being run to study the effect of the depth and the width of ontological trees on the performance of the deduction process. The tool automatically generates an agreement file in XML for each local mapping. XML tags and attributes are used within the agreement file to record the semantics of the mappings (Figure 5). For example, the attribute value OAAD (Ontology Agriculture) maps as a superset to two local values for Dane County. And, the ontology code for Multi-Family maps as a subset to Residential. Since we implemented our system, RDF and OWL have emerged as Semantic Web technologies, but XML was sufficient for our needs. Information from the agreement files is used to generate subqueries sent into Niagara.
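An agreement file of this kind can be consulted with a simple lookup. The element and attribute names below are assumed for illustration (the paper shows only that mapping semantics such as "OAAD is a superset of Dane County codes 91 and 92" are recorded in XML):

```python
import xml.etree.ElementTree as ET

# Hypothetical agreement-file fragment; tag names are assumptions.
AGREEMENT = """
<agreement jurisdiction="Dane County RPC" attribute="Lucode">
  <mapping ontology="OAAD" type="superset">
    <local>91</local>
    <local>92</local>
  </mapping>
  <mapping ontology="Multi-Family" type="subset">
    <local>Residential</local>
  </mapping>
</agreement>
"""

def local_codes(agreement_xml, ontology_term):
    """Return (mapping_type, local codes) for an ontology term."""
    root = ET.fromstring(agreement_xml)
    for m in root.findall("mapping"):
        if m.get("ontology") == ontology_term:
            return m.get("type"), [l.text for l in m.findall("local")]
    return None, []

mtype, codes = local_codes(AGREEMENT, "OAAD")
```

A lookup like this is what the query rewriter would perform when expanding an ontology term into the codes of a particular jurisdiction.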
Figure 3. Semi-automatic ontology integration: A fully mapped case
Figure 4. Semi-automatic ontology integration: A partially mapped case
Figure 5. Fragment of an agreement file (showing mappings involving local codes 91, 92, and Residential)
Ontology-Based DBMS Web Query System

In this section, we present the functionality and flow of our Web query system that extends government portals and clearinghouses such as Geospatial One-Stop or WLIS by providing full DBMS querying over the internal content of the distributed geospatial data sources. We introduce details of the system from the aspects of data
provider, system design, and user. Although we illustrate our method using land use codes, we are not limited to that domain. Instead, the method can be generalized to capture semantic differences for any theme, attribute, and set of attribute values.
System Overview

Our system is based on the search and query engines of Niagara, an XML Internet DBMS system (Naughton et al., 2001). Unlike an HTML search engine, the Niagara search engine allows text-in-context searching in addition to keyword searching. That is, rather than just typing in a keyword, such as cropland, one can add context to limit the search by typing, for example, LandUseCode contains cropland. This search will yield the URL(s) of the data set(s) that contain a value of cropland for an element (attribute) named LandUseCode.
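The difference between plain keyword search and text-in-context search can be shown with a toy in-memory index. The URLs and element contents below are hypothetical, and the real Niagara engine works over inverted lists rather than a dictionary:

```python
# Toy illustration of "text-in-context" search: the keyword must occur
# inside a named element, not just anywhere in the document.
DOCS = {
    "http://example.org/dane/landuse.xml": {
        "LandUseCode": ["cropland", "pasture"]},
    "http://example.org/madison/landuse.xml": {
        "Description": ["cropland history"]},
}

def contains(element, keyword):
    """Return URLs whose *element* content contains *keyword*."""
    return [url for url, elems in DOCS.items()
            if any(keyword in v for v in elems.get(element, []))]

urls = contains("LandUseCode", "cropland")
```

A bare keyword search for "cropland" would return both documents; the context-qualified search returns only the one where it appears as a land use code value.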
Figure 6. System architecture
(Figure 6 components: distributed nodes; crawler; metadata indices; XML data indices; search engine; ontology system with ontology mappings; query engine; XML DBMS query; GeoSpace query rewrite; Niagara; spatial display; XML query result; result aggregate.)
In addition to the search engine, Niagara has a query engine that allows users to pose full DBMS queries using an XML query language, such as XML-QL (Deutsch, Fernandez, Florescu, Levy, & Suciu, 1998). The query engine produces answers to queries rather than URLs. For example, it can return the name of the county that has the greatest acreage of cropland. Although a system such as Niagara has great potential for Web-based querying, it also has limitations. A significant limitation is that it cannot handle the heterogeneity that exists among conceptually similar data sources that are represented differently. Furthermore, it is not designed to take advantage of the unique characteristics described earlier for government-produced geospatial data—that is, inherent data set organizations such as government jurisdiction and theme, the existence of separate metadata files, and spatial information. For example, although Niagara has an “IN*” statement allowing the user to pose a query over an entire Web space without specifying particular data sources, we modify this idea by extracting metadata information from the user to determine appropriate data sets.

Our system extends the Niagara architecture to support semantic integration (Figure 6). Our modifications include the metadata indexes and ontology mappings, a query rewrite system in an ontology component, and enhanced output displays (Wiegand, Zhou, Cruz, & Sunna, 2004; Wiegand, Zhou, Cruz, & Rajendran, 2002). As described further in the following sections, we provide a custom Graphical User Interface (GUI), and we modified the XML-QL query language to support querying through an ontology. Briefly, our custom GUI captures the user query along with metadata to locate appropriate data sources. Our ontology system locates data sources and their semantic agreement files, which contain the correspondences between the land use codes, to perform lookups for query rewriting. Our system supports list and graphical output displays and provides additional semantic information to the user.
Registering Spatial Databases to the Query System

Although Niagara will crawl any XML data found on the Web, it also can range over a subset of the Web. For our prototype, because we need additional information from the data providers, we designed our system assuming a registered set of data producers who include metadata, URLs of data sources, and other related information. An approach requiring registration and authentication also ensures only authorized data are available. We assume the distributed data are marked up in XML. For our test data, we use land use data sets, which typically are in ArcView GIS (Environmental Systems Research Institute [ESRI]) shapefile format. We converted the spatial and nonspatial components of each data set to one XML file by embedding the spatial coordinates within each feature. We also calculated and included the bounding box extent for each feature to use for our MapObjects (ESRI) spatial display of the results. Figure 7 shows sample XML data for a land parcel in Eau Claire County having a land use code of AR, farm residence.
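The per-feature bounding box mentioned above is a simple computation over the feature's coordinate pairs. This sketch uses made-up coordinates, not actual Eau Claire parcel data:

```python
def bounding_box(coords):
    """Compute (xmin, ymin, xmax, ymax) for a feature's coordinate
    pairs -- the per-feature extent embedded alongside the spatial
    coordinates in the XML file."""
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return min(xs), min(ys), max(xs), max(ys)

# Illustrative parcel ring (coordinates are invented for the example).
ring = [(477229.2, 295100.0), (477500.0, 295553.5), (477100.0, 295300.0)]
extent = bounding_box(ring)
```

Precomputing the extent lets the spatial display position each feature without rescanning its full coordinate list.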
Figure 7. XML source data
Figure 8. User interface
<EauClaireLandUseData>
 <area> 1704995.587470 5223.944820 258 AR … Wilson
  <xycoords> 477229.236863, …
  <extent> 477229.236, , , 295553.507
 ....
Indexing Data, Metadata, and Ontology Information

Base XML data files are crawled and indexed into inverted lists by Niagara (Figure 6). Our system adds metadata and ontology mapping indexes to the Niagara architecture to supplement the base data index. That is, each land use data set has an associated metadata file (also in XML) and an agreement file. Our minimal metadata currently consists of theme, jurisdiction type, and jurisdiction name, and we create metadata for data sets not already having associated FGDC metadata files. Although all data and their associated files are locally maintained by their providers, the information they contain is centrally indexed. We build an index network from which all data and index information can be retrieved given a few known metadata values. For example, given a land use type, data records together with their schema, metadata, and ontology mappings can be retrieved.
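The index network can be pictured as a central lookup keyed on the minimal metadata (theme, jurisdiction type, jurisdiction name) that resolves to the locally hosted files of each registered source. Keys and URLs below are hypothetical:

```python
# Sketch of the index network: query metadata resolves to the data,
# metadata, and agreement files of each registered source. All URLs
# and keys are invented for illustration.
INDEX = {
    ("land use", "county", "Dane"): {
        "data": "http://example.org/dane/landuse.xml",
        "metadata": "http://example.org/dane/landuse-meta.xml",
        "agreement": "http://example.org/dane/landuse-agreement.xml",
    },
    ("land use", "county", "Eau Claire"): {
        "data": "http://example.org/eauclaire/landuse.xml",
        "metadata": "http://example.org/eauclaire/landuse-meta.xml",
        "agreement": "http://example.org/eauclaire/landuse-agreement.xml",
    },
}

def sources_for(theme, jurisdiction_type, names):
    """Resolve query metadata to the registered source file sets."""
    return {n: INDEX[(theme, jurisdiction_type, n)]
            for n in names if (theme, jurisdiction_type, n) in INDEX}

hits = sources_for("land use", "county", ["Dane", "Eau Claire", "Racine"])
```

Unregistered jurisdictions (here, Racine) simply drop out of the result, which is how the system avoids requiring the user to enumerate exact data sources.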
User DBMS Query

Figure 8 shows the initial user interface to our system, which is modeled after Geospatial One-Stop. However, we have an additional link that allows a user to query data sources (Query Distributed Data Content). For the query option, there are two subsequent steps: metadata selection and database query. Metadata selection is currently designed for Wisconsin jurisdictions. Users can choose Wisconsin jurisdiction types (county, city,
and/or town) and jurisdiction names over which a query will range. All jurisdictions of a type can be chosen (e.g., all counties in the state), or a subset can be specified using dropdown lists or by clicking on a map (limited now to Wisconsin counties). Finally, the user selects one or more themes to query, and the global ontology schemas and values for those themes are presented, allowing the user to pose a DBMS query by selecting terms. Our system currently supports full queries on a land use ontology.
Query Rewrite and Execution

We give an example of our query rewrite with a typical query in a land use application: find all cropland in Dane, Racine, and Eau Claire counties. This type of query differs from a traditional DBMS query because more than one data source is identified, but there is no join. Instead, the same predicate is applied over multiple data sources. Because the data sources represent geographic areas, we call this type of query a GeoQuery. After a user poses a query using an ontology, our system applies a query rewrite method to solve the semantic heterogeneity problem across multiple databases. To indicate that the query should consult our ontology component, we developed a GeoSpace statement (Figure 9) as a formal representation of the XML-QL query.
Figure 9. BNF for a GeoSpace statement

Query      ::= Geospace WherePart ConstructPart | WherePart ConstructPart ;
Geospace   ::= GEOSPACE id = SourceList ;
SourceList ::= SourceList , Source | Source ;
Source     ::= string ;
GEOSPACE   ::= string ;
id         ::= string ;
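A parser for the optional GeoSpace clause in this grammar can be sketched with a single regular expression. The function name and the exact tokenization are assumptions for the sketch; it peels off the `GEOSPACE id = source, source, ...` prefix and leaves the WHERE/CONSTRUCT body untouched for the underlying XML-QL processor.

```python
import re

# Matches the optional Geospace production from Figure 9: an identifier,
# an equals sign, and a comma-separated (possibly quoted) source list,
# terminated by the start of the WHERE part.
GEOSPACE_RE = re.compile(
    r'^\s*GEOSPACE\s+(?P<id>\w+)\s*=\s*"?(?P<sources>[^"]+?)"?\s*(?=WHERE)',
    re.IGNORECASE | re.DOTALL)

def parse_geospace(query):
    """Return (id, source URLs, remaining XML-QL text)."""
    m = GEOSPACE_RE.match(query)
    if not m:  # the Geospace clause is optional in the BNF
        return None, [], query
    sources = [s.strip() for s in m.group("sources").split(",")]
    return m.group("id"), sources, query[m.end():]
```

A query without a GEOSPACE prefix falls through unchanged, matching the second alternative of the Query production.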
Figure 10. GeoSpace in an XML-QL query

GEOSPACE Area = "www.co.wi.us/Dane.xml, www.co.wi.us/Racine.xml, www.co.wi.us/EauClaire.xml"
WHERE "cropland" ELEMENT_AS $a
CONSTRUCT $a
Figure 11. A generated subquery

WHERE "91" ELEMENT_AS $a IN www.co.dane.wi.us/Dane.xml
CONSTRUCT $a
An example query with a GeoSpace statement is shown in Figure 10. The GeoSpace statement includes a variable (in this instance, Area) to hold the list of URLs for the data sources needed in the query. The variable Area is also used in the body of the query as a qualifier for generic ontology terms. The WHERE clause indicates that the XML-QL query ranges over any nested XML element structure having (the equivalent of) a LandUseCode element. When the query is submitted for execution, a GeoSpace Interpreter consults the agreement files (ontology mappings) to translate qualified ontology terms to local terms, and it rewrites the user query into subqueries in local terms specific to each data source. Subqueries are then sent to Niagara for execution. An example subquery pertaining to Dane County is shown in Figure 11.
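The rewrite step can be sketched as follows. The agreement contents and the subquery template are illustrative: the Dane County mapping of cropland to local code "91" follows Figures 10 and 11, but the Racine codes and the Python representation of agreement files are invented for the example, not the authors' actual format.

```python
# Illustrative agreement data: for each source, a mapping from a global
# ontology term to that provider's local land use code(s).
AGREEMENTS = {
    "www.co.dane.wi.us/Dane.xml": {"cropland": ["91"]},
    "www.co.racine.wi.us/Racine.xml": {"cropland": ["AG1", "AG2"]},
}

# Subquery template in the style of Figure 11.
SUBQUERY = 'WHERE "{code}" ELEMENT_AS $a IN {source} CONSTRUCT $a'

def rewrite(term, sources):
    """Rewrite one GeoQuery into per-source subqueries in local terms."""
    subqueries = []
    for source in sources:
        for code in AGREEMENTS.get(source, {}).get(term, []):
            subqueries.append(SUBQUERY.format(code=code, source=source))
    return subqueries
```

Note that a single global term may fan out to several subqueries when a provider uses more than one local code for it.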
Query Result Display and semantic output messages

Our ontology system reassembles Niagara's results and aggregates them by data source. Various summary statistics are calculated, such as total land acreage in cropland for each jurisdiction. As a spatial data query system, our system also provides graphical displays of query results using MapObjects (ESRI). Semantic relation information is additionally conveyed so users can understand what level of semantic interoperability is achieved. For example, cropland does not map exactly to Dane County's cropland/pasture code (Figures 10 and 11). However, the user will know that a superset is being returned, because our output explicitly includes the query value and the mapped local code(s) for each data set involved in the query.
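The aggregation step can be sketched as below. The record layout (source, local code, acreage) is an assumption for the example; the point is that the output carries both the per-jurisdiction totals and the local codes actually matched, so the user can judge how exact the ontology mapping was.

```python
from collections import defaultdict

def summarize(results):
    """Aggregate result records by data source: total acreage per
    jurisdiction plus the local code(s) actually matched, so a user can
    see when a superset (or other inexact match) was returned."""
    totals = defaultdict(float)
    mappings = defaultdict(set)
    for source, code, acres in results:
        totals[source] += acres
        mappings[source].add(code)
    return {src: {"total_acres": round(totals[src], 2),
                  "local_codes": sorted(mappings[src])}
            for src in totals}
```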
summary and conclusion

Data produced by local governments, in particular, tend to be highly heterogeneous. However, government operations such as land use planning that affect a multi-jurisdictional area require data integration for querying and analysis purposes. Many levels of government are producing geospatial Web sites, such as the Federal Geospatial One-Stop and the Wisconsin Land Information System, that are used for locating data sources. Our goal is to greatly enhance the capabilities of these sites to include query support over data content. To do this, we developed new technology for semantic integration within an Internet query system. We extended work in schema integration by focusing our efforts on resolving differences at the value level. This was needed to be able to query across the many different land use coding systems in use in Wisconsin, for example. However, our method is general enough to be applicable to heterogeneous domains of any attribute, and our framework is complete enough to handle any data. We developed a tool to aid local experts in mapping a global ontology to their local schemas and values. The tool automatically generates an XML agreement file in which semantic relationships are
stored using XML tags. To formally represent a typical type of query in our application, in which the same predicate is applied over multiple data sets, we extended XML-QL with a GeoSpace statement. We have demonstrated our system to government partners, and they are extremely interested in the potential of our work regarding automatic or semi-automatic ontology alignment and the ability to perform query processing on heterogeneous data sets distributed across the state. Our system achieves the technical goal of designing and implementing a Web-based query system that, in particular, solves the problem of semantic heterogeneity among distributed and independently developed government data sources.
Acknowledgment

This work was partially supported by the Digital Government Program of NSF, Grant No. 091489.
References

Alexandria Digital Library. (n.d.). Retrieved August 4, 2004, from http://webclient.alexandria.ucsb.edu

Bishr, Y. (1998). Overcoming the semantic and other barriers to GIS interoperability. International Journal of Geographical Information Science, 12(4), 299-314.

Bouguettaya, A., Benatallah, B., & Elmagarmid, A. (1998). Interconnecting heterogeneous information systems. Boston: Kluwer Academic Publishers.

Bray, T., Paoli, J., Sperberg-McQueen, C., Maler, E., & Yergeau, F. (2004). Extensible markup language (XML) 1.0 (third edition), W3C recommendation 04 February 2004. Retrieved August 4, 2004, from http://www.w3.org/TR/2004/REC-xml-20040204/

Bussler, C., Fensel, D., & Maedche, A. (2002). A conceptual architecture for semantic Web enabled Web services. SIGMOD Record, 31(4), 24-29.

Content Standard for Digital Geospatial Metadata (CSDGM), Federal Geographic Data Committee (FGDC). (n.d.). Retrieved August 4, 2004, from http://www.fgdc.gov/metadata/metadata.html

Cox, S., Daisey, P., Lake, R., Portele, C., & Whiteside, A. (2003). OpenGIS geography markup language (GML) implementation specification. Retrieved August 4, 2004, from http://www.opengis.org/docs/02-023r4.pdf

Cruz, I. F., & Rajendran, A. (2003). Exploring a new approach to the alignment of ontologies. Proceedings of the Semantic Web Technologies for Searching and Retrieving Scientific Data Workshop, 2nd International Semantic Web Conference, Sanibel Island, Florida, October 20-23 (pp. 7-12).

Cruz, I. F., Rajendran, A., Sunna, W., & Wiegand, N. (2002). Handling semantic heterogeneities using declarative agreements. In A. Voisard & S. Chen (Eds.), Proceedings of ACM GIS (pp. 168-174).

Cruz, I. F., Sunna, W., & Chaudhry, A. (2004). Ontology alignment for real-world applications. Proceedings of the National Conference on Digital Government Research, Seattle, Washington, May 24-26 (pp. 393-394).

Deutsch, A., Fernandez, M., Florescu, D., Levy, A., & Suciu, D. (1998). XML-QL: A query language for XML. Retrieved August 4, 2004, from http://www.w3.org/TR/NOTE-xml-ql

Doan, A., Madhavan, J., Domingos, P., & Halevy, A. (2002). Learning to map between ontologies on the semantic Web. Proceedings of WWW2002, Honolulu, Hawaii, May 7-11.

Environmental Systems Research Institute (ESRI). (n.d.). Retrieved August 4, 2004, from http://www.esri.com

Everett, J., & Ngo, C. (1999). Land-based classification standards: Federal role. Proceedings of the American Planning Association National Planning Conference. Retrieved August 4, 2004, from http://www.asu.edu/caed/proceedings99/LBCS/EVERETT.HTM

Federal Geographic Data Committee (FGDC) Clearinghouse. (n.d.). Retrieved August 4, 2004, from http://www.fgdc.gov/clearinghouse/clearinghouse.html

Fensel, D. (2001). Ontologies: Silver bullet for knowledge management and electronic commerce. Berlin: Springer-Verlag.

Fonseca, F., Egenhofer, M., Agouris, P., & Camara, G. (2002). Using ontologies for integrated geographic information systems. Transactions in GIS, 6(3), 231-257.

Ganster, P., & Wright, R. (Eds.). (2000). San Diego-Tijuana international border area planning atlas. Institute for Regional Studies of the Californias, San Diego State University. Retrieved August 4, 2004, from http://www-rohan.sdsu.edu/~irsc/atlas/atlashom.html

Ganter, B., & Wille, R. (2004). Formal concept analysis: Mathematical foundations. Springer-Verlag.

Geospatial One-Stop. (2003). Retrieved August 4, 2004, from http://www.geodata.gov and http://www.geo-one-stop.gov

Hernandez, M., Miller, R., & Haas, L. (2001). Clio: A semi-automatic tool for schema mapping (demo). Proceedings of the 2001 ACM SIGMOD International Conference on Management of Data, Santa Barbara, California, May 21-24 (p. 607).

LandXML. (n.d.). Retrieved August 4, 2004, from http://www.landxml.org

Maier, D., & Delcambre, L. (1999). Superimposed information for the Internet. Proceedings of the ACM SIGMOD Workshop on the Web and Databases (WebDB '99), Philadelphia, Pennsylvania (pp. 1-9).

Malyankar, R. (2002, May). Vocabulary development for markup languages: A case study with maritime information. Proceedings of WWW2002, Honolulu, Hawaii.

Miller, R., et al. (2001). The Clio project: Managing heterogeneity. SIGMOD Record, 30(1), 78-83.

National Integrated Land System (NILS). (n.d.). Retrieved August 4, 2004, from http://www.blm.gov/nils

Naughton, J., et al. (2001). The Niagara Internet query system. IEEE Data Engineering Bulletin, 24(2), 27-33.

Open GIS Consortium (OGC), Inc. (n.d.). Retrieved August 4, 2004, from http://www.opengeospatial.org

OpenGIS Abstract Specification. (1999). Topic 14: Semantics and information communities, version 4. Open GIS Consortium. Retrieved August 9, 2004, from http://www.opengis.org/docs/99-114.pdf

Rahm, E., & Bernstein, P. (2001). A survey of approaches to automatic schema matching. VLDB Journal, 10(4), 334-350.

Sheth, A. P., & Larson, J. A. (1990). Federated database systems for managing distributed, heterogeneous, and autonomous databases. ACM Computing Surveys, 22(3), 183-236.

Staab, S., & Studer, R. (Eds.). (2004). Handbook on ontologies. Springer.

Theobald, A., & Weikum, G. (2000). The index-based XXL search engine for querying XML data with relevance ranking. Proceedings of EDBT 2000 (pp. 477-495).

Wiegand, N., Zhou, N., Cruz, I. F., & Rajendran, A. (2002). Querying heterogeneous GIS land use data over the Web. Proceedings of GIScience 2002 Abstracts, Boulder, Colorado (pp. 207-210).

Wiegand, N., Zhou, N., Cruz, I. F., & Sunna, W. (2004). Ontology-based geospatial XML query system. Proceedings of the National Conference on Digital Government Research (demo) (pp. 289-290).

Wisconsin Land Information Clearinghouse (WiscLinc). (n.d.). Retrieved August 4, 2004, from http://wisclinc.state.wi.us

Wisconsin Land Information System (WLIS) Core Node Pilot Project. (n.d.). Retrieved August 4, 2004, from http://wlis.dnr.state.wi.us/wlis

Wisconsin Land Information System (WLIS) Project Team. (2000). Final report of the Wisconsin land information system project team. Retrieved August 9, 2004, from http://wlis.dnr.state.wi.us/wlis/downloads/background/wlis_project_team_report_sep2000.pdf

Zhou, N. (2003). Automatic ontology mapping of categorical information. Proceedings of the National Conference on Digital Government Research, Boston, Massachusetts (pp. 401-404).

Zhou, N., & Wiegand, N. (2004). Formal concept analysis for semantic integration. Technical report.
This work was previously published in International Journal of Electronic Government Research, Vol. 1, No. 2, edited by M. Khosrow-Pour, pp. 64-82, copyright 2005 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter LXIX
Digital Government Worldwide: An E-Government Assessment of Municipal Web Sites

James Melitski, Marist College, Poughkeepsie, USA
Marc Holzer, Rutgers University – Campus at Newark, USA
Seang-Tae Kim, Sungkyunkwan University, South Korea
Chan-Gun Kim, Rutgers University – Campus at Newark, USA
Seung-Yong Rho, Rutgers University – Campus at Newark, USA
Abstract

This article evaluates the current practice of digital government in large municipalities worldwide. The study assesses 84 cities from around the world using a five-stage e-government framework. Our research and methodology go beyond previous research by utilizing 92 measures that were translated into the native language of each city. In addition, the assessment of each municipal Web site was conducted by a native speaker of the municipality's language between June and October of 2003. We reviewed the relevant e-government literature for evaluating Web sites nationally and internationally, and we discuss our sample selection, methodology, theoretical framework, findings, and recommendations. Our results indicate that Seoul, Hong Kong, Singapore, New York, and Shanghai are the top five large cities in providing digital government opportunities to citizens online. Our research also suggests a difference in digital government capabilities between the 30 developed nations belonging to the Organization for Economic Cooperation and Development (OECD) and less developed (non-OECD) nations.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Assessing Municipal Web Sites

What are the objectives and rationale for public sector services? Some scholars have suggested a list of questions that public administrators have faced, including "How can the public sector achieve efficiency and effectiveness, at the same time balancing those concerns with equity in service delivery?" (Holzer & Gabrielian, 1998, p. 49) and "How can the public sector bring about a more honest dialogue between citizens and their governments?" (Nathan, quoted in Holzer & Gabrielian, 1998, p. 79). Since public organizations should be efficient and effective (i.e., productive internally) and should pursue such democratic values as equality, equity, and participation for a better society externally, both dimensions are important to service delivery. Technological utopians (Blanchard & Horan, 1998; Dyson et al., 1994; Frissen, 1997; Klein, 1999; Winner, 1997) have suggested that information and communication technologies (ICTs) are conducive to both efficient and effective public organizations. In addition, they have argued that ICTs enable citizens to engage in policy deliberation (see Holzer et al., 2004). Following this logic, e-government researchers have examined in recent years how public organizations are using the Internet. They have often sought to answer two questions: (1) Why do public organizations adopt new initiatives such as e-government programs? (2) Why are some initiatives more successful than others? One way researchers determine success is to assess the performance of organizations that adopt e-government by developing criteria and conducting a content analysis of government Web sites. Unfortunately, most of this literature focuses on state and federal governments in the U.S. in terms of examining trends in digital government (Center for Digital Government and Microsoft Corp., 2002, 2003).
At the local level, a notable survey of local government managers was conducted in conjunction with the International City/County Management Association and Public Technology, Inc. (Holden et al., 2003; Norris et al., 2001).
Little research has analyzed the worldwide movement to digital government from a comparative perspective. Researchers at Brown University, led by Darrell West, have conducted a content analysis of state and federal government Web sites in the U.S. since 2000; in 2001 they also completed a worldwide analysis of central government Web sites. A summary of the results was published recently in the Public Administration Review (West, 2004). West's research finds an improvement in digital service delivery at the federal, state, local, and international levels. Previous research, however, lacks a comprehensive framework for evaluating digital government. Such studies have paid attention to only a few aspects of digital service delivery, comparing limited aspects of public organizations. They have not provided specific exemplars of practices and conditions necessary to successfully reproduce each best practice. Furthermore, the instrument used to evaluate Web site content consists of only 27 dichotomous variables (West, 2004). This research takes a more comprehensive approach by utilizing 92 measures, of which 45 are dichotomous and 47 use a four-point scale. This research also utilizes a theoretical framework that is consistent with the e-government literature. In 2002, Moon developed a framework for e-government analysis that consists of five stages:

1. Information dissemination/catalogue
2. Two-way communication
3. Service and financial transactions
4. Vertical and horizontal integration
5. Political (citizen) participation
Based on Moon's research, one can view performance improvement as a progression from Stage 1 (information dissemination) through Stage 5 (citizen participation). From a theoretical perspective, Ho (2002) describes performance improvement as a paradigm shift from a bureaucratic paradigm toward an e-government paradigm. According to Ho, "the new (e-government) paradigm transforms organizational principles
in government. While the bureaucratic model emphasizes top-down management and hierarchical communication, the new model emphasizes teamwork, multidirectional networks, direct communication between parties, and a fast feedback loop” (Ho, 2002). Ho argues that while the shift between paradigms is occurring at the city level, socioeconomic and organizational barriers are slowing the process to the point where city government web sites are not reaching their fullest potentials. In contrast, Melitski (2003) argues that e-government and the use of the Internet by public organizations is value neutral. While Melitski’s information technology (IT) and public administration paradigms are similar to Ho’s bureaucratic and e-government paradigms, Melitski argues that adoption of Internet technologies does not automatically progress to open, accountable government (e.g., the public administration paradigm). Instead, Melitski argues for the existence of two competing paradigms similar to Burrell and Morgan (1979). Melitski argues that organizational and cultural factors influence whether public managers will use the Internet and e-government to exert central control over their organization (IT paradigm), or decentralize and empower their organization (public administration paradigm). In other words, while other e-government theorists argue that adoption of e-government leads to the idealistic use of technology to empower citizens, we believe that competing paradigms within public organizations may hinder such a paradigm shift. Despite our value neutral view of technology, we do believe that the greatest potential for Internet use in public organizations lies in applications designed to facilitate open communication between agencies and create dialogue between citizens and their government. This research uses a five-stage continuum based on previous research, with citizen participation as the fifth stage. 
Along with these theoretical movements, our society has been engaged in an ongoing transformation process from an industrial to an information based society. Rapid change in ICTs has further expedited this trend. Moreover, technological change creates new challenges and opportunities
for social and political organization (Kamarck & Nye, 2002; O’Looney, 2002). In order to facilitate efficiency and effectiveness in digital government, public organizations have begun to apply performance measurement to examine digital government initiatives. Even though there has been a growing use of digital government, there are also certain problems that prevent efficient and effective implementation. For instance, in terms of the digital divide, there has always been a gap between those people and communities that can make effective use of ICTs and those that cannot. Now more than ever, unequal adoption of technology excludes many from reaping the fruits of the economy (Norris, 1999). In addition, there are recent calls for increased security, particularly of our public information infrastructure. Concern over the security of the information systems underlying government applications has led some researchers to the conclusion that e-government must be built on a secure infrastructure that respects the privacy of its users (Kaylor, 2001).
International Digital Government Efforts

Digital government development is now constant and conspicuous. It has received considerable attention through a steady stream of events at the national and international levels (Bertelsmann Foundation, 2003; DigitalGovernance.org Initiative). The Bertelsmann Foundation conducted research on "Balanced E-Government," and its international study identifies criteria of success for outstanding e-government performance in local government administration. The Bertelsmann project differentiates between e-administration and e-democracy. E-administration is represented by the "transaction of user oriented services offered by public institutions that are based on information and communication technologies." E-democracy is "digitally conveyed information and the political influence exerted by citizens and business on the opinion-forming process of public (institutions)." Furthermore, the study also measures how far the
concepts of e-administration and e-democracy have been integrated by public agencies online. Digital government has become a high priority throughout the world. No country wants to be left behind in the movement to improve government through electronic delivery of information and services (Pardo, 2000). While most of the research on digital government has focused on advanced countries, the World Bank E-Government Web site focuses on digital government in developing countries (www1.worldbank.org/publicsector/egov). The World Bank provides case studies as a source of ideas in the areas of better service delivery to citizens, improved services for business, transparency and anticorruption efforts, empowerment through information, and efficient government purchasing. In addition to the World Bank, the DigitalGovernance.org initiative focuses on digital governance in developing countries in terms of building accountable and democratic governance institutions using ICTs. It presents five generic e-governance models (broadcasting, critical flow, comparative analysis, e-advocacy, and interactive service), as well as many case studies in developing countries. In 2001, an international report on national Web sites was published by the United Nations and the American Society for Public Administration in consultation with the National Center for Public Productivity at Rutgers University – Campus at Newark. The report, "Benchmarking E-government: A Global Perspective," used a framework similar to our research. According to the report, "National government websites were analyzed for the content and services available that the average citizen would most likely use. The presence or absence of specific features contributed to determining a country's level of progress.
The stages (emerging web presence, enhanced web presence, interactive presence, transactional web presence, and fully integrated web presence) present a straightforward benchmark which objectively assesses a country’s online sophistication” (UN/ASPA, 2001:1). Digital government has become a new part of the government structure. However, parochial
perspectives limit opportunities to increase experimentation, innovation, and organizational learning pertinent to digital government. It is important to understand existing gaps in government theory and practice, including both traditional and digital perspectives, between Western and non-Western nations (Welch & Wong, 1998). Fortunately, the global study of digital government is advancing, and such efforts may contribute to its continued development. In this context, our research evaluates the current practices of digital governments in municipalities worldwide. This research focuses on the evaluation of current practices on the supply side (government), not the demand side (citizen). Our emphasis is on the evaluation of each city government’s web site in terms of security, usability, and content of web sites, the type of online services currently being offered, and citizen response and participation.
Sample

This research examines cities throughout the world based on their population size, the total number of individuals using the Internet, and the percentage of individuals using the Internet. The cities were selected using the International Telecommunication Union's (ITU) "Internet Indicators" (2002). The ITU data list the online population for each of 196 countries1, and our initial sample consisted of the largest municipalities within the 98 UN member countries with an online population greater than 100,000. For example, in the U.S. and South Korea, New York and Seoul were chosen, respectively. In addition, Hong Kong SAR and Macao SAR were added to the 98 cities selected, since they have been considered independent countries for many years and have high percentages of Internet users. The rationale for selecting the largest municipalities stems from the e-government literature, which suggests a positive relationship between population and e-government capacity at the local level (Moon, 2002; Moon & deLeon, 2001; Musso et al., 2000; Weare et al., 1999). Table 1 lists the 100 cities selected.
Table 1. 100 cities selected by continent (2004 population in 1000s)**

Asia (30): Almaty (Kazakhstan) 1025.7; Amman (Jordan) 1308.3; Baku (Azerbaijan)* 1240.8; Bangkok (Thailand) 6709.2; Beirut (Lebanon) 446.3; Bishkek (Kyrgyzstan)* 841.1; Colombo (Sri Lanka) 669.7; Dhaka (Bangladesh) 9363.1; Dubai (United Arab Emirates) 940.6; Ho Chi Minh (Vietnam) 3452.1; Hong Kong SAR (Hong Kong SAR) 6855***; Istanbul (Turkey) 9631.7; Jakarta (Indonesia) 8987.8; Jerusalem (Israel) 708.5; Karachi (Pakistan) 10889; Kuala Lumpur (Malaysia) 1440.3; Kuwait City (Kuwait)* 152.1; Macao SAR (Macao SAR) 445***; Manama (Bahrain) 154.7; Mumbai (India) 12623; Muscat (Oman)* 880.2; Nicosia (Cyprus) 200.7; Quezon City (Philippines) 10330; Riyadh (Saudi Arabia) 3822.6; Seoul (Korea) 9551.8; Shanghai (China) 13279; Singapore (Singapore) 3499.5; Tashkent (Uzbekistan) 2299.4; Tehran (Iran) 7317.2; Tokyo (Japan) 8273.9

Europe (34): Amsterdam (Netherlands) 742.3; Athens (Greece) 762.1; Belgrade (Serbia and Montenegro) 1126.9; Berlin (Germany) 3396.3; Bratislava (Slovak Republic) 428.8; Brussels (Belgium) 983.9; Bucharest (Romania) 1897.1; Budapest (Hungary) 1729.8; Copenhagen (Denmark) 1100.7; Dublin (Ireland) 1027.9; Helsinki (Finland) 590.6; Kyiv (Ukraine) 2598.0; Lisboa (Portugal) 560.7; Ljubljana (Slovenia) 258.7; London (United Kingdom) 7465.1; Luxembourg City (Luxembourg) 79.8; Madrid (Spain) 3290.9; Minsk (Belarus)* 1682.9; Moscow (Russia) 11247; Oslo (Norway) 799.2; Paris (France) 2107.6; Prague (Czech Republic) 1165.2; Reykjavik (Iceland) 116.5; Riga (Latvia) 687.7; Rome (Italy) 2453.1; Sarajevo (Bosnia and Herzegovina) 602.5; Sofia (Bulgaria) 1084.7; Stockholm (Sweden) 1264.8; Tallinn (Estonia) 372.1; Vienna (Austria) 1504.1; Vilnius (Lithuania) 544.0; Warsaw (Poland) 1676.6; Zagreb (Croatia) 682.3; Zurich (Switzerland) 351.7

North America (12): Ciudad de Mexico (Mexico) 8705.1; Guatemala City (Guatemala) 999.4; Havana (Cuba)* 2359.2; Kingston (Jamaica)* 594.5; New York (United States) 8134.8; Panama City (Panama) 445.8; Port-of-Spain (Trinidad & Tobago) 50.6; San Jose (Costa Rica) 346.8; San Salvador (El Salvador) 513.4; Santo Domingo (Dominican Republic)* 2240.5; Tegucigalpa (Honduras) 470.5; Toronto (Canada) 4558.8

Africa (12): Algiers (Algeria)* 2004.0; Cairo (Egypt) 1790.7; Cape Town (South Africa) 8113.6; Casablanca (Morocco)* 2984.1; Dakar (Senegal)* 3741.2; Dar-es-Salaam (Tanzania)* 2613.7; Harare (Zimbabwe)* 1976.4; Lagos (Nigeria)* 8682.2; Lome (Togo)* 695.1; Nairobi (Kenya) 2504.4; Port Louis (Mauritius) 143.6; Tunis (Tunisia)* 704.7

South America (10): Asuncion (Paraguay) 539.2; Buenos Aires (Argentina) 11928; Caracas (Venezuela) 1719.6; Guayaquil (Ecuador) 2044.7; La Paz (Bolivia) 850.0; Lima (Peru) 8380.3; Montevideo (Uruguay) 1346.9; Santa Fe De Bogota (Colombia) 6981.5; Santiago (Chile) 4434.9; Sao Paulo (Brazil) 10333

Oceania (2): Auckland (New Zealand) 374.3; Sydney (Australia) 4305.5

* Official city web site unavailable
** Population data from the 2004 World Gazetteer (except for Hong Kong and Macao): www.world-gazetteer.com
*** Population data for Hong Kong and Macao from the 2004 World Sourcebook: http://www.odci.gov/cia/publications/factbook/
For our purposes, the main city home page was defined as the official web site where information about city administration and online services is provided by the city. City web sites typically included information about the city council, mayor, and executive branch of the city. Separate home pages for agencies, departments, or the city council were examined only if the sites were linked to the menu on the main city home page. If the web site was not linked, it was excluded from evaluation. Sixteen of the initial 100 cities were excluded from our research for lack of an official city web site: eight in Africa (67%), four in Asia (13.33%), one in Europe (2.94%), and three in North America (25%). As a result, this research evaluated only 84 of the 100 cities initially selected.
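The two-step screen described in this section (national online population above 100,000, then exclusion of cities without an official web site) can be sketched as follows. The rows and figures below are invented placeholders in the spirit of the ITU data, not the actual "Internet Indicators" values.

```python
# Hypothetical rows: (country, largest municipality, national online
# population, whether an official city web site was found).
CANDIDATES = [
    ("United States", "New York", 159_000_000, True),
    ("South Korea", "Seoul", 26_270_000, True),
    ("Togo", "Lome", 210_000, False),
    ("Chad", "N'Djamena", 35_000, True),
]

def select_cities(rows, threshold=100_000):
    """Keep each country's largest municipality if the national online
    population exceeds the threshold, then drop cities without an
    official web site."""
    eligible = [(city, site) for _, city, online, site in rows
                if online > threshold]
    return [city for city, site in eligible if site]
```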
Methodology

Our instrument for evaluating city and municipal Web sites builds on previous research that identifies content, transactional services, and citizen participation as critical components of e-government (Moon, 2001). Furthermore, in recent years many U.S. citizens have become more aware of the need for increased security and privacy of government. A recent poll conducted by Hart-Teeter in conjunction with the Council for Excellence in Government (2003) found that Americans are cautiously aware of possible risks associated with digital government. The Hart-Teeter poll states:

Nearly 44% of Americans strongly agree that if they submit personal information about themselves to government websites, government will be able to provide them with better services…. This concern about privacy and security translates into a cautionary tone from Americans: a 54% majority think that government should proceed slowly in relying on the Internet for communication between citizens and government. (Hart-Teeter, 2003, p. 2)

In addition, we have added an area of emphasis for our analysis that examines web
Digital Government Worldwide
Table 2. E-government measures

E-government Category | Number of Key Concepts | Raw Score | Weighted Score | Keywords
Security/Privacy | 19 | 28 | 20 | Privacy policies, authentication, encryption, data management, and use of cookies
Usability | 20 | 32 | 20 | User-friendly design, branding, length of homepage, targeted audience links or channels, and site search capabilities
Content | 19 | 47 | 20 | Access to current accurate information, public documents, reports, publications, and multimedia materials
Service | 20 | 57 | 20 | Transactional services involving purchase or register, interaction between citizens, businesses and government
Citizen Participation | 14 | 39 | 20 | Online civic engagement, internet based policy deliberation, and citizen based performance measurement
Total | 92 | 203 | 100 |
site usability. Our final instrument consists of five components: (1) security and privacy, (2) usability, (3) content, (4) services, and (5) citizen participation. Table 2 summarizes the measures used in our research to assess a web site's capabilities in each of those five categories. Previous e-government research varies in the scales used to evaluate government web sites. For example, while West (2000, 2001, 2004) uses an index consisting of about 27 dichotomous (yes-or-no) measures, other assessments score each measure on a four-point scale (Kaylor, 2001). Our research instrument goes well beyond previous research, utilizing 92 measures, of which 45 are dichotomous. For each of the five components in our theoretical framework, the study applies 14 to 20 measures. The remaining 47 measures were coded on a four-point scale of increasing technological sophistication (0, 1, 2, 3; see Table 3). Furthermore, in developing an overall score for each municipality, we equally weighted the five categories, regardless of the number of questions in each, to avoid skewing the research in favor of a particular category. The dichotomous measures in the "service" and "citizen participation" categories correspond to values of "0" or "3" on our four-point scale; dichotomous measures in "security/privacy" and "usability" correspond to ratings of "0" or "1."
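The equal-weighting scheme described above can be made concrete: each category's raw score is rescaled so that the category contributes at most 20 points to the 100-point total, regardless of how many measures it contains. A minimal sketch, assuming linear rescaling against the raw-score maxima in Table 2 (the city's raw scores below are hypothetical):

```python
# Maximum possible raw score per category (the "Raw Score" column of Table 2)
RAW_MAX = {"security_privacy": 28, "usability": 32, "content": 47,
           "service": 57, "participation": 39}
CATEGORY_WEIGHT = 20.0  # each category contributes equally to the 100-point total

def weighted_scores(raw):
    """Rescale each raw category score to the 0-20 range."""
    return {cat: CATEGORY_WEIGHT * raw[cat] / RAW_MAX[cat] for cat in RAW_MAX}

# Hypothetical raw scores for one city
city_raw = {"security_privacy": 14, "usability": 24, "content": 30,
            "service": 20, "participation": 10}
scores = weighted_scores(city_raw)
total = sum(scores.values())  # overall score on the 0-100 scale
```

Under this rescaling, a category with 57 raw points available (service) is compressed more than one with 28 (security/privacy), which is what keeps question-heavy categories from dominating the overall score.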
Our instrument places a higher value on some dichotomous measures because of the relative value and technical complexity of the different e-government services being evaluated. For example, evaluators using our instrument in the "service" category scored web sites as either "0" or "3" when assessing whether a site allowed users to access private information online (e.g., educational records, medical records, point totals of driving violations, lost property). No access equated to a rating of "0." Allowing residents or employees to access private information online is a higher-order task that requires more technical competence and is clearly an online service, or "3," as defined in Table 3. On the other hand, when assessing a site's privacy statement or policy, evaluators scored the site as either "0" or "1." The presence or absence of a privacy policy is clearly a content issue that emphasizes placing information online, corresponding to a value of "1" on the scale outlined in Table 3. The differential values assigned to dichotomous measures were useful in comparing the different components of municipal web sites with one another. To ensure reliability, each municipal web site was assessed by two evaluators in the site's native language, and in cases where a significant variation (±10%) existed between the evaluators' raw scores, the web site was analyzed a third time. Furthermore, an example for each measure indicated how to score the variable, and evaluators were given comprehensive written instructions for assessing web sites.

Table 3. E-government scale

Scale | Description
0 | Information about a given topic does not exist on the website
1 | Information about a given topic exists on the website (including links to other information and e-mail addresses)
2 | Downloadable items are available on the website (forms, audio, video, and other one-way transactions, popup boxes)
3 | Services, transactions, or interactions can take place completely online (credit card transactions, applications for permits, searchable databases, use of cookies, digital signatures, restricted access)
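The two-evaluator reliability check can be sketched as a simple screening rule. We assume here that the ±10% threshold is taken relative to the 203-point raw-score maximum; the article does not spell out the base of the percentage, so this is an illustrative reading:

```python
MAX_RAW = 203  # maximum possible raw score across all 92 measures (Table 2)

def needs_third_evaluation(raw_a, raw_b, tolerance=0.10):
    """Flag a site for a third review when the two evaluators' raw scores
    differ by more than the tolerance share of the raw-score maximum."""
    return abs(raw_a - raw_b) / MAX_RAW > tolerance

flagged = needs_third_evaluation(120, 150)  # ~14.8% apart: re-evaluate
agreed = needs_third_evaluation(120, 130)   # ~4.9% apart: accept
```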
FRAMEWORK

This section details our theoretical framework and discusses specific measures used to evaluate web sites. The discussion of security and privacy examines privacy policies and issues related to authentication. Discussion of the usability category involves traditional Web pages, forms, and search tools. The content category is addressed in terms of access to contact information, public documents, multimedia, time-sensitive information, and disability access. The section on services examines interactive services, services that allow users to purchase or pay for services, and the ability of users to apply or register for municipal events or services online. Finally, the measures for citizen participation involve examining how local governments are engaging citizens and providing mechanisms for citizens to participate in government online.

The first part of our analysis examined the security and privacy of municipal web sites in two key areas: privacy policies and authentication of users. In examining municipal privacy policies, we determined whether policies were available on every page that accepted data, and whether or not the word "privacy" was used in
the link to such policies. We were also interested in determining if privacy policies identified the agencies collecting the information, and whether the policy identified exactly what data were being collected on the site. Our analysis determined if the intended use of the data was explicitly stated on the web site. The analysis examined whether the privacy policy addressed the use or sale of data collected on the web site by outside or third party organizations. Our research also determined if there was an option to decline the disclosure of personal information to third parties2. This included other municipal agencies, other state and local government offices, or businesses in the private sector. Furthermore, we examined privacy policies to determine if third party agencies or organizations were governed by the same privacy policies as the municipal web site. We also determined whether users had the ability to review personal data records and contest inaccurate or incomplete information. In examining factors affecting the security and privacy of local government web sites, we addressed managerial measures that limit access of data and assure that they were not used for unauthorized purposes. The use of encryption in the transmission of data, as well as the storage of personal information on secure servers, was also examined. We also determined if web sites used digital signatures to authenticate users. In assessing how or whether municipalities used their web sites to authenticate users, we examined whether public or private information was accessible through a restricted area that required a password and/or registration. A growing e-government trend at the local level is for municipalities to offer their web site users access to public, and in some cases private, information online. Other research has discussed the government issues associated with sites that choose to charge citizens for access to public information (West, 2001). 
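The encryption check described above can be partially automated: a page that collects data through a form whose action URL uses plain `http://` cannot be transmitting that data over TLS. A minimal sketch using only the standard library (the form URL is hypothetical, and relative action URLs, which inherit the page's scheme, are ignored here):

```python
from html.parser import HTMLParser

class InsecureFormFinder(HTMLParser):
    """Collect form action URLs that would submit user data unencrypted."""

    def __init__(self):
        super().__init__()
        self.insecure_actions = []

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            action = dict(attrs).get("action") or ""
            if action.startswith("http://"):  # plain HTTP: no TLS in transit
                self.insecure_actions.append(action)

finder = InsecureFormFinder()
finder.feed('<form action="http://city.example/pay"><input name="card"></form>')
```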
We add our own concern about the impact of the digital divide if public records are available only through the Internet or if municipalities insist on charging a fee for access to public records. Our analysis specifically addresses online access to public databases by
determining if public information such as property tax assessments or private information like court documents is available to users of municipal web sites. In addition, there are concerns that public agencies will use their web sites to monitor citizens or create profiles based on the information they access online. For example, many web sites use "cookies" or "web beacons"3 to customize web sites for users, but the technology can also be used to monitor Internet habits and profile visitors to web sites. Our analysis examined municipal privacy policies to determine if they addressed the use of cookies or web beacons.

This research also examined the usability of municipal web sites. Simply stated, we wanted to know if sites were "user-friendly." To address usability concerns, we adapted several best practices and measures from other public and private sector research (Giga, 2000)4. Our analysis of usability examined three types of web sites: traditional Web pages, forms, and search tools. To evaluate traditional Web pages written using hypertext markup language (HTML), we examined issues such as branding and structure (e.g., consistent color, font, graphics, and page length). For example, we looked to see if all pages used consistent color, formatting, "default colors" (e.g., blue links and purple visited links), and underlined text to indicate links. We also examined whether system hardware and software requirements were clearly stated on the web site. In addition, our research examined each municipality's home page to determine if it was too long (two or more screen lengths) or if alternative versions of long documents such as .pdf or .doc files were available. The use of targeted audience links or "channels" to customize the web site for specific groups such as citizens, businesses, or other public agencies was also examined. We looked for the consistent use of navigation bars and links to the home page on every page.
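The persistent-versus-session distinction for cookies (see endnote 3) is mechanical to check from HTTP response headers: a cookie is persistent only if it carries an `Expires` or `Max-Age` attribute. A small sketch, with hypothetical header values:

```python
def classify_cookie(set_cookie_header):
    """Classify a Set-Cookie header value as 'persistent' or 'session'."""
    attrs = [part.strip().lower() for part in set_cookie_header.split(";")[1:]]
    persistent = any(a.startswith(("expires=", "max-age=")) for a in attrs)
    return "persistent" if persistent else "session"

kind_a = classify_cookie("sessionid=abc123; Path=/; HttpOnly")       # no expiry
kind_b = classify_cookie("visitor_id=42; Max-Age=31536000; Path=/")  # one year
```

An audit in the spirit described above would pair such a check with a reading of the site's policy: a site that sets persistent cookies but whose privacy policy never mentions them fails the cookie-disclosure measure.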
The availability of a “site map” or hyperlinked outline of the entire web site was examined. Our assessment also examined whether duplicated link names connected to the same content. We examined online forms to determine their usability in submitting data or conducting searches
of municipal web sites. We looked at issues such as whether field labels aligned appropriately with fields, whether fields were accessible by keystrokes (e.g., tabs), or whether the cursor was automatically placed in the first field. We also examined whether required fields were noted explicitly, and whether the tab order of fields was logical. For example, after a user filled out his or her first name and pressed the tab key, did the cursor automatically go to the surname field? Alternatively, did the page skip to another field such as zip code, only to return to the surname later? We also checked to see if form pages provided additional information about how to fix errors if they were submitted. For example, did users have to reenter information if errors were submitted, or did the site flag incomplete or erroneous forms before accepting them? Also, did the site give a confirmation page after a form was submitted, or did it return users to the home page?

Our analysis also addressed the use of search tools on municipal web sites. We examined sites to determine if help was available for searching a municipality's web site, or if the scope of searches could be limited to specific areas of the site. Were users able to search only in "public works" or "the mayor's office," or did the search tool always search the entire site? We also looked for advanced search features such as exact phrase searching, the ability to match any or all words, and Boolean searching capabilities (e.g., the ability to use and/or/not operators). Our analysis also addressed a site's ability to sort search results by relevance or other criteria.

Content is a critical component of any web site. No matter how technologically advanced a web site's features are, if its content is not current, if the site is difficult to navigate, or if the information provided is not correct, then it is not fulfilling its purpose.
When examining web site content, our research examined five key areas: contact information, public documents, disability access, multimedia materials, and time-sensitive information. When addressing contact information, we looked for information about each agency represented on the web site. In addition, we also
looked for the availability of office hours or a schedule of agency office hours. In assessing the availability of public documents, we looked for the availability of the municipal code or charter online. We also looked for content items such as agency mission statements and minutes of public meetings. Other content items included budget information and publications. Our assessment also examined whether web sites provided access to disabled users either through "bobby compliance" (disability access for the blind; see http://www.cast.org/bobby) or disability access for deaf users via a TDD phone service. We also checked to see if sites offered content in more than one language. Time-sensitive information that was examined included the use of a municipal web site for emergency management, and the use of a web site as an alert mechanism (e.g., terrorism alert or severe weather alert). We also checked for time-sensitive information such as job vacancies or a calendar of community events. In addressing the use of multimedia, we examined each site to determine if audio or video files of public events, speeches, or meetings were available.

A critical component of e-government is the provision of municipal services online. Our analysis examined two different types of services: (1) those that allow citizens to interact with the municipality, and (2) services that allow users to register for municipal events or services online. In many cases, municipalities have developed the capacity to accept payment for municipal services and taxes. The first type of service examined, which implies interactivity, can be as basic as forms that allow users to request information or file complaints.
Local governments across the world use advanced interactive services to allow users to report crimes or violations, customize municipal home pages based on their needs (e.g., portal customization), and access private information online such as court records, education records, or medical records. Our analysis examined municipal web sites to determine if such interactive services were available. The second type of service examined in this research determined if municipalities have the capacity to allow citizens to register for municipal
services online. For example, many jurisdictions now allow citizens to apply for permits and licenses online. Online permitting can be used for services that vary from building permits to dog licenses. In addition, some local governments are using the Internet for procurement, allowing potential contractors to access requests for proposals or even bid for municipal contracts online. In other cases, local governments are chronicling the procurement process by listing the total number of bidders for a contract online, and in some cases listing contact information for bidders. This analysis also examined municipal web sites to determine if they had the capacity to allow users to purchase or pay for municipal services and fees online. Examples of transactional services across the United States include the payment of public utility bills and parking tickets online. In many jurisdictions, cities and municipalities allow online users to file or pay local taxes, or pay fines such as traffic tickets. In some cases, cities around the world are allowing their users to register or purchase tickets to events in city halls or arenas online. While this research evaluates a city's capacity or ability to implement transactional services, it should be noted that the research does not account for the total number of transactions a city offers. Cities that offer three transactional services received the same score as cities with 20 services. In addition, we recognize that not all transactions are equal, and technological sophistication among different transactional services varies. In other words, while we evaluate a city's capacity to implement basic transactional services, we do not determine whether the transactional service implemented is one of greater or lesser technological sophistication.

Finally, perhaps the most untapped area of e-government involves using the Internet to engage citizens in democratic processes.
Citizen participation in government is a ripe area for e-government, in part because the Internet is a convenient mechanism for citizen users to engage their government, and because of the potential to decentralize decision making. Despite that potential, very few public agencies offer online
opportunities for civic engagement. We looked at several ways public agencies at the local level were involving citizens. For example, do municipal web sites allow users to provide online comments or feedback to individual agencies or elected officials? Our analysis examined whether local governments offer current information about municipal government online or through an online newsletter or e-mail listserv. We also examined the use of Internet-based polls about specific local issues. In addition, our research examined whether communities allow users to participate and view the results of citizen satisfaction surveys online. For example, some municipalities used their web sites to measure performance and published the results of performance measurement activities online. Other municipalities used online bulletin boards or other chat capabilities for gathering input on public issues. Most often, online bulletin boards offer citizens the opportunity to post ideas, comments, or opinions without specific discussion
topics. In some cases, agencies attempt to structure online discussions around policy issues or specific agencies. Our research looked for municipal use of the Internet to foster civic engagement and citizen participation in government. This study does have some limitations, including unique cultural, customary, and institutional considerations, even though we have made concerted attempts to reduce them. We do offer this research as progress toward developing reliable and valid evaluation criteria for digital government.
FINDINGS & RECOMMENDATIONS

Table 4 lists the top 20 cities in digital government, with city scores in each of the five categories of our framework. The maximum score for each category was 20, and the maximum total score was 100. Of the top five cities, three ranked first in at least one category in our framework.
Table 4. Top 20 cities in digital government

Ranking | City | Country | Score | Privacy | Usability | Content | Service | Participation
1 | Seoul | Republic of Korea | 73.48 | 11.07 | 17.50 | 13.83 | 15.44 | 15.64
2 | Hong Kong SAR | Hong Kong SAR | 66.57 | 15.36 | 19.38 | 13.19 | 14.04 | 4.62
3 | Singapore | Singapore | 62.97 | 11.79 | 14.06 | 14.04 | 13.33 | 9.74
4 | New York | United States | 61.35 | 11.07 | 15.63 | 14.68 | 12.28 | 7.69
5 | Shanghai | China | 58.00 | 9.64 | 17.19 | 11.28 | 12.46 | 7.44
6 | Rome | Italy | 54.72 | 6.79 | 14.69 | 9.57 | 13.16 | 10.51
7 | Auckland | New Zealand | 54.61 | 7.86 | 16.88 | 11.06 | 10.35 | 8.46
8 | Jerusalem | Israel | 50.34 | 5.71 | 18.75 | 10.85 | 5.79 | 9.23
9 | Tokyo | Japan | 46.52 | 10.00 | 15.00 | 10.00 | 6.14 | 5.38
10 | Toronto | Canada | 46.35 | 8.57 | 16.56 | 9.79 | 5.79 | 5.64
11 | Helsinki | Finland | 45.09 | 8.57 | 15.94 | 11.70 | 6.32 | 2.56
12 | Macao SAR | Macao SAR | 44.18 | 4.29 | 17.19 | 11.91 | 7.72 | 3.08
13 | Stockholm | Sweden | 44.07 | 0.00 | 13.75 | 14.68 | 10.00 | 5.64
14 | Tallinn | Estonia | 43.10 | 3.57 | 13.13 | 12.55 | 6.67 | 7.18
15 | Copenhagen | Denmark | 41.349 | 4.643 | 13.438 | 9.787 | 5.789 | 7.692
16 | Paris | France | 41.338 | 6.429 | 14.375 | 7.660 | 5.439 | 7.436
17 | Dublin | Ireland | 38.85 | 2.50 | 13.44 | 11.28 | 7.02 | 4.62
18 | Dubai | United Arab Emirates | 37.48 | 7.86 | 10.94 | 7.87 | 8.25 | 2.56
19 | Sydney | Australia | 37.41 | 6.79 | 12.19 | 9.15 | 5.44 | 3.85
20 | Jakarta | Indonesia | 37.28 | 0.00 | 16.56 | 9.79 | 6.32 | 4.62
For example, Seoul excelled in service delivery and citizen participation, Hong Kong ranked highest in privacy/security and usability, and New York led all cities in providing content on its municipal web site. Tables 5-9 list the top 10 cities in the categories of privacy/security, usability, content, service delivery, and citizen participation.

Our research also suggests a difference in the digital government capabilities of the 30 developed nations belonging to the Organization for Economic Co-operation and Development (OECD) and lesser-developed (non-OECD) nations. For example, Table 10 shows that although the average score for digital government in municipalities throughout the world is 28.49 out of 100, the average score in OECD countries is higher (36.34) while the average score in non-OECD countries is lower (24.26). Whereas 19 of 28 cities in OECD countries are above the world average, only 16 of 52 cities in non-OECD countries are above that average. Interestingly, 32 of 52 cities in non-OECD countries are below the average score for that group of countries.

There were also regional differences. For example, 67% of cities in Africa, 13% in Asia, 3% in Europe, and 25% in North America do not have official city web sites. Interestingly, every city selected in South America had its own official web site. In addition, this research suggests that city governments on every continent except Africa are actively developing their capabilities in digital government.

A one-way ANOVA (Table 11) suggests a difference in the level of digital government at the municipal level between OECD and non-OECD countries at a significance level of 0.05. This is consistent with our concern over the disparity in e-government between governments in developed versus lesser-developed countries.
The apparent gap between developed and underdeveloped countries suggests the need for international organizations such as the UN and cities in advanced countries to attempt to bridge the digital divide. We recommend developing a comprehensive policy for bridging that divide. A comprehensive policy should include capacity
Table 5. Top 10 cities in privacy and security

Rank | City | Country | Score
1 | Hong Kong SAR | Hong Kong SAR | 15.36
2 | Singapore | Singapore | 11.79
3 | New York | United States | 11.07
3 | Seoul | Republic of Korea | 11.07
5 | Tokyo | Japan | 10.00
6 | Shanghai | China | 9.64
7 | Helsinki | Finland | 8.57
7 | Toronto | Canada | 8.57
9 | Auckland | New Zealand | 7.86
9 | Dubai | United Arab Emirates | 7.86

Table 6. Top 10 cities in usability

Rank | City | Country | Score
1 | Hong Kong SAR | Hong Kong SAR | 19.38
2 | Jerusalem | Israel | 18.75
3 | Seoul | Republic of Korea | 17.50
4 | Macao SAR | Macao SAR | 17.19
4 | Shanghai | China | 17.19
6 | Auckland | New Zealand | 16.88
7 | Jakarta | Indonesia | 16.56
7 | Toronto | Canada | 16.56
9 | Vienna | Austria | 16.25
10 | Helsinki | Finland | 15.94
building for municipalities, including information infrastructure, content, applications, and access for individuals. Parallel to improving citizens' access to digital government, it is important to develop relevant content for citizens and to develop innovative applications in digital government considered as best practices throughout the world.

The data we have developed underscore several concerns. First, given the low scores on privacy (mean 2.53; median 1.07) and participation (mean 3.26; median 2.18), cities worldwide need to work diligently to increase web site security and to encourage citizen participation. There are many established criteria for improved Web security, and an increasingly rich and accessible set of best-practice studies on citizen participation.

Second, the apparent gap between developed and underdeveloped countries suggests the need for international organizations such as the U.N. and the World Bank, as well as cities in the more advanced countries, to work assiduously to bridge the digital divide within countries and between countries. The digital divide, which democratic and democratizing societies must address, refers to the divide between those with web access and web-related skills and those without such capacities. Even though the online population worldwide is increasingly reflective of communities offline, the reality of a digital divide means that certain segments of the population are effectively excluded from online access and public policy deliberation. Thus, the divide undermines the Internet as a mainstream and inclusive participatory medium, as it disproportionately impacts lower socio-economic individuals who have historically played an insignificant role within the public policy process. We recommend the development of comprehensive policies to bridge that divide.
A comprehensive policy should include capacity building for government web sites, including information infrastructure, content, applications, and access for individuals and their representative organizations. Parallel to improving citizens’ access to digital government, it is important to develop relevant content and to emulate innovative applications and best practices of digital government throughout the world.
Table 7. Top 10 cities in content

Rank | City | Country | Score
1 | New York | United States | 14.68
1 | Stockholm | Sweden | 14.68
3 | Singapore | Singapore | 14.04
4 | Seoul | Republic of Korea | 13.83
5 | Hong Kong | Hong Kong | 13.19
6 | Tallinn | Estonia | 12.55
7 | Macao | Macao | 11.91
8 | Helsinki | Finland | 11.70
9 | Dublin | Ireland | 11.28
9 | Shanghai | China | 11.28
Table 8. Top 10 cities in service delivery

Rank | City | Country | Score
1 | Seoul | Republic of Korea | 15.44
2 | Hong Kong SAR | Hong Kong SAR | 14.04
3 | Singapore | Singapore | 13.33
4 | Rome | Italy | 13.16
5 | Shanghai | China | 12.46
6 | New York | United States | 12.28
7 | Auckland | New Zealand | 10.35
8 | Stockholm | Sweden | 10.00
9 | Sao Paulo | Brazil | 9.12
10 | Sofia | Bulgaria | 8.42
Table 9. Top 10 cities in citizen participation

Rank | City | Country | Score
1 | Seoul | Republic of Korea | 15.64
2 | Rome | Italy | 10.51
3 | Singapore | Singapore | 9.74
3 | Tegucigalpa | Honduras | 9.74
5 | Jerusalem | Israel | 9.23
6 | Auckland | New Zealand | 8.46
7 | Copenhagen | Denmark | 7.69
7 | New York | United States | 7.69
9 | Paris | France | 7.44
9 | Shanghai | China | 7.44
Table 10. Comparison by OECD membership

Score range | OECD countries | Non-OECD countries
Above 36.58 | 12 | 8
36.58–28.49 | 8 | 7
28.49–24.55 | 2 | 5
Below 24.55 | 6 | 32

Note: 36.34 = average score in OECD countries; 28.49 = average score throughout the world; 24.26 = average score in non-OECD countries.
Table 11. ANOVA: digital government score by OECD membership

Source | Sum of Squares | df | Mean Square | F | Sig.
Between Groups | 2654.411 | 1 | 2654.411 | 13.696 | .000
Within Groups | 15117.039 | 78 | 193.808 | |
Total | 17771.450 | 79 | | |
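The reported F statistic follows directly from the sums of squares and degrees of freedom in the ANOVA table above, which makes the table easy to verify:

```python
# Values from the ANOVA table (between/within OECD groups)
ss_between, df_between = 2654.411, 1
ss_within, df_within = 15117.039, 78

ms_between = ss_between / df_between  # mean square between groups
ms_within = ss_within / df_within     # mean square within groups: ~193.808
f_stat = ms_between / ms_within       # F = MS_between / MS_within: ~13.696
```

With 1 and 78 degrees of freedom, an F of 13.696 is significant well beyond the 0.05 level, consistent with the reported significance of .000.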
Table 12. Descriptive statistics for the 84 cities surveyed

 | Privacy | Usability | Content | Service | Participation
Min | 0 | 3.44 | 0.43 | 0 | 0
Max | 15.36 | 19.38 | 14.68 | 15.44 | 15.64
Mean | 2.53 | 11.45 | 6.43 | 4.82 | 3.26
Median | 1.07 | 11.72 | 5.635 | 4.2105 | 2.18

Note: Maximum score for each category was 20.
ACKNOWLEDGMENT

Additional research support was provided by:

• Min-young Ku, M.A. Student, Sungkyunkwan University
• Stephen Ablan, Master's Candidate, New York University
• Lung-Teng Hu, Ph.D. Student, Rutgers University-Newark
• Yong-Kun Lee, Ph.D. Student, Sungkyunkwan University
• Jong-Seok Kim, Master's Candidate, New York University
• Young-jin Shin, Ph.D. Student, Sungkyunkwan University
• Eun-Jin Seo, Ph.D. Student, Sungkyunkwan University
REFERENCES

Bertelsmann Foundation. (2001). Balanced e-government. Retrieved December 24, 2003, from http://www.begix.de/en/index.html

Blanchard, A., & Horan, T. (1998). Virtual communities and social capital. Social Science Computer Review, 16, 293-307.

Burrell, G., & Morgan, G. (1979). Sociological paradigms and organisational analysis. London: Heinemann.

Dyson, E., Gilder, G., Keyworth, G., & Toffler, A. (1994, August 22). Cyberspace and the American dream: A magna carta for the knowledge age (Release 1.2). Washington, DC: Progress and Freedom Foundation. Retrieved from http://www.townhall.com/pff/position.html

Frissen, P. (1997). The virtual state: Postmodernisation, informatisation, and public administration. In B. D. Loader (Ed.), The governance of cyberspace (pp. 111-125). London: Routledge.

Giga Consulting. (2000). Giga scorecard analysis of the New Jersey Department of Treasury. Unpublished report to the NJ Department of Treasury. Retrieved from http://www.forrester.com

Hart-Teeter. (2003). The new e-government equation: Ease, engagement, privacy and protection. A report prepared for the Council for Excellence in Government.

Ho, A. (2002). Reinventing local governments and the e-government initiative. Public Administration Review, 62(4), 434-444.

Holden, S. H., Norris, D. F., & Fletcher, P. D. (2003). Electronic government at the local level: Progress to date and future issues. Public Performance & Management Review, 26(4), 325-344.

Holzer, M., & Gabrielian, V. (1998). Five great ideas in American public administration. In J. Rabin, W. B. Hildreth, & G. J. Miller (Eds.), Handbook of public administration (2nd ed., pp. 49-101). New York: Marcel Dekker.

Holzer, M., Melitski, J., Rho, S.-Y., & Schwester, R. (2004). Restoring trust in government: The potential of digital citizen participation. Washington, DC: IBM Center for The Business of Government.

Kaylor, C., et al. (2001). Gauging e-government: A report on implementing services among American cities. Government Information Quarterly, 18, 293-307.

Klein, H. K. (1999). Tocqueville in cyberspace: Using the Internet for citizen associations. The Information Society, 15, 213-220.

Melitski, J. (2003). Capacity and e-government performance: An analysis based on early adopters of Internet technologies in New Jersey. Public Performance and Management Review, 26(4), 376-390.

Moon, M. J. (2002). The evolution of e-government among municipalities: Rhetoric or reality? Public Administration Review, 62(4), 424-433.

Moon, M. J., & deLeon, P. (2001). Municipal reinvention: Municipal values and diffusion among municipalities. Journal of Public Administration Research and Theory, 11(3), 327-352.

Musso, J., et al. (2000). Designing Web technologies for local governance reform: Good management or good democracy. Political Communication, 17(1), 1-19.

Norris, D. F., Fletcher, P. D., & Holden, S. (2001). Is your local government plugged in? Highlights of the 2000 electronic government survey. Washington, DC: International City/County Management Association.

Norris, P. (1999). Who surfs? New technology, old voters and virtual democracy. In E. C. Kamarck & J. S. Nye (Eds.), Democracy.com? Governance in a networked world (pp. 71-94). Hollis, NH: Hollis Publishing Company.

Pardo, T. (2000). Realizing the promise of digital government: It's more than building a web site. Albany, NY: Center for Technology in Government.

United Nations & American Society for Public Administration. (2001). Benchmarking e-government: A global perspective. Retrieved from http://www.aspanet.org/about/pdfs/BenchmarkingEgov.pdf

Weare, C., et al. (1999). Electronic democracy and the diffusion of municipal Web pages in California. Administration and Society, 31(1), 3-27.

Welch, E., & Wong, W. (1998). Public administration in a global context: Bridging the gaps of theory and practice between Western and non-Western nations. Public Administration Review, 58(1), 40-50.

West, D. M. (2000). Assessing e-government: The Internet, democracy, and service delivery by state and federal governments. Retrieved from http://www.insidepolitics.org/egovtreport00.html

West, D. M. (2001, October). WMRC global e-government survey. Retrieved from http://www.insidepolitics.org/egovt01int.html

West, D. M. (2002). Global e-government. Retrieved from http://www.insidepolitics.org/egovt02int.PDF

West, D. M. (2003a). Global e-government. Retrieved from http://www.insidepolitics.org/egovt03int.pdf

West, D. M. (2003b). Urban e-government. Retrieved from http://www.insidepolitics.org/egovt03city.pdf

West, D. M. (2004). E-government and the transformation of service delivery and citizen attitudes. Public Administration Review, 64(1), 15-27.

Winner, L. (1997). Technology today: Utopia or dystopia. Social Research, 64(3), 989-1017.
0
Digital Government Worldwide
Endnotes

1. International Telecommunication Union. (2002). Internet indicators: Hosts, users and number of PCs. Retrieved June 12, 2003, from http://www.itu.int/ITU-D/ict/statistics/

2. The New York City privacy policy (www.nyc.gov/privacy) defines third parties as follows: "Third parties are computers, computer networks, ISPs, or application service providers (ASPs) that are non-governmental in nature and have direct control of what information is automatically gathered, whether cookies are used, and how voluntarily provided information is used."

3. The New York City privacy policy (www.nyc.gov/privacy) gives the following definitions of cookies and Web bugs or beacons: "Persistent cookies are cookie files that remain upon a user's hard drive until affirmatively removed, or until expired as provided for by a pre-set expiration date. Temporary or 'Session Cookies' are cookie files that last or are valid only during an active communications connection, measured from beginning to end, between computers or applications (or some combination thereof) over a network. A web bug (or beacon) is a clear, camouflaged or otherwise invisible graphics image format ('GIF') file placed upon a web page or in hypertext markup language ('HTML') email and used to monitor who is reading a web page or the relevant email. Web bugs can also be used for other monitoring purposes such as profiling of the affected party."

4. Additional information about public sector Web site usability can be found online at the U.S. Department of Health and Human Services site at www.usability.gov. Examples of private sector usability criteria and benchmarks include Moffitt at http://www.unt.edu/benchmarks/archives/2002/august02/access.htm and Nielsen at http://www.useit.com/alertbox/990711.html.
This work was previously published in International Journal of Electronic Government Research, Vol. 1, No. 1, edited by M. Khosrow-Pour, pp. 1-18, copyright 2005 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter LXX
User Help and Service Navigation Features in Government Web Sites

Genie N. L. Stowers
San Francisco State University, USA
Abstract

This article examines the user help and service navigation features in government Web sites and compares them across levels of government. These features are critical to ensuring that users unfamiliar with government are able to successfully and easily access e-government services and information. The research finds clear patterns in the use of similar help and navigation features across governments, leading to a conclusion that these features are diffusing in the public sector Web development field. The article concludes by stating that Web developers should work to overcome a second digital divide, one of a lack of knowledge of Web site organization and government structure. Users need to be actively assisted to find information by Web developers.
User Help and Service Navigation Features in Government Web Sites

This article reports on efforts to make American state, local, and federal e-government portals more user-friendly by providing user help, service navigation, and organizational structures to assist potentially novice users in finding e-government information and services. Once users have access to the Internet and learn how to use computers, they still have to understand how to navigate Web sites and find the information or services they need. Then,
they have to understand how to interact with the Web site so that they can access those services. Governments need to provide more proactive user help features, service navigation features, and organizational structures to actively assist users in finding information and services they desire; otherwise, they can create a second digital divide, one between those who understand Web site structure and organization structure, and those who do not. As Hargittai (2003, p. 3) puts it, many a Web developer “wrongly assumes that gaining access to the Internet obliterates any potential inequality that may result from lack of access to the new medium. There are factors beyond mere connectivity
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
that need to be considered when discussing the potential implications of the Internet for inequality. In addition to relying on basic measures of access to a medium, one needs to consider more nuanced measures of use such as user 'skill'." Hargittai defines "skill" as the ability to locate content online effectively and efficiently.
Usability in Public Sector Web Sites

West (2005) suggests that there is a great degree of variation in the organization and structure of current public sector Web sites, and that not all sites are organized in a user-friendly fashion. Indeed, there is little formal focus on these elements, and their importance sometimes goes unheeded. Many government portals and Web sites are designed with government workers in mind, not ordinary users without experience in government or with the use of Internet services. Bridging the "digital divide" does not mean just making computers themselves accessible; it also involves making the Web sites themselves more user-friendly and easy to use — removing barriers due to lack of experience with the Internet (Nielsen, 2000; Rosenfeld & Morville, 1998). Designers and developers of public sector Web sites must assume that those using their sites have limited training and experience and will need sites that are easy to use and designed with usability and effective information architecture in mind. They must also consider that the design lessons developed for private sector e-commerce sites might not necessarily work for public sector sites. User help, service navigation, and organizational structure are critical features of any public sector main portal that is dedicated to having users find its services and information. The presence or absence of these features on public sector sites can greatly affect the ability of a user to find information and services that might be available on the site and to effectively use those services. The usability of the e-government portal interface is an important feature of human-computer
interaction; to many users, the interface of the main portal is, in fact, the only important part (Singh & Kotze, 2002). Usability is typically defined as "the measure of the quality of a user's experience when interacting with a product or system — whether a Web site, a software application, mobile technology, or any user-operated device. Usability is a combination of factors that affects the user's experience with the product or system, including: ease of learning, efficiency of use, memorability, error frequency and severity, and subjective satisfaction" (U.S. Department of Health and Human Services, 2005). Other factors that can be incorporated into usability include effectiveness, a match between the system and the real world, user control, safety, utility, flexibility, robustness, and consistency and usefulness of navigation (Nielsen, 2000; Singh & Kotze, 2002). We can also argue that usability includes adequate help features (including embedded help, interactive help, or manuals) and an information architecture that is descriptive, broad, and shallow, allowing users to "click through" rather than relying upon a deep structure (Kitajima, Blackmon, & Polson, 2000, 2005; Nielsen, 2000). Since the beginning of the World Wide Web, empirical studies have reported low rates of successful information search and retrieval (Nielsen, 2000) and wide variation in the time required for success (Hargittai, 2003). When Hargittai (2003) tested users' ability to find the IRS 1040 tax form (given an unlimited amount of time), 93% could find the form, but the time required varied from 30 seconds to almost nine minutes. Of those participating, 60% used a search engine and 40% tried a specific URL. Among the problems encountered, particularly by significantly older users and more recent users, were confusion about URLs and about page design layout.
Jenkins, Corritore, and Wiedenbeck (2003) identified four groups of users who utilized different search strategies based upon their level of expertise in the subject matter and with the World Wide Web. Those who were novices in both the domain (content) and the Web searched broadly first but did little evaluation of their findings. Domain experts who were Web novices also searched broadly first but were
able to evaluate their results more thoroughly for accuracy. Web experts who were domain novices first looked deeply, utilizing their Web expertise, and were able to follow those deep trails and evaluate their findings. Those who were experts on both the domain and the Web were able to quickly find and identify the information they needed. As Kitajima, Blackmon, and Polson (2000) describe it, if a user has a 90% chance of finding and selecting the correct link at each level, and there are five levels to go before they can access their desired services or content, the chance of successfully finding that information falls to .53 (.9^6), rather than the 81% (.9^2) chance of success if there are only two levels to move through. Users typically seek to improve these odds by being goal-driven and focused upon descriptions of content and links that match their understanding of reality, rather than necessarily utilizing search facilities or navigation aids (Kitajima, Blackmon, & Polson, 2000, 2005). They seek out "information scent", defined as "the imperfect, subjective, perception of the value, cost or access path of information sources obtained from proximal cues, such as Web links, or icons representing the content sources" (Chi, Pirolli, & Pitkow, 2000, p. 2). Typically, users perform this through a two-stage process. The first is an "attention" process, in which the user separates a Web page into sections based on labels, layout, and information on the page. To be effective in laying out information scent to guide user behavior, the layout and labels have to direct the user to the appropriate section with labels that make sense to that user. The second is the "action selection" stage, in which the user decides which link to choose. Clearly, here the appropriate labels are crucial to user success.
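The click-through arithmetic above is easy to verify; a minimal sketch, using the 90% per-level success rate from the Kitajima, Blackmon, and Polson example:

```python
# Chance of reaching target content when each link choice succeeds
# with probability p and the user must choose correctly at `depth`
# successive levels of the site hierarchy.
def success_probability(p: float, depth: int) -> float:
    return p ** depth

for depth in (2, 6):
    print(depth, round(success_probability(0.9, depth), 2))
# depth 2 -> 0.81, depth 6 -> 0.53: broad, shallow structures beat deep ones
```

This is why the broad-and-shallow information architectures recommended above outperform deep hierarchies: every added level multiplies in another chance of a wrong turn.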
Successful navigation thus requires gaining the user’s attention and leading them to the correct selection of the appropriate link at each hierarchical level of the Web site (Blackmon, Kitajima, & Polson, 2005). For many users seeking online government information, conducting a successful search for information means solving an unfamiliar problem (Blackmon, Kitajima, & Polson, 2005). In
essence, they know too little about government and how it is structured to successfully navigate and make link choices on many government Web sites. There are numerous difficulties with this task, some of them based upon knowledge of and familiarity with the labels themselves. Takeuchi and Kitajima (2002) effectively illustrate that, at the very least, there are gender differences in the understanding of many technical words, and these differences are further accentuated when viewed across cultural divisions. Given that there are many other differences in levels of understanding of information and wide variation in the quality of labeling and the laying of information scents, it is clear that government Web designers have a daunting task. To overcome these many differences in abilities and understanding of how to use the technology and content, Web developers must develop effective site structures that reflect the users' views of the content at the site, not just the organization's own view of itself (based upon its organizational chart) (Nielsen, 2000). In addition, many gaps in user knowledge can be filled with effective help systems like live chat, online learning and tutorials, online manuals, and well-structured designs which seek to define choices in terms that inexperienced users might know (Shneiderman & Hochheiser, 2001). Usability features include those which assist the user in finding information, finding services, and finding their way around the Web site. They include the following:

• navigation (features that assist the user in moving around the site);
• labeling (how the content is labeled, again so that the user knows how to find information);
• searching systems (the usefulness of the search features) (Rosenfeld & Morville, 1998); and
• visible help features, including a help page, frequently asked questions (FAQs) about the site, site maps (pages that literally provide a map of the entire site), and tutorials on using the site.
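The two-stage attention/action-selection account of link choice described earlier can be illustrated with a toy model. Here simple word overlap between the user's goal and a label stands in for information scent; real models use semantic similarity, and the portal labels below are invented examples, not drawn from any site in the study:

```python
# Toy two-stage link choice: attend to the page section whose heading
# best matches the goal, then act by picking the best-matching link
# inside that section. Word overlap is a crude stand-in for scent.
def scent(goal_words, label):
    return len(set(goal_words) & set(label.lower().split()))

def choose_link(goal_words, sections):
    heading = max(sections, key=lambda h: scent(goal_words, h))        # attention stage
    return max(sections[heading], key=lambda l: scent(goal_words, l))  # action selection

portal = {
    "Doing Business": ["Business Licenses", "Tax Filing"],
    "Living Here": ["Parks and Recreation", "Trash Collection Schedule"],
}
print(choose_link({"business", "tax", "filing"}, portal))  # -> Tax Filing
```

The sketch makes the design point concrete: if neither the section headings nor the link labels share any vocabulary with the user's goal, the model (like a novice user) picks arbitrarily, which is exactly the failure mode of portals labeled by organizational chart rather than by user vocabulary.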
This previous research is important to understanding the project reported here, but it is not sufficient to generate meaningful hypotheses about the more specific topic explored here: user help features. Therefore, the research reported here is structured around research questions.
Structuring Research Questions About User Help Features

Before we can identify research questions about user help features, we must identify these features. Initially, standard content analysis techniques were used to identify existing user help, navigation, and information architecture features on the Web portals. Agency and government portals were visited, features were identified and categorized, and a comprehensive list was developed. Once the list was developed, each portal to be studied was
visited and coded as to the presence or absence of each feature. Table 1 presents the types of user help features identified on government portal sites, as well as a brief explanation of each. These include information about the site, an explicit Help section, FAQs (frequently asked questions), information in other languages, and site maps. The features are organized into two sections according to whether they are relatively easy to implement and require few resources, or are more technically difficult and resource-heavy. Thus, simple e-mail linkages for contacting the agency, links to additional help, to information about the site, and to user tips, and an e-mail or comment section for feedback about the site would be easy to implement. Indices, site maps, FAQs, and search engines would require somewhat more work but little technical expertise to implement. The most reasonable explanation for widespread user help features is that the features which are easiest and cheapest to produce will be utilized most frequently. There is no reason to believe that
Table 1. User help features

Easiest and requires fewest resources to implement:
About the Site: Link to information about the site
Contact Us: Information and links to allow the user to contact the agency for more information or for help with the site
FAQs: Includes answers to frequently asked questions
Feedback: Invites users to give feedback about the site
Help: Explicit agency-provided help with the site
Index: An index of information, data, and agencies available
Search: Search engine to allow users to search the site
Site Map: Visual representation of the entire Web site
User Tips: Helpful hints on how users can use the site

More technically difficult or resource-heavy to implement:
Live Help: Links to live chat with an agency representative to provide assistance
Other Languages: Site provides information in other languages
Text Version: An alternate site is provided in text
Table 2. Service navigation aid features

Agency Information: Listings of all agencies in directory form
Answers A to Z: Alphabetized listings of answers to questions
Calendars: Calendars of government activities and events
Contact Information: Linkages to direct contact information for agencies
Do you know how I do ___?: List of questions organized according to major service areas from the citizen's point of view, stating "how do I do x or y?"
E-Government Services: Direct link from home page to all e-government services
Events: Link to information on major events
Facilities Locator: Direct linkage to a way to locate government offices
Featured Link / Spotlight: Many sites have featured programs or linkages
Hot Topics: Link to information on what are considered currently important issues
Most Visited / Frequently Requested Sites: Links to or listings of the most frequently visited sites, indicating the importance of that information
Popular Services / Major Programs: Highlighting of popular services or major programs
Quick Links: Listing of commonly-asked questions in prominent format
Special Initiatives: Current, new, or special initiatives from the agency
What's New: Listing of new items posted on the site
this pattern would differ across levels of government. Therefore, the first research question is:

R1: The user help features which require the least technical ability and fewest resources to implement will be found most frequently across all levels of government. The features which require the most technical ability and greatest resources to implement will be found least often.

Table 2 provides a listing of the various portal features used to provide users with assistance in finding agency online services. These features are not as common as the user help features listed above. They include agency or government calendars leading to information, as well as various ways to link to services (for example, Answers A to Z, Do You Know?, Facilities Locators, Frequently Requested Sites, Featured Links, Quick Links, and What's New).

We already know that well-planned formatting leads to success in finding sought-after information or services (Blackmon, Kitajima, & Polson, 2005), since users seek "information scent", or clues to the information they want (Chi, Pirolli, & Pitkow, 2000). Further, Web site developers can structure a site's portal entryway to separate users into knowledgeable consumers and users who might not know much about government, taking advantage of the fact that users with experience and expertise typically utilize different strategies to find information (Jenkins, Corritore, & Wiedenbeck, 2003). Different agencies have used these principles to structure entry into their Web sites in different ways. This information can be used to structure service navigation schemes or the overall information organization of the entire Web portal. However, this research is clearly exploratory, and there are few theoretical or practical reasons to prefer or distinguish one type of service navigation scheme over another. There is no clear difference in the amount or level of technical expertise or resources needed to develop each scheme; they are just different ways of conceptualizing services and setting up the information scent discussed above. However, the most basic of the service navigation schemes are the ones which provide only basic description and require no re-conceptualization, such as E-Government Services linkages and Links to Agencies; therefore, we would suggest that these would be most popular, with no expectation that the usage of these labeling systems would differ from one level of government to another.

R2: The E-Government Services / Links to Agencies service navigation labeling systems will be found most commonly among government agency Web portals, no matter the level of government.

Table 3 identifies the information architecture, or structure, of various Web site portals; in other words, how access to the information is organized. The possible ways in which a Web portal can be organized include an audience/market orientation (the now familiar Citizen/Visitor/Business/Government), organization by the types of services or tasks available, organization by topics or issues, or a hybrid of several types.

Table 3. Types of information and service organization on government Web portals

Audience / Market: Site is organized around the needs of particular audiences or markets; for example, FirstGov.gov has information organized around Online Services for Citizens, for Businesses, and for Governments
E-Government Services / Links to Agencies: Site is organized around the services, tasks, or functions offered by the agency
Topics / Issues: Site is organized around various topics, often just miscellaneous listings of topics
Hybrid Site: Combinations of all of the above

The audience/market orientation is one in which the Web portal's navigation scheme is segmented into a separate set of linkages for each potential audience. For example, the audience-oriented FirstGov.gov site has four tabs leading to four different sets of site features: For Citizens, For Businesses and Non-Profits, For Federal Employees, and Government-to-Government. States with audience/market orientations will often have their sites segmented according to the purpose of a visitor (Access Washington's Living in Washington, Working/Employment, Doing Business, Education/Learning, and Visiting/Recreation). A portal guiding visitors according to the types of services or tasks available will have navigation features listing the types of services provided by the agency and providing links to information or services in those areas. Portals guiding users according to topics or issues set up navigation according to the subject matter dealt with within the agency or government.

One of the few theoretical suggestions as to how these features would be clustered is by Ho (2002), who suggests that the system of information architecture used is a function of whether an agency is traditionally organized within a bureaucratic paradigm (in which case the information architecture would be structured according to a traditional listing of agencies) or whether it has moved to the more efficient e-government paradigm emphasizing a customer service orientation (with a structure according to the Web site audience or market). However, Ho's hypothesis also assumes a direct and strong linkage between design ideas within an agency's technology branch and the organizational sophistication of the overall agency, which is unlikely. Instead, staffs from these two areas are unlikely to communicate often (this is, in fact, one of the most commonly-cited breakdowns of technological systems: that there is little communication between technology developers and end users). Therefore, no research question is posited for these site features.
As discussed in the research methodology section, the components of these features on each government’s main Web portal were identified for this project, their presence or absence on the portal was coded, and basic analysis was conducted.
Project Methodology

The research project discussed here utilized a cross-sectional comparison focused upon a review of federal, state, and local e-government portals and comparisons among them. All federal agency portals from executive agencies, cabinet agencies (including sub-agencies with their own domain names), and independent agencies were included in the study. Federal boards and commissions were not included. All 50 states plus Washington, D.C. were included in the analysis, along with all cities over 100,000 in population. This resulted in 43 federal sites, 51 state sites, and 47 urban sites. The data were collected during the summer of 2005. The first stage of this analysis was to identify the user help features, service navigation features, and types of information architecture (the organization and presentation of information) currently being used on public sector Web sites, focusing upon the main page of each portal. (While accessibility is also a crucial aspect of usability, it was not part of the overall project and is not reported here.) Portals were examined as the initial points of entry for users, since they define the usability and information architecture features for the entire site; they are therefore the most crucial part of an agency's or government's entire Web site. The identification of help and navigation features was accomplished by content analysis of federal, state, and local home pages and the construction of coding sheets including the identified features. The presence of various features was determined through examination of each site's portal, as this is the main entryway and where users would need these features. All coding was conducted by one individual, the author, which is a shortcoming of this research. To attempt to overcome this issue, the author
double-checked the results; of course, a simple "yes" or "no" check-off for each feature is a much less complex coding task than coding an arbitrary scale of the effectiveness of a feature. All features were working at the time of the coding. Sixty-four variables were coded, along with three additive indices. The author recognizes that a simple count does not address whether a feature is well designed; however, this research is a first step in investigating this phenomenon, and the first step should be establishing whether a feature exists. Later, additional research should be conducted on the effectiveness of these features across jurisdictions.
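The presence/absence coding described in this section can be approximated programmatically. The sketch below is illustrative only: the marker phrases and the sample page text are invented, not the study's actual coding instrument, which was applied by hand. It scans a portal's visible text for each feature and emits one row of a binary coding sheet plus an additive index of the kind described above:

```python
# Code one portal for a few user help features by scanning its visible
# text for marker phrases; 1 = feature present, 0 = absent.
FEATURE_MARKERS = {
    "Search":     ["search"],
    "Contact Us": ["contact us"],
    "Site Map":   ["site map", "sitemap"],
    "FAQs":       ["faq", "frequently asked questions"],
}

def code_portal(page_text):
    text = page_text.lower()
    return {feature: int(any(marker in text for marker in markers))
            for feature, markers in FEATURE_MARKERS.items()}

row = code_portal("Home | Search | Contact Us | Site Map | Privacy Policy")
print(row, "additive index:", sum(row.values()))
```

Automated keyword matching like this would miss features labeled in nonstandard vocabulary (exactly the labeling problem the article discusses), which is one reason hand coding with a double-check remains the more defensible method.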
Usability and Help Features

The research reported here explores the simple research questions discussed above and is descriptive and exploratory in nature.
Descriptive Results

Table 4 presents basic descriptive data on user help features. As can be seen, most governments utilize more than one help feature, up to a maximum of six different features on the same portal, with state governments utilizing the most user help features. One federal and one local agency utilized none. Table 5 reports descriptive data on service navigation features. An average of 5.1 features per government Web portal is used, with federal agencies utilizing an average of 4.3 and state agencies utilizing the highest average number, 5.6 service navigation features per site. The minimum number on a site was one for federal and local sites, two for state sites. The maximum number utilized was quite high: seven features for federal agencies, eleven for state, and ten for local agencies.
Exploration of Research Questions

The next part of the analysis is the exploration of the research questions and then, more exploratory
Table 4. Distribution of user help features across level of government

                             Total    Federal   State    Local
N                            141      43        51       47
Mean number of features      3.7      3.7       4.3      3.1
Standard deviation           1.36     1.35      1.30     1.17
Minimum number of features   0        0         2        0
Maximum number of features   6        6         6        6
Table 5. Distribution of service navigation features across level of government

                             Total    Federal   State    Local
N                            141      43        51       47
Mean number of features      5.1      4.3       5.6      5.3
Standard deviation           1.78     1.59      1.62     1.85
Minimum number of features   1        1         2        1
Maximum number of features   11       7         11       10
Table 6. Distribution of help features by level of government (ranked by most frequently used by all jurisdictions; cell entries are the percentage of sites with the feature)

Easiest and requires fewest resources to implement
Feature             Total   Federal   State   Local   Chi-Square   Probability
Search              97.9    97.7      100.0   95.7    2.139        0.343
Contact Us          70.2    65.1      72.5    72.3    0.769        0.681
Site Map            48.2    69.8      52.9    23.4    20.043***    0.000
Help                33.3    25.6      58.8    12.8    25.021***    0.000
FAQs                19.9    27.9      21.6    10.6    4.355        0.113
About the Site      18.4    11.6      31.4    10.6    8.901**      0.012
Feedback on Site    15.6    4.7       21.6    19.1    5.744        0.057
Index               12.8    20.9      5.9     12.8    4.744        0.093
User Tips           2.8     2.3       2.0     4.3     0.526        0.769

More technically difficult or resource-heavy to implement
Other Languages     25.5    41.9      11.8    25.5    11.11**      0.004
Text Version        18.4    4.7       31.4    17.0    11.17**      0.004
Live Help           9.2     0.0       19.6    6.4     11.39**      0.003

Note: asterisks mark help features present at significantly different rates across levels of government. * Statistically significant at the .05 level; ** significant at the .01 level; *** significant at the .000 level.
analysis. Clearly, the first research question can be supported, as the easiest and least resource-intensive types of user help features were found much more frequently (Table 6). Among all levels of government, the most commonly-seen help feature was a search engine (found on 97.9% of all sites investigated, with no significant differences among levels of government). This was followed by invitations to Contact Us, found on 70.2% of sites, also with no significant differences across types of government. Site maps and Help areas were also found very commonly but exhibited different patterns across governments. Site maps were used most frequently by federal agencies but by less than one-quarter of local agencies. Help areas were created most frequently by state agencies (58.8%), followed by one-quarter of federal agencies and only 12.8% of local agencies.
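The chi-square tests reported in Table 6 can be reproduced from the published percentages. In the sketch below, the observed counts are reconstructed from the reported Site Map figures (e.g., 69.8% of 43 federal sites is about 30), so they are approximations rather than the study's raw data:

```python
# Pearson chi-square statistic for an r x c contingency table of
# observed counts (rows: level of government; columns: feature present / absent).
def chi_square(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

site_map = [[30, 13],   # Federal: Site Map present, absent
            [27, 24],   # State
            [11, 36]]   # Local
print(round(chi_square(site_map), 2))  # close to the 20.043 reported in Table 6
```

With two degrees of freedom (three levels of government, presence/absence), a statistic this large corresponds to the p < .000 significance reported for the Site Map row.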
User tips were very seldom utilized as user help features by any level of government. Also as suggested in Research Question 1, the user help features that would require more resources or more technical sophistication were found less frequently on government Web portals. However, two of them (the site presented in Other Languages and an alternate Text Version, both resource-intensive features) were available on 25.5% and 18.4% of all sites, respectively, and there were significant differences across types of government. Federal agencies were much more likely to have sites available in other languages, while text versions were more frequently found at state sites. Table 7 presents the distribution of service navigation features by level of government. As suggested, the very basic E-Government Services
Table 7. Distribution of service navigation features by level of government (ranked by most frequently used by all jurisdictions; cell entries are the percentage of sites with the feature)

Feature                      Total   Federal   State   Local   Chi-Square   Probability
E-Government Services        60.3    16.3      72.5    87.2    52.2***      0.000
Featured Links               60.3    44.2      66.7    68.1    6.7*         0.035
Link to Agencies             51.8    25.6      52.9    74.5    21.5***      0.000
Popular Services             35.5    32.6      54.9    17.0    15.6***      0.000
Do You Know / How I Do ___?  31.9    18.6      37.3    38.3    5.1          0.080
Calendars                    27.0    7.0       33.3    38.3    12.8**       0.002
Link to Contacts             27.0    20.9      27.5    31.9    1.4          0.500
Events                       23.4    18.6      15.7    36.2    6.5*         0.038
What's New                   22.7    27.9      17.6    23.4    1.4          0.492
Quick Links                  19.1    16.3      27.5    12.8    3.7          0.154
Special Initiatives          14.9    16.3      15.7    12.8    0.3          0.879
Most Visited                 12.8    2.3       15.7    19.1    6.3*         0.042
Facilities Locator           10.6    30.2      3.9     0.0     25.4***      0.000
Hot Topics                   9.2     4.7       11.8    10.6    1.6          0.454
Answers A to Z               3.5     4.7       3.9     2.1     0.5          0.798

* Statistically significant at the .05 level; ** significant at the .01 level; *** significant at the .000 level.
Table 8. Distribution of information or service organization available by level of government (ranked by most frequently used by all jurisdictions)

Type of Organization    Total (%)   Federal (%)   State (%)   Local (%)
Audience                35.5        14.0          49.0        40.4
Services, Processes     34.8        44.2          43.1        17.0
Hybrid                  17.7        7.0           5.9         40.4
Topics or Issues        11.3        34.9          2.0         2.1
Chi-square statistic: 67.7; probability level: .0000

was present in 60.3% of all government sites but, more surprisingly, was seldom found on federal sites (only 16.3%). It was very common on state and local sites (72.5% and 87.2%), however, and the differences between the three levels of government were statistically significant. Featured Links, Links to Agencies, and Popular Services were the next most commonly found across all levels of government, but again there were significant differences in where they were used. Featured Links were found on two-thirds of state and local sites; Agency Links were found on three-quarters of local sites and approximately one-half of state sites, but on only one-third of federal sites. Features like Calendars and Events were most commonly found on state and local sites, while Facilities Locators, not surprisingly, were most often found on federal sites but very seldom on state or local sites (governments evidently presume that their residents know where these government facilities are located). Finally, Table 8 presents the distribution of information organization types across levels of government. The most commonly utilized organizational scheme is the Audience/Market scheme, but that is because it is used by practically one-half of all states and 40% of all large cities in their Web portals. Federal agencies utilize this type only 14% of the time. A structure breaking down information by the types of services or processes offered is utilized far more often by federal agencies (44.2%) and by most of the rest of the states not using the Audience scheme (43.1%). Only 17% of the large cities used this type. Cities most
frequently used a hybrid type of portal (in 40.4% of the cases), while states and federal agencies seldom used this. Finally, another third (34.9%) of federal agencies organized their information around topics or issues, a scheme avoided by states and local governments. Clearly, the differences here are significant both statistically and substantively. The types of information organization schemes utilized are a function of the types of services provided and the site managers' perceptions of their audiences. Federal agencies seem to focus on what they can offer users rather than organizing according to any anecdotal perception of their audience's identity (FirstGov.gov is a significant exception). State agencies focus on the services they offer or provide a user perspective. Cities, on the other hand, have fewer resources with which to work, which might explain the high number of hybrid sites, which are less organized. Still, 40% of local portals have taken the audience approach. These initial results focus upon identifying the user help, service navigation, and information architecture features used by the public sector. This is the first step in identifying which of these features ultimately are effective and helpful in assisting the general public in understanding public sector portals and their services, and in navigating and using them.
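The chi-square comparisons reported with Table 8 can be reproduced with a short computation. A minimal sketch follows; note that the raw counts below are hypothetical illustrations consistent with the reported percentages (the chapter reports only percentages and the statistic, not the underlying frequencies):

```python
# Sketch: Pearson chi-square statistic for a contingency table of
# organization-scheme counts by level of government. The counts are
# hypothetical -- the chapter does not publish raw frequencies.
def chi_square(table):
    """Return the Pearson chi-square statistic for a 2-D list of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: federal, state, local; columns: audience, services, hybrid, topics.
counts = [[6, 19, 3, 15],
          [25, 22, 3, 1],
          [19, 8, 19, 1]]
stat = chi_square(counts)
```

A large statistic relative to the degrees of freedom (rows-1 times columns-1) yields the small probability levels reported in the table.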
User Help and Service Navigation Features in Government Web Sites

Conclusion

From the results, it appears that similar patterns do exist across levels of government in the types of user help and information organizational structures utilized. More technically difficult features appear to be utilized by the larger levels of government, so financial and staff resources are clearly shaping these patterns. However, many user help features require little technical expertise, so their usage is hard to explain by the simple existence of resources in larger governments or agencies. In addition, it appears that states are leading the way in offering multiple types of features (see Tables 4 and 5) and in offering the arguably more user-friendly audience-based type of information and service organizational structure. Given the patterns found for some features, it appears that diffusion of these innovations and features is occurring across jurisdictions and within levels of government. Local government IT professionals meet and know one another through conferences, professional associations, and other informal networks, as do those at the federal and state levels. Certain features are discussed and popularized among IT professionals as important and useful for assisting users. Then, other governments in a network follow up and develop that feature for their own Web site. Or, Web managers visit the Web portals of their sister governments, see a feature that works well for their own level of government, and move to adopt it. However, there is no sense, from these results, that any systematic usability testing is being utilized in the Web development process. The development of user help and service navigation features, as well as systematic usability testing of Web portals, is crucial if governments really intend to utilize e-government effectively and provide electronic services. Deservedly, much focus is put upon the differences still existing between classes of users due to their access to computers and the Internet.
Obviously, the inability of some groups of users to access e-government services reduces the equity of these services. However, it should be recognized that another digital divide exists. Even if users have access to computers and the Internet, in order to fully utilize e-government services, they need government agency Web sites that can be understood and navigated successfully by everyone, not just
by those who have knowledge and understanding about how government agencies and services work and are organized. Effective user help and service navigation features can remove this second divide by allowing novice users or users unused to contacting government to understand and fully utilize e-government services and information. We should not create the “other digital divide”; we should work to ensure that all can access government services in an online context.
This work was previously published in International Journal of Electronic Government Research, Vol. 2, Issue 4, edited by M. Khosrow-Pour, pp. 24-39, copyright 2006 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter LXXI
An Empirical Study on the Migration to OpenOffice.org in a Public Administration

B. Rossi, Free University of Bolzano-Bozen, Italy
M. Scotto, Free University of Bolzano-Bozen, Italy
A. Sillitti, Free University of Bolzano-Bozen, Italy
G. Succi, Free University of Bolzano-Bozen, Italy
Abstract

The aim of this article is to report the results of a migration to Open Source Software (OSS) in one public administration. The migration focused on the office automation field and, in particular, on the OpenOffice.org suite. We have analysed the transition to OSS considering qualitative and quantitative data collected with the aid of different tools. All the data have always been considered from the point of view of the different stakeholders involved: IT managers, IT technicians, and users. The results of the project have been largely satisfactory. However, the results cannot be generalised due to some constraints, such as the environment considered and the parallel use of the old solution. Nevertheless, we think that the data collected can be of valuable aid to managers wishing to evaluate a possible transition to OSS.
Introduction

Open Source Software (OSS) and Open Data Standards (ODS) have emerged in recent years as a viable alternative to proprietary solutions. There are many cases in which the adoption of OSS has proven advantageous for companies deciding to adopt it in replacement of, or in conjunction with, closed solutions. The limitation of these migrations, from our point of view, is that they were very often server-side oriented and not supported by empirical evidence of the benefits of the new solution. In this sense, there are very few case studies that report successful transitions on the desktop side (ZDNet, 2005), and some are still underway (Landeshauptstadt München, 2003; Stadt Wien, 2004). It is our opinion that the reason for the apparently different results in the two fields is due to the
nature of OSS development (Feller & Fitzgerald, 2001), which has repercussions on the resulting usability (Nichols & Twidale, 2003). When comparing OSS and proprietary software, and when comparing software solutions in general, it is impossible to obtain a single global quality index with which to compare two solutions (Fenton & Pfleeger, 1997). The most important aspects under which it is significant to analyse software are:

• Reliability
• Performance
• Price
• Security
• Interoperability
• Usability
• Extendibility
• Functionalities
• Privacy protection
These categories have to be balanced against the requirements of the environment and of the users among whom the solution is deployed. Where the aspects of security, reliability, and extendibility are of key importance, OSS has proven a valid solution, if not superior to proprietary ones. Where functionalities, usability, and user interaction in general acquire importance, as on the client side, OSS has yet to prove itself a valid alternative. Price is a controversial issue, as there is the need to evaluate not only the license price but also software maintenance and the other costs inherited from the migration. These considerations originated the study we propose. The purpose of the study is to evaluate in a rigorous way the introduction of OSS in a working environment, following the criteria of a controlled experiment from the selection of the sample to the evaluation of the results. We selected a sample of 22 users from different offices in the public administration that was the target of the experiment. We divided the sample into two groups, one to be migrated, the other to be used as a control group. The results obtained suggest that the initial reduction of productivity is not as substantial as we expected, even taking into account that half of the users considered the introduced solution as offering less functionality than the proprietary one.
State of the Art

There are many studies available evaluating the Total Cost of Ownership (TCO) of OSS. The original model derives from the work of the Gartner Group in 1987 and has since been incorporated into different models. The TCO model helps managers by considering not only the cost of purchase but also further costs such as maintenance or training. The studies are not unanimous as to the savings that can be achieved with the adoption of OSS (Robert Frances Group, 2002; The Yankee Group, 2005). One of the reasons is probably the different weight given to costs and benefits that are difficult to measure. Two such factors are, for example, the risk of lock-in and the development of local economies. The risks of entering a lock-in mechanism, for example, by relying on a single software supplier or by storing massive amounts of data in closed data standards, are real and must be considered in a TCO model evaluating a transition (Shapiro & Varian, 1999). On the other side, the adoption of OSS can benefit local software companies, which can exploit the possibilities offered by the available source code and open data standards. Also in this case, the amount of this kind of externality is difficult to quantify. Considering OSS, there are many projects worth mentioning; we will name here two of the most famous and see how they perform on the market against proprietary solutions:

• the Apache Web server1
• the Mozilla Firefox Web browser2
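Returning to the TCO reasoning above, the model's core arithmetic is simple to sketch. All figures below are hypothetical illustrations, not data from the cited studies, and the cost categories are a simplification of the full Gartner-style model:

```python
# Sketch of a minimal TCO comparison: purchase price plus recurring and
# one-off costs over a planning horizon. All numbers are hypothetical.
def total_cost_of_ownership(license_fees, maintenance_per_year, training,
                            migration, years):
    """Sum the purchase price and the recurring/one-off costs over `years`."""
    return license_fees + maintenance_per_year * years + training + migration

proprietary = total_cost_of_ownership(
    license_fees=400, maintenance_per_year=80, training=0, migration=0, years=5)
open_source = total_cost_of_ownership(
    license_fees=0, maintenance_per_year=60, training=150, migration=100, years=5)
# A license price of zero does not settle the comparison by itself:
# training and migration costs can dominate in the short term.
```

Hard-to-quantify items such as lock-in risk and local-economy externalities would appear as further terms that the cited studies weight differently.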
Table 1 shows that the Apache Web server holds almost 70% of the whole market (Netcraft Survey, 2005). As the virus attacks of recent years have shown (CERT, 2001), one of the reasons for such wide adoption is the security offered by the Apache architecture. Table 2 shows the market share of the Mozilla Firefox browser between January and April 2005.
Table 1. Web servers in September and October 2005 (Netcraft Survey, 2005)

Developer    September 2005   Percent   October 2005   Percent   Change
Apache       49,598,424       69.15     52,005,811     69.89     +0.74
Microsoft    14,601,553       20.36     15,293,030     20.55     +0.19
Sun          1,868,891        2.61      1,889,989      2.54      -0.07
Zeus         584,598          0.82      585,972        0.79      -0.03
Table 2. Browsers market share ("Browser Market Share Study," itproductivity.org, 2005)

Browser             January 2005 (%)   April 2005 (%)
Internet Explorer   84.85              83.07
Firefox             4.23               10.28
Mozilla             4.48               3.81
Netscape            3.03               0.92
AOL                 2.20               0.85
MSN                 0.58               0.67
Opera               0.34               0.41
Total               99.71              100.01
As can be seen, also in this case the software has been gaining market share constantly in recent months. Firefox is still behind in market share, but it represents an important competitor for the market dominator, Microsoft Internet Explorer. These are surely two of the most popular OSS projects that have emerged during the last few years; there are many more that can compete with proprietary solutions. By looking at these and other examples, we can conclude that OSS already represents an important alternative to proprietary software. Another important consideration on OSS is represented by the cases in which a large migration has been performed or is being performed. Looking at the different case studies available on migration to OSS, we summarise the most famous of recent years in Table 3; three are European, while one is U.S.-based.

Table 3. Large scale migrations to OSS of public administrations

Region        Clients to migrate   Side              Distribution
Extremadura   80,000               Desktop/Servers   gnuLinex
Munich        14,000               Desktop           Debian
Vienna        7,500                Desktop           Wienux (Debian/KDE)
Largo, FL     900                  Desktop/Servers   Linux KDE 2.1.1

One of the most remarkable deployments of OSS on the desktop side is surely that of the Extremadura region in Spain, which recently installed 80,000 Linux systems, 66,000 for the educational system and 14,000 for administrative workstations. The local administration created its own Linux distribution called gnuLinex.3 According to its IT department, the savings have been of the order of €18M (ZDNet, 2005). Another success case is that of the city of Largo, FL, where the migration involved 900 clients; the savings have been estimated at $300,000-$400,000 (Newsforge, 2002). The migrations of the city of Munich and of the city of Vienna are currently underway (Landeshauptstadt München, 2003; Stadt Wien, 2004). As the delay of the Munich migration seems to demonstrate, a transition to OSS is not a process to underestimate. There are also cases where the proprietary solution has been considered more convenient, such as the city of Nürnberg, where, according to its own migration study, the transition from Windows 2000/Office 2000 to Windows XP/Office XP was considered €4.5M cheaper than the transition to Linux/OpenOffice.org (Stadt Nürnberg, 2004). A final consideration concerns studies performed on OSS usability. Of certain interest for our
study, albeit a little dated, is the experimentation conducted at Berkeley in November-December 2001 (Everitt & Lederer, 2001), comparing two different solutions in the office automation field, namely Sun StarOffice Writer 5.2 and Microsoft Word 2000. The authors report on an experiment with 12 users regarding user interface integration. As a result of the study, the two products proved comparable, although the Microsoft solution proved more satisfactory and easier to use.
The Study

Our study is inserted into this framework; the intention is to contribute to the field with a solid and sound analysis of a real transition to OSS on the client side, specifically the analysis of a migration in the office automation field in one public administration. In particular, the study concerned the introduction of the OpenOffice.org4 suite. The suite offers functions comparable to those offered by Microsoft Office.5 It is composed of several applications: a word processor, a spreadsheet, and software for presentations, for drawing operations, and for the creation of formulae. The only functionality missing in the version installed was the possibility to create small local databases. In the organisation where we performed the study, this was a feature rarely employed by users and, in general, deprecated by IT managers. We focused our analysis mainly on the word processor and the spreadsheet. The experiment was performed on 22 users of a public administration (PA) during the transition to OSS. In the following sections we present the methodology adopted, the tools employed, and the main results obtained from the qualitative and quantitative data collected. The limitations and possible future work are listed at
the end of the article. The overall sample of 22 users was selected from three departments of the PA under examination. Some constraints had to be respected; for example, the heads of the different offices posed a limit on the number of available workers per office. Table 4 shows the different groups, with two office directors per group as part of the sample. The average age of participants was uniform between groups and has not influenced the results; users were selected from these departments randomly, with the limitations described previously. Regarding the protocol, the experimental groups were selected so that the participants were, when possible, in some relation with each other, physically near, and, if possible, coming from the same organisational units, to take advantage of the network externalities that arise in terms of document exchange and reciprocal help (Shapiro & Varian, 1999). One group experimented with the introduction of OpenOffice.org (our treatment X in Figure 1), while the other group was used as a control group. The experimental design followed a pretest-posttest control group design (Campbell & Stanley, 1990). A questionnaire was submitted to both groups before (O1) and after (O2) the introduction of OpenOffice.org to evaluate the effects of the experimentation on the attitude towards OSS. The activities of both groups were constantly monitored by an automatic system for data collection (Sillitti, Janes, Succi, & Vernazza, 2003) that permitted the gathering of a series of objective process data (the series of observations O3,i). In some other cases, where it was not possible to have a control group and a proper randomisation of the sample, a "quasi-experimental" or a "one-shot" design was employed (Campbell & Stanley, 1990). We are aware that in
Table 4. The selected sample and distribution among groups

Group     Women   Men   Total   Departments   Notes
Group 1   6       3     9       3             Only using MS Office
Group 2   9       4     13      3             Using MS Office and OpenOffice.org
Total     15      7     22      3             -
Figure 1. Experimental design adopted
this way the results obtained are less extendible to the general case and more subject to exogenous effects.
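The pretest-posttest control group design described above supports a simple difference-in-differences reading: compare the attitude change O2 - O1 in the migrated group against the same change in the control group. A minimal sketch follows; the questionnaire scores are hypothetical, not the study's data, and the function name is our own:

```python
# Sketch: analysing a pretest-posttest control group design (O1, X, O2).
# The attitude scores below are hypothetical illustrations.
def mean(xs):
    return sum(xs) / len(xs)

def treatment_effect(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: the change in the migrated group minus the
    change in the control group, which absorbs time effects common to both."""
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

effect = treatment_effect(
    treat_pre=[3, 4, 3, 5], treat_post=[4, 4, 4, 5],   # group 2 (OpenOffice.org)
    ctrl_pre=[3, 3, 4, 4],  ctrl_post=[3, 3, 4, 5])    # group 1 (control)
```

A positive effect would indicate an attitude improvement attributable to the treatment rather than to the mere passage of time.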
Time Evolution of the Experiment

The experiment lasted 32 weeks; during the first 10 weeks only Microsoft Office was monitored and the different system dependencies were collected. OpenOffice.org was introduced in group 2 after week 10, and during the 23rd week of experimentation OpenOffice.org was associated with the Microsoft Office formats .doc and .xls. The results of this choice are presented in the subsequent sections. Figure 2 shows a graphical representation of the evolution of the experimentation. In detail, the steps performed during the experimentation were the following:

1. Selection of the participants in the experiment;
2. Submission of the questionnaires on the attitude towards OSS;
3. Motivational seminar on the reasons for the experimentation;
4. Identification of the OpenOffice.org experimental group and the control group;
5. Analysis of the most used documents and the possible software dependencies;
6. Installation of OpenOffice.org and translation into the OpenOffice.org format of the most used documents, also on a per-request basis;
7. Installation of the monitoring and data collection system to define the situation before the transition;
8. Training, performed on a single day, trying to focus on the different approach proposed by the new software; users were instructed on how to perform the usual office automation tasks;
9. Start of the OpenOffice.org data collection;
10. Support given to users through an online forum and a hotline;
11. Periodic verification meetings with users of OpenOffice.org to identify possible problems;
12. Automatic start of OpenOffice.org for files with Microsoft Office extensions, starting from week 23;
13. Submission of the final questionnaires.
Two types of questionnaires were submitted to users. The first, identical before and after the experimentation, aimed to understand the attitude towards OSS and the effects of the experimentation on that attitude; the second was submitted only at the end, when all the final results of the project had been collected, and gathered additional information for the replication of the experiment. The Goal Question Metrics (GQM) paradigm (Basili, 1995) was employed in every phase of the project, from the overall design to the creation of the questionnaires. The GQM is a
Figure 2. Evolution of the experiment expressed in weeks
methodology, developed at the University of Maryland in the mid-1980s, that relates the goals of an organisation to a set of questions. Questions are further associated with a set of metrics. In this way it is always possible to evaluate whether a goal has been reached and what the informational needs of a certain goal are.
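The goal-question-metric chain that GQM prescribes can be represented as a plain data structure. The goal, questions, and metrics below are illustrative guesses at what a plan for this project might contain, not the project's actual GQM plan:

```python
# Sketch: the Goal-Question-Metric hierarchy as a plain data structure.
# The texts are illustrative, not taken from the project's GQM plan.
gqm_plan = {
    "goal": "Evaluate the impact of migrating office automation to OpenOffice.org",
    "questions": [
        {
            "question": "Does productivity drop after the migration?",
            "metrics": ["documents handled per week", "time spent per document"],
        },
        {
            "question": "Does users' attitude towards OSS change?",
            "metrics": ["pre/post questionnaire attitude score"],
        },
    ],
}

def metrics_for_goal(plan):
    """Collect every metric that must be measured to answer the goal."""
    return [m for q in plan["questions"] for m in q["metrics"]]
```

Walking the structure top-down yields the measurement plan; walking it bottom-up shows which goal each collected metric ultimately serves.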
Software Employed

The tools used during the experimentation were useful to assess the evolution of the experiment and, in particular, to gather quantitative and objective data about the migration process. Two applications were employed for the ex-ante analysis, and one was continuously employed during the transition to monitor the usage of the proposed solutions.

• PROM (PRO Metrics), a noninvasive monitoring tool, was used to evaluate the usage of OpenOffice.org and Microsoft Office during the whole transition process (Sillitti et al., 2003). Metrics of interest were the number of documents handled and the time spent per single document. The software ran during all weeks of the experimentation, permitting us to acquire objective data on the experimentation.
• DepA (Dependency Analyser) was employed at the beginning of the project to evaluate the existing dependencies of Microsoft Office in terms of called and calling programs (Rossi & Succi, 2004). The program is a simple agent running on workstations to determine the calls from different applications, collecting in this way information on the interrelations between applications. The program ran on client desktops for the first 10 weeks.
• FLEA (FiLe Extension Analyser) was used to scan the data standards present on the users' drives and to analyse the eventual presence of macros. The software collects information on the type of extension, date of creation, date of last access, and size of each file and, for particular extensions, information about the macros contained. The scan was performed at the beginning of the experimentation.

All tools deployed are noninvasive, in order not to bias the results. From the final questionnaires it emerged that users did not notice the presence of any external software during the experimentation.
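A FLEA-style static scan can be sketched with a recursive directory walk; the record fields mirror those listed above (extension, dates, size), while the function name and output format are our own invention, not the tool's actual interface:

```python
# Sketch of a FLEA-style scan: walk a directory tree and record the
# extension, size, and last-access time of every file found.
import os
import time

def scan_drive(root):
    """Return one record per file under `root`."""
    records = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            stat = os.stat(path)
            records.append({
                "extension": os.path.splitext(name)[1].lower(),
                "size": stat.st_size,
                "last_access": time.ctime(stat.st_atime),
            })
    return records
```

Aggregating the records by extension gives the document counts reported later in Table 6; macro inspection requires opening the files themselves.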
Data Analysis

In this section we report the results of the data collection activities. In particular, we can distinguish the data collected across a temporal boundary (ex-ante, during, and ex-post) and between qualitative and quantitative data. The biggest effort during the project was to constantly monitor the users during the experimentation. To gather objective data on the migration, we used the PROM software. Data collected included the time spent on documents and the number of documents opened using the selected office automation suite. A more fine-grained analysis of the functions utilised has not been performed. During every phase of the project, the quantitative data collected have been backed with qualitative data coming from interviews and questionnaires. As a
Table 5. Type of data collected during the experimentation

               Ex-ante                                During                           Ex-post
Qualitative    Interviews/questionnaires              Periodic meetings for feedback   Interviews/questionnaires
Quantitative   Collection of data standards (FLEA);   Monitoring of SW usage (PROM)    Number of OOo files created
               collection of dependencies (DepA)                                       during the project
side effect, we noticed that the periodic meetings with users caused a small increase in the usage of the open source solution during the immediately subsequent days. We will briefly review all the data collected, starting from the analysis of the existing situation performed at the beginning of the experimentation.
Ex-Ante Analysis

The aspects we analysed for an overview of the existing situation concerned the presence of interoperability issues in the users' environment and the presence of possible dependencies in the form of macros inside office automation documents. Macros are series of commands inserted in the form of code inside documents to automate repetitive tasks, and they are generally very common in office automation documents. As the usage in OpenOffice.org of macros written for Microsoft Office was not possible, at least at the time the experimentation was carried out, this is an interoperability issue: macros need to be completely rewritten. The number of templates in the preexisting format represents another interoperability issue. Our software for data collection allowed us to evaluate the number of documents of this type, but not their complexity, another factor to take into account when a document needs to be migrated. The collected data have to be crossed with interviews with the IT personnel to evaluate the real relevance of the macros discovered and the real necessity of converting the templates.

The first step of the initial analysis of the experimentation environment concerned the presence of macros inside documents and the distribution of the documents; another important issue was to determine the number of templates available. This analysis was performed statically at the beginning of the project. In the evaluation of the impact of macros, the Microsoft Word and Microsoft Excel documents of the participants in the project were considered, in two different locations:

• Users' drives
• Network drives
In the latter location, both normal documents and templates were considered. Table 6 reports a summary of the results. We found 25,810 Word documents and 2,192 Excel documents. Among these, only 2 Word documents (0.01%) and 49 Excel documents (2.24%) contained macros. Moreover, our tool identified a high number of Excel documents protected by password (nearly 16% of the total); for these, it is not possible to determine whether they contain macros. The second step was the identification of the software dependencies that existed in the office automation environment. A dependency is either a call to an external application (outgoing call) or a call from an external application to the office automation suite (incoming call). For the dynamic evaluation of the calls, two different typologies have been considered:

• Applications that call Microsoft Word or Excel (Figure 3)
Table 6. Number of documents and macros during the pretransition phase; LOC = Lines of Code, CNO = Could Not Open

                              Microsoft Word                  Microsoft Excel
Location                      Files   Macros  LOC   CNO       Files   Macros  LOC     CNO
Users' drives                 19144   2       12    -         1367    8       1070    14
Network drives                4484    -       -     -         816     43      21482   331
Network drives (templates)    2182    -       -     -         9       -       10197   3
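The macro-prevalence figures reported in the text can be reproduced directly from the counts; a minimal sketch (the counts are those quoted above, and the helper function is our own illustration, not part of the study's tooling):

```python
# Recompute the macro-prevalence percentages for the pretransition scan.
# Counts are those reported in the text; the helper is illustrative only.

def macro_share(files_with_macros: int, total_files: int) -> float:
    """Percentage of files containing macros, rounded to two decimals."""
    return round(100.0 * files_with_macros / total_files, 2)

word_total, word_macros = 25810, 2
excel_total, excel_macros = 2192, 49

print(macro_share(word_macros, word_total))    # 0.01
print(macro_share(excel_macros, excel_total))  # 2.24
```

The Word figure shows how rare macros were in practice: two files in more than twenty-five thousand.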
An Empirical Study on the Migration to OpenOffice.org in a Public Administration
Figure 3. Number of incoming dependencies of Microsoft Word (left) and Microsoft Excel (right)
In this category we discovered that 80% of the time, Microsoft Word was called from explorer.exe, which corresponds to a normal start from its icon or from a file in the file manager. A further 12% of the calls came from the e-mail client Outlook. Microsoft Excel was called 95% of the time by explorer.exe and 4% of the time from Outlook (for a total of 99% of the calls).

• Applications called from Microsoft Word or Excel (Figure 4)
Overall, 74% of the outgoing calls were towards printer drivers. Almost 11% of the calls of Microsoft Word and Excel were towards the program that reports problems in Microsoft applications, and about 6% towards the help guide. Further interviews with IT managers confirmed the situation outlined by the data collection tools. Globally, the system environment of the experimentation was less turbulent than we had initially thought: the templates, macros, and dependencies collected were not critical enough to increase the migration costs significantly.
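The incoming-dependency distribution of Figure 3 amounts to grouping launch events by calling process; the sketch below simulates such a tally (the event log and process names are synthetic, not the monitoring tools' actual data):

```python
# Group simulated launch events for Microsoft Word by calling process,
# as in the incoming-dependency analysis of Figure 3. The log is
# synthetic; real data came from the study's monitoring tools.
from collections import Counter

call_log = ["explorer.exe"] * 80 + ["outlook.exe"] * 12 + ["other"] * 8
by_caller = Counter(call_log)
total = sum(by_caller.values())

for caller, n in by_caller.most_common():
    print(f"{caller}: {100 * n // total}%")
```

With this synthetic log the output mirrors the reported shares: explorer.exe at 80%, Outlook at 12%.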
Figure 4. Number of outgoing dependencies for Microsoft Word/Excel

Ongoing Experiment Analysis

The experimentation was monitored constantly by the PROM software. Figure 5 shows two different measures of productivity: the percentage of documents opened by using OpenOffice.org and the percentage of time spent within the OpenOffice.org suite. As an example, a percentage of 5% means that users spent 95% of that week's time using Microsoft Office. From the data collected, we can notice two effects in particular:

• As expected, the level of adoption of the new software was increased by the decision to associate the Microsoft Office file formats with OpenOffice.org. We expected complaints and reports of incompatibilities deriving from this decision; instead, users learned when it was convenient, for compatibility reasons, to adopt one solution or the other.
• Even after 32 weeks of experimentation, the time spent with OpenOffice.org was below 25% of the total time dedicated to the two office automation suites. Note that the time and document figures of Figure 5 include the Microsoft Office documents opened with OpenOffice.org.
Figure 5. Increase in OpenOffice.org usage. On the x-axis is the week of the project; OpenOffice.org was introduced during week 10, and the automatic file association was activated during week 23. The percentage of files opened using OpenOffice.org is shown in blue; the percentage of average time devoted to OpenOffice.org, among users that effectively used it, is shown in orange.
It is noteworthy that, during the experimentation, we did not benefit fully from the network effect that can arise from the growing number of documents in one format and their subsequent exchange between users (Shapiro & Varian, 1999). In a broader migration, such effects can further increase the usage of the newly proposed platform. In our experimentation, users were somewhat constrained in the adoption of the new format, as they could not exchange documents with users not participating in the experimentation. We also posed two questions to evaluate the impact on productivity, according to the GQM methodology:

1. Did the usage of OpenOffice.org cause a reduction in the number of documents used per day? We studied the correlation between the number of documents used each day and the number of documents opened with OpenOffice.org. A negative effect of the usage of OpenOffice.org would produce a negative impact on the usage of office automation documents and hence a significant negative correlation between these two variables; that is, the more documents handled with OpenOffice.org, the fewer handled globally. The correlation was -0.08; therefore, we exclude that the usage of OpenOffice.org reduced the number of documents handled daily.
2. Did the usage of OpenOffice.org cause an increase in the time devoted to each document? We studied the correlation between the time spent managing all the documents and the time spent on the OpenOffice.org ones. A negative effect of the usage of OpenOffice.org would create a significant positive correlation; that is, the more time spent with OpenOffice.org, the more time spent globally managing documents, as OpenOffice.org would require more time to accomplish the same tasks. This correlation was -0.04; therefore, we exclude that the usage of OpenOffice.org increased the global effort to handle documents.

The comparison with the control group furthermore confirmed that the evolution of document usage in the test group and the control group was consistent, excluding the presence of exogenous factors.
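The GQM checks above boil down to Pearson correlations between daily series; a minimal sketch with synthetic data (the series values are illustrative, not the study's measurements):

```python
# Pearson correlation between daily totals of documents handled and
# documents opened with OpenOffice.org (synthetic data). A coefficient
# near zero, as the study found (-0.08 and -0.04), indicates no
# measurable productivity penalty from using OpenOffice.org.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

total_docs = [120, 95, 130, 110, 105, 125, 98, 140]
ooo_docs = [10, 12, 9, 15, 11, 8, 14, 10]

print(round(pearson(total_docs, ooo_docs), 2))
```

A strongly negative coefficient for question 1, or a strongly positive one for question 2, would have signalled a productivity problem; values near zero support the study's conclusion.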
Ex-post Analysis

At the end of the experimentation, we evaluated the attitude towards OSS and, in general, the project as a whole. We submitted a questionnaire to users in order to evaluate their attitudes towards OSS and the knowledge acquired after the transition. We also submitted the same questionnaire to the control group to ensure that no exogenous effects biased the results. The questionnaire was designed to answer the following two questions:

• What is the user perception of OSS at the end of the experiment?
• Has the user modified his/her perception of OSS at the end of the experiment?

All figures of this section show the ex-ante situation on the left and the ex-post situation on the right, so that the change in users' attitudes is easier to evaluate.
The first question submitted was whether the Open Source concept had become more familiar after the experimentation. The result in this case is quite obvious: at the end, the users had a clearer idea of the concept of OSS. The initial number of users claiming to know OSS may be surprising, but this is due to the preproject meetings in which IT managers explained the reasons for the experimentation. The second question enters the heart of the matter, asking about the perception of OSS. Here an interesting phenomenon was discovered. At the beginning, one group (almost half of the interviewees) had no opinion on OSS. At the end of the experiment, almost all users had an opinion: those that were positive maintained the same opinion, while the uncertain were divided into three groups of almost the same size, namely those with a positive opinion, those with a negative one, and those still without an opinion. Nevertheless, at the end of the experiment only a small part of the participants had a negative opinion of OSS.
Figure 6. Question 1: How familiar are you with the expression Open Source Software? Possible answers: (a) very well, (b) well, (c) neither nor, (d) not well, (e) not at all.
Figure 7. Question 2: How do you perceive the expression Open Source Software? Possible answers: (a) as something negative, (b) as something positive, (c) neither positive nor negative.
The third question focuses on the importance of the diffusion of the software in use. Here we see a strong shift towards greater awareness; for example, the category "very important" went from 0 to almost 40%. The experimentation with OpenOffice.org strongly increased participants' awareness of the importance of using well-established software. The fourth question poses the problem of direct substitution, asking how reluctant the user is to abandon the application in use in favour of an OSS solution. In this case, users in favour maintained the same opinion, while reluctant users became more reluctant; again, users had a clearer idea of OSS after the experimentation. More than half of the interviewed users were still positive towards a possible transition.
The fifth and sixth questions are of a slightly different nature. Their aim is to find out whether the experimentation changed the perception of the requirements of the software to be adopted and used. Question 5 focuses on the factors to consider when a new product is adopted, and question 6 on the aspects most important for efficient usage. In both cases the role of training and support emerges as important, while other aspects, such as security, privacy, and availability of source code, are considered less important. We can conclude that the introduction of OpenOffice.org increased the perceived importance of training. On one hand, this may seem obvious: the introduction of a new instrument always requires a training period, especially if it partially substitutes an old one. We must also point out that all the interviewed people had already received their training with the office automation tools
Figure 8. Question 3: How important is it that the application you use is established and widely used? Possible answers: (a) very important, (b) important, (c) of moderate importance, (d) of little importance, (e) not important.
Figure 9. Question 4: You are reluctant to give up the use of the application software that you are using in favour of an OSS alternative! Possible answers: (a) totally agree, (b) agree, (c) neither, (d) disagree, (e) totally disagree.
Figure 10. Question 5: How important do you find the following factors when you use a new IT-platform? Factors considered are: (a) support and training, (b) easiness of use, (c) interoperability, (d) source code available, (e) functionalities, (f) security, (g) privacy.

Position   Before the experiment    After the experiment
1          Easiness of use          Easiness of use
2          Support and training     Functionalities
3          Functionalities          Support and training, interoperability, security
4          Interoperability         -
5          Security                 -
6          Privacy                  Privacy
7          Source code available    Source code available
Figure 11. Question 6: The biggest advantages you perceive with OSS are: Factors considered: (a) better support and training, (b) it is easier to use, (c) stability, (d) better functionalities, (e) better security.

Position   Before the experiment                  After the experiment
1          Easiness of use                        Easiness of use
2          Better functionalities, more stable    Better support and training
3          Better support and training            Better functionalities, more stable
4          Better security                        Better security

some time before; therefore, such perception should already have been present. It can be concluded that OSS additionally stimulated the curiosity of participants, to the point of asking more questions about the tools used; such an approach, if confirmed, supports those who claim that the adoption of OSS brings more "shared" knowledge. The last question, proposed before and after the experimentation, is a sort of summary and deals with the motivations that the user would have to use OSS. It is interesting to note that beside the OSS supporters a new group emerged, also absorbing neutral users towards a more negative opinion. Some questions about the overall evolution of the migration were submitted to users at the end of the project. Two questions are interesting for our evaluation of the migration, both related to the functionalities of OpenOffice.org and to a possible full migration to the new solution. We must point out that the experimentation was performed with version 1.1.3 of OpenOffice.org; the latest release would probably obtain better results. Question 8 addresses the choice between the two proposed solutions in a general way, asking whether the functionalities offered by the two suites are equivalent. Some users answered that Microsoft Office offers more functionalities than OpenOffice.org 1.1.3. If anything, the surprise is that half of the users considered the two sets of functionalities equivalent. Question 9 contextualises the problem, trying to evaluate the impact of a possible substitution of Microsoft Office with OpenOffice.org. In this case almost all the participants in the test considered the migration possible, even though the majority later maintained that this operation requires some effort and is not a simple substitution. A final evaluation of the experimentation was performed on the number of files generated by the users adopting the new data standard supplied by OpenOffice.org. Table 7 contains a
Figure 12. Question 7: Which motivations do you have to use Open Source Software? Possible answers: (a) You believe it is right to support OSS initiatives, (b) You find that today's market dominance of a single software vendor is wrong, (c) other: specify, (d) I have no motivation to use Open Source Software.
Figure 13. Question 8: How do you evaluate the functionalities of OpenOffice.org with respect to the Microsoft Office ones? Possible answers: (a) widely superior, (b) superior, (c) equal, (d) inferior, (e) widely inferior.
summary of all the different files created during the experimentation in the two formats proposed by the OpenOffice.org suite for text processing and spreadsheets. Obviously, this kind of static analysis represents only a small subset of the usage of the open solution, as users were also free to open Microsoft Office proprietary formats using OpenOffice.org. For technical reasons, FLEA could not be employed in this type of scan to give us more fine-grained data.
Limitations

This study represents the results of a single experience, and its results cannot be systematically generalised, as the essential comparative aspect is missing. As already mentioned, the PA under examination imposed some constraints on the selection of the sample. The office automation field, in particular, may not be fully comparable to other
desktop environments where open source solutions are not as strong as OpenOffice.org. Furthermore, this study does not focus on a complete substitution of the old solution, but rather on the evaluation of the coexistence of both solutions. A further step might be the evaluation of the effects deriving from a complete migration.
Conclusion

The migration to OSS described in this article has to be interpreted with care before generalising the results to other similar cases. In particular, the migration was restricted to the OpenOffice.org platform and to the PA field. The migration approach was as gradual as possible, maintaining the proprietary solution in parallel with the new one. The results obtained from the experimentation have been encouraging for the introduction of OSS
Figure 14. Question 9: In this moment, if Microsoft Office is removed, are you still able to perform the same tasks? Possible answers: (a) yes, (b) yes but with some problems, (c) no.
on the desktop side. Data collected during the experimentation show that the usage of the new platform increased during the whole period, reaching 25% of the total office automation tasks by the end. Proprietary software remained the preferred solution for users. The impact on productivity was minimal, also owing to the similarities between the software suites considered. Users acquired a better understanding of OSS after the experimentation and tended to have, in general, a positive view of the whole movement. The software used was considered adequate for the transition, although a certain lack of functionalities emerged from the opinions of the users. More recent releases of OpenOffice.org should solve these problems.

Table 7. Number of documents created during the experimentation

Department     Writer documents   Calc documents
Department 1   27                 2
Department 2   223                34
Department 3   164                12
Total          414                48

Acknowledgment

We acknowledge Dr. Hellmuth Ladurner for his precious help and support. Acknowledgments also go to all the users involved, participants in the experiment, technical personnel, and supervisors; without their constant effort and their availability, this study would not have been possible.
References

Basili, V. (1995). Applying the goal/question/metric paradigm in the experience factory. In Software quality assurance and measurement: A worldwide perspective (pp. 21-44). International Thomson Publishing Company.

Campbell, D. T., & Stanley, T. D. (1990). Experimental and quasi-experimental design. Houghton Mifflin Company.

CERT. (2001). Advisory CA-2001-19: Code Red Worm. Retrieved June 15, 2006, from http://www.cert.org/advisories/CA-2001-19.html

Everitt, K., & Lederer, S. (2001). A usability comparison of Sun StarOffice Writer 5.2 vs. Microsoft Word 2000. Retrieved June 15, 2006, from http://www.sims.berkeley.edu/courses/is271/f01/projects/WordStar/

Feller, J., & Fitzgerald, B. (2001). Understanding Open Source Software development. Addison-Wesley.

Fenton, N. E., & Pfleeger, S. L. (1997). Software metrics: A rigorous and practical approach (2nd ed.). PWS Publishing Company.
Gartner Inc. (2003). Distributed computing chart of accounts. Retrieved June 15, 2006, from http://www.gartner.com/4_decision_tools/modeling_tools/costcat.pdf

Landeshauptstadt München. (2003). Clientstudie der Landeshauptstadt München [Client study of the City of Munich]. Retrieved June 15, 2006, from http://www.muenchen.de/aktuell/clientstudie_kurz.pdf

Netcraft Survey. (2005). Retrieved June 15, 2006, from http://news.netcraft.com/archives/web_server_survey.html

Newsforge. (2002). Largo loves Linux more than ever. Retrieved June 15, 2006, from http://www.newsforge.com/print.pl?sid=02/12/04/2346215

Nichols, D. M., & Twidale, M. B. (2003, January). The usability of Open Source software. First Monday, 8(1). Retrieved June 15, 2006, from http://www.firstmonday.org/issues/issue8_1/nichols/

Robert Frances Group. (2002). Total cost of ownership for Linux Web servers in the enterprise. Retrieved June 15, 2006, from http://www.rfgonline.com/subsforum/LinuxTCO.pdf

Rossi, B., & Succi, G. (2004). Analysis of dependencies among personal productivity tools: A case study. Undergraduate thesis, Free University of Bolzano-Bozen.

Shapiro, C., & Varian, H. R. (1999). Information rules: A strategic guide to the network economy. Harvard Business School Press.

Sillitti, A., Janes, A., Succi, G., & Vernazza, T. (2003, September 1-6). Collecting, integrating and analyzing software metrics and personal software process data. In Proceedings of EUROMICRO 2003, Belek-Antalya.

Stadt Nürnberg. (2004). Strategische Ausrichtung im Hinblick auf Systemunabhängigkeit und Open Source Software [Strategic orientation with regard to system independence and open source software]. Retrieved June 15, 2006, from http://online-service.nuernberg.de/eris/agendaItem.do?id=49681

Stadt Wien. (2004). Open Source Software am Arbeitsplatz im Magistrat Wien [Open source software in the workplace of the Vienna City Administration]. Retrieved June 15, 2006, from http://www.wien.gv.at/ma14/pdf/oss-studie-deutsch-langfassung.pdf

The Yankee Group. (2005). 2005 North American Linux TCO survey. Retrieved June 15, 2006, from http://www.yankeegroup.com

ZDNet. (2005). Extremadura Linux migration case study. Retrieved June 15, 2006, from http://insight.zdnet.co.uk/software/linuxunix/0,39020472,39197928,00.htm
Endnotes

1. Apache Software Foundation, http://www.apache.org
2. The Mozilla Firefox project, http://www.mozilla.com/
3. gnuLinex, http://www.linex.org/
4. OpenOffice.org, http://www.openoffice.org
5. Microsoft Office, http://www.microsoft.com/office/editions/prodinfo/default.mspx
This work was previously published in International Journal of Information Technology and Web Engineering, Vol. 1, Issue 3, edited by G. I. Alkhatib and D. C. Rine, pp. 64-80, copyright 2006 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter LXXII
Organisational Challenges of Implementing E-Business in the Public Services: The Case of Britain’s National Mapping Agency Francesca Andreescu University of Greenwich, UK
Abstract

Underpinning £136 billion of economic activity in the United Kingdom, Britain's National Mapping Agency is a commercialising public sector organisation with trading-fund status, existing at the intersection of two different spheres—the public and the private. Recognised as a leading participant in the geographic information industry, within which it is forging partnerships with key private sector companies, the organisation has enthusiastically grasped e-business as an all-embracing phenomenon and implemented a new strategy that transformed the way it did business. Drawing on longitudinal data gathered over a period of four years, this article explores the processes of strategic and organisational transformation engendered by e-business implementation in this organisation and discusses the successful elements, as well as some of the challenges to its change efforts.
Introduction

A common theme within the management literature in recent years has been the take-up of private sector management strategies and practices by public sector organisations, designed to increase efficiency, performance, and cost economy in the activities they perform. In the United Kingdom more specifically, in the context of significant changes in their operating environments and pressures to increase efficiency and accountability,
public sector organisations have been urged to experiment with new organising ideas, structures, and processes and transform the way they do business, by taking the opportunities and meeting the challenges that e-technologies and e-ways of working presented. As a result, public sector organisations have embraced e-business and have made innovative uses of Internet technologies to invent new business models or to enhance existing practices. E-business has been seen as a way of transforming these bureaucratic, centralised,
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
and reactive organisations and their capabilities. Whilst the Internet created new commercial opportunities for public sector organisations, e-business was thus about exploiting those opportunities. This article draws on findings from a study conducted over a period of four years in a commercialised public sector organisation to reveal the processes of strategic and organisational transformation engendered by e-business during the implementation of a complex structural and cultural change programme aimed at reshaping this organisation and rethinking how it provided value to its customers. The discussion will examine the dilemmas and constraints identified by managers in the interpretation of the e-business strategy concept and why its implementation in practice can be challenging. Whilst the Internet offers a technological solution, the findings of this case study suggest that the successful implementation of a wider e-business strategy depends on managing simultaneously a number of projects which cross organisational boundaries and link organisational and technological factors. The article is divided into five main sections. The first section reviews recent e-business research and highlights that studies treating the concept of e-business as an all-embracing phenomenon and analysing in depth its implications are relatively rare. The second section of the article describes the methodology used in the study. Information on the contextual developments within the case study organisation is followed by a discussion of the research findings. The case study is used as a background for discussing some of the challenges and constraints that this commercialising organisation faces in implementing a wider e-business strategy. Finally, the concluding discussion highlights key lessons learnt and implications for practitioners.
E-Business and Organisational Change: Revolution or E-volution?

Increasing environmental pressures, global economic uncertainties, changes in public and
community expectations, and pressures to increase public accountability have provided the momentum for British public sector organisations to examine the effectiveness of their management structures, systems, and processes. The emphasis is on achieving efficiency, effectiveness, and economy in the activities performed by public organisations and on developing their ability to compete with private sector organisations. As a result, many public sector organisations now exist at the intersection of two different spheres—the public and the private (Kickert, 2001). They fit neither in the strictly public realm of state action nor in the strictly private realm of commercial relationships. They are expected to function like businesses—to be efficient, customer-driven, and client-oriented—yet they perform tasks that are inherently public. How to strengthen organisational capabilities in order to confront the competitive pressures successfully consequently becomes one of the biggest challenges for these organisations. Research into appropriate e-business models has grown significantly over the past few years, with authors taking both theoretically and empirically based approaches to the development of taxonomies of business models suitable for the new economy. A number of business models focused upon individual business transactions that use the Internet as a medium of exchange, including both business-to-business and business-to-consumer (e.g., Becker & Berkemeyer, 2005; Carlton, 2001; Clay, 2001; Clegg, Chu, Smithson, Henney, et al., 2005; Gao, 2005; Garicano, 2001; Scott Morton, 2001; Smith, 2001).
Interest has also tended to focus on new startups (Chaston, 2001; Clarke & Flaherty, 2004; Colombo, 2001) and on traditional organisations moving towards integrating electronic marketing and sales, purchasing, or customer service with their current businesses (e.g., Barnes, Hinton, & Mieczkowska, 2005; Bhaskar, 2005; Chen & Leteney, 2000; Hansen, 2000; Kotha, Rajgopal, & Rindova, 2001; Lee & Wang, 2001), or on the technology itself (Day & Schoemaker, 2000; Smith, 2001). Much research into the use of e-commerce, furthermore, has tended to focus on larger firms (Dutta & Segev,
1999), new business models for digital content (Barnes & Hunt, 2001; Boddy & Macbeth, 2000; Daniel, Wilson, & Myers, 2002; Mahadevan, 2000; Van der Wiele, Williams, Van Iwaarder, Wilson, & Dale, 2002), and the growth and development of dot-coms (Benoy, Cook, & Javalgi, 2001; Clay, 2001). Further studies revealed how the Internet has made possible types of business models previously very hard, or impossible, to implement (Mahadevan, 2000; Rayport, 1999; Timmers, 2000). A key concern is how strategy can be developed in hypercompetitive markets where the speed of change makes traditional forms of analysis impractical (Hoffman & Novak, 2000; Ordanini & Pol, 2001; Venkatraman, 2000). Eisenhardt (1998, 1999) and Yoffie and Cusumano (1999a, 1999b) address this issue with concepts derived from complex systems theory and emergent strategy making. E-business has also confirmed the significance of the resource-based view of the firm. Competencies, such as knowledge management and the ability to integrate complex sets of technological and business skills, are identified as success factors in a number of case studies (e.g., Feeny, 2001; Harris, Coles, Dickson, & McLoughlin, 1999; Kotha, 1998; Sauer, 1993). Various studies also addressed the risks of Internet ventures, barriers to implementation, success factors, and steps needed to manage technology-driven change (e.g., Barua, Konona, Whinston, & Yin, 2001; Eisenhardt, 1999; Kotha et al., 2001; Porter, 2001). In general, existing research has focused on issues related to the digital content provision (particularly online provision) and technologies or technology-based change generically, rather than focusing specifically on the organisational transformation required for successfully making the change to e-business. This topic is critically important because the majority of organisations are not startups, and many are not solely concerned with e-commerce and electronic markets. 
Rather, most organisations are traditional businesses which must grapple with finding a new architecture to meet the imperative of remaining competitive in an increasingly Internet-enhanced economy. Their concern is the adoption of e-business as an all-embracing phenomenon, in other words, transforming key business processes with Internet-related technologies (Deise, Nowikow, King, & Wright, 2000; Symonds, 1999). In particular, within the literature on public sector organisations, the issue of how organisations undergoing commercialisation can successfully make the transition from traditional approaches to e-business by taking advantage of e-technologies has received little attention. Building upon these arguments, this article explores the processes of strategic and organisational transformation engendered by e-business implementation through an in-depth case study of a commercialising British public sector organisation within the geographic information industry. In doing so, the study aims to provide fresh insights into the internal transformation processes which occurred in an organisation evolving from the classical, bureaucratic, and centralised "public sector model" towards a new organisational form through embracing e-business as a corporate philosophy. We also aim to see whether practice had overtaken theory and, if so, what enhancements to existing theory could be learned from the practitioners in the field. The definition of e-business adopted in this research takes the view that e-business represents:

the way in which organisations can gain value from the Internet technologies and encompasses not only e-commerce—selling and buying over a network, but also the way people within an organisation work together, the sharing of information and effective communication, the transactions and connections across a supply chain, between suppliers and distributors and consumers, as well as the relationships between individuals and institutions. (Symon, 2000, p. 6)

Thus, unlike other authors (e.g., Clegg et al., 2002; Poon & Swatman, 1999) who use the terms more or less interchangeably, this study seeks to distinguish between e-commerce and e-business.
Organisational Challenges of Implementing E-Business in the Public Services

E-business is not just the sharing of business information, maintaining business relationships, and conducting business transactions by means of Internet-based technology; it is also about the organisation's social environment and the relationships between people and technologies. Such a perspective highlights the complex nature of the notion and shows that e-business goes beyond e-commerce, in that, when implemented successfully, it transforms an organisation and its capabilities entirely.
Research Approach

The research reported here draws on findings from a four-year study of organisational change in Britain's National Mapping Agency, carried out between 2001 and 2004. The study adopted a longitudinal perspective on change, concerned with the holistic and processual character of organisational transformations over time (Pettigrew, 1985, 1990). An inductive research strategy was employed, which was both iterative and developmental. The research aim was not to confirm hypotheses, but to understand the process of e-strategy implementation and the perceptions of those involved and affected. What was important was the process by which the changes were introduced, the reaction of organisational members to those changes, and their perceptions of them.
Data Collection and Data Types

The time period for data collection was between 2001 and 2004. This research involved three main types of data: in-depth interviews, documentary data, and nonparticipant observation. Data were collected from the organisation at two time points during the research period to track the internal changes over time:

• Time 1 (2001/2002) corresponded to the official adoption of the new "business model".
• Time 2 (2003/2004) corresponded to the incorporation and consolidation of strategic changes, allowing for change to become partially "anchored" in new social structures and practices.
A total of 89 interviews were conducted with the main actors involved in the strategic initiative programmes and in the implementation of change, including the deputy chief executive, corporate strategists, members of top and middle management, other individual organisational members, and management consultants (Table 1). Most respondents had been with the organisation for a considerable period of time. This was supplemented with analysis of documentary evidence, including board minutes, strategy documents, reports of the steering committee and project teams, minutes of follow-up meetings, and business plans, as well as nonparticipant observation. During the field study, the researcher had access to the main participants, formal and informal meetings, existing minutes, and documentation highlighting some of the historical, processual, and contextual issues relevant to the e-business strategy implementation. In addition, the interviews emphasised both the individual and shared interpretations of key participants concerning actions, events, views, beliefs, aspirations, and motives. All interviews were tape-recorded.

Table 1. Number of respondents participating in interviews in Time 1 and Time 2

Position of Interviewee                                   Number in Time 1   Number in Time 2
CEO                                                               1                  1
Strategy Directors and Strategy Managers                          6                  5
Senior Managers                                                  14                  8
Senior Consultants                                                3                  2
Project Managers                                                 21                 10
Line Managers                                                     8                  6
Individual Organisational Members (Sales and Finance)             2                  2
Total                                                            55                 34
(Total interviews = 89)

The following section will introduce the case study organisation and present the results.
Research Site

Britain's National Mapping Agency (NMA) is a quasiautonomous agency within the geographic information industry with trading fund status. It employs approximately 1,850 staff, 1,350 of whom are based at the head office, whilst the rest—cartographic surveyors and territorial sales representatives—are dispersed among a network of 80 local offices around the country. NMA is recognised as a leading participant in the geographic information industry, within which it is forging partnerships with key private sector companies. Its principal activities are twofold: the maintenance of the National Topographic Database, by recording and storing measurements of new roads, houses, and so forth, and the creation of products from it, such as paper map series and digital datasets used in geographic information systems (GIS).

The organisation offered an excellent opportunity to study the organisational change required in transforming a former government agency into an e-business, culturally, commercially, and technically. Confronted by powerful pressures to improve organisational performance under the government modernisation agenda, NMA implemented a complex e-business strategy, combined with a huge investment in technology and new product development, which were designed to radically change the organisation's structure, management, and knowledge processes. To take advantage of the opportunities that e-technologies and e-ways of working presented, NMA charted an ambitious organisational transformation in which e-business was seen as a catalyst for change and e-strategy as a route map. This knowledge-intensive organisation is a particularly interesting example of how a public
organisation managed to overcome the constraints of its business context to fashion its own destiny. Unlike other public sector organisations operating in stable, incremental environments, NMA operates in conditions of low environmental stability, with frequent, rapid changes in geographic information systems technology. Due to the emergence of new digital technology, the geographic information industry is on the verge of rapid growth, particularly in the market for location-based services. The overlay of layers of data to create new views creates good commercial opportunities for NMA, such as in the correlation of geographic, commercial catchment area, and sociodemographic data for use in supermarket home delivery services. All of these trends make the geographic information industry very different from the more traditional consumer and industrial markets in which many other public organisations operate. With an increasingly challenging environment, pressures to develop e-business under the government's e-strategy for the public sector, and the emergence of new technologies, NMA is closer to the "relentlessly changing organisations" model of Brown and Eisenhardt (1998) in the high-velocity computer industry. It is these particular "high-tech" and "e-business" features of NMA that make this public sector organisation such an interesting case for management research in general, and for e-business change implementation in particular.
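The layer-overlay idea mentioned above can be sketched in a few lines. The store names, catchment rectangles, and household coordinates below are invented purely for illustration; a real GIS workflow would use proper polygon geometries and a dedicated spatial library rather than this toy point-in-rectangle test.

```python
# Toy overlay: count sociodemographic points (households) that fall inside
# each store's catchment area, modelled here as axis-aligned rectangles.
# All names and coordinates are hypothetical.

catchments = {
    "store_a": (0, 0, 10, 10),   # (min_x, min_y, max_x, max_y)
    "store_b": (5, 5, 15, 15),
}
households = [(1, 2), (6, 7), (12, 14), (3, 9)]

def overlay(catchments, points):
    """Overlay a point layer on an area layer and count points per area."""
    counts = {name: 0 for name in catchments}
    for x, y in points:
        for name, (x0, y0, x1, y1) in catchments.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return counts

print(overlay(catchments, households))  # {'store_a': 3, 'store_b': 2}
```

Correlating a third layer (e.g., commercial data per catchment) would follow the same pattern: each derived view is just another join keyed on location.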
Key Facts About NMA

• Modern data collection using the global positioning system (GPS) and location information.
• Turnover from operating activities of £110 million in 2004; most of the income comes from computerised geographic data, which is used extensively in both the private and public sectors, with around £136 billion of Britain's GDP underpinned by it.
• Core markets by 2004: public sector and utilities (32%); land and property (5%); consumer (12%); commercial markets, including mobile communications and wireless, large retailers, banking, finance and insurance, and transport and distribution (51%).
• Has operated as a trading fund since April 1999, providing a greater degree of commercial flexibility and increased responsibility for its business planning and finances. As a trading fund, the business has to make a profit but does not receive a subsidy from the taxpayer.
• A potential change in status towards a government-owned plc was proposed in 2002 and rejected because it did not deliver the necessary benefits to the organisation; it was decided that it was in the public interest for NMA to remain a trading fund. However, enhanced financial freedoms and flexibilities were granted by the government through a revised framework document in 2003.
Results

External Context

The E-Government Strategic Framework (April 2001) set out a series of guiding principles for public sector organisations, centred upon building services around citizens' choices, making government and its services more accessible electronically, and managing information and knowledge in more efficient ways to ensure easier online access and more effective use of all services. It set a series of e-government policies and targets for quasiautonomous agencies, stating as main objectives that 90% of low-value procurement transactions and 100% of document management should take place electronically by 2004, and that 100% of services should be available electronically by 2005. To meet these aims, public sector organisations must innovate within a common framework and manage their services as a business, focusing on cost effectiveness and funding mechanisms. In line with these targets, Britain's national mapping agency was granted trading fund powers by Parliament in 1999, giving it direct responsibility for its own finances and freedom to develop new initiatives. In effect, this meant that NMA
would remain obliged to serve the government by providing information across Britain whilst earning revenues in a commercial and increasingly competitive marketplace, reducing dependence on the taxpayer. Such a status provided the organisation with the opportunity to operate and compete commercially by earning commercial revenues for its geographical information in order to be self-funding, as well as to be more accountable for the efficiency and effectiveness of its operations.
Internal Context

The organisation had a long history of unsuccessful reorganisations and frequent changes of Chief Executive. Since 1993, NMA had launched three different reorganisation initiatives under different governments, all of them recognised by managers and staff as "spectacularly unsuccessful." Similar to other public sector entities undergoing public sector reform in the form of commercialisation, the efficiency and effectiveness of management processes were major challenges. NMA was a vertically integrated organisation, with an excessive degree of specialisation, rigid hierarchical divisions, and divided areas of activity with niches and boundaries that served no useful purpose. Underlying many of the problems of inefficiency and high costs was the monopoly situation, in which all costs could eventually be passed on to the market. The organisation was also a classic example of public service culture and organisational systems and processes. NMA's administrative structure, based on departments and well-defined job categories, had effectively prevented the organisation from operating in accordance with its own objectives and values. It was realised that, especially in the relationship between NMA's employees and its customers, too much importance was being given to resolving bureaucratic issues, to the detriment of customer service. In short, the internal structure worked against the integration of the main business processes.
Drivers for Change

NMA has expanded considerably in recent years into commercial and leisure markets, both in the
U.K. and internationally. The environment in which the organisation operated was characterised by emerging opportunities for the rapid expansion of the digital market and, in particular, location-based services offered via the Internet. That sat alongside its activities in the mature market for traditional paper-based mapping. The severe market competition, the threat of product substitution through the mass expansion of digital information, and the new status of trading fund acted as drivers for revenue maximisation and for placing greater emphasis on increasing the utilisation of geographical data. A key aspect of becoming a trading fund was the move towards performance targets and a culture of measuring and rewarding achievement linked to the business vision. This new "business model" meant that NMA had to strike the right balance between maintaining consistent and accurate geographical information for the whole of Great Britain and ensuring its operations were funded by earning income and generating profits from the licensing of data to both the public and private sectors. The organisation was also required to make an average return on the capital it employs—on average around £40 million—of at least 5.5% a year and to pay an annual dividend to the government based on each year's trading results. In response to this challenge, a series of organisational change initiatives began.
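The return-on-capital target just described is easy to make concrete. A minimal back-of-envelope sketch, using only the two figures quoted in the case (roughly £40 million of capital employed and a required return of at least 5.5% a year):

```python
# Illustrative arithmetic only; figures are taken from the case description.
capital_employed = 40_000_000   # average capital employed, GBP
required_rate = 0.055           # minimum annual return on capital

minimum_annual_return = capital_employed * required_rate
print(f"Implied minimum annual return: £{minimum_annual_return:,.0f}")  # £2,200,000
```

In other words, before paying any dividend, the trading fund had to clear roughly £2.2 million a year in returns on its capital base.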
The Change Agenda: From Bureaucracy to E-Business

With the advent of a new Chief Executive Officer recruited from the commercial sector in September 2000, considerable impetus for change was being evidenced. A reevaluation of the business from the customers' perspective was completed, and the need for the organisation to develop innovative products and services that could be delivered electronically was identified. The review undertaken by the executive team identified that the greatest hindrance to achieving NMA's corporate goals were its structures and work practices. Another major change that had to occur was a shift from the old culture of public monopoly to the culture of a market-oriented, customer-focused, and self-financing organisation. The nature and direction of the required cultural shift was a radical change for many employees, who had spent virtually all their careers in the relative security, comfort, and complacency of the old "public sector ethos" culture. The "cultural inertia" among the majority of employees, at all levels, was one of the major challenges to management in trying to achieve the transformation to the new system of efficient business operations.

Following the review, a new vision formed the foundation on which the organisation could be transformed into an e-business: NMA and its partners will be the content provider of choice for location-based information in the new information economy.

In October 2000, a cross-functional team from across NMA was established to develop the vision into a new strategy for the organisation. The team used the mission statement as a starting point for designing the structure and process of the change programme and translating it into a new e-strategy. Fundamentally, the e-strategy was about transforming the business commercially, technologically, and culturally, by implementing a new business model, combined with a multimillion-pound investment in technology and new product development. The main strategic priorities were:

• Commercial revenue—increase commercial revenue from existing assets
• Reduce costs and waste
• Invest in new brands, infrastructure, and capability
• Grow data products and help partners to develop their products
• Aggressive marketing for products
• Grow partners and enable them to add value to products
• Build the best data to enable "joined-up geography"

The key emphasis was on better knowledge management, focusing on the needs of businesses and individuals, and helping partners to create
radically new products underpinned by data maintained by NMA. At the heart of the new strategy was the idea of NMA working together with partners to become the content provider of choice for location-based information. This meant establishing commercial agreements with various partners in which NMA was providing the geographical data and the partners were developing the software required for customising this data in different ways and translating it into innovative products and services under NMA's brand. A series of objectives were established:

• Delivering excellence in all aspects of the business, employing e-business principles to exceed customer expectations
• Identifying, developing, and maintaining effective strategic partnerships
• Ensuring that the business strategy is clearly understood by staff, customers, partners, and the wider community
• Establishing NMA as the centre of excellence for innovation in location-based information
• Developing a business of progressive people with skills appropriate to an e-business
• Changing the internal culture
A range of projects was identified that would exploit e-business technology and approaches to improve performance, and an initial assessment of costs, benefits, and timescales was prepared. At the end of the four-week period, NMA Executive and Nonexecutive Directors approved the strategy. More detailed planning and evaluation of implementation costs and benefits followed as development work began.

A major feature of the e-strategy implementation was the intention to place the organisation at the forefront of the new information economy by embracing cutting-edge technology for supplying geographical data and enhancing the versatility of NMA data. This involved the implementation of a groundbreaking new concept of mapping and the development of new products, transforming the map-making process so that electronic data could be available to customers within 24 hours
of being surveyed. The key aim was to replace the traditional paper map products with large-scale electronic mapping, so that most of the products and services offered by the organisation could be delivered electronically; digital mapping now accounts for some 80% of the organisation's turnover. Amongst the most pioneering products were a digital database and online service named Master Map, featuring definitive digital data for the whole of Great Britain, and an online integrated product called Pre-Build™. The Master Map database provides intelligent and accessible data with the flexibility to link information across and between organisations, whilst Pre-Build™ offered highly detailed digital mapping pinpointing buildings and roads, tailored to the needs of utilities and telecommunication companies.

The successful implementation of the new strategy pivoted on the adoption of e-business as a corporate philosophy. As the Chief Executive remarked:

E-business is key to our future success, opening huge new opportunities for us and our partners. It requires new ways of working and both individual and team efforts right across the business. We have the best data to underpin the Master Map of Britain. Our e-initiatives will ensure we can create and supply that data when, where and how our customers want it—and at the same time e-enable ourselves. (Chief Executive, 2001)

Collecting, maintaining, and delivering geographical data was at the forefront of NMA's activity, and the implementation of the new e-business strategy acted as a driver for the transformation of its core business processes: data collection and management. The e-strategy was thus seen as the necessary framework for driving forward business change. As some senior managers argued:

The "E" strategy allows us to reassess how we manage the collection and delivery of our data to remain at the forefront of the industry, offering agility and flexibility to our customers.
(Director—Data Collection and Management, 2001)
Table 2. Projects under the e-strategy

Strategic Initiative           Corresponding Projects
Putting the customer first     Customer Web sites; new Web site for the organisation; Customer Relationship Management; On-line Service; Digital Mapping Establishment
Strategic alliances            Joined-up government; Alliance Extranet
The new mapping agency         New ways of working and Project Platinum; knowledge management; "Help yourself" (personalised online support for all employees); raising the return (a new financial model)
Developing the market          Developing the Digital National Geographic Database; e-brands; location-based standards; e-business channels; Market Development team; pricing and licensing
Enabling infrastructure        Enhancing the IT infrastructure; off-site 24/7 availability; Enterprise-Wide Software Suite (ESS)
The Digital Maps business cannot grow enough without the e-strategy. This work is critical because it will allow us to create and deliver more innovative products much faster to our customers. It will allow us to become far more efficient and cost effective across a wide range of activities. (Director—Digital Brands, 2001)

As the main supplier of geographical information in Britain, NMA needed not only to implement tangible customer benefits and to advance online services, but also to change the organisation's internal culture to a customer-led one. The prevailing view amongst over 90% of the senior executives interviewed was that e-business was not only about electronic service delivery, but also that "e" should become embedded in the employees' ways of working, in terms of approach and attitude:

For the future NMA, there is no business but e-business! We must become a more agile organisation able to respond to our customers' and partners' needs at "internet-speed." … The IT technology is the engine room of this change providing the infrastructure, information systems and user support in transforming the organisation. However, if
we are to succeed in achieving our strategic aims and becoming an e-business, it is essential that we develop new ways of working. We must improve our day-to-day business practices and ensure that we get the best out of people, knowledge, systems and facilities. (Chief Technology Officer, 2001)

The process of strategy formulation resulted in a series of 21 investment projects that were grouped into five strategic initiatives reflecting their interconnected nature (Table 2).

The putting the customer first initiative involved the adoption of e-business principles and technology to ensure that each customer and partner is managed at segment and individual levels. In the words of the Initiative Manager:

This initiative is critical because it will allow us to create and deliver more innovative products much faster to our customers. It will allow us to become far more efficient and cost effective across a wide range of activities. (Initiative Manager, 2001)

Customers were offered online access to a wide range of mapping data, product demonstrations to show the versatility of NMA data, and tailor-made Web gateways into the mapping agency, each fostering improved, more focused customer service. These gateways included facilities for the online ordering and delivery of existing data products at any time of the day or night, as well as requests for special mapping surveys. Each customer Web site held information specific to that customer or partner, as well as access to generic information and functionality available to wider groups. By December 2002, the implementation of a portal Web infrastructure environment had increased the efficiency of the personalisation and customisation provided to customers. In parallel, the NMA Web site was redeveloped to become a gateway for location-based services and information, and enterprisewide software was implemented to allow better customer relationship management. Users of the new Web site were able to access location-based information and services provided either directly by NMA or indirectly by partners, and account holders (customers and partners) could order digital and graphic mapping online.

The strategic alliances initiative involved the establishment of a strategic alliance to help drive the location-based information industry forward in a cohesive manner and to develop collaborative working between NMA and other parts of government with the help of e-business technology. It involved joined-up geography: using NMA data and its referencing framework to combine location-based information from other government bodies, for example, land ownership, local authorities, addresses, field parcels, and land use. It also marked a change in the way NMA made its datasets available to users, by enabling partners to commercialise this data, on behalf of NMA, in a more flexible way. The Initiative Manager described the overall project as follows:

This initiative ensures that our NMA has the right strategic relationships to help grow the geographic information industry.
Our organisation holds an important position in the location-based information industry but must work with key players in government and the private sector to realise the vision. (Initiative Manager, 2001)
The new mapping agency initiative focused on the people implications of the transformation of NMA into an e-business. The Initiative Manager stated:

Whilst each of the other four strategic initiatives delivers in its own way significant change to the business, this initiative is the one that touches every single person in NMA. These projects are all closely related to our people and the way we do things. The focus is on simple and more effective ways of working whilst focusing on the needs of the customer. (Initiative Manager, 2001)

A programme called "New Ways of Working" addressed issues associated with the existing organisational structure, people, culture, communication, and the way employees worked. The clear target was to make these elements more streamlined, adaptive, responsive, and customer-driven. This included changing existing working practices; implementing new performance management and reward systems focused on rewarding performance; changing the way information and knowledge were captured, coordinated, and made readily available within the organisation (knowledge management); and implementing new software that allowed employees to manage their personnel records online whilst also eliminating unnecessary paperwork, procedures, administration, and costly support functions.

The organisation needed new competencies and skills among existing managers, so that they could drive forward the change programme. Current management capabilities were characterised by excellent technical abilities but poor people management skills, especially in terms of managing change and risk. It was considered that, in order to change the culture of the organisation, leaders and management teams had to develop themselves in readiness to embrace the new challenges and opportunities facing NMA.
Therefore, a new programme entitled “Project Platinum” was designed to help leaders understand how they could influence the culture of the organisation, identify the competencies and skills required in leaders,
and set in place a programme which could develop the behaviours supporting those competencies. As part of this initiative, a set-piece event—NMA Experience—was organised in December 2001. The event was attended by all employees and was designed to explain to everyone the new vision and values of the organisation: an attempt to change the strong, supportive civil service culture, rich in custom and practice, to develop a new culture in line with the new direction of the organisation, and to encourage creative and innovative behaviours. The core new values were identified as: customer focused; quick; working together; able to take some risks; interested and excited by challenge; personally accountable; commercially oriented; and rewarded for results.

The developing the market initiative involved developing new partnerships and using new channels to reach new customers. In the past, NMA had focused on an established core of customers in a relatively mature market, with sectors such as central government, local government, and utilities. These customers increasingly expected better quality products and services in a market that was becoming more competitive. The organisation thus needed not only to develop the existing markets but also to find new markets and to work with partners to develop new applications that would provide benefits to end customers. This involved adopting a proactive brand strategy, with a focus on digital location-based information, and the launch of a new kind of geographic database: Master Map. Innovative in concept, Master Map was not a map in the traditional sense but a digital map framework through which customers could access the precise mapping data they needed. It offered a high level of flexibility and a complete reference system for Britain's geographical data.
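The "framework rather than map" idea, in which customers pull exactly the features they need instead of receiving a fixed map sheet, can be illustrated with a small sketch. The layer names, feature records, and query function below are hypothetical and do not reflect NMA's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    layer: str    # e.g. "buildings", "roads"
    bbox: tuple   # (min_x, min_y, max_x, max_y) in map units
    attrs: dict   # arbitrary attribute data

def query(features, layers, window):
    """Return features on the requested layers whose bounding boxes
    intersect the query window: just the data the customer asked for."""
    wx0, wy0, wx1, wy1 = window
    hits = []
    for f in features:
        x0, y0, x1, y1 = f.bbox
        if f.layer in layers and x0 <= wx1 and x1 >= wx0 and y0 <= wy1 and y1 >= wy0:
            hits.append(f)
    return hits

# Hypothetical feature store with two layers.
db = [
    Feature("buildings", (0, 0, 2, 2), {"id": "b1"}),
    Feature("roads", (1, 1, 8, 1.5), {"id": "r1"}),
    Feature("buildings", (20, 20, 22, 22), {"id": "b2"}),
]
print([f.attrs["id"] for f in query(db, {"buildings"}, (0, 0, 10, 10))])  # ['b1']
```

The design point is that the database, not a pre-rendered product, is the unit of delivery: the same feature store can serve a utilities customer (buildings and roads in one window) and a retailer (a different window, different layers) without producing a new "map."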
As the programme manager comments: "The initiative supports customers' evolving needs, develops existing markets and opens new opportunities to ensure that NMA is the content provider of choice" (Initiative Manager, 2001).

The enabling infrastructure initiative involved building a robust new infrastructure of systems to underpin the e-business. A considerable amount of infrastructure was put in place to support
the whole of the e-strategy, but particularly the Master Map idea. The Internet, extranet, and intranet applications were all upgraded and, in order to enhance customer benefit and operational efficiencies, the old systems were replaced with a single, integrated, enterprisewide software application platform. This integrated software package, called the Enterprise-Wide Software Suite (ESS), offered a single repository for all NMA's data and was implemented in association with technology partners. One of the programme managers expressed a commonly held view:

The Enabling Infrastructure provides the foundation for the success of the e-strategy. The application and technical environment that we build must provide the efficiencies that allow us to operate in an increasingly dynamic and delivery focused organisation. Information Technology (IT) provides the infrastructure, information systems and user support in transforming the organisation. Without a robust IT infrastructure none of the e-business activities will be sustainable. To a lesser or greater extent, we have all been frustrated when a system let us down, whether it be our own personal Internet Service Provider or a corporate system. As our business becomes increasingly dependent on IT systems, we have to avoid those frustrating (and potentially damaging) failures, and it is through the Enabling Infrastructure initiative that we aim to do this. (Programme Manager, Enabling Infrastructure, 2002)
Implementation Issues

Organisational Restructuring

In order to implement the e-business strategy successfully, the structure of the organisation was reviewed and reorganised in November 2001. As the CEO summed up the rationale for this strategic initiative:

The need for accurate, reliable locational information underpins so many of the new services coming on stream and in Britain, no one has better
locational information than us. … But to stay at the forefront of the geographical industry we must keep pace with the market. That's why we're not only investing in e-business initiatives but have already put in place a whole new organisational structure to make it a reality. (Chief Executive, 2001)

The realignment of internal group structures and board responsibilities resulted in:

• The establishment of two brands businesses, based around distinct customer groups: Digital Brands and Graphic Brands. The two brands businesses were focused on introducing new and innovative ways of working with commercial and government partners to meet the needs of different customer groups. Amongst such innovative approaches were the establishment of a joint-venture company with a commercial partner, with the aim of providing consistent and maintained points-of-interest data for the industry, and an estimated £35 million content deal with a mobile phone operator that would allow mobile users to access coloured maps featuring real-time displays of various locations.
• The creation of a Business Change group, charged with championing the transformation of the way the organisation did business and, in particular, managing the implementation of the e-business strategy.
Ownership of the Strategy

Ownership, accountability, and leadership were essential elements of the successful implementation of the NMA e-business strategy. As the change was intended to be transformational, affecting all aspects of the organisation and all levels within it, the entire programme was driven top-down, with the provision of a clear, sustained direction that was well resourced and coordinated. The instigators of the transformation were the leaders of the organisation, and the Strategy and Operating Board (including Executive and Non-executive Directors) had been heavily
involved in the e-business strategy and monitored its implementation. In addition, each strategic initiative had a director-level owner who was accountable to the board for delivering the benefits of that initiative. Full-time initiative managers, focused on coordinating activity within their initiatives, were appointed. At project level, business owners were identified to champion their projects within the implementation. Project owners were responsible for ensuring that their projects delivered the benefits anticipated in the strategy. Where IT projects were involved in the delivery, the business owners worked in close conjunction with IT Programme and Project Managers.
Risk Management, Change Control, and Financial Monitoring at Appropriate Levels

Mechanisms were implemented to ensure the adequate management of risks, costs, and change control during the implementation. The Operating Board received updates on implementation progress, including costs, on a fortnightly basis, with a more detailed review undertaken every six weeks through a business health check exercise. In addition, a weekly implementation steering group (chaired by the Director of Business Change and including the Initiative Owner Directors) monitored strategic-level risks, costs, and implementation progress. Significant changes to project contracts and business cases were approved by initiative owners and reported to the steering group. Furthermore, programme and project boards were established within each initiative to deal with the day-to-day detail of implementation monitoring, cost control, and minor change control.
Internal Communication

In order to engage all employees in the business transformation associated with the implementation of the e-business strategy, considerable effort was put into a coordinated programme of internal communication, which included:
• Face-to-face briefings with all staff in December 2000 and June 2001, including interactive and multimedia demonstrations of live and prototype systems
• Face-to-face briefings on a quarterly basis with all field staff from December 2000 onwards
• Fortnightly update briefings at Operating Board level, cascade-briefed by managers to all staff and reported on the intranet
• A dynamic intranet site, regularly updated with project aims and progress
• Internal branding of the implementation activity to help focus all staff on the business transformation
Further Restructuring

In May 2002, a new Human Resources Director, recruited from the commercial sector, was appointed to drive forward the people side of the change programme. At her initiative, in August 2002, a team of senior managers undertook an eight-week review exercise (entitled "Emerald City"), looking forward to identify the challenges and opportunities that the organisation was likely to face in three to five years' time and how the business could be driven forward. Following this review, in September 2002, the board announced a further restructuring of the organisation to help develop stronger teams and networks around the core processes (Figure 1) and to create a more flexible structure, with more flexible processes and communication patterns. This was accompanied by a review of staffing levels in different functional and subfunctional areas, as a consequence of which 300 people were released. A number of old divisions and functions were either eliminated or merged in order to obtain a leaner, flatter structure and increase the overall efficiency of the business. The proposed new structure involved merging the Digital and Graphic Brands businesses into a single Sales and Market Development Group and creating a separate Programmes and Products
Group, each headed by a Director. The Business Change Group activities were absorbed within other parts of the organisation. To give greater integration to corporate communications, the Corporate Communications Department was integrated into the Human Resources and Corporate Services Group led by the HR Director. Minimal changes were introduced in the other groups. As a result of these changes, the new structure, due to be implemented from November 2002, incorporated seven major groups (Figure 2): strategy; human resources and corporate services; finance; information systems, Web, research and innovation; data collection and management; programmes and products; and sales and market development.

The previous structure, created in November 2001 in relation to the e-strategy implementation, served its purpose of focusing the organisation of work on developing new products and markets and gave an impetus to particular projects. It allowed NMA to strengthen its digital offering and also gave the graphic side of the business the confidence and space to develop, and between them both business groups could identify notable successes. The new structure announced in September 2002 offered a much higher level of functional flexibility. It gave clarity to the end-to-end processes of the business, from collecting data, through the production processes, to the supply of data to partners and customers, whilst allowing the development of stronger cross-functional teams and networks around core processes. The number of hierarchical levels between top management and business group teams was substantially reduced and the seven groups were organised along business processes, giving considerable operational freedom to Group Managers. This freedom was balanced by a stronger planning process in the organisation, with group teams working with the strategy people on the overall planning.
The different cross-functional teams created meant increased horizontal communication between groups, as well as vertical communication between the top management team and group teams, facilitated by
Figure 1. Core business processes in NMA: Data Collection and Management; The National Geospatial Database; Production and technical product marketing; Sales and customer marketing. Supported by: Strategy and International & Government Relations; Finance and Procurement; Human Resources and Corporate Services; Information Systems, Web, Research and Innovation.

Figure 2. The new organisational structure: CEO and Management Team over seven groups: Strategy; Human Resources and Corporate Services; Finance; Information Systems, Web, Research and Innovation; Data Collection and Management; Programmes and Products; Sales and Market Development.
the use of modern information technology. The new organisation of work sought to combine the coordination strengths of a functionally arranged organisational structure with the flexibility in vertical communication offered by a process-based structure. The transition from the old structure to the new one was a lengthy process: the old structure was moved to the new one on a level-by-level basis. As employees in the positions made redundant under the voluntary redundancy scheme left the organisation, the remaining positions were rearranged in the new structure and staff were notified of their new positions. Transition structures were used to prop up the old structure and keep the day-to-day operations going. The implementation of the new structure was finalised in August 2003.

In parallel with these structural changes, in order to track and predict customer, market, and brand profitability, a Corporate Balanced Scorecard with six performance dimensions (financial performance, quality of service, flexibility, competitiveness, resource utilisation, and innovation) was introduced as a way of consolidating corporate performance measures and focusing them on core strategy components. The aim of introducing the balanced scorecard, in addition to other measurement tools such as customer satisfaction surveys, employee opinion surveys, and monthly business health checks, was to provide the management board with the means to monitor the progress of the business on an ongoing basis. In addition, the strategy team could liaise better with individual business planners (Business Group Directors) to coordinate business processes across teams and establish integrated performance measures. By December 2004, all 21 projects comprising the e-strategy were complete and their success had been benchmarked against their key deliverables. A new business strategy for 2005-2008, focused on how NMA would meet customer needs over the following four years, was created, building
on some of the successful aspects of the previous e-strategy: the creation of better data collection, maintenance, and management systems, as well as the development of new products and geographic solutions through partnerships with other public and private organisations. An event chart of the NMA "journey" is presented in Figure 3. An overall assessment of the change efforts driven by the e-business implementation between 2000 and 2004 suggests that the e-strategy implementation was a success. The progress made towards transforming the way the organisation worked was acknowledged by customers, partners, suppliers, and e-business experts, and employees noticed a marked improvement in their working environment. In addition to this feedback, the e-strategy was awarded five stars by the Government Office of the E-Envoy in 2004, a rating indicating that the NMA's strategy was seen as a successful plan of action whose key deliverables had largely been met, offering customers and staff new benefits. An operating profit of £9.4 million was forecast by April 2005, thanks to careful management of costs, growth in revenue, and rigorous prioritisation of investments.
LESSONS LEARNED

The digital revolution offered huge opportunities for the organisation to improve the services that it provided to its customers, enhance its interaction with partners, and revolutionise the way people worked: first, by redrawing the way in which those services were provided to capture the full benefits of technology and, second, by tailoring the services to the needs of individual citizens, customers, and businesses. Electronic service delivery enabled NMA to become far more responsive and flexible, as well as creating the opportunity to harvest significant efficiency benefits. The steps taken to meet the organisational challenges of implementing the e-business strategy across the organisation therefore provide a list of best practices for this particular situation (see Table 3). Several success factors relevant to e-business implementation emerged from this account. First, one major lesson from this experience was that a successful organisational transformation involving e-business implementation requires clear leadership from the top management team. The NMA e-strategy was created as the strategic blueprint to develop the business and firmly
Figure 3. Event chart, e-strategy implementation and follow-up (2000-2004). Key events: new CEO (August 2000); e-strategy (September 2000); change programme of 21 projects (January 2001); vision, values, and internal communications (December 2001); new ways of working, the NMA Experience, and the Enterprise-Wide Software Suite (ESS); new HR Director (May 2002); Emerald City review (August 2002); Project Platinum (December 2002); organisational change: reorganisation and new structure (April 2003); new strategy 2005-2008, focused on customers, people, data, and delivery (presented September 2004).
Table 3. Best practices for the implementation of the e-business strategy

1. Organisational challenge: Being "halfway" towards the private sector as a public sector executive agency moving towards a competitive commercial model.
Steps taken: Design a strategy that would capitalise on the unique position of NMA in the market, as the biggest geographical information provider in the U.K. with market-ready cutting-edge technology and high levels of internal capability; establish strategic alliances to help drive the location-based information industry forward in a cohesive manner; develop collaborative working between NMA and other parts of the government.
Data from NMA: "What we needed was a strategy that recognised our unique position. Something that enabled the organisation to have learning and, at the end of the experience, the learning would be resident in the organisation" (Corporate Strategist, 2001). "Who are our competitors? There is no single competitor for us. Because we are responsible for the national infrastructure of geographical information, there is no single other organisation who could and would want to replicate that, because it is hugely expensive to replicate that" (Senior Manager, 2002).

2. Organisational challenge: Need to develop and communicate a clear e-business vision and the concept behind the change to the organisation as a whole and to all stakeholders, especially end users (customers and partners).
Steps taken: Communicate the why, where, and how of the change to the people involved in the 21 projects and to all employees, using mobilising events such as the "NMA Experience"; build confidence in the projects, let the teams know where they stand and where they are headed, and encourage commitment to the cause through various internal communication mechanisms.
Data from NMA: "The top team and the board decided 'Right, this is our new strategy, this is where we are going, and this is our vision. We need to communicate that to everybody'" (Project Platinum Manager, 2002). "We put this plan together, represented by a railway journey. It's a single-track railway, a one-way journey, because we wouldn't be going back. It didn't have a starting-point because change has been going on forever, and we couldn't say if or when we would finish" (Chief Executive, 2001). "The essentials are around being absolutely clear about what it is you are trying to achieve and why — and I don't just mean the reorganisation. I mean: where do you want to be as a business?" (Senior Manager, 2003). "What this event [NMA Experience] really did was to move the senior management up in the eyes of the workforce. It helped them understand that there was skill and ability in our senior team, and they were all committing to doing things in a different way and start leading the business more effectively" (Project Manager, 2002).

3. Organisational challenge: Create a feeling of ownership for the e-strategy at project level and a commitment at all levels of the organisation.
Steps taken: Get the project leaders and initiative owners involved early in the e-strategy projects; use communication events to create belief and a feeling of ownership.
Data from NMA: "After about three months we had improved the relationship between the project teams and the steering group. The steering group felt more comfortable and more confident in the ability of the teams to deliver" (Director of Business Change, 2002).

4. Organisational challenge: Need for new, effective communication channels between the project teams, and between the Sales and Marketing teams and customers.
Steps taken: Project management teams were put in place to develop channels for, and manage, the communication between the different parties; weekly, fortnightly, and quarterly status meetings helped to keep all parties updated on each other's progress and on the progress of the e-strategy implementation as a whole.
Data from NMA: "Communication forms the grounding for all the organisational work I have done. The communication, consultation and involvement strategy is what will make it happen" (Programme Manager, 2003).
Table 3. (continued)

5. Organisational challenge: Manage different customer segments: retain traditional loyal customers (public sector and utilities) whilst developing emerging markets (telecoms, retail, and insurance) and commercial partnerships.
Steps taken: From products to relationships: shift from customer relationship management (CRM) focused on products or processes to a CRM philosophy focused on customer segments and the solutions they require; identify the most profitable customer groups; focus NMA's activity entirely on improving core business processes (data collection and management) and, through partnership agreements, involve commercial partners in translating this digital data into business solutions that meet end users' needs.
Data from NMA: "We need partners who would come and help us find/exploit certain niches in the market place. They are very well financed — e.g. Microsoft, big mobile operators — and by and large our strategy is and will be in the future to partner with some of these companies rather than compete with them. So partnership development is an important part of our strategy; previously we worked with small partners, now we are trying to work with bigger ones, to get them to develop solutions for us" (Senior Manager, 2002). "We place the customer rather than the product or process at the centre of the organisation to develop a stronger link between our different groups of customers and our data collection and management side of the business" (Business Improvement Manager, 2002).

6. Organisational challenge: Establish the right model of organisational design for the organisation.
Steps taken: Design and implement the right structure for an e-business, starting from NMA's core business processes; focus on core organisational competences: data collection, data management, exploitation of new technologies, and partner/customer management.
Data from NMA: "The normal practice would be: you take the external consultancy methodology cookbook and you apply things from it and other projects that you worked for, and you get a sense of the right way to do things … but this organisation is different and we wanted the structure to reflect what we do, not what consultants want us to do" (Board Member responsible for the structural change, 2003). "We did not want an off-the-shelf structure. Consultants came with a whole series of structural models, they came with the toolkit but didn't know how to fit it" (Strategist, 2003). "Don't expect everything to work without some adjustment and don't be afraid to review changes at an early stage to keep the reorganisation on track" (Business Group Director, 2003).

7. Organisational challenge: Top management commitment and the business case.
Steps taken: A CEO with experience who personally identified with the e-strategy project; a multifunctional leadership team drawn from sales, marketing, and IT; involvement of senior management from sales and marketing in the strategy implementation.
Data from NMA: "The fundamentals are you need to have a really clear leader of the structure, somebody who has a very clear vision about what the endgame looks like" (Senior Manager, 2002). "Sales and Marketing senior management buy-in was absolutely fundamental for the process to work. … There must be a vision, there must be a strategy and there must be support behind it" (Senior Change Manager, 2003).

8. Organisational challenge: Political resistance to change from various stakeholders, and managing their diverse expectations simultaneously.
Steps taken: Consistent communication allied to the use of facilitated workshops.
Data from NMA: "Consistency of message and purpose is one of the most important success factors in making change happen. Crystal-clear purpose, understood by all, including 'what it means for me,' should be made explicit" (NMA Employee, 2002).
Table 3. (continued)

9. Organisational challenge: External "customer acceptance" barriers and problems.
Steps taken: Communicate consistently with end users, at the beginning to build confidence and throughout to report progress and gather feedback; use customer relationship management to rebuild the confidence of customers who are confused about who they are actually buying products and services from.
Data from NMA: "The changes in our business processes — for example, procurement, reporting and customer relationship management — were so massive that the smallest error could have potentially serious consequences during business process switching, damaging customer relationships and delivery channels for a lengthy period of time. Communicating proposed changes to our customers, users and clients was essential. They demanded honest, consistent and up-to-date information, whether the news was good or bad" (Business Improvement Manager, 2002). "It was important to reassure our customers, users and clients all the time that we had a backup or disaster recovery plan should something go wrong during the transition" (Sales and Marketing Manager, 2002).

10. Organisational challenge: Aligning the objectives of the 21 projects with the overall objectives of the e-business strategy, and establishing timelines.
Steps taken: Ensure the people involved in the projects have a clear understanding of their roles and responsibilities; good project management to avoid duplication and conflicting business objectives.
Data from NMA: "This great big machine called project planning comes in and it takes up vast amounts of time and systems space, because of all these critical paths which chunter away. Actually this is too big right now, and frightening people" (Senior Manager, 2002).

11. Organisational challenge: Blending major change with major continuity.
Steps taken: Integrate the e-commerce strategy with the business strategy and with the core operations from the start.
Data from NMA: "We needed to be able to adapt to shifting circumstances in market conditions and identify potential changes in course sooner rather than later" (Senior Marketing Manager, 2003).

position it in the new information economy. The strategy was transforming the business at all levels: culturally, technically, and commercially. Fundamental to all of this activity was the requirement to create the environment in which NMA could successfully deliver its strategy for the business. This required the Senior Management Team to articulate the vision, purpose, goals, and values of the organisation and to communicate them clearly businesswide. It also required pulling together the leadership, communication, and engagement activity, whilst ensuring that all existing projects, initiatives, and everyday activities were aligned and integrated with it. This high-level strategic activity was also about leveraging what was already in place and, most importantly, maintaining consistency across cultural, behavioural, and leadership approaches.

Second, the case of Britain's National Mapping Agency shows that, in practice, the transition from government monopoly to commercial organisation whilst embracing e-business as a corporate philosophy can be extremely challenging to achieve. Although e-business enabled NMA to tap new customers and new revenues and opened up a space for the importation of private sector practices, a strong public interest in its activities remained. Yet the organisation was expected to operate commercially, cover its costs, and build up reserves through its own commercial-style operations. That situated the organisation at the intersection of two different spheres: the public and the private. Becoming an e-business, in this context, required collaborative working not only with commercial partners but also with different parts of the government. This "in-between" situation thus tested to the full the organisation's capability to lead and manage change, especially in terms of balancing its continuing strategic role as a national agency with the provision of high-quality services to its customers in a dynamic marketplace. One of the senior managers interviewed described the situation in the following words:
"We [as an organisation] want to behave as if we were situated in the commercial business sector, but we cannot escape our origins" (Senior Manager, 2004).

Third, the case findings demonstrate that successful organisational transformation involving e-business implementation relies on changing some fundamental business processes and attitudes. Close cooperation is needed between many different sets of people within the organisation, from middle managers to programmers and from technical architects to system users. This convergence of business-process, creative, and technical skills created in NMA a new dimension of teamwork, which in turn shifted the culture of the organisation from "knowledge is power" to "sharing knowledge is power." Technology can be an important enabler, but not a driver, of knowledge sharing.

Finally, one key challenge in NMA was how to blend major change with major continuity. In this respect, a major lesson learnt from this organisation's experience was that it is essential to integrate the e-commerce strategy with the business strategy and with the core operations from the start. This echoes the findings of Dutta and Segev (1999) that successes result from close partnerships between commercial and IT managers, and that companies making e-commerce central to their organisation do better than those that make it an afterthought.
CONCLUSION

This article aimed to reveal the processes of strategic and organisational transformation engendered by e-business during the implementation of a complex structural and cultural change programme aimed at reshaping a commercialised public sector organisation and rethinking how it provided value to its customers. The discussion examined the dilemmas and constraints identified by managers in the interpretation of the e-business strategy concept and why its implementation in practice could be challenging. Whilst the Internet offers a technological solution, the findings of this case
study suggest that the successful implementation of a wider e-business strategy depends on simultaneously managing a number of projects that cross organisational boundaries and on linking together organisational and technological factors. Whilst this reflective account may be unique, it provides pointers to other large organisations undertaking a similar e-transformation and reflects on the degree of organisational transformation required of traditional organisations in meeting this imperative and successfully making the change to e-business. In particular, the findings illustrate that an e-business strategy is not just about technology. It also embraces the business challenges that result from managing change in a fast-moving environment, as well as the important issues of people, organisation, culture, and communication, and how an organisation must create a process for delivering innovation. Further empirical field studies in other settings would enrich the concepts developed in this study and help produce a more definitive list of best practices. The case highlights, however, the complex nature of the notion of e-business in a public sector context and shows that, when implemented successfully, it can entirely transform these organisations and their capabilities.
REFERENCES

Barnes, D., Hinton, M., & Mieczkowska, S. (2005). Enhancing customer service operations in e-business: The emotional dimension. Journal of Electronic Commerce in Organizations, 3(2), 17-33.

Barnes, S., & Hunt, B. (2001). E-commerce and e-business: Business models for global success. Oxford: Butterworth-Heinemann.

Barua, A., Konana, P., Whinston, A., & Yin, F. (2001). Driving e-business excellence. Sloan Management Review, 43(1), 36-45.

Becker, S. A., & Berkemeyer, A. (2004). A case study on a security maturity assessment of a business-to-business electronic commerce organization. Journal of Electronic Commerce in Organizations, 2(4), 1-19.

Benoy, J., Cook, R., & Javalgi, R. (2001). Marketing on the Web: How executives feel, what businesses do. Business Horizons, 44(4), 32-40.

Bhaskar, R. (2004). A customer relationship management system to target customers at Cisco. Journal of Electronic Commerce in Organizations, 2(4), 1-19.

Boddy, D., & Macbeth, D. (2000). Prescriptions for managing change: A survey of their effects in projects to implement collaborative working between organisations. International Journal of Project Management, 18, 297-306.

Brown, S. L., & Eisenhardt, K. M. (1998). The art of continuous change: Linking complexity theory and time-paced evolution in relentlessly shifting organizations. Administrative Science Quarterly, 42, 1-34.

Carlton, D. W. (2001). Free riding and sales strategies for the Internet. The Journal of Industrial Economics, 49(4), 521-540.

Chaston, I. (2001). The Internet and e-commerce: An opportunity to examine organisational learning in progress in small manufacturing firms? International Small Business Journal, 19(2), 13-30.

Chen, S., & Leteney, F. (2000). Get real! Managing the next stage of Internet retail. European Management Journal, 18(5), 519-528.

Clarke, I., & Flaherty, T. B. (2004). Challenges of transforming a traditional brick-and-mortar store into a bricks-and-clicks model: A small business case study. Journal of Electronic Commerce in Organizations, 2(4), 74-87.

Clay, K. (2001). Prices and price dispersion on the Web: Evidence from the online book industry. The Journal of Industrial Economics, 49(4), 441-462.

Clegg, C. W., Chu, C., Smithson, S., Henney, A., et al. (2002). E-business prospects: Findings from an expert panel. London: Department of Trade and Industry.

Colombo, M. G. (2001). Technology-based entrepreneurs: Does Internet make a difference? Small Business Economics, 16(3), 177-190.

Daniel, E., Wilson, H., & Myers, A. (2002). Adoption of e-commerce by SMEs in the UK: Towards a stage model. International Small Business Journal, 20(3), 253-270.

Day, G., & Schoemaker, P. (2000). Avoiding the pitfalls of emerging technologies. California Management Review, 42(2), 8-33.

Deise, M. V., Nowikow, C., King, P., & Wright, A. (2000). Executive's guide to e-business: From tactics to strategy. New York: John Wiley & Sons.

Dutta, S., & Segev, A. (1999). Business transformation on the Internet. European Management Journal, 17(5), 466-476.

Eisenhardt, K. M. (1998). Time pacing: Competing in markets that won't stand still. Harvard Business Review, 76(2), 59-70.

Eisenhardt, K. M. (1999). Patching: Restitching business portfolios in dynamic markets. Harvard Business Review, 77(3), 72-83.

Feeny, D. (2001). Making business sense of the e-opportunity. Sloan Management Review, 42(2), 41-52.

Gao, J. (2005). E-commerce issues in Australian manufacturing: A newspaper medium perspective. Journal of Electronic Commerce in Organizations, 3(4), 20-41.

Garicano, L. (2001). The effects of business-to-business e-commerce on transaction costs. The Journal of Industrial Economics, 49(4), 463-486.

Hansen, M. (2000). Networked incubators: Hothouses of the new economy. Harvard Business Review, 78(5), 74-84.

Harris, L., Coles, A. M., Dickson, K., & McLoughlin, I. (1999). Building collaborative networks. In P. Jackson (Ed.), Virtual working: Social and organisational dynamics (pp. 33-46). London: Routledge.

Hoffman, D., & Novak, T. (2000). How to acquire customers on the Web. Harvard Business Review, 78(3), 179-183.

Kickert, W. (2001). Public management of hybrid organisations: Governance of quasi-autonomous executive agencies. International Public Management Journal, 4, 135-150.

Kotha, S. (1998). Competing on the Internet: The case of Amazon.com. European Management Journal, 16(2), 212-222.

Kotha, S., Rajgopal, S., & Rindova, V. (2001). Reputation building and performance: An empirical analysis of the top-50 pure Internet firms. European Management Journal, 19(6), 571-586.

Lee, H. L., & Wang, S. (2001). Winning the last mile of e-commerce. Sloan Management Review, 42(4), 54-62.

Mahadevan, B. (2000). Business models for Internet-based e-commerce: An anatomy. California Management Review, 42(4), 55-69.

Ordanini, A., & Pol, A. (2001). Infomediation and competitive advantage in B2B digital marketplaces. European Management Journal, 19(3), 276-285.

Pettigrew, A. M. (1985). The awakening giant: Continuity and change in ICI. Oxford: Basil Blackwell.

Pettigrew, A. M. (1990). Longitudinal field research on change: Theory and practice. Organization Science, 1, 267-292.

Poon, S., & Swatman, P. (1999). An exploratory study of small business Internet commerce issues. Information and Management, 35, 9-18.

Porter, M. (2001). Strategy and the Internet. Harvard Business Review, 79(2), 63-78.

Rayport, J. (1999). The truth about Internet business models. Strategy and Business, Third Quarter(16), 1-3.

Sauer, C. (1993). Why information systems fail: A case study approach. Henley-on-Thames: Alfred Waller.

Scott Morton, F. (2001). Internet car retailing. The Journal of Industrial Economics, 49(4), 501-520.

Smith, M. D. (2001). Consumer decision-making at an Internet shopbot: Brand still matters. The Journal of Industrial Economics, 49(4), 541-559.

Symon, C. (2000). The future at your fingertips. In S. Rock & J. Reeves (Eds.), Building a successful e-business (pp. 6-11). London: IBM Business Guide, Caspian Publishing.

Symonds, M. (1999, June 26). Business and the Internet: Survey. The Economist, pp. 1-44.

Timmers, P. (2000). Electronic commerce. Chichester, UK: John Wiley & Sons.

Van der Wiele, T., Williams, R., Van Iwaarder, J., Wilson, M., & Dale, B. (2002, September). The e-business research network: Summary of the results of the Dutch pilot survey. Paper presented at the British Academy of Management Conference.

Venkatraman, N. (2000). Five steps to a Dot.com strategy: How to find your footing on the Web. Sloan Management Review, 15(3), 15-29.

Yoffie, D., & Cusumano, M. (1999a). Judo strategy: The competitive dynamics of Internet time. Harvard Business Review, 77(1), 70-81.

Yoffie, D., & Cusumano, M. (1999b). Building a company on Internet time: Lessons from Netscape. California Management Review, 41(3), 8-28.
This work was previously published in International Journal of E-Business Research, Vol. 2, Issue 4, edited by I. Lee, pp. 39-60, copyright 2006 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter LXXIII

Public Administrators' Acceptance of the Practice of Digital Democracy: A Model Explaining the Utilization of Online Policy Forums in South Korea

Chan-Gon Kim, Rutgers University–Newark, USA
Marc Holzer, Rutgers University–Newark, USA
Abstract

The Internet provides a new digital opportunity for realizing democracy in public administration, and this study raises a central question: What factors determine public officials' acceptance of the practice of digital democracy on government Web sites? We focused on online policy forums among the many practices of digital democracy. To gauge public officials' behavioral intentions to use online policy forums on government Web sites, we examined individual and organizational factors, as well as system characteristics. We administered a survey questionnaire to Korean public officials and analyzed a total of 895 responses. Path analysis indicates that three causal variables are important in predicting public officials' intentions to use online policy forums: perceived usefulness, attitudes toward citizen participation, and information quality. We discuss the implications of this study for the practice and theory of digital democracy.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.

Introduction

Today the Internet is changing the operation of governments. A large number of citizens can access a large volume of information simultaneously and conduct online transactions with government agencies 24 hours a day, 7 days a week. In addition, citizens can register their opinions on government Web sites through online discussions and online polls anywhere and anytime. Thus the concept of digital democracy is emerging. New electronic means have the potential to increase citizen participation in government and to ensure that citizens' preferences are reflected in the policy-making process.

Despite the fact that digital democracy is possible in public agencies, there are wide variations in adopting and implementing practices of digital democracy among government agencies at the federal, state, and local levels. Decisions at the organizational level do not necessarily bring changes in the attitudes and behaviors of individual public administrators. In other words, policy adoption is different from program implementation, and the factors affecting these two are also different (de Lancer & Holzer, 2001). Successful innovation implementation is determined by human factors, or end users' acceptance of the innovation (Nedovic-Budic & Godschalk, 1996). Organizational members can reject, or not fully utilize, an innovation (Leonard-Barton & Deschamps, 1988). The literature on implementation has indicated that street-level bureaucrats have considerable resources with which to influence policy outcomes (Hill, 2003; Lipsky, 1980). While some research has examined the adoption of e-government at the organizational level (Ho & Ni, 2003; Ho, 2002; Moon, 2002; Weare, Musso, & Hale, 1999), little research has been done at the micro level with regard to the attitudes and behaviors of public administrators toward digital democracy.

This study examines why and how public administrators accept the practice of digital democracy on government Web sites when they make and implement public policies. Since public administrators offer and maintain government Web sites, supplying the space for digital democracy on a government Web site and utilizing it is a prerequisite for such democracy in public administration. The major research question of this study is: What factors determine public administrators' acceptance of the practice of digital democracy on government Web sites? More specifically, this study focuses on a single practice of digital democracy and aims to examine the impact of individual, organizational, and system characteristics on administrators' intentions to use online policy forums on government Web sites.
Digital Democracy in Public Administration

Citizens' involvement in public affairs through the Internet has brought about the use of several essentially similar terms, such as "digital democracy," "electronic democracy," "e-democracy," "virtual democracy," "teledemocracy," and "cyberdemocracy." This study uses the term "digital democracy" to describe the use of government Web sites for citizens' participation in public affairs. The main characteristic of the new information and communication technology (ICT) is digital data transfer (Hague & Loader, 1999), and "digital democracy" is defined as "a collection of attempts to practice democracy without the limits of time, space, and other physical conditions, using ICT or computer-mediated communication instead, as an addition, not a replacement for, traditional analogue political practices" (Hacker & van Dijk, 2000, p. 1).

Online citizen participation can enrich democratic processes and build public trust by enabling public agencies to receive broader and more diverse opinions from citizens than those available through traditional means of off-line participation (Holzer, Melitski, Rho, & Schwester, 2004). Through online discussions, members of the public can learn from each other, and public administrators can become better informed, sometimes through the experience and hidden expertise of the public (Coleman & Gotze, 2001). Several scholars have suggested typologies of digital democracy (Kakabadse, Kakabadse, & Kouzmin, 2003; Norris, 2005; Tsagarousianou, 1999). With regard to the current state of digital democracy, information disclosure on government Web sites is full-fledged, and many agencies are receiving feedback on policy issues from citizens through the Internet. However, online discussion is just emerging, and decision making through government Web sites, such as electronic referenda, is still relatively infrequent in government (Norris & Moon, 2005; Kim, 2004).

Online Policy Forums

An "online policy forum," or online discussion forum, is a place on a government Web site where public officials or citizens can post discussion topics on policy issues and exchange their views on those topics over a period of time. Online policy forums can take several formats. Public officials can participate in forums as discussants, or citizens can discuss issues among themselves without officials' participation. Discussion topics can be offered by the government or registered online by citizens. There are also two types: issue-based forums and policy-based forums. At an early stage of policy making, issue-based forums are organized to collect ideas and opinions. At a later stage, policy-based forums are offered to solicit responses to a draft policy (OECD, 2003).

Although online policy forums are still emerging and not yet universally utilized in public agencies, there are several exemplary cases of online discussion forums around the world (Holzer et al., 2004; OECD, 2003; Coleman & Gotze, 2001). At the central government level, the National Dialogue on Public Involvement was implemented by the Environmental Protection Agency (EPA) in 2001 to offer online discussions between public administrators and citizens (www.network-democracy.org/epa-pip), drawing 1,261 message postings in 10 days (Beierle, 2002; Holzer et al., 2004). The Hansard Society e-democracy program has been piloting a series of online discussion forums (www.tellparliament.net) for the UK Parliament since 1998 (Coleman, 2004). At the state level, Minnesota offered an online policy forum, Issue Talk (http://issuetalk.state.mn.us), from January 7 to 18, 2002, inviting more than 600 comments and ideas about the state budget shortfall (Minnesota Planning, 2002). Cases at the local government level include online forums for the city of Kalix, Sweden (www.kalix.se), for North Jutland County in Denmark (www.nordpol.dk), and for Bologna, Italy (www.comune.bologna.it) (OECD, 2003; Coleman & Gotze, 2001).
Conceptual Framework

Since a government Web site can be viewed as an information system (IS) for public administrators, theories of information system use can be applied to explain and predict public administrators' use of such sites. Kling and Jewett (1994) classify theories of information systems use into two broad types: rational systems models and natural systems models. The rational systems model views organizations as instruments designed to pursue specific goals and considers the efficiency of achieving those goals the most important value. By contrast, the natural systems model assumes that organizational members share a common interest in the survival of the organization and that organizational behavior is oriented toward the pursuit of this end. In this model, organizations resemble natural systems that are like living organisms (Kling & Jewett, 1994; Scott, 1992). We conducted a comprehensive literature review, and an updated classification is presented in Table 1.

The prevailing theory of information systems use is based on the rational systems model, and the Technology Acceptance Model (Davis, 1989) has been the dominant model of an individual's technology acceptance (Adams, Nelson, & Todd, 1992; Mathieson, Peacock, & Chin, 2001). Based on previous research on individuals' technology acceptance and utilization of information systems, this study proposes a model to explain public administrators' acceptance of online policy forums on government Web sites when public agencies introduce such forums in their organizations.

Since the characteristics of the Internet differ from those of other information technologies, these differences have been considered in formulating the conceptual framework of this study. Previous information technology innovations were mainly used by organizational members without interaction with citizens, but using a government Web site for purposes of digital democracy requires interaction between public administrators and citizens. Thus, when studying online dialogue, the factors affecting users' attitudes and behaviors should be considered in contexts different from those utilized in studying other information technologies.
The basic constructs of this study are "perceived usefulness" and "behavioral intention," which are derived from the Technology Acceptance Model (Davis, 1989) and the Theory of Planned Behavior (Ajzen, 1985, 1991). The dependent variable in this study is a public administrator's behavioral intention to use online policy forums on a government Web site. A public administrator's behavioral intention to use such forums is considered a predictor of actual use of online policy forums. The public administrator's perceived usefulness of online policy forums is an intervening variable that mediates the relationship between the independent variables and behavioral intention. We assume that the independent variables affect the dependent variable directly or indirectly through perceived usefulness.

Table 1. Theories of information systems use

Theories | Research | Variables affecting IS use

1. Rational Systems Explanations
A. IS Implementation Research | Kwon & Zmud (1987) | individual, structural, technological, task-related, environmental factors
 | Lucas, Ginzberg, & Schultz (1990) | user acceptance, system characteristics, management support, user knowledge of system purpose and use, organizational support, etc.
B. IS Success Model | DeLone & McLean (1992, 2003) | information quality, system quality, service quality
C. Application of the Theory of Planned Behavior | Ajzen (1985, 1991) | attitude, subjective norms, perceived behavioral control
D. Technology Acceptance Model | Davis (1989) | attitude, perceived usefulness, perceived ease of use
E. Application of Innovation Diffusion Theory | Rogers (1995) | relative advantage, compatibility, complexity, trialability, observability

2. Natural Systems Explanations
A. Cultural/Political Perspectives | Kling (1980), Kling and Jewett (1994) | power politics, coalition building, forwarding agendas
 | Walsham (1993) | interpretive approach, organizations as cultures and political systems
B. Sociotechnical Systems Theory | Kraemer, Dutton, & Northrop (1981); Danziger, Dutton, Kling, & Kraemer (1982) | political and socio-economic technological environment; political/administrative system attributes; computer package (hardware, software, procedures, management policies); orientation of computing
 | Danziger & Kraemer (1986) | organizational environment, computer package, user characteristics
C. Theory of Technology Enactment | Fountain (2001) | objective information technology, organizational forms, institutional arrangements, enacted technology

A survey of the literature on variables that affect individuals' attitudes toward, and use of, information technology generated a substantial number of factors (Anandarajan, Simmers, & Igbaria, 2000; Davis, 1989; DeLone & McLean, 1992, 2003; Finlay & Finlay, 1996; Igbaria, Guimaraes, & Davis, 1995; Lucas, 1978; Ranerup, 1999; Thong & Yap, 1995). Among them, nine factors are considered important in predicting a public official's perceived usefulness of, and behavioral intention to use, online policy forums. These independent variables can be divided into three broad categories: (1) individual factors, (2) organizational factors, and (3) system characteristics. Individual factors include Internet attitudes, attitudes toward citizen participation, and knowledge about digital democracy. Organizational factors refer to supervisor support, IS department support, and an innovation-supportive organizational culture. System characteristics include the ease of use of online policy forums, information quality, and perceived risk in online discussion. Taking account of all relationships between these factors, the conceptual framework of this study is presented in Figure 1.

Figure 1. A model of public administrators' acceptance of online policy forums on government Web sites. [The model links individual factors (Internet attitudes, attitudes toward citizen participation, knowledge about digital democracy), organizational factors (supervisor support, IS department support, innovation-supportive organizational culture), and system characteristics (ease of use, information quality, perceived risk in online discussion) to perceived usefulness, which in turn leads to behavioral intention to use and then to actual use.]

According to empirical research, there is a close relationship between intention and behavior (Davis, 1989; Davis, Bagozzi, & Warshaw, 1989; Sheppard, Hartwick, & Warshaw, 1988). This study focuses on intentions and relevant factors but does not measure actual use of online policy forums. However, behavioral intention can be used as a surrogate for predicting actual use of online policy forums.
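To illustrate, not reproduce, the path-analytic logic of this framework, a mediated model of this kind can be estimated as a pair of least-squares regressions: one for the mediator (perceived usefulness) and one for the outcome (behavioral intention). The data, coefficient values, and variable names below are entirely hypothetical, not the study's results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 895  # sample size matching the number of survey responses

# Hypothetical standardized predictors, e.g., attitudes toward citizen
# participation and information quality built from Likert composites.
X = rng.normal(size=(n, 2))

# Simulated mediation: predictors -> perceived usefulness -> intention.
usefulness = 0.4 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=n)
intention = 0.5 * usefulness + 0.2 * X[:, 0] + rng.normal(scale=0.5, size=n)

def ols_coefs(y, predictors):
    """Least-squares path coefficients (no intercept; variables are
    generated here with zero mean)."""
    coefs, *_ = np.linalg.lstsq(predictors, y, rcond=None)
    return coefs

# Path analysis as two regressions:
# (1) mediator regressed on the exogenous predictors,
# (2) outcome regressed on the mediator plus the predictors.
a_paths = ols_coefs(usefulness, X)
b_paths = ols_coefs(intention, np.column_stack([usefulness, X]))

# Indirect effect of predictor 1 on intention via perceived usefulness.
indirect = a_paths[0] * b_paths[0]
print(a_paths, b_paths, indirect)
```

With a sample of this size, the recovered path coefficients sit close to the simulated values, which is the sense in which a path analysis decomposes direct and indirect (mediated) effects.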
Individual Factors

Previous research found that favorable Internet attitudes are associated with frequent use of the Internet (Anandarajan et al., 2000; Spacey, Goulding, & Murray, 2004). Thus we expect that administrators' Internet attitudes will influence both the perceived usefulness of online policy forums and behavioral intentions to use them.

• Hypothesis 1-1. A public administrator's favorable Internet attitudes will positively affect his/her perceived usefulness of online policy forums.
• Hypothesis 1-2. A public administrator's favorable Internet attitudes will positively affect his/her behavioral intention to use online policy forums.
To implement digital democracy, public administrators must interact with citizens and allow citizen participation in policy making and implementation through the government Web site. Wang (2001) found that both managers' and public administrators' willingness to be accountable was related to increased citizen participation. Thus, public administrators' favorable attitudes toward citizen participation are a prerequisite to the use of practices of digital democracy.

• Hypothesis 2-1. A public administrator's favorable attitudes toward citizen participation will positively affect his/her perceived usefulness of online policy forums.
• Hypothesis 2-2. A public administrator's favorable attitudes toward citizen participation will positively affect his/her behavioral intention to use online policy forums.

Research on the use of MIS has shown that the education, training, experience, and skills of individuals using a technology are associated with perceived usefulness and actual use of information systems (Agarwal & Prasad, 1999; Igbaria, 1990, 1993; Igbaria et al., 1995; Igbaria, Parasuraman, & Baroudi, 1996; Nelson & Cheney, 1987). Thong and Yap (1995) found that businesses with CEOs who are more knowledgeable about information technology (IT) are more likely to adopt IT, and other research found that knowledge about the Internet affects individuals' use of the Internet (Finlay & Finlay, 1996). If public administrators are knowledgeable about digital democracy, they will understand its importance, perceive the usefulness of practices of digital democracy as high, and use online policy forums frequently.

• Hypothesis 3-1. A public administrator's high level of knowledge about digital democracy will positively affect his/her perceived usefulness of online policy forums.
• Hypothesis 3-2. A public administrator's high level of knowledge about digital democracy will positively affect his/her behavioral intention to use online policy forums.
Organizational Factors

In MIS research, organizational support, such as manager support and information center (IS department) support, was found to affect system usage (Anandarajan et al., 2000; DeLone & McLean, 2003; Igbaria et al., 1995; Igbaria et al., 1996; Igbaria, 1990, 1993; Lucas, 1978; Thompson, Higgins, & Howell, 1991). Supervisors may encourage or discourage subordinates' use of online policy forums in their work. In addition, an IS department that is in charge of managing the government Web site may facilitate implementing practices of digital democracy in agencies.

• Hypothesis 4-1. Supervisor support for using online policy forums will positively affect a public administrator's perceived usefulness of online policy forums.
• Hypothesis 4-2. Supervisor support for using online policy forums will positively affect a public administrator's behavioral intention to use online policy forums.
• Hypothesis 5-1. IS department support for using online policy forums will positively affect a public administrator's perceived usefulness of online policy forums.
• Hypothesis 5-2. IS department support for using online policy forums will positively affect a public administrator's behavioral intention to use online policy forums.
As long as a certain culture exists within an organizational unit, the activities of organizational members can be influenced by that culture. According to Subramaniam and Ashkanasy (2001), innovation takes place when organizational members are opportunistic, not constrained by many rules, and willing to take risks and experiment with new ideas. Previous research also shows that businesses with more innovative CEOs are more likely to adopt IT (Thong & Yap, 1995) and that top managers' risk-taking propensity is positively associated with IT innovativeness (Moon & Bretschneider, 2002). Since online policy forums are new innovations, public administrators who work in an innovation-supportive organizational culture may be more willing to try them.

• Hypothesis 6-1. An innovation-supportive organizational culture will positively affect a public administrator's perceived usefulness of online policy forums.
• Hypothesis 6-2. An innovation-supportive organizational culture will positively affect a public administrator's behavioral intention to use online policy forums.
System Characteristics

The influence of perceived ease of use on perceived usefulness and behavioral intention is derived from the Technology Acceptance Model (Davis, 1989). Public administrators have several means of gathering information from citizens in the decision-making process, such as face-to-face meetings, telephone conversations, or government Web sites. Thus the perceived ease of use of online policy forums is directly related to their use.

• Hypothesis 7-1. A public administrator's positive perception of the ease of use of online policy forums will positively affect his/her perceived usefulness of online policy forums.
• Hypothesis 7-2. A public administrator's positive perception of the ease of use of online policy forums will positively affect his/her behavioral intention to use online policy forums.
Previous research found that the quality of information has a significant influence on the perceived usefulness of an information system (Klobas, 1995; Kraemer, Danziger, Dunkle, & King, 1993; Lin & Lu, 2000). We can view the government Web site as an information system that produces information for public administrators when citizens register their opinions on online policy forums. Public administrators judge whether the information provided by online citizens is reliable and valid or whether it is of uncertain quality. If the quality of online information is high, public administrators may perceive the usefulness of online policy forums positively and continue to use those forums. Therefore, the following hypotheses are formulated:

• Hypothesis 8-1. A public administrator's perception of high information quality of online policy forums will positively affect his/her perceived usefulness of online policy forums.
• Hypothesis 8-2. A public administrator's perception of high information quality of online policy forums will positively affect his/her behavioral intention to use online policy forums.
Online relationships are established between Internet users when they engage in electronic conversation through a chat room or a discussion forum. The lack of face-to-face contact leads to greater anonymity in online interactions, and online citizens might behave with less good will (Friedman, Kahn, & Howe, 2000). If public administrators perceive a high risk in using online discussion forums, they might not post on those forums.

• Hypothesis 9-1. A public administrator's perception of high risk in online discussions will negatively affect his/her perceived usefulness of online policy forums.
• Hypothesis 9-2. A public administrator's perception of high risk in online discussions will negatively affect his/her behavioral intention to use online policy forums.
The Impact of Perceived Usefulness on Behavioral Intention

The importance of perceived usefulness as a determinant of the intention to use an information system has been stressed by the Technology Acceptance Model (Davis, 1989). Perceived usefulness influences user acceptance of an information system because of the value of the outcomes of using the system. Public administrators can utilize online information submitted by citizens through online discussion on the government Web site. If public administrators perceive the usefulness of online policy forums as positive, they will use online policy forums more frequently.

• Hypothesis 10. A public administrator's positive perception of the usefulness of online policy forums will positively affect his/her behavioral intention to use online policy forums.
Research Methodology

Sample and Data Collection

The unit of analysis in this study is the individual public administrator, and the population is defined as public administrators who work in South Korean public agencies that have adopted online policy forums and who make and implement public policies in their agencies. Our focus is on individual public administrators' use of online policy forums in South Korea, which leads the world in broadband penetration (Organization for Economic Cooperation and Development, 2001) and ranks second in Internet usage rate among its population (International Telecommunication Union, 2004). South Korea ranked fifth on the e-government readiness index and sixth on the e-participation index constructed by the United Nations (2004). South Korea has been vigorously promoting e-government and digital democracy, with many public agencies adopting online policy forums: as of January 2005, 17 of the 18 central government ministries and 12 of the 16 upper-level local governments offered online policy forums. Thus South Korea is a particularly good case for the study of digital democracy.

This study mixed purposive sampling and cluster sampling. To select appropriate public agencies, we used several criteria: the frequency of online policy forums and the number of message postings per forum, including the highest and average number of postings per forum. A total of 13 public agencies were then selected: four central government ministries, two upper-level local governments, and seven lower-level local governments.
Among the 18 ministries of the central government of South Korea, four were relatively active, as shown in Table 2, in terms of the frequency of online forums and the number of message postings, and these were selected for the survey: the Ministry of Information and Communication, the Ministry of Government Administration and Home Affairs, the Ministry of Unification, and the Ministry of Health and Welfare. Among them, the Ministry of Unification's "Online Public Hearing" (www.unikorea.go.kr) was the most active, offering six online forums in 2004 with 731 participants on average per forum. Discussion topics were offered by the government or registered by citizens on government Web sites. Most online forums lasted 15 to 30 days, and any citizen or official could post messages.

We applied the same methodology to selecting local governments. Among the 16 upper-level local governments, the Seoul Metropolitan Government and Gyeonggi Province were actively promoting online policy forums and were selected for the survey. The Seoul Metropolitan Government's "Cyber Policy Forum" (http://forum.seoul.go.kr) provided discussion topics every month in 2004, with an average of 120 participants per forum. Gyeonggi Province offered five online policy forums in 2004, with 82 participants on average per forum (http://www.gg.go.kr/0502new/news/join/dojung/dojung01/index.html). Among the 232 lower-level local governments, seven district offices were deemed appropriate for this study: Gangnam, Gangdong, Guro, Jungnang, Seongbuk, Yangcheon, and Gangseo.

We then selected several divisions at each public agency. Selected divisions were those that had posted discussion topics on their agency's Web site or that were related to discussion topics registered by citizens. A total of 34 divisions were selected in the central government, 29 in upper-level local governments, and 31 in lower-level local governments. We mailed questionnaires to a manager at each agency in central and local governments between February 11 and 15, 2005. These managers distributed the questionnaires to all public administrators who make and implement public policies in their divisions, which had posted discussion topics or had topics registered by citizens on government Web sites during the past two years.
Questionnaires were distributed to 300 officials at the central government, 320 at upper-level local governments, and 390 at lower-level local governments. Thus the total sample selected for this study was 1,010, as presented in Table 3.
Table 2. Statistics of online policy forums for 18 ministries in South Korea, 2004

Ministry | Selection of discussion topics | Forums with 21-50 postings | Forums with more than 50 postings | Highest number of postings per forum
Unification | Offered by government | 0 | 6 | 1,257 (average 731)
Information and Communication | Offered by government | 0 | 2 | 99 (average 96)
Government Administration and Home Affairs | Offered by government | 0 | 2 | 1,281 threads (average more than 817)
Health and Welfare | Offered by government | 0 | 2 | 167 (average 120)
Gender Equality and Family | Offered by government | 3 | 2 | 110 (average 60)
Education and Human Resources Development | Offered by government | 2 | 1 | 106
Construction and Transportation | Offered by government | 0 | 1 | 258
Agriculture and Forestry | Offered by government | 0 | 1 | 252
Finance and Economy | Offered by government | 0 | 1 | 85
Science and Technology | Offered by citizens | 1 | 0 | 11
Labor | Offered by government | 1 | 0 | 29
Culture and Tourism | Offered by government | 1 | 0 | 18
Commerce, Industry and Energy | Offered by both government and citizens | 1 | 0 | 39
Foreign Affairs and Trade | Offered by government | 1 | 0 | 36
Maritime Affairs and Fisheries | Offered by government | 1 | 0 | 27
Environment | Offered by government | 0 | 0 | 10
Justice | | 0 | 0 | 0
National Defense | | 0 | 0 | 0
Questionnaires were accompanied by a cover letter that stated that participation was voluntary and that responses were anonymous, encouraging honest responses for this study. We did not ask a respondent’s name, phone number, resident registration number, address, or date of birth. After respondents completed the questionnaires, they voluntarily returned them by mail. After excluding incomplete questionnaires, we collected 895 questionnaires, and the response rate was 88.6% across governments.
Measurement and the Survey Instrument

This study used a five-point Likert-type questionnaire to measure the attitudes and beliefs of public administrators. We combined several questions to measure each variable and created a composite measure by summing the individual items for each variable. Questionnaire items are summarized in the Appendix. Factor analysis and Cronbach alpha coefficients showed the unidimensionality
Table 3. Distribution and collection of the questionnaire

| Level of government | Agency | Number of divisions for survey | Distribution | Collection | Response rate |
|---|---|---|---|---|---|
| Central Government Ministries | Ministry of Information & Communication | 8 divisions | 70 | 62 | 88.6% |
| | Ministry of Government Administration & Home Affairs | 8 divisions | 90 | 86 | 95.6% |
| | Ministry of Unification | 9 divisions | 70 | 64 | 91.4% |
| | Ministry of Health & Welfare | 9 divisions | 70 | 65 | 92.9% |
| | Sub-total | 34 | 300 | 277 | 92.3% |
| Upper-level Local Governments | Seoul Metropolitan Government | 22 divisions | 240 | 230 | 95.8% |
| | Gyeonggi Province | 7 divisions | 80 | 69 | 86.3% |
| | Sub-total | 29 | 320 | 299 | 93.4% |
| Lower-level Local Governments | Gangnam District Office | 12 divisions | 120 | 80 | 66.7% |
| | Gangdong District Office | 3 divisions | 40 | 23 | 57.5% |
| | Jungnang District Office | 3 divisions | 55 | 50 | 90.9% |
| | Guro District Office | 3 divisions | 40 | 39 | 97.5% |
| | Seongbuk District Office | 5 divisions | 60 | 59 | 98.3% |
| | Yangchon District Office | 4 divisions | 60 | 54 | 90.0% |
| | Gangseo District Office | 1 division | 15 | 14 | 93.3% |
| | Sub-total | 31 | 390 | 319 | 81.8% |
| Total | | 94 | 1,010 | 895 | 88.6% |
and reliability of the instrument. The variables of this study are as follows.

• Negative Internet attitudes: A public administrator's negative attitudes toward the Internet.
• Attitudes toward citizen participation: A public administrator's willingness to allow citizen participation in the policy-making process.
• Knowledge about digital democracy: A public administrator's knowledge about digital democracy.
• Supervisor support: The degree to which a public administrator's supervisor emphasizes citizen input through online policy forums in policy processes and encourages employees to use online policy forums in their work.
• IS department support: The degree to which the IS department supports a public administrator's use of online policy forums in public agencies.
• Innovation-supportive organizational culture: The degree to which a public administrator works in an innovative organizational culture.
• Perceived ease of use: The degree to which a public administrator expects the use of online policy forums to be free of effort.
• Information quality: The degree to which online information submitted by citizens is valuable and of high quality.
• Perceived risk in online discussion: The degree to which a public administrator perceives risks when communicating with citizens online.
• Perceived usefulness: A public administrator's subjective perception that using online policy forums will increase his or her job performance.
• Behavioral intention: The degree to which a public administrator intends to use online policy forums in his or her work.
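The composite scoring and reliability check described above can be sketched in code. The snippet below is only a minimal illustration (not the authors' actual analysis, which used SPSS): composite scores are sums of individual Likert items, and Cronbach's alpha gauges the internal consistency of the items; the three-item, five-respondent data are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) array of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical five-point Likert responses for a three-item scale
scale = np.array([[4, 5, 4], [3, 3, 4], [2, 2, 1], [5, 4, 5], [3, 4, 3]])
composite = scale.sum(axis=1)   # composite measurement = sum of individual items
alpha = cronbach_alpha(scale)
```

By convention, an alpha of roughly 0.7 or higher is taken to indicate acceptable reliability for a composite scale.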
Results

Descriptive Statistics

Following factor analysis and reliability tests, appropriate constructs were established, and descriptive statistics for the 11 research variables are presented in Table 4. On average, public officials in South Korea seemed to believe that their organizational culture is moderately innovative (M = 3.68), and they tended to have moderately high behavioral intentions to use online policy forums in the near future (M = 3.55). However, they thought that they had only a little knowledge about digital democracy (M = 2.58), perceived low risk in online discussion (M = 2.83), and reported low supervisor support for using online policy forums (M = 2.89). When we examined the correlation matrix, perceived usefulness and behavioral intention had positive relationships with several variables.
None of the independent variables show correlations above 0.8 among themselves, which implies that multicollinearity would not be a problem in regression analysis.
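The screening rule mentioned here (no pairwise correlation above 0.8 among predictors) can be sketched as follows. The data and variable names below are hypothetical; this only illustrates the check, not the study's own computation.

```python
import numpy as np

def flag_collinear_pairs(X: np.ndarray, names: list, threshold: float = 0.8):
    """Return predictor pairs whose absolute correlation exceeds the threshold."""
    corr = np.corrcoef(X, rowvar=False)   # columns of X are variables
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) > threshold:
                pairs.append((names[i], names[j], round(float(corr[i, j]), 3)))
    return pairs

# Hypothetical predictors: x2 is nearly a copy of x1, x3 is independent noise
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)   # strongly collinear with x1
x3 = rng.normal(size=200)
X = np.column_stack([x1, x2, x3])
flags = flag_collinear_pairs(X, ["x1", "x2", "x3"])
```

Only the near-duplicate pair (x1, x2) is flagged; a variance inflation factor check is a common, stricter alternative to this simple pairwise rule.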
Path Analysis

To test the hypotheses, we analyzed the data using SPSS and employed path analysis. Path analysis was developed by Sewall Wright as a method for examining relationships among all variables and testing causal models when multiple variables are involved in a theoretical model (de Vaus, 1986; Loehlin, 1998; Pedhazur, 1997). In path analysis, a path diagram graphically displays the pattern of causal relationships among a set of variables. An arrow from one variable to another indicates a direct effect, and a two-headed curved arrow indicates the correlation between two variables. We assume that the relations among the variables in the path model are linear and additive; that the residuals of the endogenous variables are not correlated with one another or with other endogenous variables; that there is a one-way causal flow in the model; and that the variables are measured on an interval scale without error. Under these assumptions, path analysis reduces to the solution of several multiple linear regression analyses (Pedhazur, 1997, p. 771). When we examined the assumptions of multiple regres-
Table 4. Descriptive statistics for research variables

| Variable | N | Mean | Std. deviation |
|---|---|---|---|
| Negative Internet attitudes | 891 | 3.52 | .95 |
| Information quality | 515 | 3.07 | .63 |
| Perceived usefulness | 512 | 3.24 | .64 |
| Perceived risk in online discussion | 883 | 2.83 | .80 |
| Supervisor support | 861 | 2.89 | .75 |
| IS department support | 867 | 2.99 | .81 |
| Attitudes toward citizen participation | 891 | 3.36 | .73 |
| Organizational culture | 892 | 3.68 | .68 |
| Knowledge about digital democracy | 892 | 2.58 | .83 |
| Ease of use | 508 | 3.42 | .64 |
| Behavioral intention | 889 | 3.55 | .67 |
sion, they were also met, including normality of residuals, linear relationships between variables, homoscedasticity, and no perfect collinearity. In path analysis, each path is given a path coefficient, a beta weight (standardized regression coefficient) in regression analysis that shows how much impact one variable has on another. Since these path coefficients are standardized, they can be compared with each other in terms of the relative importance of each variable. To control for the effects of other known influences on the regression models, we examined two control variables: age and gender. When we removed these two control variables from the regression analysis, R2 for the model without control variables was similar to R2 for the original regression model with control variables, and the magnitude of the regression coefficients changed little. Among the research variables, Internet attitudes, innovation-supportive organizational culture, and perceived risk in online discussion had no significant impact on either perceived usefulness or behavioral intention. Thus these independent variables were removed from the path analysis. The initial path analysis produced all path coefficients regardless of statistical significance or meaningfulness. To trim the path model and make it more parsimonious, insignificant or unimportant path coefficients can be removed from the model. Accordingly, two criteria were used in deleting a few path coefficients: the significance and the importance of the coefficient. Path coefficients not significant at the 0.1 level were deleted, and path coefficients less than 0.05 were deleted from the model. Thus three paths were removed from the original path model: the path from knowledge about digital democracy to perceived usefulness, the path from supervisor support to behavioral intention, and the path from information quality to behavioral intention. All path coefficients in the final model are statistically significant at the 0.05 level; R2 for the model predicting perceived usefulness is 0.572, and R2 for the model predicting behavioral intention is 0.417. This means that 57% of the variation in perceived usefulness is explained by five independent variables, and 42% of the variation in behavioral intention is explained by four variables. The final path model is presented in Figure 2. The total causal effect of each predictor on behavioral intention was calculated by adding the direct and indirect effects in the final path model. To obtain indirect effects, we multiply the path coefficients along each indirect route. The results of the effect analysis are presented in Table 5. Total causal effects for all predictor variables are
Figure 2. Final path model. (The diagram shows standardized path coefficients: attitudes toward citizen participation → perceived usefulness .090***, → behavioral intention .187***; knowledge about digital democracy → behavioral intention .117***; supervisor support → perceived usefulness .116***; IS department support → perceived usefulness .093**, → behavioral intention .127***; ease of use → perceived usefulness .135***, → behavioral intention .120***; information quality → perceived usefulness .553***; perceived usefulness → behavioral intention .359***. Curved arrows mark correlations among the exogenous variables, ranging from .21 to .39. Note: * significant at the 0.1 level, ** 0.05 level, *** 0.01 level.)
Table 5. Decomposition of effects for the final path model

| Causal variable | Effect | Perceived usefulness | Behavioral intention |
|---|---|---|---|
| Attitudes toward citizen participation | Direct effect | .090* | .187* |
| | Indirect effect | — | .032* |
| | Total causal effect | .090* | .220* |
| Knowledge about digital democracy | Direct effect | — | .117* |
| | Indirect effect | — | — |
| | Total causal effect | — | .117* |
| Supervisor support | Direct effect | .116* | — |
| | Indirect effect | — | .042* |
| | Total causal effect | .116* | .042* |
| IS department support | Direct effect | .093* | .127* |
| | Indirect effect | — | .034* |
| | Total causal effect | .093* | .160* |
| Ease of use | Direct effect | .135* | .120* |
| | Indirect effect | — | .048* |
| | Total causal effect | .135* | .168* |
| Information quality | Direct effect | .553* | — |
| | Indirect effect | — | .199* |
| | Total causal effect | .553* | .199* |
| Perceived usefulness | Direct effect | — | .359* |
| | Total causal effect | — | .359* |
| R2 | | .572 | .417 |
significant at or below the 0.05 level. With regard to the magnitude of the total causal effect of each predictor on behavioral intention, the order of importance is as follows: (1) perceived usefulness (0.359), (2) attitudes toward citizen participation (0.220), (3) information quality (0.199), (4) ease of use (0.168), (5) IS department support (0.160), (6) knowledge about digital democracy (0.117), and (7) supervisor support (0.042).
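The decomposition in Table 5 can be verified by hand from the standardized path coefficients: an indirect effect is the product of the coefficients along the indirect route, and a total causal effect is the sum of the direct and indirect effects. The sketch below reproduces three of the reported effects; the coefficient values are taken from the final model.

```python
# Standardized path coefficients reported in the final model
pu_to_bi = 0.359       # perceived usefulness -> behavioral intention
infoq_to_pu = 0.553    # information quality -> perceived usefulness
sup_to_pu = 0.116      # supervisor support -> perceived usefulness
ease_to_pu = 0.135     # ease of use -> perceived usefulness
ease_to_bi = 0.120     # ease of use -> behavioral intention (direct)

# Indirect effect on behavioral intention = product along the indirect route
infoq_indirect = infoq_to_pu * pu_to_bi    # information quality, fully mediated
sup_indirect = sup_to_pu * pu_to_bi        # supervisor support, fully mediated

# Total causal effect = direct effect + indirect effect
ease_total = ease_to_bi + ease_to_pu * pu_to_bi
```

Rounded to three decimals, the three quantities recover the tabled values of .199, .042, and .168, respectively.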
In path analysis, the model-implied or model-predicted correlations between two variables, computed from the path coefficients, should reproduce the original, observed correlations (Heise, 1969; Klem, 1995; Kline, 1998; Pedhazur, 1982; Schumacker & Lomax, 1996), and it is desirable to have correlation residuals with absolute values less than 0.1 (Kline, 1998). We calculated a predicted correlation in the path model by adding causal effects (direct and indirect effects) and noncausal effects (spurious and unanalyzed effects). Spurious effects occur when a third variable has causal paths to both an independent variable and a dependent variable, while unanalyzed effects arise from correlations between pairs of source variables (Davis, 1985; Klem, 1995; Kline, 1998; Pedhazur, 1982). In our model, correlations between pairs of source variables range from 0.21 to 0.39, and these effects were considered in analyzing the impact of independent variables on dependent variables. The final path model appears to fit the data fairly well. Applying the criterion that the difference between the original and reproduced correlations should not exceed 0.1, we find that the absolute values of the correlation residuals are well below the 0.1 criterion for all correlations (Table 6). Thus the final path model reproduced all of the observed correlations, and the model is relevant in predicting public administrators' behavioral intention to use online policy forums in South Korea. The final path model confirms that perceived usefulness mediates between some predictors and behavioral intention. Information quality and supervisor support do not directly affect behavioral intention but affect it indirectly through perceived usefulness. Perceived usefulness provides only a partial mediation for attitudes toward citizen participation, IS department support, and ease of use.
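Kline's residual criterion can be illustrated with a short sketch; the two matrices below are hypothetical stand-ins for the observed and model-implied correlation matrices, used only to show the form of the check.

```python
import numpy as np

def max_correlation_residual(observed: np.ndarray, implied: np.ndarray) -> float:
    """Largest absolute difference between observed and model-implied
    correlations, considering off-diagonal elements only."""
    resid = np.abs(observed - implied)
    np.fill_diagonal(resid, 0.0)   # diagonals are 1.0 in both matrices
    return float(resid.max())

# Hypothetical 3x3 correlation matrices for illustration
observed = np.array([[1.00, 0.35, 0.28],
                     [0.35, 1.00, 0.41],
                     [0.28, 0.41, 1.00]])
implied = np.array([[1.00, 0.33, 0.30],
                    [0.33, 1.00, 0.38],
                    [0.30, 0.38, 1.00]])

fits = max_correlation_residual(observed, implied) < 0.1  # Kline's (1998) rule of thumb
```

Here the largest residual is 0.03, well under the 0.1 threshold, so this toy model would pass the reproduced-correlation check.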
Conclusion

This study examined what factors determine public administrators' acceptance of online policy forums on government Web sites for digital democracy and how individual, organizational, and system
characteristics affect public officials’ intentions to use online policy forums in South Korea. Hypotheses 2-1, 2-2, 3-2, 4-1, 5-1, 5-2, 7-1, 7-2, 8-1, and 10 were supported, but hypotheses 1-1, 1-2, 3-1, 4-2, 6-1, 6-2, 8-2, 9-1, and 9-2 were not supported at the 0.05 significance level. First, perceived usefulness of online policy forums is the strongest factor affecting public administrators’ behavioral intentions to use online policy forums. Path analysis reveals causal relationships between behavioral intention, perceived usefulness, and other predictors. Perceived usefulness has been found to play a mediating role between some predictors and behavioral intentions. Second, public administrators’ favorable attitudes toward citizen participation are positively associated with the perceived usefulness of online policy forums and their intentions to use online policy forums. If public administrators have negative attitudes toward citizen participation, online policy forums cannot be utilized in the policy-making process. Third, the information quality of online policy forums is the major factor affecting public administrators’ perceived usefulness of online policy forums. It has the largest impact on perceived usefulness among the nine factors. Fourth, perceived ease of use of online policy forums is a predictor for perceived usefulness of online policy forums and behavioral intentions to use online policy forums. If public administrators feel that online policy forums are easy to access and use compared to other methods of gathering citizens’ opinions, they will utilize online policy forums more frequently. Fifth, IS department support for using online policy forums is related to the perceived usefulness of online policy forums and behavioral intentions to use online policy forums. Public officials may perceive online policy forums as useful when IS departments encourage officials to consider citizens’ opinions in the policy-making process. 
Sixth, public administrators’ knowledge about digital democracy is positively associated with their behavioral intention to use online policy forums. In other words, the more knowledgeable public officials are about digital democracy, the
more likely they are to use online policy forums for digital democracy in public administration. Seventh, Internet attitudes, innovation-supportive organizational culture, and perceived risk in online discussion are not predictors for either perceived usefulness of online policy forums or behavioral intention to use online policy forums. Considering the fact that South Korean public administrators are heavy users of the Internet, Internet attitudes may have no impact on our model. Unlike other discussion forums operated in the private sector, the government’s online policy forums may strongly regulate messages posted on their Web sites, so public administrators may not perceive much risk when they write in online policy forums. As South Korea has been promoting administrative reforms at all public agencies, innovation-supportive organizational culture may not be an important factor. Since digital democracy is still emerging for many public agencies, the results of this research can be helpful for successful implementation of the concept. Path analysis reveals that perceived usefulness, attitudes toward citizen participation, and information quality are the major determinants of behavioral intention among individual, organizational, and information system factors. Policy makers, elected officials, and managers should focus on increasing the level of those three factors in order to facilitate digital democracy. Leaders within public administration organizations should pay attention to online policy forums and utilize information provided by citizens in the policy-making process so that public administrators may perceive the usefulness of those policy forums. Since public administrators’ use of online policy forums involves citizen participation in the policy-making process through government Web sites, officials’ favorable attitudes toward citizen participation are indispensable to digital democracy. 
Citizens, for their part, should understand policy issues and provide substantive discussion of them through government Web sites so that public administrators can perceive the usefulness of online policy forums and utilize them. This was consistent with the analysis of comments from respondents, who emphasized
that the quality of information, mature citizenship, and citizens' competence in policy issues are prerequisites to the success of digital democracy. Many officials commented that opinions on online policy forums were often posted by narrow, self-interested parties, and some officials complained that abusive or malicious opinions were posted on government Web sites. Theories of digital democracy have hypothesized that the Internet has the potential to broaden and deepen citizens' participation in government decision making and to improve the quality of decisions by ensuring deliberative public debate. Although several studies have been conducted at the organizational level to identify characteristics of agencies that have adopted practices of e-government, research on individual public administrators is rare. This research underscores several implications for developing theories of digital democracy. First, this study suggests that the acceptance of practices of digital democracy at the individual level should be distinguished from the adoption of such practices at the organizational level. This study also finds that the level of individual public administrators' acceptance of practices of digital democracy varies within an agency that has adopted those practices. Second, this study indicates that the causal path model is valid in explaining public administrators' behavioral intentions to use online policy forums. This model can be extended to public administrators' acceptance of other practices of e-government or digital democracy, such as online opinion polls and surveys, online public hearings, and electronic town meetings. Because these practices take place online, there are no essential differences between online forums and these other practices. The generalizability of this study across countries is limited because the sample was selected from public officials in South Korea.
We expect that individual, organizational, and IS system characteristics will differ between public officials in South Korea and other countries, and one previous study did show that the pattern of relationships between variables in IS research was different between countries (Igbaria & Zviran, 1991). Thus comparative, cross-national research is necessary
to more fully understand the implementation process for digital democracy. In addition, future research might compare public administrators’ willingness to use the practice of digital democracy between public administrators who work at agencies that adopted such practices and those who work at agencies that did not introduce such practices in their organizations.
References

Adams, D. A., Nelson, R. R., & Todd, P. A. (1992). Perceived usefulness, ease of use, and usage of information technology: A replication. MIS Quarterly, 16(2), 227-247. Agarwal, R., & Prasad, J. (1999). Are individual differences germane to the acceptance of new information technology? Decision Sciences, 30(2), 361-391. Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In J. Kuhl & J. Beckmann (Eds.), Action control: From cognition to behavior (pp. 11-39). New York: Springer-Verlag. Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179-211. Anandarajan, M., Simmers, C., & Igbaria, M. (2000). An exploratory investigation of the antecedents and impact of internet usage: An individual perspective. Behaviour & Information Technology, 19(1), 69-85. Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173-1182. Beierle, T. C. (2002). Democracy on-line: An evaluation of the national dialogue on public involvement in EPA decisions. Retrieved February 17, 2005, from http://www.rff.org/Documents/RFF-RPT-demonline.pdf
Coleman, S. (2004). Connecting parliament to the public via the Internet. Information, Communication & Society, 7(1), 1-22.
librarians’ attitudes toward and use of the Internet: A structural equation modeling approach. Library Quarterly, 66(1), 59-83.
Coleman, S., & Gotze, J. (2001). Bowling together: Online public engagement in policy deliberation. Hansard Society. Retrieved October 14, 2004, from http://bowlingtogether.net/bowlingtogether.pdf
Fountain, J. E. (2001). Building the virtual state: Information technology and institutional change. Washington, DC: Brookings Institution Press.
Danziger, J. N., Dutton, W. H., Kling, R., & Kraemer, K. L. (1982). Computers and politics: High technology in American local governments. New York: Columbia University Press. Danziger, J. N., & Kraemer, K. L. (1986). People and computers: The impacts of computing on end users in organizations. New York: Columbia University Press.
Friedman, B., Kahn, P. H., & Howe, D. C. (2000). Trust online. Communications of the ACM, 43(12), 34-40. Gutmann, A., & Thompson, D. (1996). Democracy and disagreement. Cambridge, MA: The Belknap Press of Harvard University Press.
Davis, J. A. (1985). The logic of causal order. Beverly Hills, CA: Sage.
Hacker, K. L., & van Dijk, J. (2000). What is digital democracy? In K. L. Hacker, & J. van Dijk (Eds.), Digital democracy: Issues of theory and practice (pp. 1-9). Thousand Oaks, CA: Sage Publications.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of Information Technology. MIS Quarterly, 13(3), 319-340.
Hague, B. N., & Loader, B. D. (Eds.). (1999). Digital democracy: Discourse and decision making in the information age. London: Routledge.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982-1003.
Heise, D. R. (1969). Problems in path analysis and causal inference. Sociological Methodology, 1, 38-73.
de Lancer Julnes, P., & Holzer, M. (2001). Promoting the utilization of performance measures in public organizations: An empirical study of factors affecting adoption and implementation. Public Administration Review, 61(6), 693-708. DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60-95. DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9-30. De Vaus, D. A. (1986). Surveys in social research. London: George Allen & Unwin. Finlay, K., & Finlay, T. (1996). The relative roles of knowledge and innovativeness in determining
Hill, H. C. (2003). Understanding implementation: Street-level bureaucrats' resources for reform. Journal of Public Administration Research and Theory, 13(3), 265-282. Ho, A. T. (2002). Reinventing local governments and the e-government initiative. Public Administration Review, 62(4), 434-444. Ho, A. T., & Ni, A. Y. (2003). Explaining the adoption of e-government features: A case study of Iowa County Treasurers' Offices. American Review of Public Administration, 34(2), 164-180. Holzer, M., Melitski, J., Rho, S., & Schwester, R. (2004). Restoring trust in government: The potential of digital citizen participation. Report of IBM Center for the Business of Government. Retrieved October 25, 2004, from http://www.businessofgovernment.org/pdfs/HolzerReport.pdf
Igbaria, M. (1990). End-user computing effectiveness: A structural equation model. Omega, 18(6), 637-652. Igbaria, M. (1993). User acceptance of microcomputer technology: An empirical test. Omega, 21(1), 73-90. Igbaria, M., Guimaraes, T., & Davis, G. B. (1995). Testing the determinants of microcomputer usage via a structural equation model. Journal of Management Information Systems, 11(4), 87-114. Igbaria, M., Parasuraman, S., & Baroudi, J. J. (1996). A motivational model of microcomputer usage. Journal of Management Information systems, 13(1), 127-143. Igbaria, M., & Zviran, M. (1991). End-user effectiveness: A cross-cultural examination. Omega, 19(5), 369-379. International Telecommunication Union. (2004). Internet indicators: hosts, users, and number of PCs. Retrieved February 11, 2005, from http:// www.itu.int/ITU-D/ict/statistics/at_glance/Internet03.pdf Kakabadse, A., Kakabadse, N. K., & Kouzmin, A. (2003). Reinventing the democratic governance project through information technology? A growing agenda for debate. Public Administration Review, 63(1), 44-60. Kim, S.-T. (2004). E-government: Theories and strategies. Seoul, Korea: Bummunsa. (In Korean) Klem, L. (1995). Path analysis. In L. G. Grimm, & P. R. Yarnold (Eds.), Reading and understanding multivariate statistics (pp. 65-97). Washington, DC: American Psychological Association. Kline, R. B. (1998). Principles and practices of structural equation modeling. NY: Guilford Press. Kling, R. (1980). Social analyses of computing: Theoretical perspectives in recent empirical research. Computing surveys, 12(1), 61-108.
Kling, R., & Jewett, T. (1994). The social design of work life with computers and networks: A natural systems perspective. In M. C. Yovits (Ed.), Advances in computers (Vol. 39; pp. 239-293). San Diego, CA: Academic Press. Klobas, J. E. (1995). Beyond information quality: Fitness for purpose and electronic information resource use. Journal of Information Science, 21(2), 95-14. Kraemer, K. L., Danziger, J. N., Dunkle, D. E., & King, J. L. (1993). The usefulness of computer-based information to public managers. MIS Quarterly, 17(2), 129-148. Kraemer, K. L., Dutton, W. H., & Northrop, A. (1981). The management of information systems. New York: Columbia University Press. Kwon, T. H., & Zmud, R. W. (1987). Unifying the fragmented models of information systems implementation. In R. J. Boland, & R. A. Hirschheim (Eds.), Critical issues in information systems research (pp. 227-251). New York: John Wiley & Sons. Leonard-Barton, D., & Deschamps, I. (1988). Managerial influence in the implementation of new technology. Management Science, 34(10). 1252-1265. Lin, J. C., & Lu, H. (2000). Towards an understanding of the behavioural intention to use a Web site. International Journal of Information Management, 20, 197-208. Lipsky, M. (1980). Street-level bureaucracy: Dilemmas of the individual in public services. New York: Russell Sage Foundation. Loehlin, J. C. (1998). Latent variable models: An introduction to factor, path, and structural analysis (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates. Lucas, H. C. (1978). Empirical evidence for a descriptive model of implementation. MIS Quarterly, 2(2), 27-41.
Lucas, H. C., Ginzberg, M. J., & Schultz, R. L. (1990). Information systems implementation: Testing a structural model. Norwood, NJ: Ablex Publishing. Mathieson, K., Peacock, E., & Chin, W. W. (2001). Extending the technology acceptance model: The influence of perceived user resources. The Data Base for Advances in Information Systems, 32(3), 86-112. Minnesota Planning. (2002). Issue talk: State budget shortfall. Retrieved January 12, 2005, from http://www.mnplan.state.mn.us/pdf/2002/IssueTalkSummary.pdf Moon, M. J. (2002). The evolution of e-government among municipalities: Rhetoric or reality? Public Administration Review, 62(4), 424-433. Moon, M. J., & Bretschneider, S. (2002). Does the perception of red tape constrain IT innovativeness in organizations? Unexpected results from a simultaneous equation model and implications. Journal of Public Administration Research and Theory, 12(2), 273-291. Nedovic-Budic, Z., & Godschalk, D. R. (1996). Human factors in adoption of geographic information systems: A local government case study. Public Administration Review, 56(6), 554-567. Nelson, R. R., & Cheney, P. H. (1987). Training end users: An exploratory study. MIS Quarterly, 11(4), 547-559. Norris, D. F. (2005). Electronic democracy at the American grassroots. International Journal of Electronic Government Research, 1(3), 1-14. Norris, D. F., & Moon, M. J. (2005). Advancing e-government at the grassroots: Tortoise or hare? Public Administration Review, 65(1), 64-75. Organization for Economic Cooperation and Development. (2001). The development of broadband access in OECD countries. Retrieved February 14, 2005, from http://www.oecdwash.org/DATA/DOCS/broadband_access.pdf
Organization for Economic Cooperation and Development. (2003). Promise and problem of e-democracy: Challenges of online citizen engagement. Retrieved October 18, 2004, from http://www1.oecd.org/publications/e-book/4204011E.PDF Pedhazur, E. J. (1982). Multiple regression in behavioral research (2nd ed.). New York: Holt, Rinehart and Winston. Pedhazur, E. J. (1997). Multiple regression in behavioral research (3rd ed.). Fort Worth, TX: Harcourt Brace College Publishers. Ranerup, A. (1999). Internet-enabled applications for local government democratisation. In R. Heeks (Ed.), Reinventing government in the information age (pp. 177-193). London: Routledge. Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: The Free Press. Schumacker, R. E., & Lomax, R. G. (1996). A beginner's guide to structural equation modeling. Mahwah, NJ: Lawrence Erlbaum Associates. Scott, W. R. (1992). Organizations: Rational, natural, and open systems (3rd ed.). Englewood Cliffs, NJ: Prentice Hall. Sheppard, B. H., Hartwick, J., & Warshaw, P. R. (1988). The theory of reasoned action: A meta-analysis of past research with recommendations for modification and future research. Journal of Consumer Research, 15(3), 325-343. Singleton, R. A., & Straits, B. C. (2005). Approaches to social research (4th ed.). New York: Oxford University Press. Spacey, R., Goulding, A., & Murray, I. (2004). Exploring the attitudes of public library staff to the Internet using the TAM. Journal of Documentation, 60(5), 550-564. Subramaniam, N., & Ashkanasy, N. (2001). The effects of organizational culture perceptions on the relationship between budgetary participation and managerial job-related outcomes. Australian Journal of Management, 26(1), 35-54.
Public Administrators’ Acceptance of the Practice of Digital Democracy
Thompson, R. L., Higgins, C. A., & Howell, J. M. (1991). Personal computing: Toward a conceptual model of utilization. MIS Quarterly, 15(1), 125-143.
Thong, J., & Yap, C. (1995). CEO characteristics, organizational characteristics and information technology adoption in small businesses. Omega, 23(4), 429-442.
Tsagarousianou, R. (1999). Electronic democracy: Rhetoric and reality. Communications: The European Journal of Communication Research, 24(2), 189-208.
United Nations. (2004). UN global e-government readiness report 2004. Retrieved December 10, 2004, from http://www.unpan.org/egovernment4.asp
Walsham, G. (1993). Interpreting information systems in organizations. Chichester, UK: John Wiley & Sons.
Wang, X. (2001). Assessing public participation in U.S. cities. Public Performance & Management Review, 24(4), 322-336.
Weare, C., Musso, J. A., & Hale, M. L. (1999). Electronic democracy and the diffusion of municipal Web pages in California. Administration & Society, 31(1), 3-27.
Endnote

1. The e-government readiness index is calculated by averaging the Web measure index, the telecommunication infrastructure index, and the human capital index. The e-participation index evaluates the extent of information provision on government Web sites and citizens' online participation in policy making.
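The averaging described in the endnote can be sketched as a small computation. The component scores below are hypothetical illustrations, not actual UN figures.

```python
def e_government_readiness(web_measure, telecom_infrastructure, human_capital):
    """Compose the e-government readiness index as the simple average
    of its three component indices, as described in the endnote."""
    return (web_measure + telecom_infrastructure + human_capital) / 3

# Hypothetical component scores on a 0-1 scale (not actual UN data):
index = e_government_readiness(0.60, 0.30, 0.81)
print(round(index, 2))  # 0.57
```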
This work was previously published in International Journal of Electronic Government Research, Vol. 2, Issue 2, edited by D. F. Norris, pp. 22-48, copyright 2006 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter LXXIV
E-Mexico:
Collaborative Structures in Mexican Public Administration

Luis F. Luna-Reyes, Universidad de las Américas-Puebla, México
J. Ramon Gil-Garcia, University at Albany, USA
Cinthia Betiny Cruz, Universidad de las Américas-Puebla, México
Abstract

After six years of challenges and learning in pushing forward the e-government agenda in Mexico, the presidential succession brought an opportunity to assess current progress, recognize the main unsolved problems, and plan the vision for the future of e-government in Mexico. This case provides a rich description of the e-Mexico system, including its main objectives and goals, governance structures, IT infrastructure, collaboration processes, main results, and current challenges. Some background information about Mexico is also provided at the beginning of the case. Playing the role of a consultant working for the new Mexican CIO, the reader is asked to evaluate the current situation and help design a work plan, including a proposal for organizing the ICT function, the main strategic objectives, and some specific lines of action for the next six years.
Introduction

Three years of presidential election campaigning finally came to an end on July 2, 2006. Aspirants from the three main political parties had worked hard since 2003 to attract voters' preferences, but everything was resolved on a single election day. Results were so close that the Federal Elections Institute decided not to pronounce any winner on the basis of its "fast count" program, but to wait
for the complete counting of votes. For the first time since 1929, the Institutional Revolutionary Party (PRI) was not the leading force in either the House of Representatives or the Senate. In fact, PRI became the third political force, with about 21% of the legislators in the House of Representatives; the National Action Party (PAN) became the first political force with about 42%, and the Party of the Democratic Revolution (PRD) the second with about 25%.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
E-Mexico
Just a couple of weeks after the election, Pedro Torres,1 who was going to be appointed by the elected President to organize the Information and Communication Technologies (ICT) function in the Federal Government for the next six years, was gathering information about the current state of digital government in Mexico. He needed to prepare an assessment of current progress and needs and to present a work plan, including a proposal for organizing the ICT function, the main strategic objectives, and some specific lines of action. He asked for your advice as a consultant in this process. The following sections summarize the information Pedro gathered and shared with you for this assessment and planning process.
Background

The official name of Mexico is the United Mexican States; it is a federal republic formed by 31 states and a Federal District, Mexico City. There are three levels of government: federal, state, and municipal, each with a certain degree of political and administrative autonomy. Municipalities have an elected council chaired by the municipal president. This council, called the "cabildo," has both executive and legislative functions. At the state level, there is a Governor representing the executive branch, a state legislature, and a state judicial branch headed by the state supreme court of justice. Finally, at the federal level, the President heads the executive branch and is elected by direct democratic vote for a six-year term without the possibility of reelection. The legislative branch comprises the Senate and the House of Representatives, with 128 senators and 500 representatives. The judicial branch is headed by the Supreme Court, with 11 Justices appointed to 15-year terms. The legal system is a combination of the Roman and French systems (Lowe, Armstrong, & Mathias, 2002). Politically, Mexico was governed by the same political party (PRI) from 1929 to 2000 in a quasi-single-party system. The country borders the United States of America to the north and Guatemala and Belize to the south; the Mexican territory spans 1,964,375 square kilometers. Mexico's politico-administrative regime includes characteristics similar to those of the United States, Canada, and France, among others. Table 1 summarizes its main characteristics using the framework developed by Pollitt and Bouckaert (2000).

Table 1. Mexico's politico-administrative regime

Key Feature                  Mexico
State structure              Federal; fairly centralized; co-ordinated
Executive government         Intermediate (formerly majoritarian)
Minister/mandarin relations  Separate; fairly politicized
Administrative culture       Predominantly Rechtsstaat
Diversity of policy advice   Mainly political appointees and civil servants; increasingly consultants, academics, and corporations

Regarding the basic structure of the state, as mentioned before, Mexico is a federal system by constitution, and the autonomy of state and local governments is clearly established. However, for more than 60 years a single political party dominated the three levels of government, producing a de facto centralization of power around the federal government. A decentralization process started in 2000, but examples of this quasi-federal regime persist. Regarding horizontal co-ordination at the federal level, two ministries have historically "called the shots" as far as administrative reform is concerned: the Ministry of Finance and the Ministry of Public Administration (the former Office of the Federal Comptroller).
Regarding the nature of executive government, Mexico is in a transition period from a mostly majoritarian regime to a more consensual one (probably intermediate). For many years, Mexican presidents enjoyed a significant legislative majority from their own political party. They rarely
included people from other parties as heads of a ministry or agency. Starting in the late 1990s, this situation began changing, and the difference between the party winning the election and the second force has become very small. In fact, in 2000 President Fox included in his cabinet individuals identified with another political party (PRI). The relationship between executive politicians (ministers) and senior civil servants is very interesting in Mexico. The Mexican civil service is very new; before it was established, only the lowest hierarchical levels were not political appointees (they were union members). With the creation of the civil service, some political appointees decided to change their career paths to become senior civil servants. Therefore, public servants are separate from political appointees, but their careers are already fairly politicized. As in France and Germany, the philosophy and culture of governance in Mexico follow what has been called a Rechtsstaat perspective; that is, the state is conceptualized as a central legal and administrative force that integrates society around it. In such systems, civil servants are trained in laws developed specifically to guide the functioning of government. This conception is slowly changing under New Public Management trends, at least at the level of political discourse, but it is still predominant. Finally, regarding the diversity of sources of policy advice, specifically on administrative reform issues, political appointees, and more recently civil servants, have long been the main sources of initial reform ideas. However, other groups such as management consultants, academics, and corporations are increasingly taking on the role of sources of policy advice. This trend has become more evident since 2000, when President Fox (with a long private-sector background) was elected. Mexico's total population was 103.1 million inhabitants in 2005 (INEGI, 2006b).
About 53 million are female and 50.1 million are male. About 25% of the population lives in rural areas, 14% in semi-urban localities, and 61% in urban cities. One quarter of the population resides in cities of 500,000 inhabitants or more, with the biggest concentration in the Federal District (Palacios Lara & Kraemer, 2003). The metropolitan area of Mexico City, including the population officially living in the Federal District and the State of Mexico, is home to nearly 21 million people. Gross National Product (GNP) for 2005 was 8,374,348.5 million pesos (about 800 billion US dollars), a 3% increase from 2004 (INEGI, 2006a). The Mexican economy shifted from an import-substitution model between the 1930s and the 1970s to trade liberalization in the 1980s and 1990s. In the last few years, there has been an important focus on small and medium enterprises (SMEs), which account for most of the business establishments in the country (Palacios Lara & Kraemer, 2003). The main economic activities in the country are commerce, personal services, and manufacturing, which together account for about 75% of GNP. Although the World Bank classifies Mexico as a middle-income country (World Bank, 2005), the country faces several problems. According to the World Policy Institute (2000), Mexico had an underfunded educational system and an even poorer healthcare system, resulting in low educational levels and a high infant-mortality rate. Also in 2000, the country had one of the lowest levels of rural development in Latin America, and it was below average both in attracting foreign investment and in its tourism industry. Compared with other Latin American countries at the same income level, Mexico had the fewest telephones per capita, yet some of the most expensive rates for domestic calls. In addition, the environment for new business creation was restrictive. Finally, Mexico was considered one of the nations with the highest levels of corruption in the world (World Policy Institute, 2000). In 2000, the first president from an opposition political party, Vicente Fox from PAN, was elected.
Given the country's problems, some of the main challenges faced by the Fox administration were to boost economic growth, reduce poverty, and improve Mexico's international competitiveness. To respond to these and other challenges, President Fox's main strategy focused on a plan
to upgrade infrastructure, modernize the tax and labor systems, and reform the energy sector. Proposals on these topics were discussed in the House of Representatives in previous years without reaching consensus on any of them. Therefore, the newly elected president, who will take office at the end of 2006, will face very similar challenges.
Setting the Stage

The National Institute of Statistics, Geography and Informatics (INEGI) played a leadership role in information policy from the 1980s to 2002. One of the Institute's main objectives was to promote and regulate ICT development in the Federal Government. According to one of the current program managers, INEGI attempted to centralize control over ICT-related purchases and
contracts. Other Federal agencies opposed this attempt, and as a result, INEGI decided to follow a "no-policy" strategy. In 2000, President Fox started a very ambitious program to promote the Mexican digital society and the use of ICT to improve government services. One important component of the program, oriented mainly to promoting a digital society, was called the e-Mexico system and was housed at the Ministry of Communications and Transportation. The program component related to the use of ICT in government was led initially by the President's Office for Innovation. As a result of some policy and process changes described in the following sections, the Office of the Federal Comptroller (SECODAM) became the Ministry of Public Administration in 2002. The new ministry took responsibility for important aspects of information policy in Mexico.

Table 2. Main project objectives of e-Mexico

• To generate value alternatives through a technological system with social content, offering opportunities made feasible by ICT to improve the quality of life of all Mexicans.
• To be a communication channel that integrates and harmonizes services in a one-stop window in four initial areas (e-Learning, e-Health, e-Economy, and e-Government).
• To create synergies among public and private agents, allowing content and services integration to create public value.
• To provide citizens with knowledge, opportunities, and services in education, health, economy, and government, resulting in personal development inside their own communities and promoting a closer relationship with government.
• To offer alternative and more flexible communication channels with the citizen using modern ICT.
• To reduce intermediaries, opening direct communication channels with the entity responsible for offering each service.
• To develop a technical system with social content, promoting integral social development.
• To incorporate isolated communities into social development, using modern ICT to deliver basic education, health, economic, and government services.
• To develop a showcase of Mexican culture and values for interested persons abroad.
• To become the place connecting citizens with public administration, public and private organizations, and non-profits.
• To promote an IT-enabled government, allowing the creation of opportunities, breaking down barriers, and promoting efficiency.
• To provide experience in Internet project implementation, developing a unique "portal for Internet portals" containing links to information and services.
• To provide advice, based upon world best practices in Internet government services, to promote the offering of basic government services through the Internet.
• To provide the technological infrastructure to develop Internet services in a fast, easy, and safe way, allowing government presence to evolve from informational to transactional and combining open standards for interoperability among government entities.

Actually, program managers working in this direction
in the President's Office for Innovation moved to the Ministry of Public Administration to give continuity to the programs. INEGI kept policy responsibility only for geographical and statistical systems (i.e., the national census), while ICT applications in government became a responsibility of the newly created Ministry of Public Administration (see the Appendix for a brief description of these and other agencies involved in e-Mexico). The e-Mexico system is an "umbrella" initiative at the center of the Mexican strategy to develop government services and applications for all of society. The mission of e-Mexico is to "be an agent of change in the country, integrating efforts from diverse public and private actors in the elimination of the digital divide and other socio-economic differences among Mexicans, through a system with technical and social components to offer basic services on education, health, commercial interchange, and government services, being at the same time leaders in Mexican technological development" (e-México, 2003b). President Fox's administration assumed as one of its main objectives the promotion of ICT use among Mexicans. The e-Mexico system was conceived as a way to provide universal access to information, knowledge, and government services, as a strategy to create a more democratic and participative society where economic and social benefits were better distributed (e-México, 2003b). The project started as a direct initiative of President Fox, who in his initial address to the Nation on December 1, 2000, instructed the Minister of Communications and Transportation to start the initiative: I instruct the Minister of Communications and Transportation, Pedro Cerisola, to start as soon as possible the e-Mexico project, so the information and communications revolution acquires a truly national character, reducing the digital divide among governments, private organizations, households, and individuals, reaching up to the last corner of our country.
The main rationale of e-Mexico was the widely accepted belief that ICT offer national economies opportunities to grow, to develop, and to create sustainable competitive advantages. Moreover, ICT use in government is associated with benefits such as cost savings, better programs, more transparency and accountability, and improved democracy (6, 2001; Dawes, 1996; OECD, 2003). In this way, the e-Mexico initiative was driven by the several problems and challenges faced by President Fox's administration, and it sought to coordinate government actions to promote economic development and to improve transparency and democracy through the use of ICT. Another important driver of the project was the global trend of ICT applications in government as a tool to promote the "new public management" (Arellano-Gault, 2000). E-Mexico objectives were developed on the basis of information collected from three main sources. First, a diagnosis of the ICT situation in Federal Government agencies was conducted, finding what a project participant described as a "fifteen-year lag in infrastructure compared to Mexican private organizations." Second, research on current practices was conducted, looking at e-government experiences in Latin America and the rest of the world; for instance, the initial team took into consideration experiences from Singapore, Korea, Canada, Brazil, Cuba, and England. Finally, during 2001 they conducted a public forum involving more than 900 participants from academia, public administration, the private sector, and non-profit organizations. The forum produced more than 140 different documents and proposals, which were considered together with the current practices and the given status of ICT in the Mexican government to develop the e-Mexico strategy. Table 2 lists the main e-Mexico project objectives (e-México, 2003a). The e-Mexico project is closely related to other presidential initiatives associated with the President's Office for Innovation and the Ministry of Public Administration, which are in charge more specifically of ICT use in government.
One such initiative is the Good Government Agenda, created in 2002, which has created synergy with e-Mexico in facilitating government reform. Synergies were generated particularly because
one of the main objectives of the Good Government Agenda was to create a digital government to facilitate access to government information and services anytime and anywhere (INNOVA, 2002).
Description of the Case

The proposed goals were ambitious. Indeed, project leaders in e-Mexico and other federal government agencies believe that ICT use in government is an instrument to create a more democratic society by giving access to information and services to the entire population, including those small and remote communities that are regularly difficult to reach. The goal was not only to reduce the digital divide, but also to create social and economic impact through access to information and public services. Moreover, the project was intended to contribute to knowledge creation through a main portal and several sub-portals based upon the particular interests of diverse Mexican communities, reaching 80% of the Mexican population through the 20% of services with the highest impact (e-México, 2003a). However, to realize the vision, project participants faced important challenges. One of the main challenges that resulted from the initial explorations was to start a project with a clear vision for the long run while also allowing concrete results in the short run. As one of the interviewees commented, …to truly take the country to the information society is a change that took Korea 40 years, so Mexico made the bet to make this change in at least 25 years, currently [after the first five years] we have the technology that will allow us to make a faster progress. We know that what we are doing in this period is only setting the foundations to build faster upon the things that we have created and did not exist when we started. Other important challenges were related to the digital divide. In Mexico, as in many other countries, there is a high correlation between the digital divide and other social "divides" that
exist in the whole country (e-México, 2003b). Developing countries lack the appropriate technological and human infrastructures, as well as relevant content in the local language, to create a significant social impact. As mentioned above, one of the interviewees commented that through the initial assessment they found a 15-year lag in the computer infrastructures of Federal Ministries and government agencies. To mention just a couple of examples, more than 50% of PCs in the Mexican government had a Pentium II processor or older in 2001 (INEGI, 2001), and there were only 3.6 million Internet connections for the 93.9 million people older than six years (AMIPCI, 2005). Moreover, the problem of government innovation resides not only in building technical capabilities, but also in public service redesign and modernization (e-México, 2003b).
The General Strategy

The e-Mexico strategy was organized around three main "axes" or lines of action, with a value-oriented and collaborative focus. The three main axes were (1) to create infrastructure that allows citizens to connect to the Internet, (2) to produce relevant content, and (3) to develop a technical architecture for government. The focus on value creation and collaboration is reflected in the coordinating nature of e-Mexico. The following sections describe each of the three action streams and the collaborations in the process.
Connectivity

The first line of work of e-Mexico was the creation of a connectivity infrastructure covering most of the country. E-Mexico representatives have been working together with telecommunication companies to promote investment in the country's communications infrastructure, thereby increasing the number of phone lines in Mexico. Additionally, the e-Mexico system has worked on the deployment of 7,200 Digital Community Centers (DCC), following models that they found
operational from experiences in Brazil and Peru, but also following previous successful experiences in the country with educational programs using satellite communications. Although the country's geography posed interesting technical challenges, the group has also been highly interested in the social and community component. As one of the participants commented, …the most impressive were the results from efforts in Cuba. I went there for a meeting about the Information Society, and they presented all their work models. Their digital community centers had an impact even with all their financial restrictions; there were only three computers, one for the center manager and two for other users, but they were taking advantage of that. We considered that the important part was their social network model. We liked the idea to have it installed in some sort of library. France was declaring little success in their model of community centers, Peru had a similar model but in very early stages, but Cuba had a big success. We thought that part of the success was that they were placing the center in an established organization with somebody taking care of it. In this way, and taking advantage of the opportunity of using the PANAMSAT satellite system, the group worked together with several government agencies to deploy the DCC over a satellite network, locating them in already-established community centers such as public libraries and centers for adult education, but mainly in elementary and medium-high schools (about 6,000 of the 7,200 DCC are located in schools). According to one of the project participants, about 8,000 well-located DCC would be enough to serve most communities with more than 1,000 inhabitants.
However, the same participant commented, "we have to think very well about the location of the centers; although our goal is 10,000 community centers, there are more than 200,000 towns in the country, about 150,000 of them with fewer than 1,000 inhabitants." Given the important proportion of DCC established in schools, the e-Mexico program collaborated closely with the Ministry of Education during
the process. The main challenges during the early stages were technical and logistical: how to carry a satellite dish on a donkey, how to install one on a cardboard roof, or how to make the satellite network work on computers that were far from state-of-the-art. Currently, the challenges are more associated with the sustainability of the DCC, getting people to use them, and finding ways to help communities generate applications that create wealth and community development. These activities require collaboration with the Ministry of Education, individual schools, citizens, private companies, and non-profit organizations.
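The siting trade-off the participant describes (a budget of roughly 10,000 centers against more than 200,000 towns, most with fewer than 1,000 inhabitants) can be illustrated with a simple greedy sketch: place centers in the most populous uncovered towns first and measure the share of the population reached. The town data and function name below are invented for illustration, not e-Mexico planning data.

```python
def plan_centers(towns, budget):
    """Greedy siting sketch: assign one center per town, largest towns
    first, until the budget of centers is exhausted. Returns the chosen
    towns and the fraction of the total population they cover."""
    ranked = sorted(towns, key=lambda t: t[1], reverse=True)
    chosen = ranked[:budget]
    total = sum(pop for _, pop in towns)
    covered = sum(pop for _, pop in chosen)
    return [name for name, _ in chosen], covered / total

# Invented example: five towns, budget for two centers.
towns = [("A", 12000), ("B", 800), ("C", 4500), ("D", 300), ("E", 2500)]
sites, coverage = plan_centers(towns, budget=2)
print(sites, round(coverage, 2))  # ['A', 'C'] 0.82
```

A pure population-greedy rule maximizes people covered per center, which is why a few thousand well-located DCC can reach most of the population even though small towns vastly outnumber large ones.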
Contents

One known problem with information on the Internet is that an important proportion of it is in English. Thus, the second main line of work in e-Mexico involved the creation of relevant content for people to access. Initially, the team worked on the development of the main e-Mexico portal and four sub-portals: e-learning, e-health, e-economy, and e-government. The portal project involved a collaboration process with the Ministries associated with each of the four main "pillars," as people in e-Mexico call each content area. In fact, there is a contact person in each of the related Ministries who works together with e-Mexico in the Ministry of Communications and Transportation to coordinate content creation or the integration of existing content into the portals. The four portals were designed to support the main objectives of the e-Mexico system (see Table 2). The e-learning portal has the objective of offering new options to access education and training, promoting education for everyone as a path to personal development. The e-health portal intends to improve public health by eliminating barriers to well-being information and services such as social security. The e-economy portal has the goal of promoting the development of the digital economy in Mexico, oriented particularly to micro, small, and medium enterprises (mSMEs), as well as promoting a digital culture among consumers. Finally, the e-government portal is
a medium to offer government information and services (e-México, 2003b). Following the same spirit as the DCC, e-Mexico staff collaborated with government agencies in content creation and integration, leaving the final responsibility for content management to the Ministries of Education, Health, Economy, and Public Administration, which are the actual content owners. However, although there is one main content owner, many organizations are involved in each sub-portal. As one of the participants commented, …of course learning is coordinated by the Ministry of Education, but e-Learning goes beyond schools… education, training and culture. You have to include the Ministry of Education, you have to include the state education authorities, the National Council for Science and Technology (CONACYT), the National Council for Culture and Arts (CONACULTA), public and private universities, the poet associations, the National Council for Educational Promotion (CONAFE)… This is important, it is very important to understand that the Ministry of Education only provides services to towns with a population greater than 500. Unfortunately, most of the 200,000+ towns in
the country have fewer than 500 inhabitants. In those places a strategy from the National Council for Educational Promotion (CONAFE) operates. They do not have any school, they are not organized in grades… They use a model for literacy based on a multi-grade approach. Although the initial plan involved the creation of these four portals, the e-Mexico system has created 12 different content portals, all involving the participation of many organizations. For example, the e-Migrant portal was created with the collaboration of 27 different agencies inside and outside government. The experience in content integration has led to a standard process that has been called "the portal factory." A key player in the creation of portals has been INFOTEC, an applied research center that, as part of its activities, has developed tools to design content portals based upon group processes and semantic networks.
Systems

The last main strategy of e-Mexico was the creation of valuable systems. The first and most visible system was the e-Mexico portal itself.

Figure 1. Network of relationships around the e-Mexico system and digital government in Mexico (Luna-Reyes, Gil-García, & Cruz, 2006, p. 2380). The network links the e-Mexico coordination (Ministry of Transportation) with the Ministries of Economy (e-Economy), Education (e-Learning), Health (e-Health), and Public Administration (e-Government), other agencies interested in content creation, IT units at the agency level, INFOTEC (National Council for Science and Technology), and the E-Government Interministerial Committee.

One of the interesting design features of the portal was
its orientation to the Mexican citizen and its organization around people's lives: home, family, taxes, education, health, etc. The less visible, but no less important, system consists of an architecture to facilitate government interoperability and services development. Initially, the group got involved in the definition of requirements and the development of a network access point and a data center (e-México, 2003b). During the last year, the group has been involved in requirements definition and the acquisition of a Virtual Private Network (VPN) for all federal agencies, as well as an e-services infrastructure based upon the two main technologies in the market, .NET and J2EE. As another project participant expressed, "the number of convergent (interoperable) systems is still low, and all these architectures are oriented to support true convergence among government agencies." An important element of the technical infrastructure is that most of it is outsourced: the DCC monitoring system, the data center, the VPN, and other hardware and communications systems are all outsourced. This situation required collaboration and coordination between the e-Mexico group and a great number of public and private organizations. It is important to mention that e-Mexico systems and infrastructure are not mandatory for other ministries and agencies at the federal level; they are currently used only by those agencies that choose to do so.
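The interoperability bet described above rests on open, platform-neutral standards: a .NET service and a J2EE service can exchange the same XML payload because neither side depends on the other's runtime. The following is a minimal sketch of that idea using standard XML tooling; the envelope shape, element names, and identifier values are invented for illustration and are not the e-Mexico schema.

```python
import xml.etree.ElementTree as ET

def build_request(service, citizen_id):
    """Serialize a request as plain XML, readable by any platform."""
    root = ET.Element("ServiceRequest")
    ET.SubElement(root, "Service").text = service
    ET.SubElement(root, "CitizenId").text = citizen_id
    return ET.tostring(root, encoding="unicode")

def parse_request(payload):
    """A receiving system, whatever its platform, parses the same XML."""
    root = ET.fromstring(payload)
    return root.findtext("Service"), root.findtext("CitizenId")

msg = build_request("birth-certificate", "CURP-XXXX")
print(parse_request(msg))  # ('birth-certificate', 'CURP-XXXX')
```

Because the wire format carries no runtime-specific detail, either endpoint could be swapped between .NET and J2EE implementations without changing the message.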
collaboration and value creation2

E-Mexico has been a collaborative process since the beginning, and this is reflected in the network of agencies involved in the digital government project in Mexico (see Figure 1). The e-Mexico Coordination, which resides in the Ministry of Communications and Transportation, works closely with the Ministries of Economy, Education, Health, and Public Administration. In the last couple of years, many other agencies and organizations have shown interest in developing specialized content portals for communities such as indigenous Mexicans, Mexican emigrants to the US, women, and people with disabilities.
The Digital Government and IT Unit (Ministry of Public Administration) has worked in close coordination with the agency-level IT units across the federal government. Much of this coordination has taken place with the participation of ministries and CIOs in an e-government network, which last December was formalized into the E-Government Interministerial Committee. The main objectives of the committee, as well as of the subcommittees around it, are to share experiences and to develop policy to be applied across the entire public administration. As mentioned in previous sections, INFOTEC has given technical support in content creation. INFOTEC has also collaborated through research on current practices and technological trends, functioning as a consultant to both e-Mexico and the Ministry of Public Administration, and as a partner in the development of Internet applications. An interesting result of this horizontal coordination effort has been a shift in focus from a vertical, hierarchical approach to a value model. One of the participants mentioned that one of the main lessons from the initial public forum was about working together horizontally: "The first thing we found out was the need to work together. It could not be done piece by piece, but we had to do something strange. That is, instead of creating a new ministry or a new national institute, we had to align the process horizontally with the people in charge of doing it… and we thought that to do it, we had to focus on three great areas [..] So, e-Mexico could have been born as a new entity, but the idea was to do it with the existing structures, asking people to do the things they had to do, and coordinating and aligning efforts from organizations and agencies working on the same topic." Another important characteristic of the value model was that any interested agency could approach the group with project ideas, increasing the probability of project success.
Another participant commented: The advantage of a value-based model is that people look for us when they are interested in doing something, and then things get done. I am not in the hierarchy above them, and that's perfect,
because we are understanding, for the first time in Mexico, the meaning of collaboration among powers. As another participant commented: The beauty of this is that we can do a lot with 13 people in the structure of middle- and high-level managers, and about 25 unionized personnel. So, the ratio of administrative cost to the total budget was just 3.6% last year. The coordination, planning, and implementation of any project follow a four-step process: Digital Participation, Strategic Planning, Project Management, and Operation. Digital Participation involves the inter-institutional relationships with ministries, agencies, and other organizations; it is in this first stage that new project ideas are generated. During the Strategic Planning stage, ideas start to be developed by assessing social impacts, costs, benefits, and alignment with the general e-Mexico strategy, and by building a plan. When an idea passes these initial filters, the technical committee of the e-Mexico endowment reviews the project and makes a funding decision. Once funds are available, the idea becomes a project and moves to the Project Management stage, where, following Project Management Institute standards, the group of agencies deploys and implements the project. The last stage involves the operation of the new system or content portal. Most of this operation is external; that is, the systems are managed by other ministries or agencies. The four stages are organized in a loop of continuous improvement. One of the participants commented, "we deliver, and then we go back to the inter-institutional relationships… to close the loop and start a continuous process of improvement" (see Figure 2).
governance

According to one of the program managers, e-Mexico and the Good Government Agenda promoted the creation of the Digital Government Network, an informal group with the participation of the CIOs from federal ministries and other agencies, and the e-Mexico coordinator. This group interpreted digital government as an axis supporting other guiding principles of good government, such as transparency, accountability, professionalization, service quality, and cost efficiency, starting a series of projects under the leadership of the President's Office for Innovation. When the Ministry of Public Administration was created in 2002, its Digital Government and IT Unit kept the leadership of this group (in fact, the person responsible for the strategy at the Office for Innovation took charge of the Digital Government and IT Unit). The main projects of the network were initially oriented to promoting savings, facilitating interoperability, and redefining the IT function at the government level. For example, the network promoted a single licensing agreement between Microsoft and all federal agencies, creating important savings for all of them. On the other hand, conversations inside the network made participants realize that about 90% of the IT budget was devoted to control applications, perhaps because agency CIOs reported directly to the internal agency comptroller. Moreover, they found that the decentralization of the IT function in several agencies had left those agencies without a clear overall IT strategy. Positive experiences and savings with the Digital Government Network, together with research on best practices, led the group to the creation of an E-Government Interministerial Committee,
Figure 2. Coordination process at e-Mexico

[The figure shows the four stages Digital Participation, Strategic Planning, Project Management, and Operation organized in a loop of Continuous Improvement and System Evolution.]
Figure 3. Structure of the E-Government Interministerial Committee

[The figure shows the E-Government Interministerial Committee at the top, supported by the Sub-committee of Advanced Electronic Signature, with an Executive Council organized into several technical councils.]
which became a formal body last December (see Figure 3). Committee members are all federal ministers plus the directors of other important institutions such as the Mexican Oil Company (PEMEX), the Mexican Institute for Social Security (IMSS), the Internal Revenue Service (SAT), the National Council for Science and Technology (CONACYT), the President's Office for Innovation, and the Federal Commission for Electricity (CFE). The committee president is the Minister of Public Administration, and the executive secretary of the committee is the director of the Digital Government and IT Unit from the same ministry. The committee develops general policy and strategy (Becerra, 2006). The Digital Government Network became the Executive Council of the committee. Besides the plenary sessions, the Executive Council is organized into several technical councils in charge of creating recommendations on important themes such as IT function organization, security, IT procurement, privacy, and interoperability. The Interministerial Committee is also supported by two sub-committees in charge of two key IT topics: electronic signatures, and control and managerial systems. The first sub-committee's members are representatives from the Ministry of Public Administration, the Ministry of Economy, and the Internal Revenue Service. The Control and Managerial Systems Sub-committee's members are representatives from the Ministry of Finance and the Ministry of Public Administration.
Finally, the Consultant Group will be composed of members from academia, the private sector, and other members of civil society. Although the Interministerial Committee may have an important impact on federal government IT strategy and policy, each ministry or agency currently develops its own strategic plan and negotiates its IT budget directly with the President's Office. Some political actors are concerned about the survival of the committee after the presidential succession, arguing that it has not had enough time to consolidate as an institution, and that most members of both the committee and the Executive Council will change when the elected president brings in his own cabinet.
main results, lessons learned and challenges

The e-Mexico project and other associated initiatives have been successful in several ways and to different extents across several areas of work. According to participants in the project, they have accomplished about 95% of the objectives set at the beginning of President Fox's administration (see Table 2). Several of these objectives are closely related to the creation of a "portal of portals" with information and services in four related areas: e-government, e-economy, e-health, and e-learning. The current e-Mexico portal (www.emexico.gob.mx) was not only successful in creating these four content areas, but also in creating many others related to specific interest groups (see Figure 4).

Figure 4. English version of the e-Mexico portal

The portal contains more than 7,000 Web pages organized in about 15 content portals, with monthly traffic of about 1.5 million hits. About 1,000 government services can be completed to some extent through the Internet. These services constitute alternative communication channels with citizens. The portal has been the main place where synergies among public and private partners have been created through the "portal factory," where public and private organizations, as well as non-profits, have joined efforts to provide valuable content and services to Mexicans. The portal has also been the main tool for developing a showcase of Mexican culture, offering content in English and French, as well as in some indigenous Mexican languages such as Mayan and Mazahua. In order to incorporate isolated communities into social development using ICT, e-Mexico has started the deployment of a network of 7,200 DCCs with an estimated 5.5 million visitors per month. According to surveys about Internet use, and the expert opinion of some of the marketing companies conducting the surveys, e-Mexico has had an important impact on Internet penetration
in Mexico. Four million of the 17 million people connected to the Internet do so through a DCC. The level of success has been different in each of the four main e-Mexico areas. The area of e-health, for example, has had a limited impact because of a lack of resources. However, the group has managed to create some important relationships with universities and research centers piloting technologies for telemedicine. Through these initial tests, they have found that the same technologies can be used for training and for the administration of medical centers in rural areas. The area of e-learning has been closely related to the DCC deployment. Current challenges in this area are less related to the technical aspects of installing DCCs, and more associated with the sustainability of each center and the challenge of creating programs to promote the DCCs as a wealth and development factor in each community. The area of e-economy has been working on promoting supply and demand in the software and IT industry to foster economic development. Work stimulating the supply side has promoted the creation of 19 IT clusters in 19 Mexican states. On the demand side, the group has coordinated projects to show the benefits of
using IT in traditional business models. Most of these projects are still at a prototype stage, and it is difficult to assess their impact. The area of e-government has made important progress in terms of creating and formalizing an initial governance structure based on the Interministerial Committee for Digital Government, and in terms of the development of an initial e-services architecture to support interoperability. The basic architecture includes a government services framework with guidelines for service development, interoperability and security standards, the organization of content using semantic Web technology, the use of Identity Federation and Liberty Alliance standards for authentication, and a dual technology platform supporting J2EE and .NET standards. However, as of October 2006, all existing services are offered by single agencies and there is almost no sharing of ICT resources among agencies. The evaluation of the ICT strategy is an area that can be improved. Given that the emphasis during the last six years has been on developing the basic infrastructure for e-government, most indicators of progress focus on project deliverables and compliance with project timelines. The assessment that 95% of initial objectives were accomplished is based on these kinds of indicators. However, e-government impact measures are missing from this evaluation scheme and should be included to support sustainable development of the Mexican ICT strategy. Finally, there are also other important challenges for the future. Many public managers recognize the need for a much better regulatory environment. Although some important regulations have been created, such as the law regulating access to government information, some of the participants commented that "legislators are not in complete 'synchrony' with digital government," making the process slow, or even creating new legislation that makes digital government implementation more complicated.
Another challenge recognized by some participants is posed by differing institutional arrangements and assumptions inside ministries that complicate collaboration. For example, the centralized structure of the Ministry of Communications and Transportation has to coordinate with the decentralized education system, which sometimes makes coordination difficult.
acknowledgment

The research reported here is supported by the Consejo Nacional de Ciencia y Tecnología (CONACYT-Mexico) grant SEP-2004-C01-46507. The views and conclusions expressed in this paper are those of the authors alone and do not necessarily reflect the views or policies of CONACYT.
references

6, P. (2001). E-governance: Do digital aids make a difference in policy making? In J. E. J. Prins (Ed.), Designing e-government: On the crossroads of technological innovation and institutional change (pp. 7-27). The Hague, The Netherlands: Kluwer Law International.

AMIPCI. (2005). Estudio AMIPCI de Internet en México 2005. Retrieved October 2005, from http://www.amipci.org.mx/docs/Presentacion_Estudio_AMIPCI_2005_Presentada.pdf

Arellano-Gault, D. (2000). Challenges for the new public management: Organizational culture and the administrative modernization program in Mexico City (1995-1997). American Review of Public Administration, 30(4), 400-413.

Becerra, J. L. (2006). Comisión Intersecretarial para el Desarrollo del Gobierno Electrónico: ¿Cómo funcionará este modelo normativo? Política Digital, 5, 12-14.

Bogdanor, V. (Ed.). (2005). Joined-up government. New York: Oxford University Press.

Dawes, S. S. (1996). Interagency information sharing: Expected benefits, manageable risks. Journal of Policy Analysis and Management, 15(3), 377-394.

e-México. (2003a). El Sistema Nacional e-México: Sistema de Participación Digital. Retrieved January 2006, from http://www.emexico.gob.mx/wb2/eMex/eMex_El_Sistema_Nacional_eMexico_un_Sistema_de_Par

e-México. (2003b). Resumen ejecutivo del sistema nacional e-México. Retrieved January 2006, from http://www.emexico.gob.mx/wb2/eMex/eMex_Resumen_ejecutivo_del_Sistema_Nacional_eMexic

INEGI. (2001). Estructura porcentual de las computadoras personales de la administración pública por tipo de procesador por cada nivel de la administración pública. Retrieved October 2005, from http://www.inegi.gob.mx/est/contenidos/espanol/rutinas/ept.asp?t=tinf003&c=3425

INEGI. (2006a). Información económica agregada. Retrieved 2006, from http://www.inegi.gob.mx/inegi/contenidos/espanol/acerca/inegi324.asp?c=324

INEGI. (2006b). Mujeres y hombres en México 2006. Retrieved 2006, from http://www.inegi.gob.mx/prod_serv/contenidos/espanol/bvinegi/productos/integracion/sociodemografico/mujeresyhombres/2006/MyH_x_1.pdf

INNOVA. (2002). Agenda Presidencial de Buen Gobierno en México. Retrieved March 2006, from http://www.innova.gob.mx/ciudadanos/innovacion/index.php?contenido=515&lang=es

Janssen, M., & Cresswell, A. M. (2005, January 3-6). The development of a reference architecture for local government. Paper presented at the 38th Hawaii International Conference on System Sciences, Hawaii.

Janssen, M., & Joha, A. (2006, August 4-6). Governance of shared services in public administration. Paper presented at the Americas Conference on Information Systems, Acapulco, Mexico.

Klijn, E. H., & Koppenjan, J. F. M. (2000). Public management and policy networks: Foundations of a network approach to governance. Public Management, 2(2), 135-158.

Klijn, E. H., Koppenjan, J. F. M., & Termeer, K. (1995). Managing networks in the public sector: A theoretical study of management strategies in policy networks. Public Administration, 73(3), 437-454.

Lowe, Armstrong S., & Mathias, A. (2002). Informe de País México. Retrieved 2006, from http://www.missingkids.com/en_US/publications/Mexico_S.pdf

Luna-Reyes, L. F., Gil-García, J. R., & Cruz, C. B. (2006, August 4-6). Collaborative digital government in Mexico: Some lessons from federal Web-based inter-organizational information integration initiatives. Paper presented at AMCIS 2006, Acapulco, Mexico.

OECD. (2003). The e-government imperative. Paris, France: Organisation for Economic Co-operation and Development.

Palacios Lara, J. J., & Kraemer, K. L. (2003). Globalization and e-commerce IV: Environment and policy in Mexico. Communications of the AIS, 11, 129-185.

Pollitt, C., & Bouckaert, G. (2000). Public management reform: A comparative analysis. Oxford, UK: Oxford University Press.

World Bank. (2005). Mexico country brief. Retrieved 2006, from http://iris37.worldbank.org/domdoc/PRD/Other/PRDDContainer.nsf/WB_ViewAttachments?ReadForm&ID=85256D2400766CC78525715600596034&

World Policy Institute. (2000). Mexico: A statistical evaluation of government performance. Retrieved 2006, from http://www.worldpolicy.org/globalrights/mexindex.html
endnotes

1. This case is based on the findings from 15 semi-structured interviews with e-Mexico program managers. Interviewees include project managers and CIOs from the Ministries of Communications and Transportation (3), Public Administration (3), Economy (1), Health (1), Education (1), and Finance (2), the President's Internet System (1), and the Institute for Access to Information (2), as well as staff from INFOTEC (1), a public center for innovation involved in the development and implementation of the Mexican digital government strategy. Interviewees were asked about the characteristics of their projects, project costs and benefits, and their perceptions of project success, collaboration, and networking. Documentation analysis was used to enrich the contextual description and to triangulate findings from the interviews. However, the names used for prospective program managers are fictional.

2. The collaboration approach described in this section has been discussed in the literature using terms such as governance (Janssen & Joha, 2006; Klijn & Koppenjan, 2000; Klijn, Koppenjan, & Termeer, 1995) or joined-up government (Bogdanor, 2005). These literatures help show that the e-Mexico initiative is an instance of a larger trend toward collaboration in government to solve "wicked" problems, and these ideas can be used as a framework to analyze the Mexican experience. However, the authors preferred the term "value creation" because it was the term used by the project participants to describe their efforts.
appendix. a brief description of the main agencies involved in e-mexico

•	E-Mexico Coordination: The e-Mexico Coordination is the office in the Ministry of Communications and Transportation in charge of implementing the main objectives of the e-Mexico program, which is considered the national information society project. The 13 professional staff members employed in the Coordination work closely with representatives from the Ministries of Education, Health, Economy, and Public Administration to create content for the e-Mexico portal. They have received support and professional advice from INFOTEC. The Coordinator is perceived as the Mexican CIO by some international agencies.
•	President's Office for Innovation: An office in charge of developing policy and programs to promote innovation in government. This office was in charge of ICT use in the federal agencies and ministries in 2000, but this responsibility was later given to the Digital Government and IT Unit in the Ministry of Public Administration. Currently, the office motivates innovation by documenting and recognizing best practices, as well as by promoting President Fox's Agenda for Good Government.
•	Digital Government and IT Unit: The unit in the Ministry of Public Administration promoting ICT use and development in federal government agencies. The unit's director is the executive secretary of the E-Government Interministerial Committee and coordinates the development of information systems to improve government productivity, transparency, and services. The director is perceived as the Mexican CIO by some international agencies.
•	President's Internet System: An office in charge of opening communication channels between the President's office and citizens through Internet applications such as e-mail and the World Wide Web. Its staff actively promote the use of open source standards, develop Web services and components, and share these components with any agency that approaches them.
•	Infotec: INFOTEC is the technology center that serves the Government of Mexico and other organizations to foster the information society in Mexico through a program of applied research.
•	Ministry of Economy: The Ministry of Economy is responsible for the e-economy pillar of e-Mexico. The ministry's interpretation of its participation in e-Mexico consists of content creation and promoting the development of the IT industry in Mexico as an important element of economic development.
•	Ministry of Health: The Ministry of Health works with the e-Mexico Coordination to create health-related content in collaboration with other health-related public agencies. They are also exploring the application of ICT to telemedicine, which would make it possible for people in isolated communities to obtain better medical services.
•	Ministry of Education: The Ministry of Education has collaborated in the deployment of the Digital Community Centers and in the development of educational materials for the e-Mexico portal, as well as others to be used only in schools.
This work was previously published in International Journal of Cases on Electronic Commerce, Vol. 3, Issue 2, edited by M. Khosrow-Pour, pp. 54-70, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter LXXV
The Impact of the Internet on Political Activism: Evidence from Europe

Pippa Norris
Harvard University, USA
abstract

The core issue for this study concerns less the social than the political consequences of the rise of knowledge societies; in particular, the capacity of the Internet for strengthening democratic participation and civic engagement linking citizens and government. To consider these issues, Part I summarizes debates about the impact of the Internet on the public sphere. The main influence of this development, as it is theorized in a market model, will be determined by the "supply" of and "demand" for electronic information and communications about government and politics. Demand, in turn, is assumed to be heavily dependent upon the social characteristics of Internet users and their prior political orientations. Given this understanding, the study predicts that the primary impact of knowledge societies in democratic societies will be upon facilitating cause-oriented and civic forms of political activism, thereby strengthening social movements and interest groups, more than upon conventional channels of political participation exemplified by voting, parties, and election campaigning. Part II summarizes the sources of survey data and the key measures of political activism used in this study, drawing upon the 19-nation European Social Survey, 2002. Part III examines the evidence for the relationship between use of the Internet and indicators of civic engagement. The conclusion in Part IV summarizes the results and considers the broader implications for governance and democracy.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
introduction

The rise of knowledge societies represents one of the most profound transformations of recent decades. The diffusion of information and communication technologies (ICTs) promises to have major social consequences by expanding access to education and training, broadening channels of expression and social networks, and revolutionizing the nature of work and the economy. The primary impact of this development has been evident in affluent societies, but the Internet has also been widely regarded as an important instrument for social change in poorer nations around the globe (Franda, 2002; UN, 2002).
part i: theories of the impact of knowledge societies on democracy

There are multiple theories about how the growth of knowledge societies could potentially influence civic engagement in contemporary democracies. Four main perspectives can be identified in the literature.
the internet as a virtual agora

The most positive view is held by cyber-optimists who emphasize the Panglossian possibilities of the Internet for the involvement of ordinary citizens in direct, deliberative, or "strong" democracy. Digital technologies are thought to hold promise as a mechanism facilitating alternative channels of civic engagement, exemplified by political chat rooms, remote electronic voting in elections, referenda, and plebiscites, and the mobilization of virtual communities, thereby revitalizing levels of mass participation in public affairs (Barber, 1998; Budge, 1996). This view was certainly popular as the Internet rapidly expanded in the United States during the mid-1990s, and the radical potential of digital technologies for democracy continues to be expressed by enthusiasts today (Gilder, 2000; Rash, 1997; Rheingold, 1993; Schwartz, 1996).
The general claim that the knowledge society will stimulate widespread citizen deliberation in affairs of state, so that the Internet functions like a virtual agora, is attractive as a normative ideal, but it became less plausible once many observers recognized that there are substantial disparities in who becomes involved in digital politics. Studies of politically oriented discussion groups, bulletin boards, and online chat rooms have found that these largely fail as deliberative forums, instead serving as places to reinforce like-minded voices due to their "easy entrance, easy exit" characteristics (Davis, 1999; Davis & Owen, 1998; Wilhelm, 2001). The survey evidence from many countries indicates that those who take advantage of the opportunities for electronic civic engagement are often activists who were already most predisposed to participate via the traditional channels of political participation (Hill & Hughes, 1998; Selnow, 1998; Toulouse & Luke, 1998). The Internet is a medium of choice par excellence, so it seems improbable that political Web sites, chat rooms, and online news will reach many citizens who are otherwise disengaged, apathetic, or uninterested, if they choose to spend their time and energies on multiple alternative sites devoted to everything from the stock market to games and music (Bonfadelli, 2002; Johnson & Kaye, 2003). In this regard, the Internet seems analogous to the segmented magazine market, where some subscribe to the Atlantic Monthly, the Economist, and Foreign Affairs, but others pick Golfing Weekly or Playboy. Therefore, claims for the potential of the knowledge society to revitalize mass participation or strong democracy find little support from the available empirical studies.
the knowledge elite and social inequalities

As the Internet evolved during the last decade, a darker vision developed among cyber-pessimists who regard the knowledge society as a Pandora's box reinforcing existing inequalities of power and wealth and generating deeper divisions between the information rich and poor. In this perspective, the global and social divides in Internet access
mean that, far from encouraging mass participation, the knowledge society will disproportionately benefit the most affluent sectors of the developed world (Golding, 1996; Hayward, 1995; Murdock & Golding, 1989; Weber, Loumakis, & Bergman, 2003). For example, the first phase of the UN World Summit on the Information Society (WSIS), held in Geneva in December 2003, concluded that the Internet holds great promise for development for billions of people around the globe, endorsing ambitious principles and action plans; yet no agreement was reached about the transfer of the financial and technological resources necessary to facilitate wider electronic access in poorer nations (ITU, 2003). Despite the great potential for technological innovations leading towards political change, observers suggest that in established democracies, traditional interest groups and governments have the capacity to reassert their control in the virtual political sphere, just as traditional multinational corporations have the ability to reestablish their predominance in the world of e-commerce (Hill & Hughes, 1998; McChesney, 1999; Selnow, 1998; Toulouse & Luke, 1998). In authoritarian regimes, as well, studies have found that access to publishing and disseminating information on the Internet can be strictly restricted by governments, as with the limitations imposed in Cuba, Saudi Arabia, and China (Boas, 2000; Drake, Kalathil, & Boas, 2000; Hill & Hughes, 1999; Kalathil & Boas, 2003).
politics as usual

The third perspective, which has become more common in recent years, is articulated by cyber-skeptics who argue that both of these visions are exaggerated. In this view, the potential of the knowledge society has so far failed to have a dramatic impact on the practical reality of "politics as usual," for good or ill, even in countries such as the United States at the forefront of digital technologies (Margolis & Resnick, 2000). This perspective stresses the embedded status quo and the difficulties of achieving radical change to political systems through technological mechanisms. For example, commentators suggest that during
the 2000 American election campaign, George W. Bush and Al Gore used their Web pages essentially as glossy shop windows, fundraising tools, and campaign ads, rather than as interactive "bottom-up" formats facilitating public comment and discussion (Foot & Schneider, 2002; Media Matrix, 2000). During the 2004 presidential election in the United States, the fundraising function also seems to have predominated in the campaign of Vermont Democratic Governor Howard Dean and on the Kerry-Edwards Web site. Elsewhere, content analysis of political party Web sites in countries as diverse as the United Kingdom, France, Mexico, and the Republic of Korea has found that the primary purpose of these Web sites has been the provision of standard information about party organizations and policies that was also widely available offline, providing more of the same rather than anything new, with few interactive facilities:

Party presence on the Internet seems to represent largely an additional element to a party's repertoire of action along with more traditional communication forms rather than a transformation of the fundamental relationship between political parties and the public, as some earlier advocates of cyber-democracy hoped. (Gibson, Nixon, & Ward, 2003, p. X)

Studies of the content of government department Web sites in many countries at the forefront of the move towards e-governance (e.g., the United States, Canada, and India), and surveys of users of these Web sites, have also found that these sites are primarily used for the dissemination of information and the provision of routine administrative services.
The Internet thereby serves as an aid to good governance by increasing government transparency, efficiency, and customer-oriented service delivery, but it does not function as a radical medium facilitating citizen consultation, policy discussion, or other democratic inputs into the policymaking process (Allen, Juillet, & Roy, 2001; Chadwick & May, 2003; Fountain, 2001; Haque, 2002; Stowers, 1999; Thomas & Streib, 2003). In the skeptical view, technology is a plastic medium that flows into and adapts to pre-existing social molds and political functions.
Part II: Conceptual Framework, Evidence, & Data
The Political Market Model
The last theoretical perspective—the one developed in this study—can be characterized as the political market model. In this account, the impact of the knowledge society depends upon the interaction between the "top-down supply" of political information and communications made available via the Internet, e-mail, and the World Wide Web from political institutions, notably government departments, parliaments, political parties, the news media, interest groups, and social movements, and the "demand" for such information and communications about politics among the online public. This model suggests that, in turn, demand depends upon the social characteristics of the online population, especially the preponderance of younger, well-educated citizens who are commonly among the heaviest users of the Internet, and their prior political interests and propensities. The theory suggests that, given these assumptions, use of the Internet in the public sphere is most likely to strengthen and reinforce cause-oriented and civic-oriented dimensions of political activism, which are more popular among the well-educated younger generation, while having far less impact upon traditional channels of participation through voting, parties, and election campaigns (Norris, 2001, 2002). Therefore, rather than accepting that everything will change as radical forms of direct democracy come to replace the traditional channels of representative governance (as optimists hope), that the digital divide will reinforce socioeconomic disparities in politics (as pessimists predict), or that nothing will change as the digital world merely replicates "politics as usual" (as the skeptics suggest), the political market model identifies the particular types of democratic practices that will probably be strengthened by the rise of the knowledge society, while recognizing that these developments remain a work in progress.

What evidence would allow us to examine these propositions, particularly testing the impact of the Internet upon political activism? To understand these issues, we need to recognize that involvement in public affairs can take many different forms, each associated with differing costs and benefits. This study compares the impact of frequency of use of the Internet on four main dimensions of activism: voting, campaign-oriented, cause-oriented, and civic-oriented. These are summarized into a 21-point "Political Activism Index" combining all dimensions.1 The basic items used to develop this Index are listed in Table 1 and reported fully in Appendix A. Voting in regular elections is one of the most ubiquitous forms of citizen-oriented participation, requiring some initiative and awareness for an informed choice but making fairly minimal demands of time, knowledge, and effort. Through the ballot box, voting exerts diffuse pressure over parties and elected officials, and the outcomes of elections affect all citizens. Participating at the ballot box is central to citizenship in representative democracy, but due to its relatively low costs, the act is atypical of other more demanding forms of participation. The Internet can be expected to encourage voting participation mainly by lowering some of the information hurdles to making an informed choice, although the provision of remote electronic voting through a variety of new technologies can be expected to have a more radical impact on turnout (Tolbert & McNeal, 2003). Campaign-oriented forms of participation concern acts focused primarily on how people can influence parliament and government in representative democracy, mainly through political parties in British politics.

Verba, Nie, and Kim (1978) focus on this aspect when they define political participation as "…those legal activities by private citizens that are more or less directly aimed at influencing the selection of governmental personnel and/or the actions they take." Work for parties or candidates, including party membership and volunteer work, election leafleting, financial
Table 1. Political activism in Europe, ESS-2002 (Source: European Social Survey, 2002 [ESS-19]. Pooled sample N = 31,741 in 18 ESS nations, excluding Germany, where Internet use was not monitored)

| Percentage that have… | Regular internet user (%) | Not regular user (%) | Activism gap (%) | Sig. |
|---|---|---|---|---|
| VOTING | | | | |
| Reported voting in the last national election | 28 | 26 | +2 | *** |
| CAMPAIGN-ORIENTED | | | | |
| Contacted politician | 22 | 13 | +9 | *** |
| Worn campaign badge | 12 | 6 | +6 | *** |
| Donated money to party | 11 | 7 | +4 | *** |
| Worked for party | 6 | 4 | +2 | *** |
| Party member | 6 | 5 | +1 | *** |
| CAUSE-ORIENTED | | | | |
| Bought product for political reason | 41 | 17 | +24 | *** |
| Signed petition | 35 | 16 | +19 | *** |
| Boycotted product | 25 | 11 | +14 | *** |
| Demonstrated legally | 10 | 5 | +5 | *** |
| Protested illegally | 2 | 1 | +1 | *** |
| CIVIC-ORIENTED | | | | |
| Member sports club | 35 | 14 | +21 | *** |
| Member trade union | 33 | 17 | +16 | *** |
| Member consumer group | 25 | 11 | +14 | *** |
| Member hobby group | 20 | 10 | +10 | *** |
| Member educational group | 12 | 4 | +8 | *** |
| Member professional group | 14 | 6 | +8 | *** |
| Member environmental group | 9 | 4 | +5 | *** |
| Member humanitarian group | 9 | 4 | +5 | *** |
| Member church group | 15 | 11 | +4 | *** |
| Member social club | 11 | 10 | +1 | *** |
| TOTAL 21-POINT ACTIVISM INDEX | | | | |
| Mean index score | 4.43 | 2.56 | +1.87 | *** |

Note: For the specific items used in the construction of the Index, see the Technical Appendix. 'Regular internet user' is defined as reported personal use of the Internet, e-mail, or World Wide Web at least weekly. The 'activism gap' is measured as the percentage of regular Internet users minus the percentage of non-users who engage in each activity; a positive gap indicates that regular Internet users are more active than non-users. The total activism index counts participation in each form of activity as one and sums across the 21-point scale. Significance tested by chi-square and by ANOVA. *** p < .001; ** p < .01; * p < .05; n/s = not significant.
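The significance test mentioned in the note can be illustrated with a small sketch: a Pearson chi-square on a single 2x2 activity-by-usage table. The cell counts below are invented for the example and are not the actual ESS-2002 frequencies.

```python
# Sketch: chi-square test for one row of Table 1 (hypothetical counts,
# not the actual ESS-2002 cell frequencies).

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table:
         a = users active,     b = users inactive
         c = non-users active, d = non-users inactive
    """
    n = a + b + c + d
    if not all(x > 0 for x in (a + b, c + d, a + c, b + d)):
        raise ValueError("degenerate margin")
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# "Signed petition": 35% of 10,000 users vs. 16% of 20,000 non-users (illustrative)
chi2 = chi_square_2x2(3500, 6500, 3200, 16800)
gap = 3500 / 10000 - 3200 / 20000   # the 'activism gap', here about +0.19
```

With samples of this size, even modest percentage gaps comfortably exceed the p < .001 critical value (10.83 for one degree of freedom), which is why every row of the bivariate table reaches significance.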
donations to parties or candidates, attending local party meetings, and get-out-the-vote drives, typifies this category. Parties serve multiple functions in representative democracies, notably simplifying and structuring electoral choices, organizing and mobilizing campaigns, aggregating disparate interests, channeling political debate, selecting candidates, structuring parliamentary divisions, acting as policy think tanks, and organizing government. Not only are parties one of the main
conduits of political participation, they also serve to boost and strengthen electoral turnout. If mass party membership is under threat, as many indicators suggest, this could have serious implications for representative democracy (Mair & van Biezen, 2001; Scarrow, 2001). Campaigning and party work typically generate collective rather than individual benefits but require greater initiative, time, and effort (and sometimes expenditure) than merely casting a
ballot. The Internet can be expected to provide new opportunities for activism in parties and election campaigns (e.g., through downloading information, joining parties or donating funds, and participating in discussion groups hosted on party or candidate Web sites) (Gibson, Nixon, & Ward, 2003; Hague & Loader, 1999; Norris, 2001). Experience of campaign-oriented activism is gauged in this study by a battery of five items: whether people are members of a party and whether they have donated money to a party, worked for a party, contacted a politician, or worn a campaign badge during the previous 12 months. Cause-oriented activities are focused primarily upon influencing specific issues and policies. These acts are exemplified by consumer politics (e.g., buying or boycotting certain products for political or ethical reasons), taking part in demonstrations and protests, and organizing or signing petitions. The distinction is not watertight; for example, political parties can organize mass demonstrations, and social movements often adopt mixed action strategies that combine traditional repertoires, such as lobbying representatives, with a variety of alternative modes, such as online networking, street protests, and consumer boycotts. Nevertheless, compared with campaign-oriented actions, the distinctive aspect of cause-oriented repertoires is that they are most commonly used to pursue specific issues and policy concerns among diverse targets, both within and well beyond the electoral arena.
These acts seek to influence representative democracies within the nation-state through the conventional channels of contacting elected officials, ministers, civil servants, and government departments, but their target is often broader and more diffuse, possibly in the non-profit or private sectors, whether directed at shaping public opinion and lifestyles, publicizing certain issues through the news media, mobilizing a networked coalition with other groups or non-profit agencies, influencing the practices of international bodies such as the World Trade Organization or the United Nations, or impacting public policy in other countries. Experience of cause-oriented activism
is measured in this study by a battery of five items: whether people have signed a petition, bought or boycotted products for a political reason, demonstrated legally, or protested illegally during the previous 12 months. Lastly, and by contrast, civic-oriented activities involve membership and working together in voluntary associations, as well as collaborating with community groups to solve a local problem. The core claim of Tocquevillian theories of social capital is that typical face-to-face deliberative activities and horizontal collaboration within voluntary organizations far removed from the political sphere (exemplified by trade unions, social clubs, and philanthropic groups) promote interpersonal trust, social tolerance, and cooperative behavior. In turn, these norms are regarded as cementing the bonds of social life and creating the foundation for building local communities, civil society, and democratic governance. In a win-win situation, participation in associational life is thought to generate individual rewards, such as career opportunities and personal support networks, as well as to facilitate community goods by fostering the capacity of people to work together on local problems. Putnam suggests that civic organizations such as unions, churches, and community groups play a vital role in the production of social capital where they succeed in bridging divisive social cleavages, integrating people from diverse backgrounds and values, and promoting "habits of the heart" such as tolerance, cooperation, and reciprocity, thereby contributing towards a dense, rich, and vibrant social infrastructure (Pharr & Putnam, 2000; Putnam, 1993, 1996, 2000, 2002). This dimension involves direct action within local communities (e.g., raising funds for a local hospital or school) where the precise dividing line between the social and the political breaks down.
Trade unions and churches, in particular, which have long been regarded as central pillars of civic society in Europe, have traditionally served the function of drawing citizens into public life. For a variety of reasons, including the way that voluntary associations can strengthen social networks, foster leadership skills, heighten political awareness, create party linkages, and facilitate
campaign work, people affiliated with church-based or union organizations can be expected to participate more fully in public life (Cassel, 1999; Radcliff & Davis, 2000). Access to the knowledge society can be expected to expand social networks and information, facilitating membership in civic associations and social groups, although the evidence as to whether the Internet strengthens or weakens social capital remains under debate (Bimber, 1998; Horrigan, Rainie, & Fox, 2001). Experience of civic activism is measured here by a 10-point scale summarizing membership in a series of different types of voluntary organizations and associations, including traditional sectors such as trade unions, church groups, and social clubs, as well as "new" social movements exemplified by groups concerned about the environment and humanitarian issues. The summary 21-point political activism index, which provides an overview, is composed very simply by adding together experience of each of these different types of acts (each coded 0/1). It should be noted that within this conceptual framework, this study focuses upon political activity; we are concerned with doing politics rather than with being attentive to public affairs or holding psychological attitudes, such as trust in parliament or political efficacy, which are thought to be conducive to civic engagement. The study, therefore, does not regard exposure or attention to mass communications, including following campaign events in newspapers or watching party political broadcasts during the election, as indicators of political activism per se. These factors may, indeed, plausibly contribute towards participation and thereby help explain this phenomenon as prior preconditions, but they are not, in themselves, channels that citizens can use for expressing political concerns or mobilizing group interests.
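Since the 21-point index is simply a sum of binary indicators, its construction can be sketched as follows. The item names here are illustrative stand-ins, not the actual ESS variable names.

```python
# Sketch: building the 21-point activism index from 0/1 items.
# Column names are illustrative, not the real ESS-2002 variable names.
VOTING = ["voted"]
CAMPAIGN = ["party_member", "donated_party", "worked_party",
            "contacted_politician", "worn_badge"]
CAUSE = ["signed_petition", "bought_product", "boycotted_product",
         "demonstrated_legally", "protested_illegally"]
CIVIC = ["sports_club", "trade_union", "consumer_group", "hobby_group",
         "educational_group", "professional_group", "environmental_group",
         "humanitarian_group", "church_group", "social_club"]
ALL_ITEMS = VOTING + CAMPAIGN + CAUSE + CIVIC   # 1 + 5 + 5 + 10 = 21 items

def activism_index(respondent: dict) -> int:
    """Sum the 21 binary (0/1) items into the overall activism score."""
    return sum(respondent.get(item, 0) for item in ALL_ITEMS)

r = {"voted": 1, "signed_petition": 1, "sports_club": 1}
score = activism_index(r)   # one act on three dimensions -> index of 3
```

Each sub-scale (campaign, cause, civic) is obtained the same way by summing only its own items.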
Survey Evidence & Data Sources

To establish the extent and significance of the impact of the Internet on political activism, the primary source of evidence for this study is the 19-nation European Social Survey 2002 (ESS-19). This is a new, academically driven
study designed to chart and explain the interaction between Europe's changing institutions and the attitudes, beliefs, and behavior patterns of its diverse populations.2 The survey includes a wide range of items designed to monitor citizen involvement, including a battery of a dozen items that can be used to create a summary political activism scale, as well as multiple indicators of political interest, efficacy, trust, party allegiances, subjective well-being, family and friendship bonds, and a rich array of detailed socio-demographic data, including household composition, ethnicity, type of area, and occupational details. This survey provides recent evidence, and it also facilitates comparison among similar advanced industrialized European societies and democratic states. The size of the total pooled sample (with over 36,000 cases) also allows us to monitor differences among smaller European populations, such as ethnic minorities. The survey currently includes four nations in Scandinavia (Norway, Sweden, Finland, and Denmark), six nations in Northern Europe (Britain, Germany, Luxembourg, Ireland, the Netherlands, and Switzerland), five from Mediterranean Europe (Greece, Spain, Italy, Portugal, and Israel), and four post-Communist societies in Central Europe (the Czech Republic, Hungary, Poland, and Slovenia). All these countries were classified by Freedom House in 2001-02 as fully "free" in their political rights and civil liberties, using the Gastil Index. Most can also be categorized as affluent post-industrial economies, with an average per capita GDP in 2002 ranging from $16,000 (in
Example 1. "Now, using this card, how often do you use the internet, the World Wide Web or e-mail—whether at home or at work—for your personal use?"

No access at home or work   00 (41%)
Never use                   01 (17%)
Less than once a month      02 (3%)
Once a month                03 (2%)
Several times a month       04 (4%)
--------------------------------------------
Once a week                 05 (5%)   }
Several times a week        06 (11%)  }  Regular use
Every day                   07 (16%)  }
(Don't know/No answer)
Greece) to $30,000 (in Norway), although all of the post-Communist states except Slovenia fall below this level.
Internet Use

In some of these societies, the knowledge society has been widely diffused, with two-thirds or more of the public using the Internet at least occasionally. By contrast, in other societies, few members of the public access the Internet. The survey monitored Internet use with the question shown in Example 1. This is a limited measure that does not gauge what people do online or where they commonly seek information, nor does it distinguish among access to the Internet, e-mail, or the World Wide Web. These are only some forms of access to the knowledge society, and other electronic technologies may be equally important, such as text messaging; mobile or cell phones; or cable, satellite, and interactive television. In addition, people may also use the Internet at work, and the measure does not attempt to monitor the length of experience of using the Internet. Nevertheless, this item provides
a standard measure of exposure to the Internet widely used in other studies, which gives a suitable benchmark for cross-national comparison. The main cross-national contrasts on this item are illustrated in Figure 1. About a third of Europeans in the ESS-19 sample report that they regularly use the Internet (defined here as personal use of the Internet at least weekly); the remaining two-thirds say they never use the Internet or use it less regularly than weekly. As many other Eurobarometer surveys have regularly reported, sharp differences are evident in Internet access within Europe (Norris, 2001). In Scandinavia (notably Denmark, Sweden, and Norway), regular Internet use is widespread among the majority of the population. Many countries fall into the middle of the distribution, where anywhere from one-fifth to one-half use the Internet at least weekly. By contrast, in some countries in Mediterranean and post-Communist Europe, less than a fifth of the public made regular use of this technology, notably in Greece, Hungary, Poland, and Spain, which were all at the bottom of the distribution.
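The recoding from the eight-category item in Example 1 to the dichotomous "regular user" flag used throughout the chapter can be sketched as follows; the category shares are taken from Example 1 (they need not sum to 100 because of don't-know/no-answer responses).

```python
# Sketch: collapsing the 8-category internet-use item (codes 00-07, as in
# Example 1) into the 'regular user' flag: personal use at least weekly,
# i.e. codes 05 (once a week), 06 (several times a week), 07 (every day).

def is_regular_user(code: int) -> bool:
    if not 0 <= code <= 7:
        raise ValueError("code must be 0-7")
    return code >= 5

# Category shares from Example 1 (percent of the pooled sample per code)
shares = {0: 41, 1: 17, 2: 3, 3: 2, 4: 4, 5: 5, 6: 11, 7: 16}
regular_share = sum(pct for code, pct in shares.items()
                    if is_regular_user(code))
# 5 + 11 + 16 = 32, roughly the 'third of Europeans' reported in the text
```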
Figure 1. The proportion of regular Internet users in Europe, ESS-2002. Regular Internet use by country: Denmark 0.55, Sweden 0.52, Norway 0.51, Switzerland 0.48, Netherlands 0.44, Luxembourg 0.42, Finland 0.42, United Kingdom 0.36, Ireland 0.31, Israel 0.31, Slovenia 0.25, Italy 0.23, Czech Republic 0.19, Portugal 0.18, Spain 0.16, Poland 0.16, Hungary 0.14, Greece 0.10.
Part III: Analysis & Results

How do those Europeans who are and are not regular users of the Internet compare across the different types of political activism? We can first compare the overall patterns using the pooled ESS-19 sample without any prior social or attitudinal controls, and then go on to consider the results of the multivariate analysis and differences among European nations. One important qualification to note is that in the analysis, we cannot establish the direction of causality in these models; with a single cross-sectional survey, it is impossible to disentangle satisfactorily whether use of the Internet facilitates and encourages political activism, or whether prior habits of political engagement lead towards continuing activism via electronic channels. To establish causality in any media effects, we really need either to analyze repeated panel surveys among the same respondents over successive years or to examine experimental research designs, neither of which is available on a cross-national basis (Jennings & Zeitner, 2003; Norris, 2000; Norris & Sanders, 2003). In the model, based on standard theories of political socialization, we assume that cultural values and norms of behavior are acquired from formative experiences with family, school, and community in early youth. We theorize that these processes are likely to shape long-term and enduring political orientations and habitual norms of behavior, such as patterns of partisan identification, ideological values, and forms of activism. We assume that use of the Internet is a relatively recent and, therefore, short-term influence that will facilitate and reinforce the cognitive and attitudinal factors associated with habitual political activism (e.g., expanding people's awareness of election issues or party policies) but will not necessarily alter or transform broader patterns of civic engagement.
Given these assumptions, Table 1 shows a clear and consistent pattern: regular Internet users are significantly more politically active across all 21 indicators. The overall score on the mean Political Activism Index, which summarizes this pattern, was 4.43 for regular Internet users compared with 2.56 for others, a substantial and significant
difference. Yet the size of the activism gap does vary among different types of engagement; it is relatively modest in reported voting turnout as well as across most of the campaign-oriented forms of activism, such as party membership, party volunteer work, and party donations. By contrast, the gap is substantial (in double digits) among many forms of cause-oriented activism, such as buying or boycotting products for political reasons and signing petitions, as well as in membership of certain types of civic organizations, notably sports clubs, trade unions, consumer groups, and hobby groups. To see whether this activism gap was an artifact of the way that regular Internet use was measured, Figure 2 illustrates the mean distribution of the political activism index across all categories of Internet use, ranging from no access at home or work to personal use of the Internet every day. The figure confirms that the overall scale of political activism rises sharply and steadily with each category of Internet use, more than doubling across the whole scale. Moreover, if the patterns are analyzed by country, similar results are evident in every society. As Figure 3 shows, political activism rises steadily with increasing Internet use in nearly all nations; the only exceptions are Portugal and Poland, where levels of technological diffusion remain very limited.

Figure 2. Internet use and the political activism index (Source: European Social Survey, 2002)
Figure 3. Internet use and the Political Activism Index by nation (Source: European Social Survey, 2002). Separate panels plot mean political activism (y-axis) against Internet use (x-axis) for each nation, including the Czech Republic, Denmark, Spain, Finland, the United Kingdom, Greece, Hungary, Ireland, Israel, Italy, Luxembourg, the Netherlands, Norway, Poland, Portugal, Sweden, and Slovenia.
Despite this clear and important pattern, the theoretical framework in this study suggests that, given the characteristics of the online population, we should find systematic variations in engagement by the different types of political activism, and, indeed, this is confirmed by the data. Figure 4 illustrates the strong and significant linear relationship between use of the Internet and civic activism in a wide range of voluntary organizations and local associations (R = .318***). The cause-oriented activism scale is also significantly associated with the frequency of using the Internet (R = .318***). Nevertheless, voting activism is relatively flat across levels of Internet use, and the correlation proves to be significant but negative in direction (R = -.024**), while the campaign-oriented activism scale shows only a modest positive correlation (R = .136). The results suggest that any association between access to the Internet and political activism is heavily contingent upon the particular forms of participation under analysis. Nevertheless, many factors may be influencing this process, including the prior social characteristics and cultural attitudes of Internet users. To examine these issues, we need multivariate regression models. The main explanations of political activism can be categorized into the following four groups:
1. Structural explanations, emphasizing the resources that facilitate civic engagement, notably time, education, and income, which are closely associated with demographic groups and social status;
2. Cultural accounts, focusing upon the motivational attitudes that draw people into public affairs, such as a sense of political efficacy, institutional confidence, and citizenship duty;
3. Agency explanations, prioritizing the role of mobilizing organizations such as churches and unions, as well as the role of the news media and informal social networks, which bring people into public affairs. The use of the Internet can best be conceptualized in this model as a mobilizing agency;
4. Historical accounts, suggesting that there could be a regional effect generated by traditions in each area, notably the length of time that representative democracy has operated in Scandinavia, Western Europe, and the "third-wave" democracies in the Mediterranean region and post-Communist societies that only experienced free and fair elections from the early 1990s onward.
Given this framework, Table 2 first includes the standard demographic and socioeconomic variables that many studies have commonly found to influence participation, including belonging to an ethnic minority, educational qualifications, household income, social class, work status, total hours normally worked per week, marital and family status, and religiosity. These were entered into the model in this order before adding the cultural attitudes of frequency of political discussion, the importance of politics, social and political trust, internal and external political efficacy, a sense of civic duty, and interest in politics. The mobilizing agency variables were then entered, including social networks and attention to politics on television, radio, and newspapers. The use of the Internet was entered at the end of this category to see whether there was any residual impact associated with this technology net of all other factors. Lastly, the major European regions were added, coded as
Figure 4. Internet use and types of political activism (Source: European Social Survey, 2002, pooled sample). The figure plots mean civic, cause-oriented, voting, and campaign-oriented activism against the categories of Internet use, from "No access at home or work" through "Every day."
dummy variables, where the Nordic region was the default category for comparison. The results in the pooled model confirm the significance of many of these factors upon the political activism scale. The only exceptions proved to be belonging to an ethnic minority, the number of hours in the paid workforce, and the salience of politics, all of which were not significantly related to participation, contrary to expectations. But after adding the complete battery of controls, use of the Internet continued to be significantly related to political activism, suggesting that this relationship is not simply explained away as a result of the prior social or attitudinal characteristics of those who are most prone to go online. The most
Table 2. The impact of Internet use on political activism, with controls (Source: European Social Survey, 2002 [ESS-19]. Pooled sample N = 31,741 in 18 ESS nations, excluding Germany, where Internet use was not monitored)

| | B | Std. Error | Beta | Sig. |
|---|---|---|---|---|
| (Constant) | -1.39 | 0.28 | | 0.00 |
| SOCIAL STRUCTURE | | | | |
| Gender | -0.17 | 0.05 | -0.03 | 0.00 |
| Age (years) | 0.02 | 0.00 | 0.13 | 0.00 |
| Belong to ethnic minority | 0.03 | 0.13 | 0.00 | 0.81 |
| Educational qualifications | 0.22 | 0.02 | 0.11 | 0.00 |
| Income | 0.07 | 0.01 | 0.06 | 0.00 |
| Work status | 0.38 | 0.06 | 0.06 | 0.00 |
| Total hours per week in main job + O/T | 0.00 | 0.00 | 0.00 | 0.88 |
| Married | 0.19 | 0.05 | 0.03 | 0.00 |
| Have children at home | 0.13 | 0.05 | 0.02 | 0.01 |
| Importance of religion | 0.12 | 0.05 | 0.02 | 0.01 |
| CULTURAL ATTITUDES | | | | |
| Discuss politics/current affairs, how often | -0.14 | 0.01 | -0.09 | 0.00 |
| Important in life: politics | 0.01 | 0.01 | 0.01 | 0.54 |
| Social trust | 0.02 | 0.00 | 0.04 | 0.00 |
| Trust in national political institutions | -0.02 | 0.01 | -0.03 | 0.01 |
| Trust in international institutions | -0.03 | 0.01 | -0.04 | 0.00 |
| Internal political efficacy | 0.27 | 0.01 | 0.18 | 0.00 |
| External political efficacy | 0.12 | 0.01 | 0.08 | 0.00 |
| Civic duty scale | 0.04 | 0.00 | 0.11 | 0.00 |
| How interested in politics | -0.34 | 0.04 | -0.09 | 0.00 |
| MOBILIZING AGENCIES | | | | |
| How often socially meet with friends, relatives or colleagues | 0.17 | 0.02 | 0.08 | 0.00 |
| TV watching, news/politics/current affairs on average weekday | -0.12 | 0.02 | -0.05 | 0.00 |
| Radio listening, news/politics/current affairs on average weekday | 0.03 | 0.01 | 0.02 | 0.06 |
| Newspaper reading, politics/current affairs on average weekday | 0.09 | 0.03 | 0.03 | 0.00 |
| Personal use of internet/e-mail/www | 0.10 | 0.01 | 0.10 | 0.00 |
| REGION | | | | |
| Northern Europe | -0.30 | 0.06 | -0.05 | 0.00 |
| Mediterranean Europe | -1.35 | 0.08 | -0.16 | 0.00 |
| Post-Communist Europe | -1.73 | 0.09 | -0.23 | 0.00 |
| Adjusted R² | .373 | | | |

Note: The models represent the results of ordinary least squares regression analysis where the total political activism index is the dependent variable. The index counts participation in each form of activity as one and sums across the 21-point scale. The figures represent the unstandardized regression coefficient (B), the standard error (Std. Error), the standardized beta coefficient (Beta), and the significance (Sig.). The default dummy regional variable was the Nordic region. All variables were checked to be free of problems of multicollinearity by tolerance statistics. See Table 1 for the items in each scale.
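To make the dummy-variable specification concrete, the sketch below fits an OLS model on synthetic data with the Nordic region as the omitted reference category. It is an illustration of the specification only, not a re-analysis of the ESS data: the variable names and generating coefficients are invented, and the full battery of controls is reduced to a single Internet-use regressor.

```python
# Sketch: OLS with region dummies (Nordic as the omitted reference
# category), mirroring the structure of the Table 2 specification.
# Synthetic data; coefficients below are illustrative, not ESS estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
regions = rng.choice(["Nordic", "Northern", "Mediterranean",
                      "PostCommunist"], size=n)
internet = rng.integers(0, 8, size=n).astype(float)   # 0-7 use scale

# Generating model: baseline + internet effect + regional shift + noise
effects = {"Nordic": 0.0, "Northern": -0.3,
           "Mediterranean": -1.35, "PostCommunist": -1.73}
y = (2.5 + 0.10 * internet
     + np.array([effects[r] for r in regions])
     + rng.normal(0.0, 1.0, size=n))

# Design matrix: intercept, internet use, three dummies (Nordic omitted)
X = np.column_stack([
    np.ones(n),
    internet,
    (regions == "Northern").astype(float),
    (regions == "Mediterranean").astype(float),
    (regions == "PostCommunist").astype(float),
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta
r2 = 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - X.shape[1])
```

Because the Nordic dummy is omitted, each regional coefficient is read as a shift relative to the Nordic baseline, which is how the negative Northern, Mediterranean, and Post-Communist coefficients in Table 2 should be interpreted.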
Table 3. The impact of Internet use on types of political activism, with controls (Source: European Social Survey, 2002. Pooled sample N = 31,741 in 18 ESS nations, excluding Germany, where Internet use was not monitored)

| Type of activism | B | Std. Error | Beta | Sig. | Adjusted R² |
|---|---|---|---|---|---|
| Voting | .005 | .001 | .034 | .002 | .145 |
| Cause-oriented | .036 | .004 | .095 | .000 | .213 |
| Campaign-oriented | .006 | .004 | .018 | .102 | .161 |
| Civic-oriented | .042 | .006 | .073 | .000 | .308 |

Note: The models represent the results of ordinary least squares regression analysis with each form of activism as the dependent variable. For the full range of prior controls in the models (not reported here), see Table 2. The figures represent the unstandardized regression coefficient (B), the standard error (Std. Error), the standardized beta coefficient (Beta), and the significance (Sig.).
important factors predicting activism (measured by the strength of the standardized regression coefficients) concern internal political efficacy (a feeling that the person could influence the political process), age, education, region, and civic duty. After these factors, use of the Internet proved the next strongest predictor of activism, more important than other indicators such as social and political trust or use of any of the news media. The overall model explained more than one-third of the variance in activism (R² = .37). Similar regression models were then run with identical controls to predict the four types of political activism under comparison, using the pooled sample. Without showing all the coefficients, the summary of the results in Table 3 shows that use of the Internet was significantly associated with voting and with cause- and civic-oriented forms of activism, but not with campaign-oriented forms of activism. This confirms that, rather than making blanket claims about the effect of the knowledge society on activism, we need to distinguish among types of participation, which generate different effects. The reason for these differences most probably lies in the residual effect of the typical social background and the political values of Internet
The Impact of the Internet on Political Activism
users, notably the propensity of well-educated younger generations to predominate online, as noted in many studies of the well-known “digital divide” (Norris, 2001).
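For readers less familiar with the quantities reported in Table 3, the relationship between the unstandardized coefficient (B) and the standardized coefficient (Beta) can be sketched with synthetic data. This is purely an illustrative aside using NumPy and made-up variable names, not the ESS analysis itself:

```python
import numpy as np

# Synthetic stand-ins for a predictor (Internet use) and an outcome (activism).
rng = np.random.default_rng(0)
n = 1000
internet_use = rng.normal(size=n)
activism = 0.4 * internet_use + rng.normal(size=n)

# Unstandardized slope via least squares: B = cov(x, y) / var(x).
B = np.cov(internet_use, activism)[0, 1] / np.var(internet_use, ddof=1)

# Standardized coefficient: Beta = B * (sd_x / sd_y),
# i.e., the slope the regression would yield after z-scoring both variables.
Beta = B * internet_use.std(ddof=1) / activism.std(ddof=1)

# Cross-check: the slope on z-scored data equals Beta
# (for a single predictor it is simply the correlation coefficient).
z = lambda v: (v - v.mean()) / v.std(ddof=1)
Beta_check = np.cov(z(internet_use), z(activism))[0, 1]
```

Because B depends on the measurement units of each dependent variable while Beta does not, the table's Beta column is the one used above to compare the strength of the Internet effect across the four types of activism.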
Part IV: Conclusion & Discussion

The theory developed in this study considers the more pessimistic claims that the development of the Internet will serve to reinforce the voices of the powerful, the more skeptical arguments that it will merely reflect "politics as usual," and the more optimistic view that the knowledge society will transform governance as we know it and strengthen levels of mass political participation. The study hypothesizes that contemporary democracies are a market where the impact of the Internet depends in part upon the "supply" of political information and communications, primarily from political agencies, and also upon the "demand" for such information and communication from the mass public. In turn, the public's demand derives from the social and cultural profile of the online population, reflecting long-standing patterns of civic engagement. As a result, use of the Internet is significantly related to overall patterns of political activism, even with multiple prior controls, but there are several distinct dimensions or channels of activism. The survey evidence analyzed in this study confirms that the rise of the knowledge society in Europe has indeed had the greatest positive consequences for politics by strengthening cause-oriented and civic-oriented activism, rather than by encouraging mass participation in campaigns and elections. What are the broader implications of this pattern for democracy and for the future of electronic governance? We can speculate that the primary beneficiaries of this process will probably be political actors lacking the traditional organizational resources that are useful in politics, such as those without a large-scale, fee-paying mass membership base, substantial financial assets, and paid full-time bureaucratic officials. This
type of organization is exemplified by new social movements, transnational advocacy networks, alternative social movements, protest organizations, community activists and development workers, single-issue causes from all shades of the political spectrum, as well as minor parties. The knowledge society is not expected to drive these insurgent movements, but rather to facilitate their organization, mobilization, and expression (Keck & Sikkink, 1998). These organizations have the greatest incentives, and the fewest constraints, to use the knowledge society. If this perspective is correct, then the result of the rise of the Internet may be greater pressure on governments to respond to the demands of single-issue groups and more amorphous social networks. By contrast, established political parties and traditional interest groups can be expected to adapt far more slowly to the knowledge society, because they are capable of drawing upon alternative organizational and financial resources, including legal authority, full-time paid officials, press officers, lobbyists, and grassroots fee-paying mass memberships. Yet these are the umbrella organizations, particularly parties, which are capable of aggregating diverse issues into broader programmatic platforms, encouraging compromise, deliberation, and bargaining among members, and channeling demands on a more predictable basis into government. It remains to be seen how far these developments will alter the channels of participation in representative democracy in Europe and elsewhere, but the consequences will probably present government with both greater opportunities to connect with citizens and greater challenges in the inevitable pressures that arise from satisfying the multiple fragmented zero-sum constituencies represented by single-issue politics.
References

Allen, B. A., Juillet, L., Paquet, G., & Roy, J. (2001). E-governance & government on-line in Canada: Partnerships, people & prospects. Government Information Quarterly, 18(2), 93-104.
Barber, B. R. (1998). Three scenarios for the future of technology and strong democracy. Political Science Quarterly, 113(4), 573-589.

Bimber, B. (1998). The Internet and political transformation: Populism, community and accelerated pluralism. Polity, 31(1), 133-160.

Bimber, B. (2001). Information and political engagement in America: The search for effects of information technology at the individual level. Political Research Quarterly, 54(1), 53-67.

Boas, T. C. (2000). The dictator's dilemma? The Internet and U.S. policy toward Cuba. The Washington Quarterly, 23(3), 57-67.

Bonfadelli, H. (2002). The Internet and knowledge gaps: A theoretical and empirical investigation. European Journal of Communication, 17(1), 65-84.

Budge, I. (1996). The New Challenge of Direct Democracy. Oxford: Polity Press.

Cassel, C. A. (1999). Voluntary associations, churches, and social participation theories of turnout. Social Science Quarterly, 80(3), 504-517.

Chadwick, A. & May, C. (2003). Interactions between states and citizens in the age of the Internet: "E-government" in the United States, Britain and the European Union. Governance, 16(2), 271-300.

Davis, R. (1999). The Web of Politics. Oxford: Oxford University Press.

Davis, R. & Owen, D. (1998). New Media and American Politics. New York: Oxford University Press.

Drake, W. J., Kalathil, S., & Boas, T. C. (2000, October). Dictatorships in the digital age: Some considerations on the Internet in China and Cuba. iMP: The Magazine on Information Impacts. Retrieved from: www.cisp.org/imp

Foot, K. A. & Schneider, S. M. (2002). Online action in campaign 2000: An exploratory analysis of the US political web sphere. Journal of Broadcasting & Electronic Media, 46(2), 222-244.

Fountain, J. E. (2001). Building the Virtual State: Information Technology and Institutional Change. Washington, DC: Brookings Institution Press.

Franda, M. (2002). Launching Into Cyberspace: Internet Development and Politics in Five World Regions. Boulder, CO: Lynne Rienner.

Gibson, R., Nixon, P., & Ward, S. (Eds.). (2003). Political Parties and the Internet: Net Gain? London: Routledge.

Gilder, G. (2000). Telecom: How Infinite Bandwidth Will Revolutionize Our World. New York: Free Press.

Golding, P. (1996). World wide wedge: Division and contradiction in the global information infrastructure. Monthly Review, 48(3), 70-85.

Hague, B. N. & Loader, B. D. (Eds.). (1999). Digital Democracy: Discourse and Decision Making in the Information Age. New York: Routledge.

Haque, M. S. (2002). E-governance in India: Its impacts on relations among citizens, politicians and public servants. International Review of Administrative Sciences, 68(2), 231-250.

Hayward, T. (1995). Info-Rich, Info-Poor: Access and Exchange in the Global Information Society. K. G. Saur.

Hill, K. A. & Hughes, J. E. (1998). Cyberpolitics: Citizen Activism in the Age of the Internet. Lanham, MD: Rowman & Littlefield.

Hill, K. & Hughes, J. E. (1999). Is the Internet an instrument of global democratization? Democratization, 3, 29-43.

Horrigan, J., Rainie, L., & Fox, S. (2001). Online communities: Networks that nurture long-distance relationships and local ties. Pew Internet & American Life Project. Retrieved from: www.pewinternet.org

ITU. (n.d.). Retrieved from: http://www.itu.int/wsis/

Johnson, T. J. & Kaye, B. K. (2003). Around the World Wide Web in 80 ways: How motives
for going online are linked to Internet activities among politically interested Internet users. Social Science Computer Review, 21(3), 304-325.

Jennings, M. K. & Zeitner, V. (2003). Internet use and civic engagement: A longitudinal analysis. Public Opinion Quarterly, 67(3), 311-334.

Kalathil, S. & Boas, T. C. (2003). Open Networks, Closed Regimes: The Impact of the Internet on Authoritarian Rule. Washington, DC: Carnegie Endowment for International Peace.

Keck, M. E. & Sikkink, K. (1998). Activists Beyond Borders: Advocacy Networks in International Politics. Ithaca, NY: Cornell University Press.

Mair, P. & van Biezen, I. (2001). Party membership in twenty European democracies, 1980-2000. Party Politics, 7(1), 7-22.

Margolis, M. & Resnick, D. (2000). Politics as Usual: The Cyberspace "Revolution." Thousand Oaks, CA: Sage.

McChesney, R. W. (1999). Rich Media, Poor Democracy. Urbana, IL: University of Illinois Press.

Media Metrix (2000, October). Campaign 2000: Party politics on the World Wide Web. Retrieved from: www.mediametrix.com

Murdock, G. & Golding, P. (1989). Information poverty and political inequality: Citizenship in the age of privatised communications. Journal of Communication, 39, 180-195.

Norris, P. (2000). A Virtuous Circle. New York: Cambridge University Press.

Norris, P. (2001). Digital Divide. New York: Cambridge University Press.

Norris, P. (2002). Democratic Phoenix: Reinventing Political Activism. New York: Cambridge University Press.

Norris, P. (2003). The bridging and bonding role of online communities. In P. N. Howard & S. Jones (Eds.), Society Online: The Internet in Context. Thousand Oaks, CA: Sage.

Norris, P. & Sanders, D. (2003). Medium or message? Political Communication.

Pharr, S. & Putnam, R. (Eds.). (2000). Disaffected Democracies: What's Troubling the Trilateral Countries? Princeton, NJ: Princeton University Press.

Putnam, R. D. (1993). Making Democracy Work: Civic Traditions in Modern Italy. Princeton, NJ: Princeton University Press.

Putnam, R. D. (1996). The strange disappearance of civic America. The American Prospect, 24.

Putnam, R. D. (2000). Bowling Alone: The Collapse and Revival of American Community. New York: Simon and Schuster.

Putnam, R. D. (Ed.). (2002). Democracies in Flux. Oxford: Oxford University Press.

Radcliff, B. & Davis, P. (2000). Labor organization and electoral participation in industrial democracies. American Journal of Political Science, 44(1), 132-141.

Rash, W., Jr. (1997). Politics on the Net: Wiring the Political Process. New York: W. H. Freeman.

Rheingold, H. (1993). The Virtual Community: Homesteading on the Electronic Frontier. Reading, MA: Addison Wesley.

Scarrow, S. (2001). Parties without members? In R. J. Dalton & M. Wattenberg (Eds.), Parties Without Partisans. New York: Oxford University Press.

Schwartz, E. (1996). Netactivism: How Citizens Use the Internet. Sebastopol, CA: Songline Studios.

Selnow, G. W. (1998). Electronic Whistle-Stops: The Impact of the Internet on American Politics. Westport, CT: Praeger.

Shah, D. V., Kwak, N., & Holbert, R. L. (2001). "Connecting" and "disconnecting" with civic life: Patterns of Internet use and the production of social capital. Political Communication, 18(2), 141-162.
Stowers, G. N. L. (1999). Becoming cyberactive: State and local governments on the World Wide Web. Government Information Quarterly, 16(2), 111-127.

Thomas, J. C. & Streib, G. (2003). The new face of government: Citizen-initiated contacts in the era of e-government. Journal of Public Administration Research and Theory, 13(1), 83-101.

Tolbert, C. J. & McNeal, R. S. (2003). Unraveling the effects of the Internet on political participation? Political Research Quarterly, 56(2), 175-185.

Toulouse, C. & Luke, T. W. (Eds.). (1998). The Politics of Cyberspace. London: Routledge.

United Nations/American Society for Public Administration (2002). Benchmarking E-Government: A Global Perspective. New York: United Nations/DPEPA.

Verba, S., Nie, N., & Kim, J. (1978). Participation and Political Equality: A Seven-Nation Comparison. New York: Cambridge University Press.

Weber, L. M., Loumakis, A., & Bergman, J. (2003). Who participates and why? An analysis of citizens on the Internet and the mass public. Social Science Computer Review, 21(1), 26-42.

Wilhelm, A. (n.d.). Democracy in the Digital Age: Challenges to Political Life in Cyberspace. New York: Routledge.

Endnotes

1. Since the dimensions are theoretically defined and constructed, based on understanding the role of different forms of participation in representative democracy, the study did not use factor analysis to generate the classification or measurement.

2. For more details of the European Social Survey, including the questionnaire and methodology, see http://naticent02.uuhost.uk.uu.net/index.htm. Data for an initial nineteen countries, along with comprehensive documentation, is accessible at http://ess.nsd.uib.no. The survey is funded via the European Commission's 5th Framework Program, with supplementary funds from the European Science Foundation, which also sponsored the development of the study over a number of years. I am most grateful to the European Commission and the ESF for their support for this project and to the work of the ESS Central Coordinating Team, led by Roger Jowell, for making this survey data available.

3. "Personal use" is defined by the ESS-2002 as private or recreational use that does not have to do with a person's work or occupation.
This work was previously published in International Journal of Electronic Government Research, Vol. 1, No. 1, edited by M. Khosrow-Pour, pp. 19-39, copyright 2005 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter LXXVI
Adoption and Implementation of IT in Developing Nations: Experiences from Two Public Sector Enterprises in India

Monideepa Tarafdar, University of Toledo, USA
Sanjiv D. Vaidya, Indian Institute of Management Calcutta, India
Abstract

This case describes challenges in the adoption and implementation of IT in two public sector enterprises in the postal and distribution businesses, respectively, in India. In spite of similarities in the scale of operations and the general cultural contexts, the IT adoption processes and outcomes of the two organizations were significantly different. While one failed to implement IT in its crucial processes, the other responded effectively to changes in external conditions by developing and using IT applications for critical functions. The case illustrates how differences in organizational factors such as top management commitment, unions, middle management participation, capabilities of IS professionals and specific aspects of organization culture resulted in such differences. The case is interesting and significant because it is representative of the experiences of many government-aided organizations in India, which have undertaken IT modernization as a response to external changes and government mandates. The findings can also be generalized across similar organizations in other developing countries.
Introduction

The adoption of IT in large public sector organizations poses some interesting challenges and issues. These are related to specific characteristics of these organizations with regard to their entrenched processes, culture, the role of bureaucracy, performance measurement criteria and decision-making processes (see, for example, Caudle et al., 1991). This case describes challenges in the adoption and implementation of IT in two public sector enterprises in India. The enterprises were in the postal and distribution businesses respectively.

Organizational Background

Public sector enterprises (PSEs), in the context of the Indian economy, are companies that are largely administered and supported by the
government. They exist in different areas such as transportation, goods distribution, postal services, telecommunications, and other manufacturing and service sectors of the economy. There are different types of PSEs (Mathur et al., 1979). Some of them are statutory corporations established through legislative resolutions of the Parliament. The Parliament is the legislative branch of the Government of India, similar to the House and the Senate in the United States. Many other PSEs are departmental agencies, functioning directly under a particular department of the government. Others are established as companies with limited liability under the Companies Act of India. A few PSEs, like those in the Railways sector, function exclusively under one ministry of the government. The government plays an important role at the strategic level, in activities such as policy making and financial outlay. At the operational level, PSEs run directly by government departments are staffed through a cadre of bureaucrats and administrators. In PSEs that are established through legislative acts, professional managers and technical specialists manage the operations. These bureaucrats, administrators, professional managers and technical specialists are responsible for achieving annual objectives in terms of activities accomplished and budgetary goals. Policy implementation with respect to modernization and IT adoption is the responsibility of organizational employees, who have autonomy over operational details of the implementation process, within a broad framework specified by the government.

National Couriers Limited (NCL) was in the business of providing postal, courier and information transfer services to different parts of India. It functioned directly under a government department. The company also provided limited banking services such as money transfer, insurance and certificate of deposit services. It had about 90,000 employees working in offices in various states in the country.
Eighty-five percent of the personnel of the company were unionized and were either unskilled or clerical level workers. The remaining were professionally trained administrators. National Traders Limited (NTL) was a distributor of agricultural products, particularly food
grains, to different parts of the country. It was created by a Parliament resolution. It provided services such as procurement of these products from producers, their storage and management in warehouses, and distribution to non-producing consumers through retail outlets. During the 1970s and 1980s, the organization had played a key role in encouraging farmers to increase their production, by providing them with an assured market and stable purchase prices. Subsequently, the major function of the company had been to collect part of the surplus agricultural produce, and suitably store and distribute it, so that it could be used during lean production seasons and in places where emergencies and natural calamities happened. The organization procured, distributed and transported about 22 million metric tons of produce, annually. Most of the purchasing centers were located in the northern part of the country. Consumers were located all across India and also in the islands off the southern part of the country. NTL had about 63,000 employees, 95% of whom were unionized.
Services & Processes: Brief Description

Both NCL and NTL were service organizations. The major processes of NCL were collection, sorting and delivery of articles. Articles were collected from more than half a million collection centers, sorted in 550 sorting offices and delivered through more than 100,000 delivery offices. Other processes included activities related to banking, money transfer and information transfer functions. Some financial details about the operations of the company are provided in Table 1. All these functions involved managing and processing significant amounts of information. In this context, the head of the operations of the Eastern Region observed: "The sheer volume of information and articles that is required to be handled is tremendous."
Adoption and Implementation of IT in Developing Nations
Table 1. Background information for National Couriers Limited

  Year       Total Mail           Money Transfer            Money Transfer
             (million articles)   (million transactions)    (Rs. mn)
  1992-93    13400                105                       29124
  1993-94    13051                99                        31825
  1994-95    13607                102                       33555
  1995-96    13957                106                       37872
  1996-97    15096                111                       41018
  1997-98    15750                111                       44654
  1998-99    16790                120                       47450
  1999-2000  17430                122                       48790
Table 2. Procurement of food grains from different parts of the country (in 100,000 tonnes)

  Year        Wheat (April-March)   Rice (October-September)
  1999-2000   120                   185
  2000-2001   150                   220
  2001-2002   200                   215
  2002-2003   185                   160
  2003-2004*  150                   170

  * As on 27.02.2004
Table 3. Movement of food grains in different parts of the country

  Year                       Movement of Food Grains (Million Tonnes)
  1996-97                    25
  1997-98                    20
  1998-99                    22
  1999-2000                  20
  2000-2001                  15
  2001-2002                  19.5
  2002-2003                  26.8
  2003-2004 (up to Nov'03)   15.3

The major activities of NTL related to the distribution of agricultural produce. Relevant figures in this context are provided in Tables 2 and 3. There were four critical activities for NTL, as described next.
1. Purchase of agricultural produce from producers: This was done through a network of purchase centers all over the country.
2. Storage of the purchased produce in appropriate places and under appropriate environmental conditions: NTL had a network of storage depots for this purpose.
3. Interfacing and maintaining liaison with administrative authorities in different states: This was required in order to plan for state-wise requirements of produce.
4. Distribution planning and transportation of produce: This function involved the transfer of produce from purchase centers to storage warehouses and then to the numerous distribution centers. It required access to good transport infrastructure, and liaising with professional transport agencies.
The overall processes of both organizations were therefore similar, in that they involved the transfer of physical goods and the accompanying information to and from different parts of the country. They also involved interfacing with government authorities at the state and national levels.
Organization Structure & Characteristics

The bureaucracy in India is typically the administrative arm of the central government and is largely responsible for turning legislation into policies and policies into practice. Bureaucrats therefore have a wide range of functions in many sectors of the economy, including the government departments. Their responsibilities can be broadly visualized in terms of two types of functions. First, they are responsible for assisting in policy formulation in the different ministries and departments. They are also charged with the direct running of day-to-day government functions such as general administration, law enforcement, resource disbursement and tax collection. Second, they are required to head government-controlled PSEs in different industries such as utilities, postal services, nationalized banks, railways and public distribution systems for food grains. Both NCL and NTL, being public sector enterprises, were headed by a senior member of the administrative arm of the bureaucracy. They also had bureaucrats in different top management functions.

The operations of NCL were divided into four regions. Each region was headed by a Regional Office, with the Regional Manager as the executive head of the region. The Regional Manager was a member of senior management who supervised a team of middle management. Each region was further divided into districts, with a District Office supervising the operations of each district. The head of each District Office was a member of middle management. Members of junior management worked in the Regional and District Offices. There were about 200 districts, and each district supervised the operations of a given number of collection centers, sorting offices and delivery centers, which were staffed by unionized employees and clerks. At the apex, there was one head office, from where the top management and company policymakers operated.
In a similar manner, NTL also carried out its operations through a network of administrative offices across the country. There was one central
administrative office from where the top management functioned. The operations were divided into five zones and 17 regions. Each zone had a Zonal Administrative Office and each region had a Regional Administrative Office, supervised by a Zonal Manager and Regional Manager respectively. The regions were divided into a number of districts and each district was administered through a District Office, which managed the functions of a number of purchase centers and storage warehouses. There were 123 districts, 12,000 purchase centers, and 1,700 depots and storage warehouses. Zonal Managers belonged to the senior management cadre. Regional Managers and District Managers were middle managers. Junior managers also worked in all these offices. All employees in the management cadre were professional administrators. The company also had a number of clerical employees to carry out low-skilled functions in the different offices, purchase centers and warehouses. The scale and scope of the operations of the company were very large. One of the senior managers in the company observed: "The scale of operations is among the largest for any organization in the country. The amount of information required to be processed is tremendous."

Although the particulars of the organizational hierarchy such as specific office names and designations of managers were different, the broad organization structures of the two organizations were similar, as shown in Figure 1. There were five levels of hierarchy and the decision-making processes were largely centralized. The top management in the apex (central) office and senior management in the zonal and regional offices were responsible for overall policy setting and strategic planning. All new initiatives and programs were designed at the higher levels, and were subsequently communicated through orders and directives to the middle and lower levels.
Implementation strategies were planned by the senior management in consultation with middle management and implemented by middle management. This kind of planning
and implementation structure is often a feature of public sector enterprises. This is because public sector enterprises usually operate on a large scale and scope, and hence it is more efficient to decide on policy at the top and leave the implementation to the middle managers in the various regional offices. The role of middle managers in policy implementation is therefore crucial (Caudle et al., 1991). In India, the public sector's accountability to the people further reinforces the justification for rigid bureaucratic procedures. Such procedures lead to this rather strict division of labor between the senior management and the middle management.

Traditionally, both organizations were similar in that they were large and centralized, and had
historically functioned in stable economic and business environments. They had been largely supported by the government and had not seen any major changes in their business strategies or processes for the past 20 years. Between 1980 and 1987, the top management of both companies was indifferent towards the use of IT and there was no commitment on the part of the organizational leaders to deploy IT in any of the functions. Further, most employees did not have any knowledge or awareness about IT, and tended to associate technology with loss of jobs. This observation has also been recorded in other organizations in India during the 1970s and 1980s. During this time, the Indian economy was a closed one and most organizations did not have any exposure to the use of IT (Nidumolu et al., 1993; Tarafdar & Vaidya, 2002b; Wolcott & Goodman, 2003).
Figure 1. Decision hierarchy and organization structure at National Couriers Limited and National Traders Limited

Level One: Top Management at the Apex (both NCL and NTL)
Responsible for strategic planning and policy making, in consultation with senior management (Level Two)

Level Two: Senior Management (NCL: Regional Managers; NTL: Zonal Managers)
Responsible for aiding top management (Level One) in strategic planning and policy making
Responsible for deciding on implementation strategies, in consultation with middle management (Level Three)

Level Three: Middle Management (NCL: District Managers; NTL: Regional Managers and District Managers)
Responsible for aiding senior management (Level Two) in planning implementation strategies
Responsible for executing implementation strategies and directing junior management (Level Four) and unionized staff

Level Four: Junior Management (NCL: employees in the Regional and District Offices; NTL: employees in the Zonal, Regional and District Offices)
Responsible for carrying out instructions related to implementation as directed by middle management (Level Three)

Level Five: Unionized Staff (NCL: employees in the Regional and District Offices, and in the collection, sorting and delivery centers; NTL: employees in the Zonal, Regional and District Offices, and in the procurement centers and storage warehouses)
Responsible for carrying out instructions related to implementation as directed by middle management (Level Three)
Setting the Stage (1987-1991)

External Conditions

The government financially aided NCL and, to a large extent, decided the rates for its services. NCL catered to both urban and rural segments of the population. During this time, the rural and semi-urban segments accounted for over 70% of the customer base, and the company was a monopoly in this segment. Entry barriers were high because a vast distribution network was required to handle the volumes and reach needed to operate on a national scale. In the urban retail and corporate segments, the first major changes in the environment came in the late '80s and early '90s, when a number of private companies were set up that provided faster deliveries, although at much higher prices. Hence there was some competition in this category. However, these new competitors were too small to pose any threat to NCL on a nationwide basis. Even then, NCL did have an internal drive towards business innovation. It introduced special premium "Speed" services in 1987, which were faster and more expensive than its regular services. Corporate customers were interested in efficient and reliable service and accounted for a majority of the high-value transactions through the premium services. According to a senior manager in the eastern region: "The products and services prior to 1987 were standard. There were no innovations. However, after 1987, we could sense some of the then happening and some impending changes in the competitive environment. Hence, even though we were a monopoly in the rural retail segment, we introduced new services directed towards the urban corporate and retail segments."

NTL was financially aided by the government, and many of the policies regarding purchase price, selling price, and distribution requirements were decided in consultation with the government and representatives of the producers. The company was required to interact with a number of external agencies such as distribution and transport service
providers in order to carry out its functions. All interactions with these external organizations were through established procedures. For example, producers were given a fixed price for their produce, and customers also paid a fixed price. Similarly, transporters were selected on the basis of tenders. There had been no significant change in most of these processes over the years. Therefore, the external environment had remained stable. In cases of calamities such as floods and droughts, however, the company faced tremendous pressures because produce had to be rushed to specific places at all costs. The regional manager of one of the regions observed: "Normally there are no pressures on us. We function in a regulated and financially supported environment. However in times of emergencies, we have to deliver at all costs." Such instances, however, were few, occurring once every two or three years on average. Moreover, in such times, NTL and other similar companies were given financial support from the government and also used their own slack resources. On the whole, NTL did not have any innate drive for business innovation. For example, it did not initiate efforts to introduce changes in its processes in a proactive manner. Its managers functioned within the parameters laid down by the government. The performance of the company was measured by the amount of produce purchased, the manner of quality control of stored produce and the effectiveness of distribution. There was scope for performance slack because the company was financially supported by the government. In this context, one senior manager who had been with the company for 30 years observed: "Everyone follows standard procedures. There is no inherent drive to change and improve."
Adoption and Implementation of IT in Developing Nations

Process Descriptions & Information Processing Requirements

The primary processes for NCL included the logistical activities of sorting and transferring
articles. The company also carried out limited operations related to banking functions, such as money transfers and money orders. Information processing had to be carried out quickly because article delivery times depended largely on the speed with which articles could be sorted and transported. Organizational processes were standardized through the use of standard operating procedures. Procedures were laid down for collection, sorting, delivery, and after-sales activities such as enquiry handling, refunds, lost articles, and other specific customer complaints. Tasks were structured and routine, and the context in which information had to be processed was clear. The presence of bureaucratic procedures, along with inherently simple tasks, did not leave much room for decision support requirements in day-to-day operations. Decision making followed predictable patterns. There were well-defined rules for communicating information. All official communication was in written format. Information required for decision making was mostly available. One of the middle managers in a regional office in the eastern region said:

"There are fixed procedures that we are trained to follow. All possible requirements can be anticipated because there is a limited set of options that customers can choose from."

Organizational processes for NTL included functions like packing, storing and handling of the produce. These were routine, standardized and well documented. There were written instructions and well-defined procedures for different activities. For instance, there were norms for storing bags in the warehouses: for deciding how many bags would be placed in a stack, how they would be stacked, how they would be issued for distribution, and so on. There were specifications for the way in which warehouses were to be constructed. There were standards for preserving the produce in the warehouses according to the desired purity and quality levels. Information regarding relevant parameters such as humidity, temperature and cleanliness was clearly specified. All official communication was in standardized formats, formal and always recorded on paper. A senior executive explained the situation in this manner:

"All tasks are standardized and we have to follow standard operating procedures. There is no ambiguity."

As far as information processing was concerned, some aspects of the company's operations required information to be processed within a given period of time. For instance, all the purchasing activities had to be completed within two to four months from the time that the produce was plucked and harvested. Transport and logistics operations involved the coordination of activities across many geographical regions, and in case of disasters the produce had to be distributed to specific areas within a very short time. The manager of one of the districts in the eastern region said:

"There is a short span of two to three months within which we have to finish off all the purchase and storage activities. This is a time of great pressure for all of us."

Both organizations functioned in a stable environment and were financially supported by the government. This made it possible to have extensive bureaucratic procedures and well-defined processes. Thus, there was not much room for ambiguity and decision support. This is a common feature of public sector organizations, and it has been found to influence the adoption of decision support aids in these organizations (Mohan et al., 1990). All policy decisions regarding the adoption of new innovations were taken by the top management team in consultation with representatives of the government and communicated clearly within the organization. However, there were some differences. NCL was widely regarded as one of the best public sector organizations in India, and within the broad framework of government-mandated policies, there was considerable scope for small-scale, local-level initiatives and innovations by its middle and junior managers. NTL, on the other hand, was more prone to functioning within the confines of government mandates, and there were not many opportunities for local innovation.
IT Adoption During This Period

Basic computerization was first introduced in both these organizations in the mid- to late 1980s. This period was also marked by the commencement of similar initiatives in other public sector enterprises in India (such as the nationalized banks). These initiatives were largely driven by policies of the central government. To begin with, both NCL and NTL went in for applications like payroll and financial accounting at their respective central administration offices. These were later extended to their different regional offices. Overall, the IT infrastructure during this time was quite elementary and did not have any significant impact on their critical operations.

Soon after, NCL took steps to introduce some additional IT applications as well. In 1989, money transfer pairing machines were first introduced in each zone. Hitherto, all the money transfer order slips originating in a particular region and bound for all other regions had been collated separately and sent to each region. This was done by all the regions, so that a large number of slips changed hands every day between all the regions. With the introduction of the computerized pairing machines, instead of counting off individual slips for each region, each region's outgoing sum was simply netted off against the incoming sum. This was a spreadsheet application in which the total amount of money ordered for each zone was collated, paired, and matched for each zone.

Also in 1989, counter operations for article booking were computerized in the largest offices in the two largest cities. Stand-alone PCs were given to the counter clerks at these offices. This significantly reduced waiting times for customers and rationalized queues at the counters. Although these applications were introduced in a limited manner, they marked an important step for NCL, in that it had proactively implemented some IT beyond the overall parameters suggested by the government.
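The netting performed by the pairing machines can be illustrated with a short sketch. This is a hypothetical reconstruction only: the region names, amounts, and the `net_settlements` function are invented, and the original was a spreadsheet application rather than code.

```python
def net_settlements(orders):
    """Net off money-order flows between pairs of regions.

    `orders` maps (origin, destination) -> total amount ordered.
    Returns one net amount per region pair, replacing two gross
    flows of slips with a single settlement figure.
    """
    settled = {}
    for (src, dst), amount in orders.items():
        key = tuple(sorted((src, dst)))  # one entry per region pair
        # Flows from the alphabetically first region count as positive
        sign = 1 if src == key[0] else -1
        settled[key] = settled.get(key, 0) + sign * amount
    return settled

# Hypothetical daily totals (INR) between three regions
orders = {
    ("East", "West"): 500_000,
    ("West", "East"): 420_000,
    ("East", "North"): 150_000,
}

for (a, b), net in net_settlements(orders).items():
    # The origin region collected the cash, so the net sender owes
    # the net receiver the settlement amount
    payer, payee = (a, b) if net > 0 else (b, a)
    print(f"{payer} owes {payee} INR {abs(net)}")
```

The point of the netting is visible in the example: two gross flows of 500,000 and 420,000 between East and West collapse into a single 80,000 settlement.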
Although the monetary investment in IT during this period was small, it nevertheless set the stage for the implementation of IT in more critical processes in subsequent years. It also served as a pilot project for demonstration and learning purposes.
Impending Changes in the External Environment

Government policies form an important aspect of the external environment for public sector organizations. Changes in these policies have often been the cause of IT deployment in organizations in developing nations in general (Albadvi, 2002; Li et al., 2002; Molla & Licker, 2002) and in India in particular (Tarafdar & Vaidya, 2002a). In 1991, the Government of India took a policy decision to liberalize the Indian economy. This decision resulted in an increase in external pressures for public sector undertakings in many industries, including telecommunications, steel, banking and transportation, among others. The resulting changes in the business and economic environment had implications for the adoption of IT in both NCL and NTL. There were also overall pressures for process re-engineering, modernization and human resource development.
Case Description (1992-2000)

Changes in the External Environment

Economic liberalization in the early 1990s resulted in changes in the external and competitive environment of both organizations. Liberalization provided enormous opportunities for firms from developed economies to set up manufacturing, service or distribution units in India. This resulted in the entry of many such companies into a number of Indian industries, including banking, financial institutions and the manufacturing sector (see, for example, Cavusgil et al., 2002; Joshi & Joshi, 1998; Tarafdar & Vaidya, 2002, 2003). Many of these companies had advanced IT-enabled processes. This created pressures for improved process performance among Indian organizations. There were also pressures from different customer segments for more flexibility and better service. As far as NCL was concerned, private companies, from both India and outside, that provided courier and fax services entered the urban markets and targeted the retail and corporate segments.
There was also an increase in the volume of business-related mail. Some customer segments, like businesses, government organizations and institutional bodies, required faster delivery, even if it came at a higher cost. Thus, increasing competition gave rise to a need to segment customers on the basis of specific needs and provide customized service options. A Business Development Cell was therefore set up in 1996 to design and develop a market for value-added premium products for specific customer segments. New services were introduced for corporate customers, and the accent was on speed and reliability rather than on cost. Utility payment and e-mail services were also introduced. That the organization perceived the pressures to be somewhat high can be gauged from the following statement, which appeared in the Annual Report of 1997-1998:

"… will spend 65% of its plan budget on the induction of technology with a view to improving and upgrading the quality of service...developing and providing new value added services and products. NCL will continue to look at the technology options so that the postal products and services can be re-oriented to the needs of the customers."

The second change resulting from economic liberalization was related to the role of the government. As mentioned before, the government can, through its policies and regulations, influence the adoption of IT (Nidumolu & Goodman, 1996; Rainey et al., 1976). Toward the later part of the eighties, the government laid down certain policies and mandates for the adoption of IT in all major public sector enterprises in India, across different industries. Public sector banks, manufacturing units and service organizations embarked upon IT modernization programs. These organizations typically generated and processed huge volumes of transaction data due to their large scope of operations. Consequently, the most pressing requirements were for transaction processing systems.
Initiating & Implementing IT Adoption: Two Contrasting Approaches

Both NCL and NTL initiated programs for organization-wide computerization in response to the government's mandates. According to Caudle et al. (1991), four major concerns must be addressed for the adoption and implementation of IT in public sector organizations. The two organizations addressed these four factors in different ways.

1.
Goals of IT Adoption & Identification of Information Requirements: Market signals and profits guide companies in the private sector. In contrast, the public sector faces different goals, many of which are not necessarily related to financial performance. These may relate to the efficiency and quality of customer service, the scale of operations, the different kinds of customers served, social objectives, and political influences (Caudle et al., 1991). Thus, it is not always possible to directly link the adoption of IT with financial parameters, particularly for public sector enterprises. Hence, one way to approach IT planning is to identify the improvements required in the critical processes concerned, and to implement IT in the individual activities entailed in those processes. Identification of information requirements thus forms an important part of the planning for IT adoption and implementation in public sector organizations.
National Couriers Limited Interviews with the head of the Eastern Region illustrated the process of identification of important information processing activities and requirements at NCL: “We had already implemented computerized transaction processing systems in the payroll and financial accounting functions, starting in 1988. After the government mandates in 1991, we
identified three critical areas in order to focus our computerization efforts. The first was the handling and sorting of registered mail articles. This process was the key to speed, efficiency and customer satisfaction in our operations. The second was the transfer of information related to the status of mailed articles, money transfer and banking services. Information transfer processes formed our second largest area of operations after mail handling. The third focus for computerization was information exchange activities within our offices. These included sharing of files, data and other resources such as printers. Our computerization efforts during the period 1992-1999 were concentrated largely in these three areas."

Between 1992 and 1999, a number of new information technologies were introduced at NCL. In 1992, computerized mail handling and sorting was introduced in the two busiest centers in the country. This reduced the time required for sorting and directing articles by half. Computerized systems for article booking, tracking and delivery were introduced in select cities in 1997-1998 (refer to Figure 2). This sped up the booking and delivery procedure and enabled customers to keep track of their articles. A VSAT network consisting of 75 terminals was installed for this purpose. Articles would be booked with the help of a computerized system at the booking office, and the information would be transferred via a modem connection to the Central City sorting center, where the tracking system was installed. This center was connected through the VSAT network to other sorting centers. Customers could call up the sorting center and find out the status of their booked articles. Computerized money transfer services were introduced in 1998. This was also done through the VSAT network. Individual offices were connected to the network through a leased line modem connection. This reduced the transit time of money orders from nearly a week to a few seconds. In 1997-1998, office operations were computerized and put on a Windows NT-based local area network. This enabled the sharing of files, data and other resources, and significantly enhanced the efficiency of office operations.

The approach used by NCL has been referred to as the "functional approach" to IT planning (Nidumolu & Goodman, 1996). Organizations that follow this approach believe that the adoption of IT is desirable because it can improve the timeliness of information flow and reduce process cycle times. NCL was able to respond effectively to the external conditions and government mandates through the deployment of IT applications and infrastructure in many critical functions. The financial expenditure on IT between 1990 and 1999 was
Figure 2. Computerized booking, tracking and delivery system at National Couriers Limited. (The figure shows computerized booking counters where mail articles are accepted, the computerized tracking and tracing system at the Central City Sorting Centre, and the flows of incoming mail, outgoing mail to destinations, and booking and delivery services. All arrows indicate electronic flow and processing of information without any manual re-entry of data.)
INR 1000 million1, which was 75% of the total expenditure on modernization during this period and about 2.75% of the organization’s revenues. This was relatively much higher than that during the previous 10 years. According to the head of operations of the eastern region: “For the first time, substantial budgets were being allocated annually, for the computerization process.” A senior manager in the eastern region further described the possible financial implications in the following way: “The areas that we targeted for IT adoption covered 65% to 70% of our operations. So we expected to see significant cost savings in around 70% of our functions.”
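The booking-to-tracking flow of Figure 2 can be outlined in code. This is an illustrative sketch only: the `SortingCenter` class, its method names, and the article identifier are invented and do not describe the actual system, which ran on dedicated booking terminals and a VSAT network.

```python
from dataclasses import dataclass, field

@dataclass
class SortingCenter:
    """Central City sorting center: holds the tracking records that
    booking offices forward over their modem links (hypothetical model)."""
    status: dict = field(default_factory=dict)

    def receive_booking(self, article_id: str, origin: str) -> None:
        # The booking office transmits article details after acceptance
        self.status[article_id] = f"booked at {origin}"

    def update(self, article_id: str, event: str) -> None:
        # Other sorting centers report movement over the VSAT network
        self.status[article_id] = event

    def query(self, article_id: str) -> str:
        # Customers call the sorting center for the current status
        return self.status.get(article_id, "no record")

center = SortingCenter()
center.receive_booking("R-1047", "Eastern Region booking counter")
center.update("R-1047", "dispatched to destination sorting center")
print(center.query("R-1047"))  # dispatched to destination sorting center
```

The design point the sketch captures is that article information is entered once, at booking, and every later status is an electronic update to the same record, with no manual re-entry of data.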
National Traders Limited

A member of the top management, who had been with the organization for more than 30 years, described the overall phenomenon of IT planning and adoption at NTL in the following manner:

"We tried to follow the general instructions from the government in identifying potential areas of IT adoption. The first step in this regard was the computerization of all high-volume transaction processes. We decided to begin by computerizing the payroll and financial accounting processes."

In the mid-1990s, a UNIX mainframe system was installed at the central office. This was used in batch processing applications for the calculation of accounts, the reporting of produce inventory and stock positions, and payroll accounting. In the late 1990s, the mainframe system was converted to a PC-based LAN. An ORACLE-based client-server system was installed, and the mainframe data were transferred to it. The administrative offices in the different regions carried out the same functions at the regional level using PC-based dBase applications.
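The batch consolidation of regional stock figures described above can be sketched as follows. This is an illustrative sketch under stated assumptions: the field names, district labels, and quantities are invented, and the original ran as mainframe batch jobs and dBase applications rather than Python.

```python
from collections import defaultdict

def consolidate(district_reports):
    """Roll district-level stock reports up into per-region and
    national totals, in the spirit of NTL's periodic inventory
    reporting (hypothetical data model)."""
    region_totals = defaultdict(int)
    for report in district_reports:
        region_totals[report["region"]] += report["stock_bags"]
    national_total = sum(region_totals.values())
    return dict(region_totals), national_total

# Hypothetical weekly district reports (bags of produce in store)
reports = [
    {"region": "East", "district": "D1", "stock_bags": 12_000},
    {"region": "East", "district": "D2", "stock_bags": 8_500},
    {"region": "North", "district": "D7", "stock_bags": 15_250},
]

by_region, total = consolidate(reports)
print(by_region)  # {'East': 20500, 'North': 15250}
print(total)      # 35750
```

Each tier in NTL's hierarchy repeated this kind of aggregation: districts summed depot figures, regions summed district figures, and the central office summed regional figures into national reports for senior managers.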
In 1998, NTL connected its central administrative office, five zonal offices and 17 regional offices through VSAT links. The district offices were connected to the respective regional offices through dial-up modem connections. At the time of the study, there were about 50 PCs at the central headquarters, four to five PCs in each zonal office, two in each regional office and one in each district office. The depots communicated with the district offices through postal mail, and the districts communicated with the regional offices through modems.

Forty percent of the data transferred between depots and district, regional and zonal offices related to the inventory and stock position, forty percent to financial information, and twenty percent to payroll data. Stock and inventory data were sent in by the depots and district offices to the regional and zonal offices manually, or through leased line modem connections. The regional and zonal offices consolidated the data and then transferred them to the central administrative office through VSAT as well as on paper. Data from all the regional and zonal offices were again consolidated at the central office. Reports about stock positions and requirements were generated for senior managers at both the head office and the regions. The transfer of stock-related information took place on a weekly basis from the depots to the zones and subsequently to the headquarters. A central server housed the consolidated data from all the regions. All the communication links were backed up by traditional mail and fax systems. The IT infrastructure is shown in Figure 3.

The investment in IT acquisition, maintenance and training during the period between 1992 and 1999 was INR 35 million, which was less than 5% of the total capital expenditure during this period.

2.
Management of Bureaucracy and Paperwork: The public sector produces a great deal of paperwork, resulting in a proliferation of forms and paper (Caudle et al., 1991). All these records need to be computerized in an integrated manner during the process of computerization, in order to make electronic transfer of information possible between offices and minimize manual re-entry of data. Hence it has been suggested (Mohan et al., 1990) that a central governing structure be set up. This structure should oversee the integration of IT management with records management and other information resource management areas.

For NCL, the different systems were designed such that seamless integration and flow of information between different functions was possible, in a limited manner, as shown in Figure 2. For instance, in the computerized booking and tracking system, article information generated at the time of booking was transferred electronically through intermediate stages, all the way to the central sorting centers. Similarly, office information related to administrative activities was shared electronically within each office. These capabilities were also planned to be extended to cover electronic transfer among different offices. For NTL, information transferred from the depots and district offices was manually re-entered at the regional offices prior to consolidation and transfer to the Head Office, as shown in Figure 3. At the Head Office as well, information was partly re-entered manually before the generation of management reports.

3.
Role of Top Management: The top management plays a key role in deciding the thrust and direction of IT adoption in public sector organizations (Nidumolu & Goodman, 1996). This is because the planning and decision structure with respect to implementation of changes is usually centralized.
National Couriers Limited At NCL, there was a change in the top management in 1991 as a new CEO joined the organization. The new top management team was favorably disposed towards IT adoption and took many proactive initiatives in this regard. NCL was a member of the Universal Postal Union, and hence information about the latest IT applications in similar organizations around the world was available to the top
executives. It was the thinking, planning and drive of top management that led to the introduction of the early IT initiatives. A middle manager in the eastern region office described the new priorities of the top management in this manner: “After 1991, a number of new thrusts towards IT adoption have been generated at the central office. Our new CEO is enthusiastic about the introduction of IT, and is aware of the possible areas of application. Seminars are often organized to educate and inform us about the use of IT in postal services worldwide.” All decisions regarding IT planning and deployment were centralized. However, the implementation of IT initiatives was decentralized. According to a member of the top management at the central office: “We tell the regions what the overall plans are, regarding the purchasing of hardware, installation of software and the applications required to be used. Broad decisions regarding all IT applications, hardware and software are taken at the headquarters in consultation with the regions and communicated to the zones and regions. The zones and regions have the power to make their own implementation decisions and purchases within given financial limits.”
National Traders Limited At NTL, the top management was not, in principle, unfavorably oriented towards IT. They did realize the benefits that could accrue from the use of IT, in view of the size of the company and its scale of operations. However, they were not proactive about identifying areas where the company could benefit from IT. The senior managers and policy makers were typically professional administrators and bureaucrats, and the average age was more than 50 years. They did not have any knowledge of emerging IT applications and their use by similar companies around the world. Nor were they comfortable with the use of computers. They were content to depend on the government for direction and instructions on IT adoption.
One of the middle managers who had worked in the organization for the last 15 years said: “ …If the government had not made certain suggestions, computerization might have come to NTL even later than the late 1980s.” Interviews with some of the senior managers revealed attitudes that varied from measured tolerance for IT, “It seems to be useful, but it is not indispensable,” to downright rejection, “Computers are just expensive typewriters.” 4.
Role of Middle Management in Driving IT Implementation: There is evidence that middle managers play a very critical role in driving IT implementation and use in public sector organizations (Caudle et al., 1991). This is because these organizations are usually large, with multiple levels of decision hierarchy (Figure 1), and it is not possible for top management to oversee the details of the implementation processes. Further, middle managers have considerable bureaucratic power in the individual departments and units. Hence, while the top management is responsible for policy setting and strategic planning with respect to IT adoption, it is the middle managers who play the most crucial role in driving the implementation processes within different organizational units.
National Couriers Limited The middle management influenced IT adoption at NCL in two ways. First, they were actively involved in framing the specifics of IT adoption policies and driving implementation initiatives within their units. They used their collective organizational power to frame implementation schedules and timelines. They also developed programs for end-user training and education. This was a crucial aspect of the implementation process, given the large number of unskilled and unionized employees in the organization, who viewed IT as a potential threat to their jobs. In this context, one of the middle managers observed:
“We encouraged the clerical and low skilled employees to get familiar with the PC, and start out by just playing games. We hoped that once they became comfortable with the use of PCs, they would be able to appreciate the benefits of computerization. We also conducted education programs regarding the use of IT in organizations. Further we took steps to assure them that their jobs were safe. Throughout the entire computerization process, there was not a single day in which there was a loss of working hours because of union problems.” In this connection, studies in the domain of IT adoption suggest that IT acceptance and innovation at the grassroots levels in different end-user units are crucial to the adoption of IT by an organization (Agarwal & Prasad, 1998; Nambisan, 1999; Rockart, 1988; Vaidya, 1991). In many developing countries, IT is seen to be the cause of reduction in opportunities for employment, and there is a hostile attitude to IT adoption and acceptance, not only at the level of semi-skilled and unskilled employees, but also by middle management. This has been a crucial factor in the introduction and management of IT in developing countries (Jantavongso & Li, 2002; Tarafdar & Vaidya, 2003). The middle management also played an effective role as IS professionals. While the overall head of the IS function was a member of the senior management, IS activities in each regional and district office were supervised by middle management. These managers were also responsible for IT implementation in the delivery and sorting centers. In other words, middle managers were the IS heads in their respective functional departments. They supervised teams of junior managers who were responsible for installing and maintaining the hardware and software. These junior and middle management members had received technical training and were hence capable of managing the technical aspects of project implementation and systems maintenance. 
This kind of "indigenous" implementation strategy is often followed in the public sector. This is because in such organizations there are constraints on hiring and firing employees. Hence, existing employees are
retrained and reallocated to the newer functions (Nidumolu & Goodman, 1996). Thus there was no separate IS department in NCL. Members of middle and junior management were responsible for IT implementation. The middle management IS professionals also played an important part as IT champions. The role of IT champions in driving IT adoption has been well documented (Beath, 1991). The middle managers in NCL were credible and commanded authority by virtue of their positions. They had a good working relationship with the top management as well as the unskilled and clerical workers in the company. They were powerful enough to influence decisions at the higher levels, and saw to it that the resources required were made available. Initially there was considerable resistance — especially from the unionized staff — but this was neutralized through the efforts of middle and junior managers. In fact, in many instances,
some of the unionized staff subsequently became advocates of the IT-related changes, after having gone through the training processes. They became IT champions themselves and saw to it that strong resistance groups were convinced and neutralized. Various studies have explored the role of IS professionals in influencing the adoption of IT. IS professionals have a positive impact on IT adoption in the organization when they are technically aware of the possibilities from IT, are competent at developing new IS and maintaining existing IS, and are capable of promptly solving end-user needs (Al-Khaldi et al., 1999; Dvorak et al., 1994; Swanson, 1994). At NCL, the IS professionals influenced IT implementation through their traditional organizational power as middle managers. They were able to effectively carry out project management and end-user training; they were also able to ensure that resources were available.
Table 4. Similarities and differences between NTL and NCL

Similarities: public sector enterprises and the accompanying characteristics

Size: National Couriers Limited (NCL), 90,000 employees; National Traders Limited (NTL), 63,000 employees.

Overall organization structure (both NCL and NTL):
- Multiple levels of hierarchy, centralized planning.
- Have functioned in stable business and economic environments.
- Historically supported by the government.
- Working under the overall policy directives of the government.
- Subject to the IT adoption and modernization plans of the government.
- Staffed by bureaucrats, professional managers and technical specialists who were responsible for tactical and operational tasks.

Service organizations: NCL, postal sector; NTL, distribution sector.

Differences

Goals of IT adoption:
- NCL: Based on systematic identification of critical processes (article handling and sorting, information management of mailed articles, information exchange within offices) and their computerization. IT investment between 1990 and 1999 was INR 900 million, which was 70% of the total expenditure on modernization.
- NTL: Followed general instructions from the government and did not attempt to identify critical processes. Computerization aimed at basic transaction processing, not at critical processes. IT investment between 1992 and 1999 was INR 35 million, which was less than 5% of the total expenditure on modernization.

Management of paperwork:
- NCL: Different systems were designed to enable seamless integration and information transfer through electronic means. This was a consequence of an integrated approach to records management.
- NTL: In the absence of an integrated approach to records management, information transferred from the depots to the district offices was re-entered manually before the generation of management reports.

Role of top management:
- NCL: Favorably disposed towards IT adoption. Kept abreast of the latest IT developments in similar organizations in other countries. Allocated resources to new IT thrusts.
- NTL: Not, in principle, favorably oriented towards IT. Not proactive about identifying areas that could potentially benefit from IT, beyond the obvious high-transaction areas. Not comfortable with the use of computers; depended on the government for instructions and directions on IT adoption.

Role of middle management and the IS department:
- NCL: Actively involved in framing IT adoption policies and driving IT implementation efforts, schedules and timelines. Supervised IS activities in the regions and districts, and took on the role of IS professionals. Acted as IT champions, generated grassroots awareness and interest, and developed management and union buy-in for IT adoption efforts.
- NTL: The IS department was headed by a senior manager who reported to the head of the finance function. Junior-level employees were responsible for data entry, data consolidation and report generation. The IS head did not have any control over the implementation issues and schedules in the regional offices. Middle management was reluctant to use computers and depended on junior management to provide reports; it did not participate in or influence the IT adoption process in any way.

Extent of IT adoption:
- NCL: Basic computerization in conjunction with computerization in other public sector enterprises in India. Use of IT in transaction processing for payroll and accounting. Proactive IT adoption beyond the overall parameters suggested by the government, such as pairing machines and computerization of counter operations. Introduction of IT in key processes: computerized mail handling and sorting, article tracking, money transfer. LANs within offices and a VSAT network connecting different offices.
- NTL: Basic computerization in conjunction with computerization in other public sector enterprises in India. Use of IT in batch processing transactions and for reporting: payroll, accounting and stock calculations. Information transfer between offices manually or through modems. In many cases, information from one MIS report was re-entered manually at the next organizational level, because of incompatibility of technology and/or formats.

Current and future issues:
- NCL: Transition from Turnaround to Factory Mode, which would imply extension of existing applications, broad-based end-user training, budget-driven IT planning processes, and change management.
- NTL: Transition from Support Mode to Turnaround Mode, which would imply proactive identification of critical processes for IT implementation, IT education and awareness, driving mechanisms for the development of successful IS which can be used as an “example” to generate interest and acceptance, and change management.
This case therefore illustrates a new dimension of the role of IS professionals in driving IT adoption and considerably enhances similar preliminary findings by Caudle et al. (1991).
National Traders Limited

The Central IS department at NTL was headed by a senior manager who reported to the head of the finance function. He was somewhat aware of the possibilities of IT and had some limited ideas of its usefulness for the company. He supervised a team of 60 Central IS employees. Of these 60 people, 10 had a diploma or other professional training in different aspects of software development, and could develop applications in dBase, MS Access and ORACLE. About 30 people were data entry operators, whose tasks were to key in and consolidate the data from the zonal, regional and district offices. All these employees had received the requisite training and looked after various functions in the IS department. They worked in the central administrative headquarters, and were responsible for centralized consolidation and collation of data from the regions. They also designed and implemented training programs for data entry staff in the different regions. They looked after incremental modifications to the existing applications. Further, they managed third-party vendors who carried out maintenance of existing hardware and the development of new applications. The IS departments in the zones and regions comprised junior-level employees who had been transferred from other departments after training. They had no formal education in computer hardware or software. They were responsible for entering and consolidating the data received from the districts and depots, and for generating relevant reports for senior managers at the zonal offices. At the time of the study, about 200 such employees, mostly staff and junior managers, had been trained in various applications, and had later been shifted to dedicated IS functions. They themselves were reluctant to use computers. Third-party vendors carried out the maintenance work. The IS manager was not powerful and senior enough to convince top management to make
resources available for any IT initiative other than the most basic applications. He had no significant power to independently make important decisions relating to IT deployment. Moreover, the IS professionals at the central office did not have any control over whether or not the regional heads would actually implement specific IT applications. The IS department had not met with much success in this regard, and there had been stiff resistance in many cases. This greatly hindered the penetration of IT, because regional heads had independent authority over IT implementation initiatives in their areas. The chief of the central IS function said: “The implementation and use of computers in the offices is completely decentralized. We cannot force anyone to start using computers. Ultimately the extent of IT use depends on the policies of the respective regions and zones.” There were more than 60,000 employees in the company, 85% of whom performed low-skilled and clerical jobs. As at NCL initially, employees were not favorably disposed towards IT, because they feared that they would lose their jobs. Hence they tried various ways to express their opinions in this regard. For example, very often, when reports were not made available on time to senior managers, subordinate junior employees would excuse themselves by saying that the computers were not working or that the relevant officer in charge of taking the printouts was not available. They would even suggest that such problems did not exist before computers were introduced. The reluctance of employees to use computers is exemplified by the fact that those who used them had to be given monetary incentives. Further, most of the middle managers and even some of the senior managers in the regions were against the deployment of IT. There was limited penetration of IT into the user departments. Senior managers did not directly use computers. They would ask data entry operators to enter data and furnish printouts.
The similarities and differences between the two organizations have been described in Table 4.
Current Challenges & Problems (2000 & Beyond)

In spite of similarities in their overall nature and scale of operations and in their historical and cultural contexts, the IT adoption processes and outcomes in the two companies were considerably different. While NTL failed to implement IT in its crucial processes, NCL was able to respond effectively to the external conditions and government mandates through organization-wide deployment of IT applications and infrastructure in many critical functions.
National Couriers Limited

The computerization process at NCL took place in two distinct phases. In the first phase, from 1987 to 1991, computerization was limited and driven by the requirements of high-volume transaction processing. During this period, the company used IT for very basic and rudimentary transaction processing operations. It was in the Support Mode (McFarlan et al., 1983) or the Delayed Sector (Earl, 1989). These two modes are the first stages of IT adoption in organizations, where IT is not fundamentally essential for the smooth running of the company’s operations. It is used to accomplish nonessential and noncritical tasks, and the IS department functions as a back-room support department, with no participation in functions like strategic planning and implementation. The second phase of computerization, between 1992 and 2000, saw an acceleration of the computerization process. The acceleration was partly in response to government mandates and partly a result of the enthusiasm of the new leadership about IT. During this period, NCL went through the Turnaround Stage (McFarlan et al., 1983). IT became increasingly crucial to the future development of the organization. There was a change in focus as far as IT planning and implementation were concerned. New applications were developed and there was an increase in IT investment. At the time this study was conducted, different applications had been introduced in a limited number of
offices, and covered about 40% of the operations of the company. The Annual Report of 1994-95 described the induction of computerization in this manner: “... NCL has made a gradual and phased attempt to introduce information technology into the postal system, so as to provide better services to its customers ...” The most important impact of IT had been to increase operational efficiencies, and IT was accorded a high priority by the top management. Hence the organization was well positioned to move to the next level of IT use, that is, the Factory Mode (McFarlan et al., 1983) or the Dependent Sector (Earl, 1989). This would include the extension of current applications to more offices and the development of more sophisticated applications. The challenge before NCL was therefore to transform from a Turnaround organization into a Factory organization. One of the most important aspects of the Factory Mode is to ensure that IT is delivered efficiently and reliably. This implies that resource requirements and budgets be correctly estimated (Earl, 1989). In this context, Mohan et al. (1990) suggest that since public sector organizations operate under fixed and often tight budgets, an inability to logically derive and clearly communicate IS budget requirements is a primary reason for these organizations not allocating adequate resources for IT adoption. A similar problem existed at NCL also, in that there were no budget-driven planning processes that could broaden the scope of the existing IT applications.
National Traders Limited

NTL was an interesting organization to study because it was large and there was significant potential for the use of IT. However, the organization did not use IT for any but the most basic functions. This was because there was a strong overall negative inclination towards IT adoption and use among the middle and junior management. For instance, none of the departmental heads at
the head office or in the regional and zonal offices used computers to check the latest available inventory positions. They would ask their secretaries for the relevant paper files, or would simply ask their immediate subordinates over the phone. One senior manager observed: “Anyway, I have to ask for most of the information over the phone or through fax. So what is the use of the computer in tracking the movement of the stock?” The lack of enthusiasm among managers in public sector enterprises for using IS has been documented by Mohan et al. (1990). The primary reasons for this are a low comfort level with the use of computers and a lack of awareness of applications relevant to the organization. Nidumolu and Goodman (1996) suggest that perceptions of IT can change from unfavorable to favorable as more projects are undertaken and more functions are computerized. At the time this study was conducted, IT was used for routine administrative tasks, and not for any critical activities like logistics and distribution planning. Hence IT was not crucial to the achievement of the strategic objectives of the firm. Moreover, all electronic information was also stored in paper format. Transfer of information was both electronic and paper-based. The challenge for NTL therefore was to move from the Support Mode to the Turnaround Mode (McFarlan et al., 1983), and to increase the scope of existing IT applications. At the time of writing, NTL was pilot testing software for managing the distribution and storage of food grains.
Change Management Issues

Change management has been suggested as an especially important issue in government organizations because of their entrenched processes (Caudle et al., 1991). In fact, the adoption of IT in the nationalized banks in India, which commenced in the mid-1980s, had been fraught with issues regarding the acceptance of process changes and the fear of job losses due to automation. Employee
unions, perceiving that their concerns had not been adequately addressed, had offered considerable resistance and had significantly slowed the process of IT adoption in the banks (Joshi & Joshi, 2002). Hence it is anticipated that change management issues will be crucial to the continued infusion and diffusion of IT at NTL and NCL. The first aspect of change management had to do with overcoming resistance at different levels of the two organizations, especially at NTL. In a study of e-government initiatives in the Indian state of Kerala, Kumar (2003) reports that top management drive has been an important factor in driving IT adoption in various government departments and has facilitated the acceptance of IT at lower organizational levels. In this regard NCL had so far been able to manage differences between the various units and had been able to convince unions and clerical staff of the benefits of IT adoption, largely through the efforts of its middle and top management. This process had been more difficult at NTL, given that the top and middle management themselves were not quite convinced about the usefulness of IT and had not proactively driven its adoption. As Joshi and Joshi (2002) have pointed out, it is relatively easier to work towards middle and lower management commitment after top management commitment has been secured. It has been observed that supervisors may often be reluctant to adopt IT in their departments because of possible reductions in head count, which might lead to a decrease in their span of control. This may partially explain the reluctance of the middle and junior management cadres to adopt IT, especially at NTL. The middle managers were afraid of losing head count in their departments as a result of the junior management receiving training and getting relocated to IT-based functions.
Similarly, junior management was apprehensive about the reduction in the number of unionized employees, the resultant loss in their own power, and a possible backlash from the labor unions. The second aspect of change management was that of managing the work environment during the change process. Studies by Amabile et al. (1996) have suggested that the work environment often
becomes negative in times of new technology implementation and significant business process changes. This is because the difficulties associated with adjusting to the changes often result in collective cynicism and confusion. Such conditions stifle creativity and motivation. This was observed at NTL, where the new IT was met with collective skepticism from all levels of the organization. Public sector organizations are characterized by complex performance measurement criteria. The lack of a clearly defined bottom line in most cases leads to a focus on inputs and budgets, rather than on outputs and productivity measures. Economic liberalization in India has resulted in an emphasis on service quality, process efficiency and overall modernization in both the public and private sectors (Wolcott & Goodman, 2003). The challenge before NCL and NTL would be to use IT to enhance their service quality and the efficiency of their operations, and to appropriately manage their IT adoption processes. In the absence of such an effort, both organizations would be burdened with high-cost operations and increasingly dissatisfied customers.
References

Agarwal, R., & Prasad, J. (1998). The antecedents and consequents of user perceptions in information technology adoption. Decision Support Systems, 22(1), 15-29. Al-Khaldi, M.A., & Wallace, R.S.O. (1999). The influence of attitudes on personal computer utilisation among knowledge workers: The case of Saudi Arabia. Information and Management, 36(4), 185-204. Amabile, T., Conti, R., Coon, H., Lazenby, J., & Herron, M. (1996). Assessing the work environment for creativity. Academy of Management Journal, 39(5), 1154-1184. Beath, C.M. (1991, Sept). Supporting the information technology champion. MIS Quarterly, 15(3), 355-372.
Caudle, S.R., Gorr, W.L., & Newcomer, K.E. (1991). Key information systems management issues for the public sector. MIS Quarterly, 15(2), 171-188. Cavusgil, S.T., Ghauri, P.N., & Agarwal, M.R. (2002). Doing Business in Emerging Markets: Entry and Negotiation Strategies. CA: Sage Publications. Dvorak, R., Dean, D., & Singer, M. (1994). Accelerating IT innovation. The McKinsey Quarterly, 123-135. Earl, M.J. (1989). Management Strategies for Information Technology. London: Prentice Hall. Jantavongso, S., & Li, K.Y.R. (2002, May). E-business in Thailand: Social and cultural issues. In M. Khosrow-Pour (Ed.), Issues and Trends of IT Management in Contemporary Organizations. Proceedings of the Information Resources Management Association Conference (pp. 443-446). Joshi, V.C., & Joshi, V.C. (2002). Managing Indian Banks (2nd ed.). CA: Sage Publications. Khera, S.S. (1979). Public sector management. In B.C. Mathur, K. Diesh & C.C. Sekharan (Eds.), Management in Government. Publications Division, Ministry of Information and Broadcasting, Government of India. Kumar, A. (2003). E-government and efficiency, accountability and transparency: ASEAN Executive Seminar on e-Government. International Journal of Information Systems in Developing Countries, 12(2), 1-15. Lachman, R. (1985, Sept). Public and private sector differences: CEOs’ perceptions of their role environments. Academy of Management Journal, 28(3), 671-679. Li, Q., Zhang, X., Sun, C., & Wang, S. (2002, May). Strategies of securities electronic commerce in China: Implications of comparative analyses between China and other countries. In M. Khosrow-Pour (Ed.), Issues and Trends of IT Management in Contemporary Organizations. Proceedings of the Information Resources Management Association Conference (pp. 1080-1083).
McFarlan, F.W., McKenney, J.L., & Pyburn, P. (1983). The information archipelago: Plotting a course. Harvard Business Review, 61(1), 145-156. Mohan, L., Holstein, W.K., & Adams, R.B. (1990). EIS: It can work in the public sector. MIS Quarterly, 14(4), 435-448. Molla, A., & Licker, P.S. (2002, May). PERM: A model of e-commerce adoption in developing countries. In M. Khosrow-Pour (Ed.), Issues and Trends of IT Management in Contemporary Organizations. Proceedings of the Information Resources Management Association Conference, Seattle, WA (pp. 527-530). Moynihan, T. (1990). What chief executives and senior managers want from their IT departments. MIS Quarterly, 14(1), 15-25. Nambisan, S. (1999). Organisational mechanisms for enhancing user innovation in information technology. MIS Quarterly, 23(3), 365-395. Nidumolu, S.R., & Goodman, S.E. (1993). Computing in India: An Asian elephant learning to dance. Communications of the ACM, 36(4). Nidumolu, S.R., Goodman, S.E., Vogel, D.R., & Danowitz, A.K. (1996). Information technology for local administration support: The Governorates project in Egypt. MIS Quarterly, 20(2), 197-224.
Rainey, H.G., Backoff, R., & Levine, C. (1976). Comparing public and private organizations. Public Administration Review, 36(2), 233-244. Rockart, J.F. (1988, Summer). The line takes the leadership: IS management in a wired society. Sloan Management Review, 29(4), 57-64. Swanson, E.B. (1994). Information systems innovation in organizations. Management Science, 40(9), 1069-1091. Tarafdar, M., & Vaidya, S.D. (2002). Evolution of the use of IT for e-business at century financial services: An analysis of internal and external facilitators and inhibitors. Journal of IT Cases and Applications, 4(4), 49-76. Tarafdar, M., & Vaidya, S.D. (2003). Challenges in the adoption of information technology at Sunrise Industries: The case of an Indian firm. Annals of Cases in Information Technology, 6, 457-479. Vaidya, S.D. (1991, Dec 26-29). End user computing: An Indian perspective. Proceedings of the Indian Computing Congress (pp. 533-541). Wolcott, P., & Goodman, S. (2003). Global diffusion of the Internet in India: Is the elephant learning to dance? Communications of the Association for Information Systems, 11, 560-646.
Endnote

1. 45 INR (Indian Rupee) = 1 U.S. Dollar
This work was previously published in Journal of Cases on Information Technology, Vol. 7, No. 1, edited by M. Khosrow-Pour, pp. 111-135, copyright 2005 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
About the Editors
G. David Garson is full professor of public administration at North Carolina State University, where he teaches courses on American government, research methodology, computer applications, and geographic information systems. In 1995 he was recipient of the Donald Campbell Award from the Policy Studies Organization, American Political Science Association, for outstanding contributions to policy research methodology and in 1997 of the Aaron Wildavsky Book Award from the same organization. He is author of the Guide to Writing Quantitative Papers, Theses, and Dissertations (Dekker, 2001), editor of Social Dimensions of Information Technology (2000), Information Technology and Computer Applications in Public Administration: Issues and Trends (1999), and the Handbook of Public Information Systems (1999), and author of Neural Network Analysis for Social Scientists (1998), Computer Technology and Social Issues (1995), and is author, co-author, editor, or co-editor of 17 other books and author or co-author of more than 50 articles. For the last 20 years he has also served as editor of the Social Science Computer Review and is on the editorial board of four additional journals. Mehdi Khosrow-Pour, DBA, is currently executive director of the Information Resources Management Association (IRMA). Previously, he served 20 years on the Faculty of The Pennsylvania State University as an associate professor of information systems. He has written or edited more than 20 books in information technology management and is also editor of the Information Resources Management Journal, Journal of Cases on Information Technology, Journal of Electronic Commerce in Organizations, and International Journal of Cases on Electronic Commerce.
Copyright © 2008, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
About the Contributors
Esharenana E. Adomi had his university education at the University of Ibadan, Ibadan, and Delta State University, Abraka, both in Nigeria. He holds BEd, MEd, MLS, and PhD degrees. He lectures at the Department of Library and Information Science, Delta State University, Abraka, Nigeria. His research interests lie in Web and Internet technology and ICT development and applications in different settings. Amalia Agathou is currently in the final year of her undergraduate studies in the Department of Information and Communication Systems Engineering of the University of the Aegean in Greece. Her research interests include computer security, data modeling, and databases. Stephen K. Aikins received a PhD from the University of Nebraska at Omaha where he has taught graduate-level courses in public-sector information management. He is a certified public accountant (CPA) and a certified information systems auditor (CISA). Dr. Aikins has several years of experience working in various capacities in government and in the private sector. He has been a state revenue auditor, a municipal government auditor for a public accounting firm, a Medicare auditor for a federal government contractor, and an assistant vice president for a multinational bank. His research interests are electronic government, Internet-based citizen participation, electronic democracy, information security, public sector economics, public financial management and public budgeting. Eugene J. Akers is the senior director for the Center for Advanced Technology in the University Outreach Program of Auburn University Montgomery. The purpose of the center is to extend the resources and expertise of Auburn University Montgomery (AUM) and Auburn University to individuals, organizations, the community, and the environment in a manner that enhances productivity and the quality of life for individuals in Alabama. 
The center provides consulting services, training programs, economic development endeavors, technological advancements, and community service. As the senior director, Dr. Akers is responsible for establishing the strategic direction of the center. Dr. Akers also is responsible for business development, contract management, and client relationships. Dr. Akers currently serves as an adjunct assistant professor in the Department of Information Systems and Decision Science and the Department of Political Science and Public Administration of Auburn Montgomery. Micah Altman (PhD, California Institute of Technology, 1998) is an associate director of the Harvard-MIT Data Center, is archival director at the Henry R. Murray Research Archive, and is senior research scientist in the Institute for Quantitative Social Science at Harvard University. Dr. Altman has
served as co-investigator in a number of research projects promoting the collection, sharing, citation, and preservation of research data through the development of open-source software tools and methods. His extensively reviewed book, Numerical Issues in Statistical Computing for the Social Scientist, corrects common computational errors made across the range of social sciences. His more than two dozen publications and four open-source software packages span many disciplines in the social and information sciences. Andrea Baker is a PhD candidate of information studies at the College of Computing and Information and a graduate research assistant at the Center for Technology in Government, University at Albany, State University of New York. At the University at Albany and Siena College, she is also an adjunct professor in the Department of English. Baker is the co-author of a chapter in The Internet Election: Perspectives on the Web in Campaign 2004 and of several papers presented at conferences of prestigious professional associations such as the American Society for Information Science and Technology and the American Society for Public Administration. Her research interests include computer-mediated communication, the impact of new media on the press, open-source technology and community, and collaborative e-government. David A. Bray is currently with the Goizueta Business School, Emory University. He will be a visiting fellow with Oxford University’s Internet Institute in 2007. Bray’s research focuses on bottom-up (i.e., grassroots) sociotechnological approaches for fostering interindividual knowledge exchanges. Before academia, he served as the IT chief for the Bioterrorism Preparedness and Response Program at the Centers for Disease Control, where he led the technology response to 9/11, anthrax, WNV, SARS, and other major outbreaks. Lia Bryant is deputy director for the Hawke Institute of Sustainable Societies, University of South Australia.
She is also director of the Research Centre for Gender Studies and is a senior lecturer in the School of Social Work and Social Policy. Her research interests include gender and technologies, gender and sexuality, organizations, work, rurality, and space and place. Anthony W. Buenger, Jr. (Lt. Col., USAF, retired), is a professor of systems management (Military Faculty) and information operations and assurance at the Information Resources Management College, National Defense University, Fort McNair, Washington, DC. Buenger received his bachelor’s degree in electrical engineering from the University of Maryland in 1988. In 1992 he received an MA in space systems management from Webster University, Colorado. He is currently working on a PhD in business administration from Northcentral University, Arizona. He served his entire 21-year career in the communications, information technology, and information security fields. Donna Canestraro brings more than 25 years of professional experience to her role as program manager at the Center for Technology in Government (CTG). Prior to joining CTG, Canestraro worked with both public- and private-sector organizations, including Unisys Corporation and General Electric. Her most recent position prior to CTG was as program manager with the Professional Development Program at the University at Albany’s Nelson A. Rockefeller College of Public Affairs and Policy. Her current work focuses on the policy, management, and technology issues related to inter- and intraorganizational information integration. As project manager for the XML Testbed, she has orchestrated
its development, as well as various conference presentations and academic papers about XML. Elena Castro received an MS in mathematics from Universidad Complutense of Madrid in 1995. Since 1998 she has been working at the Advanced Databases Group in the Department of Computer Science, Universidad Carlos III of Madrid. She recently obtained a PhD in information science from Universidad Carlos III of Madrid. She is currently teaching relational and object-oriented databases. Her research interests include database conceptual and logical modeling, advanced database CASE environments, and information and knowledge engineering. Mirko Cesarini is currently working as a professor's assistant with the Faculty of Statistical Science, University of Milan Bicocca. He is also affiliated with CRISP (Interuniversity Research Center on Public Utility Services for the Individual). His research focuses on information systems. He has published several papers in refereed journals and in proceedings of international conferences. He earned a PhD in computer engineering in 2005 and a master's degree in the same field in 2001, both at the Politecnico di Milano, Milan, Italy. Ioannis P. Chochliouros is a telecommunications electrical engineer who graduated from the Polytechnic School of the Aristotle University of Thessaloniki, Greece, and also holds an MS and a PhD from the University Pierre et Marie Curie, Paris VI. He possesses extensive research and practical experience in a wide variety of areas of modern electronic communications. He currently works as the head of the research programs section of the Hellenic Telecommunications Organization S.A. (OTE) in Greece, where he has been involved in various national, European, and international R&D projects and market-oriented activities, many of which have received international awards.
He has published more than 80 scientific and business papers and reports in the international literature (books, magazines, conferences, and workshops), especially on technical, business, and regulatory options arising from innovative e-infrastructures and e-services in a globally converged environment. He is a member of several national and international scientific committees and fora. He also works as a university lecturer with the Faculty of Science and Technology, Departments of Telecommunication Science and Computer Science and Technology, University of Peloponnese, Greece. Stergios P. Chochliouros is an independent consultant who specializes in environment-related studies. He holds a PhD from the Department of Biology, University of Patras, Greece, and a university degree as an agriculturist. He has gained extensive experience as an academic researcher and has been involved in various research activities, especially in options for the extended use and/or applicability of modern technologies. In particular, he has participated as an expert in many European research projects on a variety of environmental studies. Moreover, he has gained significant experience as both an educator and an advisor, and has authored several papers, studies, and reports. Gabriel Puron Cid received a master's degree in public administration and policy from the Center for Research and Teaching in Economics (CIDE) in Mexico. After concluding this program, he worked at the Ministry of Finance of the federal government in Mexico on the design and implementation of budgeting for results. Today he is a doctoral student of public administration at the Rockefeller College of Public Affairs & Policy of the University at Albany, with a specialization in information management
and policy in the public sector. His research interests are information technology applications in public finance and budgeting, digital government, IT investment and failures in intergovernmental systems, and public management of information systems in governmental contexts. David Cook is an associate professor in the Department of Information Systems and Decision Sciences, College of Business and Public Administration, at Old Dominion University. His educational and work experience is in the area of operations management. He graduated from the University of Kentucky with a PhD in production and operations management. Dr. Cook has published in such journals as Production and Inventory Management, Human Systems Management, Production and Operations Management, The Journal of Computer Information Systems, and the e-Service Journal. His research interests include quality management, service operations, and electronic commerce. He is a member of the Decision Sciences Institute and INFORMS. James Costello serves as Webmaster and Web application developer at the Center for Technology in Government. Prior to joining CTG, Costello operated his own Web design and development company and worked for several private and public organizations, including KeyCorp, the Professional Development Program of Rockefeller College, Coopers & Lybrand, and the Office of Data Processing, Human Resources Administration, City of New York. Based on his experience at CTG, he developed a curriculum on the benefits and challenges of using XML for Web site content management, which he has presented to over 30 different NYS agencies. He and Derek Werthmuller were the two chief architects of the XML Toolkit (http://www.thexmltoolkit.org), a product of the XML Testbed. Ron Craig is a professor of business in the operations and decision sciences area of the School of Business & Economics, Wilfrid Laurier University, Waterloo, Canada. His research interests cover small business and information systems.
He has published several articles and cases in these areas. Dolores Cuadra received an MS in mathematics from Universidad Complutense of Madrid in 1995. Since 1997, she has worked as an assistant lecturer at the Advanced Databases Group in the Department of Computer Science, Carlos III University of Madrid. In 2003 she obtained a PhD in computer science from Carlos III University of Madrid. She is currently teaching file organization, database design, and data models. Her research interests include data models, conceptual and logical modeling, and advanced database CASE environments. She has been working in the Department of Computer Science at Purdue University in West Lafayette, Indiana, for nearly a year, where she has applied her research in spatiotemporal databases. Alex Dunayev worked at the St. John IT Department developing the National Events Management System (NEMS), which won a national award for excellence in the not-for-profit organizations category. While working at St. John, Dunayev was also finishing degrees at the University of Auckland. His BCom (Hons) dissertation, which also received an award, forms the basis for his chapter in this book. Dunayev is CEO of AQXI Creative Software at the University of Auckland's ICEHOUSE. He founded AQXI, which is largely staffed by other graduates from Auckland. Antonina Durfee (Kloptchenko) is an assistant professor at Appalachian State University in Boone. She holds a PhD from Åbo Akademi University. Her current research is in text mining, knowledge
discovery, human issues in technology adoption, and information-seeking behavior. She has published in the International Journal of Intelligent Systems in Accounting, Finance, and Management and the International Journal of Digital Accounting Research. William D. "Denny" Eshee, Jr., attorney-at-law, is professor of business law at Mississippi State University. He received his Juris Doctor degree from the University of Mississippi. He has been admitted to the Mississippi Bar Association, Federal Bar Association, Alabama Bar Association, United States Court of Military Appeals, and the United States Supreme Court. He has authored or co-authored four books, Mississippi Small Claims Court: A Debt Collection Guide for Mississippi Businesses, The Mississippi Entrepreneur's Guide, The American Entrepreneur's Guide, and The Legal Environment of Business, as well as a host of journal articles and other business publications. Vesile Evrim is currently a PhD student in computer science at the University of Southern California and a research assistant in the Semantic Information Representation Laboratory at USC. She received BS and MS degrees in applied mathematics and computer science from the Eastern Mediterranean University in Cyprus and another MS degree in computer science from USC. Her current research focuses on trust-based information retrieval, search engines, recommender systems, and social networks. Lixin Fu has been an assistant professor at the University of North Carolina at Greensboro since he joined the university in 2001. He has published more than a dozen papers in refereed journals and international conferences in the past 5 years. His main research areas include data warehousing, data mining, databases, and algorithms. Dr. Fu obtained a PhD in computer and information sciences from the University of Florida in 2001. He earned a master's degree in electrical engineering at Georgia Institute of Technology in 1997. He is a member of IEEE, ACM, and the Upsilon Pi Epsilon Honor Society.
Mariagrazia Fugini is an associate professor of computer engineering at Politecnico di Milano. Previously, she was an assistant professor at the Department of Industrial Automation, Faculty of Engineering, Università di Brescia (1983-1991), and an associate professor with the Department of Computer Science, University of Pavia. She has been a visiting professor at the University of Maryland (1985-1986), the Technical University of Vienna (1986-1991), and the University of Stuttgart. Her research interests are in information system security, information systems development, software reuse, information retrieval, cooperative information systems, and reengineering in public administrations. She has published several books, papers in refereed journals, and papers in proceedings of international conferences and symposia. J. Ramon Gil-Garcia is an assistant professor in the Department of Public Administration at the Centro de Investigación y Docencia Económicas in Mexico City and a research fellow at the Center for Technology in Government, University at Albany, State University of New York. Dr. Gil-Garcia is the author or co-author of articles in numerous academic journals including The International Public Management Journal, Government Information Quarterly, Journal of the American Society for Information Science and Technology, European Journal of Information Systems, Public Finance and Management, and Gestión y Política Pública. His research interests include collaborative e-government, interorganizational information integration, adoption and implementation of emergent technologies, digital divide policies, public management, public policy evaluation, and multimethod research approaches.
About the Contributors
Sheng-Uei Guan received an MS and PhD from the University of North Carolina at Chapel Hill. He is currently a professor and chair in intelligent systems in the School of Engineering and Design at Brunel University, UK. Professor Guan worked in a prestigious R&D organization for several years, serving as a design engineer, project leader, and manager. After leaving industry, he joined Yuan-Ze University in Taiwan for three and a half years, where he served as deputy director of the Computing Center and chairman of the Department of Information & Communication Technology. Later he joined the Electrical & Computer Engineering Department at the National University of Singapore as an associate professor. Shahidul Hassan is a doctoral student in public administration and policy at the University at Albany, State University of New York. His research interests are interorganizational collaboration and technology and work-practice transformation in government agencies. Wen-Chen Hu received a PhD in computer science from the University of Florida, Gainesville, in 1998. He is currently an assistant professor in the Department of Computer Science at the University of North Dakota. Dr. Hu has advised more than 50 graduate students and has published over 60 articles in refereed journals, conference proceedings, books, and encyclopedias. His current research interests are in handheld computing, electronic and mobile commerce systems, Web technologies, and databases. He is a member of the IEEE Computer Society, ACM, and IRMA (Information Resources Management Association). Xiaohua (Tony) Hu is currently an assistant professor at the College of Information Science and Technology, Drexel University, Philadelphia. He received a PhD (computer science) from the University of Regina, Canada (1995) and an MS (computer science) from Simon Fraser University, Canada (1992).
His current research interests are biomedical literature data mining, bioinformatics, text mining, rough sets, information extraction, and information retrieval. He has published more than 100 peer-reviewed research papers in the above areas. He is the founding editor-in-chief of the International Journal of Data Mining and Bioinformatics. Marijn Janssen is an assistant professor in the field of information systems at the Faculty of Technology, Policy, and Management, Delft University of Technology, The Netherlands. His research is focused on designing service architectures for public service networks. He received a PhD (2001) at Delft University of Technology in the field of management information systems and has been a consultant and information architect for the Ministry of Justice. Anton Joha is a consultant at Morgan Chambers in The Netherlands and an affiliate of Delft University of Technology. His projects are mainly in the field of outsourcing and shared service centers. He holds an MS in management information systems from Delft University of Technology, The Netherlands. Jimmie L. Joseph is an assistant professor in the Department of Information and Decision Sciences, College of Business, University of Texas at El Paso. He earned a BS in the triple majors of biology, computer science, and natural sciences from Indiana University of Pennsylvania, an MBA from the University of Pittsburgh’s Joseph M. Katz Graduate School of Business, an MS in MoIS from the University of Pittsburgh’s Joseph M. Katz Graduate School of Business, and a PhD in MIS from the University of
Pittsburgh's Joseph M. Katz Graduate School of Business. Dr. Joseph has published in such journals as the Journal of Management Information Systems, Human Systems Management, and the International Journal of Electronic Business. His research interests include human-computer interaction, electronic commerce, and the social impacts of information systems. He is a member of the Association for Information Systems and the Decision Sciences Institute. Rhoda C. Joseph is an assistant professor of information systems and technology at The Pennsylvania State University – Harrisburg. She earned her PhD in business with a specialization in information systems from the City University of New York (CUNY) and her MBA from Baruch College (CUNY). Dr. Joseph's research interests are in the areas of technology nonadoption, e-government, and human factors in information systems. Kalu N. Kalu is an associate professor of political science at Auburn University Montgomery. He is also a research fellow at The Macmillan Center for International and Area Studies, Yale University. His research emphasis is in the areas of institutional development and democratic theory, citizenship and administrative theory, civil-military relations, the IT-leadership interface, national security and intelligence policy, e-government, and health care politics and policies. Dr. Kalu has published articles in the Journal of Political and Military Sociology, Administrative Theory & Praxis, Public Administration Review, Administration & Society, International Review of Administrative Sciences, Social Science Computer Review, the Yale Political Quarterly, and others. His book Rentier Politics: State Power, Autarchy, and Political Conquest in Post-War Nigeria is forthcoming. Atreyi Kankanhalli is an assistant professor in the Department of Information Systems, School of Computing, National University of Singapore (NUS). She received a PhD in information systems from NUS. Dr.
Kankanhalli's research interests include knowledge management, e-government, virtual teams, and information systems security. Her work has been published in journals such as MIS Quarterly, Journal of the American Society for Information Science & Technology, Journal of Management Information Systems, International Journal of Human Computer Studies, Communications of the ACM, Decision Support Systems, and International Journal of Information Management, and at conferences including ICIS, HICSS, and WITS. Dimitris K. Kardaras is an assistant professor in information systems management in the Department of Business Administration, Athens University of Economics and Business (AUEB), Greece. He holds a BS (Hons) in informatics and a BS (Hons) in management, both from the Athens University of Economics and Business, and an MS in information systems engineering and a PhD in information systems from the Department of Computation at the University of Manchester Institute of Science and Technology (UMIST), England. Dr. Kardaras has participated in many research projects in IS and IT since 1990 and has been teaching IS courses for over 12 years. He has published journal and conference papers in the areas of IS planning, fuzzy cognitive maps, IS modeling, and e-commerce. David P. Kitlan is an instructor in information sciences and technology at The Pennsylvania State University – Harrisburg. He is a PhD candidate in the public administration program at The Pennsylvania State University – Harrisburg. Kitlan holds an MS in information systems, an MBA, and an MEng from The Pennsylvania State University, Harrisburg. Mr. Kitlan completed his undergraduate degree in mechanical
engineering, also at The Pennsylvania State University. He has over 20 years of corporate and industry experience in engineering, manufacturing, management, marketing, consulting, six-sigma methods, training, and project management. Kitlan's research interests include management, work teams, distance learning, e-government, data mining, information security, and the use of information systems in private, public, and nonprofit organizations. Hsiang-Jui Kung is an assistant professor of information systems at Georgia Southern University. He received a PhD in management information systems from Rensselaer Polytechnic Institute in 1997. He joined Georgia Southern University full time in 2001. His research interests include systems analysis and design, databases, e-business, IS education, and software evolution. Giorgos Laskaridis holds a degree in informatics from the Department of Informatics, University of Athens, Greece, and is now a PhD candidate in the same department. As a research fellow, he has participated in several European and national R&D projects. His research interests are in the fields of electronic services, e-government, software engineering, and systems analysis and design. He has published journal and conference papers in the area of e-services and e-government. He is currently advisor to the secretary general, General Secretariat for Information Systems of the Hellenic Ministry of Economy and Finance. Vincent Lasnik is currently an independent knowledge architect and instructional design consultant in Oregon. His professional experience includes 18 years beyond the PhD in cross-disciplinary technical and creative communications; instructional design and training; hybrid, blended, distance, and/or online learning; information design; interactive learning and multimedia production; research and development; and knowledge generation and dissemination.
He holds a BA in history and psychology (Case Western Reserve University), an MA in humanities education and a PhD in instructional design and technology (both from the Ohio State University), and an MS in human factors in information design (Bentley College, McCallum Graduate School of Business). Richard A. Leipold received a BA in English literature from Washington and Jefferson College, and an MA and PhD in mathematics from the University of Pittsburgh. After working as a computer engineer for Westinghouse Electric Corporation, he joined the faculty of Waynesburg College. He is a professor of computer science and chair of the Department of Mathematics and Computer Science. He is a member of ACM, MAA, and Phi Beta Kappa. In addition to teaching, he directs a project to standardize training modules for digital forensics. Gloria Liddell, attorney-at-law (presently inactive), is an assistant professor of business law at Mississippi State University. Dr. Liddell obtained a Juris Doctor degree from Howard University in Washington, DC. She was admitted to the bars of the District of Columbia and the states of Maryland, Florida, and Mississippi. Her experience includes practicing in the Division of Market Regulation at the headquarters of the Securities and Exchange Commission and at the Federal Reserve Board. She has authored several publications on bankruptcy law and received numerous honors and awards from civic, nonprofit, and academic organizations.
Pearson Liddell, Jr., attorney-at-law, is an associate professor of business law at Mississippi State University. He obtained a Juris Doctor degree from Howard University, Washington, DC. He was admitted to the bars of Washington, DC, Maryland, Florida, and Mississippi, and is admitted to practice before the federal courts, including the United States Supreme Court. Dr. Liddell is a co-author of the textbook The Legal Environment of Business. He has published several articles on such diverse business law topics as taxation, bankruptcy, the Digital Millennium Copyright Act, and legal concepts of Internet marketing. Chad Lin is a research fellow at Curtin University of Technology. Dr. Lin has conducted extensive research in the areas of IS and IT investment evaluation and benefits realization, IS and IT outsourcing, electronic commerce, e-health, virtual teams, and strategic alliances. Dr. Lin has published more than 80 refereed journal articles (e.g., in Information and Management, International Journal of Electronic Commerce, Information Technology and People, Industrial Management and Data Systems, and Journal of Research and Practice in IT) and conference papers, as well as book chapters. Dr. Lin has also served as a member of the editorial review boards of several prestigious international journals. Luis Felipe Luna-Reyes is a professor of business at the Universidad de las Américas in México. He holds a PhD in information science from the University at Albany. Luna-Reyes is also a member of the Mexican National Research System. His research focuses on electronic government and on modeling collaboration processes in the development of information technologies across functional and organizational boundaries. Xenia J. Mamakou is a PhD candidate in the Department of Business Administration, Athens University of Economics and Business; her field of study is the development of methodologies for Web site evaluation.
She received an MS in business information technology from UMIST in 2003. She has published articles on privacy and is doing research on e-commerce, online privacy, and management information systems. She works at the Business Informatics Laboratory of AUEB, teaching lab courses on business informatics, Web design, and the design and development of management information systems. Konstantinos Markellos is an electrical and computer engineer and researcher in the Department of Computer Engineering and Informatics, University of Patras. He obtained a diploma from the Department of Electrical and Computer Engineering (1999) and an MS in hardware and software integrated systems (2003) from the Department of Computer Engineering and Informatics. Today, he is a PhD candidate in the latter department. His research interests lie in the area of Internet technologies, especially e-commerce and e-government, and he has published several research papers in national and international journals and conferences; he is co-author of one book and three book chapters. Penelope Markellou is a computer engineer and researcher in the Department of Computer Engineering and Informatics, University of Patras. She obtained a PhD in techniques and systems for knowledge management in the Web (2005) and an MS in usability models for e-commerce systems and applications (2000) from the same university. Her research interests focus on algorithms, techniques, and approaches for the design and development of usable e-applications, including e-commerce, e-learning,
e-government, and business intelligence. She has published several research papers in national and international journals and conferences and is co-author of two books and eight book chapters. Paloma Martínez Fernández earned a degree in computer science from Universidad Politécnica of Madrid in 1992. Since 1992, she has been working at the Advanced Databases Group in the Department of Computer Science, Universidad Carlos III of Madrid. In 1998 she obtained a PhD in computer science from Universidad Politécnica of Madrid. She is currently teaching database design and advanced databases in the Department of Computer Science, Universidad Carlos III de Madrid. She has worked on several European and national research projects on natural language processing, information retrieval, advanced database technologies, and software engineering. Dennis McLeod is currently a professor of computer science at the University of Southern California (USC) and director of the Semantic Information Representation Laboratory at USC. He received PhD, MS, and BS degrees in computer science and electrical engineering from MIT. Dr. McLeod has published widely in the areas of data and knowledge base systems, federated databases, database models and design, and ontologies. His current research focuses on dynamic ontologies, user-customized information access, database semantic heterogeneity resolution and interoperation, personalized information management environments via cooperative immersipresence, information management environments for geoscience and homeland security information, crisis management decision support systems, and privacy and trust in information systems. Kostas Metaxiotis is a lecturer at the National Technical University of Athens and senior advisor to the secretary for the information society in the Greek Ministry of Economy and Finance.
He has wide experience in knowledge management, expert systems design and development, artificial intelligence, object-oriented knowledge modeling, inference mechanisms, and e-business. He has published more than 60 scientific papers in various journals, such as the Journal of Knowledge Management, Journal of Information and Knowledge Management, Knowledge Management & Practice, Journal of Intelligent Manufacturing, Applied Artificial Intelligence, Industrial Management & Data Systems, and Journal of Computer Information Systems. He is a member of the editorial boards of several leading journals in the field and a member of the program committees of international conferences. Since 1996 he has participated in various EC-funded projects within the Tacis, Phare, MEDA, and IST programmes as a senior ICT consultant and manager. Mario Mezzanzanica is currently an associate professor with the Faculty of Statistical Science, University of Milano Bicocca. His research and professional interests focus on information technology management. He has been chairman of the scientific committee of CRISP (Interuniversity Research Center on Public Utility Services for the Individual) since July 2005. He has also been a member of the scientific committee of CEFRIEL, the ICT Center of Excellence for Research, Innovation, Education and Industrial Labs, since March 2006. His research activities focus on information systems, and his work has been published in national and international scientific journals, books, and conference proceedings. He was awarded a degree in electronic engineering from the Politecnico di Milano (1985).
Michael Middleton, PhD, is a senior lecturer in the School of Information Systems at Queensland University of Technology. His interests include information management, information use analysis and evaluation, and digital libraries. His publications include the books Information Management (published by Charles Sturt University Centre for Information Studies) and Integrative Document and Content Management, with Len Asprey (published by Idea Group). Middleton also has extensive experience as a consultant, with investigations in information services areas including multicultural affairs, parliamentary services, library portals, and controlled vocabularies. Melissa Moore (PhD, University of Connecticut) is an associate professor of marketing at Mississippi State University. Dr. Moore's research interests concentrate on understanding the development and maintenance of customer-firm relationships. Her research has appeared in the Journal of Internet Law, Journal of Business Research, Transportation Journal, Journal of Consumer Psychology, Marketing Management Journal, and the American Journal of Agricultural Economics, and has been presented at both domestic and international conference venues. Robert S. Moore (PhD, University of Connecticut) is an associate professor of marketing at Mississippi State University. He has presented at numerous conferences and published his research in various outlets including the Journal of Internet Law, Journal of Advertising, Transportation Journal, Journal of Public Policy and Marketing, Journal of Services Marketing, Journal of End User Computing, Advances in Consumer Research, Marketing Management Journal, Journal for the Advancement of Marketing Education, Albany Law Journal of Science and Technology, and Seton Hall Legislative Review. His research interests center upon consumer behavior in e-commerce settings.
Lourdes Moreno López received an MS in mathematics from Universidad Complutense of Madrid in 1999. She has worked in the R&D departments of several IT companies on infometrics (information measurement), especially in Web channels. Since 2002, she has been working at the Advanced Databases Group in the Department of Computer Science, Universidad Carlos III of Madrid, where she teaches database design. She is currently developing her doctoral thesis, and her research interests include design and accessibility in Web applications. Krysnaia Nanini was awarded a degree in economics, management, and industrial engineering from the Politecnico di Milano, Milan, Italy, in 2005. She has published several papers in proceedings of international conferences. Dale K. Nesbary serves as vice president and dean for academic affairs at Adrian College, where he also holds an appointment as professor of political science. He previously served on the Oakland University faculty and held administrative and research positions with the City of Boston, the National Conference of State Legislatures, and the Michigan Senate. His research interests include public-sector technology, state tax policy, and policing. He has published in a wide range of outlets, including The Journal of Public Affairs Education, The Journal of Contemporary Criminology, Social Science Computer Review, The British Journal of Educational Technology, The International Journal of MS Care, and State Legislatures.
About the Contributors
Bruce Neubauer teaches in the public administration program in Government & International Affairs, University of South Florida. His areas include information systems, e-government, e-democracy, and the modeling and simulation of complex systems. His present research projects relate to the local provision of emergency medical services in the event of a bird flu pandemic or similar emergency, and knowledge management among members of emergency response teams. Donald F. Norris is the director of the Maryland Institute for Policy Analysis and Research and professor of public policy at the University of Maryland, Baltimore County. He is a specialist in urban politics, public management, and the application, management, and impacts of information technology, including e-government, in public organizations. Dr. Norris’ works have been published in a number of scholarly journals. He holds a BS in history from the University of Memphis and both an MA and a PhD in government from the University of Virginia. Carlos Nunes Silva, PhD, is a professor auxiliar in the Department of Geography, University of Lisbon, Portugal. His research concerns local government, urban planning, e-planning, urban and metropolitan governance, and planning ethics. Craig Orgeron currently serves as the enterprise architect for the Mississippi Department of Information Technology Services, where he has participated in numerous government information technology task forces and committees, such as the Digital Signature Committee, the Electronic Government Task Force, and the Governor’s Commission on Digital Government, which led to the implementation of the Enterprise Electronic Government in Mississippi. He holds a BBA in MIS and a master’s degree in public policy and administration from Mississippi State University. He is a certified public manager and a graduate of the Senator John C. 
Stennis State Executive Development Institute. Angeliki Panayiotaki obtained a diploma in computer engineering and informatics from the University of Patras (1996) and an MS in advanced information systems from the University of Athens (2000). She is currently working as a researcher (PhD student) at the Computer Engineering and Informatics Department of the University of Patras and also at the General Secretariat for Information Systems of the Hellenic Ministry of Economy and Finance. Her research interests focus on personalization, Web mining, and interoperability techniques applied in the e-commerce, e-government, and e-health domains. She has published several research papers in international and national conferences. Haralambos Papageorgiou is a professor of statistics in the Department of Mathematics, University of Athens, and is currently chairman of the department. He has also served as a professor at the University of Kansas, and as a visiting professor at the Universities of Lille in France and Vienna in Austria, and at the City University of New York. His main research interests over the last 8 years are in the area of official statistics, in particular statistical metadata, statistical harmonization, and quality issues, as well as data mining. He has authored more than 50 research articles published in international statistical journals, and for more than 12 years he has participated in European research projects. Eleutherios A. Papathanassiou is a professor in MIS and director of the Business Informatics Lab at the Department of Business Administration, Athens University of Economics and Business (AUEB),
Greece. He holds a BS (Hons) in mathematics from the University of Athens and a PhD in computer science from the University of St. Andrews, Scotland. He is scientific advisor to Greek companies and head of the center for distance learning (TeleEducation Centre) of the AUEB. He has published in the areas of IS modeling, e-commerce, and computer science. Parviz Partow-Navid has been at CSU, LA, since 1983. He earned his MBA and PhD from the University of Texas at Austin. He is currently director of graduate programs for the College of Business. He has published in journals such as Computers and Operations Research, Journal of Systems Management, Journal of Information Technology Management, and Software Engineering. Dr. Partow-Navid’s interests are in decision support systems, intelligent systems, e-commerce, cybersecurity, and distance learning. John Paynter was formerly a lecturer at the University of Auckland, where he supervised over 30 postgraduate students, including Alex Dunayev. Paynter has also worked in government elections and for the national census, assignments he undertook to pursue his interest in democracy services and particularly in e-government. He is currently working on contracts to document and improve the processes around local government elections. Gabrielle Peko is a lecturer in operations management within the Department of Information Systems and Operations Management, University of Auckland. She teaches process modeling, project management, and enterprise systems. Her research focuses on supply chain management, in particular on the processes around government systems involving supply chains and how they may be mediated electronically. Emery M. Petchauer is an instructor of education at Lincoln University, Pennsylvania, a historically black university. His research and writing surround hip-hop as an emerging worldview in formal and informal educational settings and its implications for teaching and learning. 
He is a former high school teacher who has contributed to local communities of hip-hop in San Diego, California and Norfolk, Virginia. Iolanda Principe is a director with the South Australian Department of Health. She has extensive experience in management and continues to undertake research. Her research interests include social capital and the digital divide, social justice, information and communication technologies, and primary health care. Imad Rahal is an assistant professor of computer science at the College of Saint Benedict & Saint John’s University, Minnesota. He earned PhD and MS degrees in computer science from North Dakota State University (2005 and 2003, respectively). He graduated summa cum laude with a bachelor’s degree from the Lebanese American University, Beirut. He was awarded a doctoral dissertation assistantship from the National Science Foundation. His research interests are largely in the broad areas of data mining, machine learning, databases, artificial intelligence, and bioinformatics. He is an active member of the ACM and ISCA professional societies.
Christopher G. Reddick is an assistant professor of public administration at the University of Texas at San Antonio. Dr. Reddick regularly teaches courses in information technology, public administration, and public-sector financial management. His research interests are in the tools and techniques that public managers can use to make their organizations efficient and effective. Some of his publications can be found in various public administration and information technology journals such as Government Information Quarterly, Public Budgeting & Finance, and the Review of Public Personnel Administration. Jean-Philippe Rennard is a senior professor at Grenoble Graduate School of Business and head of the Department of Management and Technology. He received his PhD from the University Pierre Mendès France of Grenoble. An economist, he specializes in industrial development; a computer scientist, he is deeply involved in biologically inspired computing. He now mainly works on the applications of biologically inspired computing to economics and management. Anne Rouse is an associate professor of IT and business strategy at the Deakin Business School, Deakin University, Melbourne, Australia. She has been researching outsourcing since 1997, and her doctoral thesis on outsourcing risks and benefits won the 2003 ACPHIS medal for best Australasian PhD thesis in information systems. Dr. Rouse conducted a longitudinal study into the Australian government’s “whole of government” IT outsourcing initiative, which had to be abandoned after it became clear that it was not meeting government expectations. Alfred P. Rovai is a professor of education at Regent University in Virginia and teaches research design, statistics, program evaluation, and assessment of student learning in a mostly online doctor-of-education program. Previously, he taught similar courses at Old Dominion University and instructional technology courses at UCLA in an online teaching program. 
He has written extensively on distance education and multicultural topics and recently coedited a book titled Closing the African American Achievement Gap in Higher Education. Jeffrey Roy is an associate professor in the School of Public Administration at Dalhousie University in Halifax, Nova Scotia, Canada. He specializes in models of democratic and multistakeholder governance and electronic government reforms. He is also a member of the Organization for Economic Cooperation and Development’s E-Government Network, associate editor of the International Journal of E-Government Research, featured columnist in CIO Government Review (a Canadian publication devoted to the nexus between technology and government), and author of the 2006 book E-Government in Canada: Transformation for the Digital Age (University of Ottawa Press). Giovanni Maria Sacco is currently an associate professor of information systems and human-computer interaction with the Department of Informatics, University of Torino, Italy. He is the author of two U.S. patents (one is an international IBM patent), and has held positions in industry for over 25 years as the chief designer and engineer for information retrieval and content management systems. He has published several scientific papers in leading international journals. In addition to dynamic taxonomies, his work on security with Dorothy Denning is one of the bases of the MIT Kerberos security architecture, and his work at IBM on buffer management and the invention of recursive hash join are fundamental results for high-performance database systems.
Rodrigo Sandoval-Almazán is an assistant professor in the Department of Business and Economics, Instituto Tecnológico de Estudios Superiores de Monterrey (ITESM) in Toluca City and a research fellow at the Center for Development of Information Technologies and Electronics of the ITESM. He has lectured on topics such as information systems for business, information systems strategy for business, political marketing strategy, electronic commerce development, political sciences foundations, the digital divide in emergent countries, policy analysis, organization theory, database applications, statistics, Web development, quantitative analysis and modeling, research methods, public administration theory, and local government management. His research interests include electronic government, information technologies and organizations, digital divide technology, online political marketing, new public management, and multimethod research approaches. Yuan Sherng Tay received a BEng from the National University of Singapore, where he is currently an MEng student. Giuseppe Sindoni is currently a national seconded expert at Eurostat, the Statistical Office of the European Commission, where he leads the Statistical Data and Metadata eXchange project. He has been a technologist at the Italian National Statistics Institute and assistant professor at Roma Tre University. He received a PhD in computer system engineering from La Sapienza University of Rome in 1999. He has published numerous research papers, contributed a chapter to the book Multidimensional Databases, and contributed two entries to the Encyclopedia of Data Warehousing and Mining. He has coauthored a book on problem solving in economics with office automation tools. Ludwig Slusky is a professor of information systems at California State University, Los Angeles. He has published in Software Engineering, Information and Software Technology, Data Management, Idea Group Publishing, and others. 
Dr. Slusky’s latest research interests are in information security, with emphasis on trustworthy health care networks, and in capability maturity model integration (CMMI), with emphasis on software engineering. He is a certified information systems security professional (CISSP) and has completed certified training at the Carnegie Mellon Software Engineering Institute in the intermediate concepts of CMMI. His other professional interests are in database administration, systems development, e-commerce, and HIPAA. Anastasia S. Spiliopoulou is a lawyer, LLM, and member of the Athens Bar Association. In recent years, she has participated extensively in matters related to telecommunications and broadcasting policy in Greece and abroad within the wider framework of the information society. She has been involved in current legal, research, and business activities as a specialist in e-commerce and e-business, electronic signatures, e-contracts and e-procurement, e-security, and other modern information society applications. She has published more than 70 scientific and business papers in the international literature (books, magazines, conferences, and workshops), with specific emphasis on regulatory, business, commercial, and social aspects. She has mainly focused her activities on recent aspects of European regulatory policies and on their implications for the competitive development of the converged telecommunications market. She currently works as a lawyer for OTE (Hellenic Telecommunications Organization S.A.) in the Department of Regulatory Issues of the General Directorate for Regulatory Affairs.
Larry Stillman, PhD, is a senior research fellow at the Centre for Community Networking Research, Monash University, Australia. His main interest is working with community-based organisations to better use technology for social and community change, as well as theoretical aspects of human-computer relationships. He is particularly interested in how we know what is valued by communities and people (usually government) who support community initiatives—they are not always the same thing. Other areas of interest include gender, culture, and their relationships with technology use. He is very active in international community informatics research networks. Randy Stoecker is an associate professor in the Department of Rural Sociology, University of Wisconsin with a joint appointment at the University of Wisconsin – Extension Center for Community and Economic Development. He moderates and edits COMM-ORG: The On-Line Conference on Community Organizing and Development (http://comm-org.wisc.edu), conducts trainings, and speaks frequently on community organizing and development, participatory research and evaluation, and community information technology. Stoecker has written extensively on community organizing and development and community-based research, including the books Defending Community (Temple University Press, 1994) and Research Methods for Community Change (Sage Publications, 2005), and he co-authored the book Community-Based Research in Higher Education (Jossey-Bass, 2003). Leonardo Tininini is currently a researcher at the CNR Institute of Systems Analysis and Computer Science and lecturer at the Campus Bio-medico University in Rome. He received a PhD in computer science engineering from La Sapienza University of Rome in 2000. He has published numerous scientific papers on statistical databases, aggregate data, query languages, and spatiotemporal databases, and has been referee for prestigious international conferences and journals. 
He also worked at the Italian National Institute of Statistics (ISTAT), mainly on the design and implementation of statistical dissemination systems on the Web. Athanasios Tsakalidis obtained a diploma in mathematics from the University of Thessaloniki, Greece (1973), and a diploma in computer science (1981) and a PhD (1983) from the University of Saarland, Saarbruecken, Germany. He is currently a full professor in the Department of Computer Engineering and Informatics, University of Patras, and R&D coordinator of the Research Academic Computer Technology Institute (RACTI). His research interests include data structures, graph algorithms, computational geometry, expert systems, medical informatics, databases, multimedia, information retrieval, and bioinformatics. He has published several research papers in national and international journals and conferences and is co-author of the Handbook of Theoretical Computer Science and other book chapters. Hui-Lien Tung is an assistant professor of management information systems in the Division of Business at Paine College. Her teaching and research interests include database management, system analysis and design, and Web-based learning systems. Theodoros Tzouramanis received a 5-year BEng (1996) in electrical and computer engineering and a PhD (2002) in informatics from the Aristotle University of Thessaloniki. He is currently a lecturer at the Department of Information and Communication Systems Engineering, University of the Aegean. His research interests include access methods and query processing for databases, database security and privacy, and geographical information systems.
Maria Vardaki holds a BS in mathematics, an MS in management, and a PhD in public administration. Her current research interests are in the area of official statistics and data mining, and she is the author of 20 research papers published in international journals, encyclopedias, and conference proceedings in research areas such as statistical metadata, modeling, quality issues, and the harmonization of time series data. Since 1996 she has participated in various European and national research projects. Baoying Wang received a PhD in computer science from North Dakota State University and a master’s degree from Minnesota State University, St. Cloud. She is currently an assistant professor at Waynesburg College, PA. Her research interests include data mining, data warehousing, bioinformatics, and high-performance computing. She serves as a reviewer and committee member for several international conferences. She is a member of ACM, ISCA, and SIGMOD. John Wang is a full professor at Montclair State University. Having received a scholarship award, he came to the U.S. and completed his PhD in operations research at Temple University in 1990. He has published over 100 refereed papers and four books. He has also developed several computer software programs based on his research findings. He is the editor of the Encyclopedia of Data Warehousing and Mining (1st and 2nd editions). He is also the editor in chief of both the International Journal of Information Systems and Supply Chain Management and the International Journal of Information Systems in the Service Sector. Chee Wei Phang is currently a doctoral candidate in the Department of Information Systems, School of Computing, National University of Singapore (NUS). His research interests include e-government, e-participation, IT-induced organizational change, and IT innovation adoption. 
His work has been published or is forthcoming in IEEE Transactions on Engineering Management, Communications of the ACM, and top-tier information systems conferences, such as International Conference on Information Systems (ICIS), International Federation for Information Processing (IFIP) WG8.2 Conference, European Conference on Information Systems (ECIS), Hawaii International Conference on System Sciences (HICSS), and Americas Conference on Information Systems (AMCIS). Derek Werthmuller has managed the Technology Solutions Laboratory and the Technology Services Unit at the Center for Technology in Government for over 10 years. Prior to joining CTG, Werthmuller spent 6 years at Siena College as a computer and network specialist. His accomplishments at Siena include establishing an Internet presence for the college, expanding the campus network, and designing a multilayer computer and network security system. Werthmuller is the co-author, along with Jim Costello, of the white paper XML: A New Web Site Architecture (September 2002), which detailed the CTG’s migration to an XML-based Web site. Anne Wiggins recently completed a PhD in the Department of Information Systems, London School of Economics and Political Science. She also holds an undergraduate degree from the University of Sydney and a master’s degree from Birkbeck, University of London. As a consultant in the fields of ICTs and the Internet, she has worked with public and commercial organizations in Australia, the U.S., and the UK. Her research focuses on EU and UK SME (small- to medium-sized enterprise) e-business adoption and implementation.
David C. Wyld is the Robert Maurin professor of management at Southeastern Louisiana University, where he directs the College of Business’ Strategic e-Commerce/e-Government Initiative. He is a noted RFID speaker, consultant, and writer, and a frequent contributor to both academic and industry publications. He is a contributing editor to both RFID News and Global Identification. He is also the author of the recent research report RFID: The Right Frequency for Government, the most downloaded report in the history of the IBM Center for the Business of Government. In 2006, he was named a Rising Star in Government Information Technology by Federal Computer Week. Feng Xu received a BS in electronic engineering from Lanzhou University. She is currently a PhD candidate in the Department of Electronic Engineering at Tsinghua University. In 2005, she worked as an intern at Microsoft Research Asia for more than 3 months. Her research interests include content-based image retrieval, automatic image annotation, and machine learning. Yu-Jin Zhang (PhD, State University of Liège, Belgium) has been professor of image engineering at Tsinghua University, Beijing, China, since 1997. He has published 300 research papers and a dozen books, including two monographs, Image Segmentation and Content-Based Visual Information Retrieval (Science Press), and two edited collections, Advances in Image and Video Segmentation and Semantic-Based Visual Information Retrieval (IRM Press). He is vice president of the China Society of Image and Graphics and director of the academic committee of the society. He is deputy editor-in-chief of the Journal of Image and Graphics and serves on the editorial board of several scientific journals. Dan Zhu is an associate professor at Iowa State University. She obtained a PhD from Carnegie Mellon University. Dr. 
Zhu’s research has been published in the Proceedings of the National Academy of Sciences, Information Systems Research, Naval Research Logistics, Annals of Statistics, Annals of Operations Research, and elsewhere. Her current research focuses on developing and applying intelligent and learning technologies to business and management.
Index A accountability, bias 724 accountability, definition of 723 accountability, in e-budgeting 724 adaptive structuration theory (AST) 365 administration to business (A2B) 578 administration to citizen (A2C) 578 agent freeze 445 agent integrity 453 agent preactivation 445 agent receptionist 444 agent transport 445 agent transport, bootstrap 447 agent transport, supervised 444 Australia 711–721 authentication, e- 417
B Beyond2020 584 blogosphere 86 blogs 16, 81–87 blogs, as therapy 83 blogs, definition 81 blogs, executives and 83 blogs, history of 82 blogs, public officials and 84 blogs, public sector and 84–86 blogs, statistics 83 blogs, U.S. military and 85 Bluetooth technology 62, 69 Britain’s National Mapping Agency 833–853 BSD license 49 business-to-business (B2B) 115 business-to-consumer (B2C) 115
C causal-loop diagram 479 certificate authority (CA) 414, 455 change management 922 chief technology officer (CTO) 522
citizen participation, Internet-based 40 citizen participation, Internet-based, challenges 34–35 citizen participation, Internet-based, issues 32–35 citizen participation, traditional 31–32 civic skills 80 clustering, hierarchical 571 clustering, hybrid 573 clustering, in text mining 595 clustering, partitioning 570 commentariat 93 commercial off the shelf (COTS) 525 communality 80 community, structure of 52–53 community-based research (CBR) 54, 59 community development 59 community informatics 50–60, 59 community informatics, definition 51 community informatics, ethical issues of 53 community informatics, sustainability 54 community informatics, technology concepts of 53 community organization 59 community portal 103 computerized tomography (CT) 363 confidentiality 414 connectivity 80 consumer-to-business (C2B) 115 consumer-to-consumer (C2C) 116 content-based image retrieval (CBIR) 615 culture 500 customer relationship management (CRM) 494 customer relations management 558 cybercrime 689
D data-cube technology 630 database management system (DBMS) 776 database management system (DBMS), heterogeneity 779 data clustering, categorization 568 data longevity 49 data mining, and public administration 557
DaWinciMD 585 decision support system (DSS) 627 deliberation 80 Department of Administrative Services (DOAS) 522 developing nations, adoption of IT 905–924 Diffie-Hellman (DH) protocol 456 digital democracy 854–872 digital democracy, in public administration 855 digital divide 7, 34, 51–52, 60, 62, 80 digital government 790–804 digital signature 414 digital signature, and budgetary transactions 725 digital signature signer 417 dissemination process 497
E e-authentication 417 e-business 116 e-business, adoption in UK 107–108 e-business, benefits 105 e-business, SME adoption of 106 e-commerce 39, 116 e-democracy 39 e-distribution 116 e-governance 96, 103 e-government 22, 39, 69, 103, 116 e-government, and KM 508–519 e-government, evolutionary approach 30 e-government, services of 96 e-government, SME support 95–97 e-government, stages 23–27 e-government evolution, integration 521 e-government evolution, interaction 521 e-government evolution, publishing 521 e-government evolution, transaction 521 e-government to business (e-G2B) 94 e-management 39 e-marketplace 116 e-Mexico 873–888 e-Mexico initiative 495 e-participation, best practice initiatives 74–80 e-participation, ICT feature effects 73–74 e-participation, individual participation factors 72–73 e-participation, initiative implications 74 e-participation, systems and tools 72 e-participation initiatives 80 e-participation research 70–75 e-procurement 116 e-services 39
early adopters 11 electronic budgeting (e-budgeting) 722–734 electronic business (e-business), and organisational change 834 electronic commerce (e-commerce) 2 electronic data interchange (EDI) 116, 532 electronic government (e-government) 1–2, 11, 30 electronic government (e-government), definition of 750 electronic government (e-government), developing a generic framework 748–774 electronic government (e-government), evolutionary approaches 23 electronic government (e-government), evolutionary approaches, limitations of 23–27 electronic government (e-government), growth stages 3 electronic government (e-government), in Mexico 873–888 electronic health (e-health) 672–688 electronic health record (EHR) 673 Electronic Health Record Solution 673 enterprise application integration (EAI) 523 enterprise resource planning (ERP) 560 evolutionary models 23 executive information systems (EIS) 627 extensible markup language (XML) 49, 368
F federal PKI (FPKI) 413 First Amendment 13, 19–21 Firstgov.gov 13 flaming 93 forum 21 forum, nonpublic 21 forum, relevant 21 fraud detection 560 full-time equivalent (FTE) 523
G
geographical information system (GIS) 368 geospatial Web-based information system 775 GNU general public license 49 governance structure 45–46, 49 governance structure, hierarchy 49 governance structure, hybrid 49 government-to-business (G2B) initiatives 2 government-to-citizen (G2C) initiatives 2 government-to-employee (G2E) initiatives 2 government-to-government (G2G) initiatives 2
government innovation 493 Greek municipality 513 group decision support system (GDSS) 366

H
heterogeneity 777 human resource management system (HRMS) 559 hyperlinks, embedded 12–13, 15–16, 18, 21

I
ICTs, effective use of 60 identification number (ID) 455 IEEE 69 IEEE 802.11 62 information and communication technologies (ICTs) 1, 6, 22, 103, 117 information retrieval (IR) 593 information systems (IS) 117 information technologies (IT) 361 information technology cost 7 innovation 11, 106 innovation diffusion theory 6 integration stage 30 integrity, agent 453 intelligent agent 441 inter-agency agreement (IAA) 526 Internet 11, 69 Internet, and its impact on political activism 889–904 Internet deliberative features 40 Internet use 896 interoperability 49 intra-business 117 intranet 11 intrusion detection, and public domain 464 intrusion detection system (IDS) 463 intrusion detection system (IDS), and data mining 463–473

K
knowledge 494 knowledge discovery 516 knowledge elicitation 497 knowledge elite 890 knowledge identification 495 knowledge management, and e-government 508–519 knowledge management, and leadership 500 knowledge management process 495 knowledge query and manipulation language (KQML) 441 knowledge transfer 497 knowledge utilization 498

L
land use, in Wisconsin 779 late adopters (laggards) 11 leadership, gatekeeper 642 leadership, networked 641 leadership, organic 642 leadership role 641 lesser GPL 49 licensing components 44–45
M make-or-buy decision 547 memory, organizational 516 micro business 104 Mississippi, IT personnel 737 MIT license 49 Mozilla Firefox 819 multidimensional navigation 581
N National Archives and Records Administration (NARA) 418 navigation, in government Web site 805–817 network 49 new technology, resistance to 6 nonpublic forum 13, 14
O online analytical processing (OLAP) 581, 627 online analytical processing (OLAP), and data-cube technology 630 online development community 49 online repositories 41 Ontario, e-health 672–688 open communities 41 openness 30 OpenOffice.org 818–832 open source library 49 open source license 49 open source software 41, 42–43, 49 open source software (OSS) 818–832 open source software, benefits of 42–43 open source software, challenges of 43 open standards 41, 42–43, 49 open standards, benefits of 42–43
open standards, challenges of 43 organizational culture 500 organizational leadership 638 Organization for Economic Cooperation and Development (OECD) 790 outsourcing, in IT 664 outsourcing, in public-sector agencies 662–671
P paper reduction 4 partnering, private-sector 682 partnering, public-private 675 permalink 93 personal digital assistant (PDA) 592 podcasting 93 political efficacy 80 political participation 30 positive branding effect 13 privacy 7 privacy, government Web site 703 public-key infrastructure (PKI) 413, 453 public-key infrastructure (PKI), federal 413 public administration 1, 11 public administration, advantages of e-government 4–5 public administration, challenges of e-government 5–7 public administration, definition 3 public administration, e-government initiatives 3 public forum, designated 13, 14, 21 public forum, traditional 13, 14 public information technology, and human-factors design (UCD) 650–661 public managers 12–13, 19 public wireless internet 61–69
Q quality 604 quality assurance framework, in public administration 606 quality function deployment (QFD) 607
R
radio frequency identification (RFID) 425–440
radio frequency identification (RFID), readers 428
really simple syndication (RSS) 81, 93
reasonableness test 14, 21
record keeping 418
reference mode 477
regional economic marketplace 104
registration authority (RA) 414
resource description framework (RDF) 499
risk, shared 676
risk assessment 548
roaming, and security 442
roaming permit, request 444
S
SAFE protocol 442
SAFER 453
security 7
security, and certificate authority 418
security, elements of a program 690
security, government Web site 703
security, lack of 453
service-oriented architecture (SOA) 531
service center (SC) 526
shared-service center (SSC) 544
shared-service decisions 547
shared services, risks of 549
sidebar 93
small and medium-sized enterprises (SMEs) 94–95, 104, 117
smart label 429
SMEs 105–110
SMEs, UK policies of 108–110
social capital 53, 60, 80
source credibility 21
South Africa 749
South Korea, online policy forums 854–872
state government 735
state portal 30
State Technology Authority (STA) 522
statistical databases (SDBs) 579
statistical dissemination 581
StatLine 584
stock-and-flow diagram 479
strict scrutiny test 14, 21
structuration theory 362
structuration theory in IT 363
sustainability 60
T
terrorist attacks 561
text mining 592–603
thread 93
trackback 93
traditional public forum 21
transaction efficiency 5
transaction stage 30
transparent governance 5
trust 6
trust, dimensions of 6

U
universal mobile telecommunications systems (UMTS) 69
user-centered design (UCD) 650
user help, and research questions 808
user help, in government Web site 805–817
user interface (UI) 525

V
very small, small/medium-sized enterprise (vSME) 104
very small SMEs (vSMEs) 96
virtual value chain three-stage development process 97

W
Web application classification 522
Web application, custom 525
Web application, EAI 525
Web application, static 523
Web browser 520
Web browser, Mozilla Firefox 819
Web-enabled governance 40
Web-enabled governance, trends 32–34
Web services, government 533
Web site, assessment of municipal 791
Web site, government 805
Web site, usability in public sector 806
Web site evaluation 701
Web sites, as government property 15–16
Web sites, as nonpublic forums 16
Web sites, government 13
WiFi 69
WiFi standard 62
wiki 93
WiMax 69
WiMax standard 62
wireless Internet 69
wireless Internet, and e-government 63–64
wireless Internet, in Michigan 64–66
wireless networks 62
Wisconsin Land Information System (WLIS) 776
World Wide Web 69

X
XML, for Web site management 49
XML tool kit 42, 43