Information Technology and Innovation Trends in Organizations
Alessandro D’Atri · Maria Ferrara · Joey F. George · Paolo Spagnoletti
Editors
Information Technology and Innovation Trends in Organizations
ItAIS: The Italian Association for Information Systems
Editors:
Alessandro D’Atri, LUISS Guido Carli, CeRSI, Via G. Alberoni 7, 00198 Roma, Italy, [email protected]
Maria Ferrara, Parthenope University, Department of Management, Via Medina 40, 80133 Naples, Italy, [email protected]
Joey F. George, Florida State University, College of Business, Academic Way 821, 32306-1110 Tallahassee, USA, [email protected]
Paolo Spagnoletti, LUISS Guido Carli, CeRSI, Via G. Alberoni 7, 00198 Roma, Italy, [email protected]

ISBN 978-3-7908-2631-9
e-ISBN 978-3-7908-2632-6
DOI 10.1007/978-3-7908-2632-6
Springer Heidelberg Dordrecht London New York
Library of Congress Control Number: 2011932352

© Springer-Verlag Berlin Heidelberg 2011

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover design: eStudio Calamar S.L.
Printed on acid-free paper
Physica-Verlag is a brand of Springer-Verlag Berlin Heidelberg
Springer-Verlag is a part of Springer Science+Business Media (www.springer.com)
Foreword

Joey F. George (President of the AIS 2010–2011, Florida State University, Tallahassee, FL, USA, [email protected])
I was pleased and honored to be a part of the VII Conference of the Italian Chapter of the Association for Information Systems (ItAIS), held at the Villa Doria d’Angri, Parthenope University, in Naples, Italy, in October 2010. The Villa is a truly amazing place to hold a conference, as it sits high on the hillside, with the Bay of Naples dominating the view in front and Mt. Vesuvio dominating the view on the left. And as was true in 2009, when the ItAIS conference was held on the Costa Smeralda of Sardinia, the annual meeting of the Italian chapter of AIS is a truly amazing regional conference.

ItAIS was founded in 2003, and the first ItAIS meeting was held in Naples in 2004. Since that time, it has grown to become an important conference. No longer a completely Italian meeting, the conference now attracts information systems (IS) scholars from all over Europe and, in fact, from all over the world. For the seventh ItAIS, the overall theme was “Information Technology and Innovation Trends in Organizations.” The 103 papers that were accepted after a double-blind review (of 124 submitted) were presented in fifteen tracks, showing the breadth of the research topics of interest to the Italian community. The 59 contributions that were selected to appear in this volume represent a wide range of research methods and philosophical views. The sessions were well attended, some with standing room only for the audience. Every session I attended included plenty of lively discussion and exchange between the presenters and members of the audience. By any standard, it was a very successful conference.

I was also privileged to be part of a panel that met before the ItAIS meeting began. The panel was held on the morning of October 8. Panelists included Marco De Marco (Università Cattolica, Milano), Robert Winter (University of St. Gallen (HSG), Switzerland), and Peter Bednar (Lund University, Sweden). The panel was organized to discuss a recent document addressing European IS research, the “Memorandum on Design Oriented Information Systems Research” by H. Österle, J. Becker, U. Frank, Th. Hess, D. Karagiannis, H. Krcmar, P. Loos, P. Mertens, A. Oberweis, and E. Sinz. (As of this writing, the memorandum has not yet been
published in English, but copies of it in English have been widely circulated.) The memorandum has three parts: a preamble, an overview of the authors’ preferred approach to research (in seven parts), and a list of the signatories. According to the authors, much European IS research, especially in German-speaking countries and Scandinavia, has traditionally focused on what is now called design science, where research problems were solved through the development of working information systems. Many of these systems were eventually adopted by government or businesses, indicating the value of their contribution. However, due to a recent focus on cross-national comparisons of post-secondary academic programs (e.g., the Bologna process, http://www.ond.vlaanderen.be/hogeronderwijs/bologna/), there has been a great deal of pressure on European IS researchers to do the following: (1) publish journal articles instead of books; (2) publish in English instead of in their native languages; and (3) work within the behavioral research paradigm that dominates the leading IS journals. The memorandum is a call to action, to prevent European IS research from losing its essence. As the authors say, “European information systems research still has an excellent opportunity to build upon its strengths in terms of design orientation and at the same time demonstrate its scientific rigor through the use of generally accepted methods and techniques for acquiring knowledge”. The memorandum then goes on to list those generally accepted methods and techniques.

As part of the panel, the three European panelists each offered their own views regarding the memorandum and the state of the relationship between European and American IS research. Marco De Marco argued for preserving the plurality of global IS research. Robert Winter, who was an instrumental actor in the creation of the memorandum, provided insights into the views of the authors. Peter Bednar discussed the differing philosophical views that characterize the European and American IS research communities and the perceived hegemony of IS journals based in the US. The panel was well attended, and many members of the audience engaged in a wide-ranging discussion with the members of the panel.

Successful conferences result from the dedication and hard work of many individuals. This conference is no different. Much of the credit for its success goes to my conference co-chair, Professor Maria Ferrara, of Parthenope University in Napoli, the Program Chair Alessandro D’Atri, of LUISS Guido Carli – Roma, and the members of the Organizing Committee: Rocco Agrifoglio (Parthenope University, Napoli); Francesca Cabiddu (University of Cagliari); Concetta Metallo (Parthenope University, Napoli); and Paolo Spagnoletti (LUISS Guido Carli, Roma). I would also like to thank Marco De Marco, the President of ItAIS, for all of his contributions to the chapter and to this year’s meeting. Thanks are also due to the 36 members of the program committee, who worked hard to review and select the best papers for the conference.

As I have written before and said many times, the Italian Chapter of AIS has set high standards for the other chapters of AIS. Whenever anyone asks me what a chapter of AIS should do, or what it should be like, I always point to the Italian chapter as an exemplar. Similarly, the ItAIS conference has set high standards for other regional conferences. As President of AIS, I have traveled widely over the
past two years, and I have attended many conferences, regional and international alike. During 2010 alone, I attended conferences and meetings in seven countries on five continents. Each conference is unique, but it is clear to me that ItAIS has mastered the art and the practice of running a successful regional conference. If an AIS chapter wants to establish a premier regional conference, it should follow the Italian example.

The best of the diverse set of papers presented at the VII Conference of the Italian Chapter of the Association for Information Systems is featured in this volume. I hope you will enjoy reading them.
Introduction

A. D’Atri (LUISS Guido Carli University, Centre for Research on Information Systems (CeRSI), Roma, Italy, [email protected]), M. Ferrara (Parthenope University, Naples, Italy, [email protected]), J.F. George (Florida State University, College of Business, Tallahassee, Florida, United States, [email protected]), and P. Spagnoletti (LUISS Guido Carli University, CeRSI, Roma, Italy, [email protected])
The general theme of the 2010 itAIS conference, from which this book takes its title, attracted contributions not limited to the Italian IS community. In fact, the 159 authors – whose 59 papers were selected to be part of this volume by means of a double-blind review process – include researchers from Italy and from more than 15 countries on five continents (e.g. Australia, Canada, Tunisia, Germany, and Hong Kong).

The overall aim of this publication is to explore the different contours and profiles in the development of information technology and organizations within social and economic environments where uncertainty and turbulence appear to be ubiquitous and specific (different in each country, public administration, and company). Innovation, meanwhile, is an unavoidable issue, since private ventures have to renew their outputs (and the way they generate them) to respond to both competitive challenges and their clients’ requests, and public institutions have to overhaul their services to meet the needs of citizens and the requirements of stakeholders. Yet economic resources for appropriate investments are lagging because of the economic downturn, and the budget constraints conditioning decision making are therefore increasing. Striking a balance between such diverging necessities is ‘the’ issue.

But it is not only a question of ‘resource allocation’: organizations operate in a varied world where approaches and methods can be generalized reliably only within very specific (and limited) contexts. There exists a vast array of organizations (differing in size, culture, technological history, and structure) in very dissimilar institutional settings, where a constantly evolving supply of information systems artifacts has to respond appropriately (and cost-effectively) to heterogeneous requirements.

The following 14 parts indicate both the amplitude of the research field that the information systems community investigates and the large number of issues that are
presently attracting the attention of scholars. Each part begins with a brief introduction that explains the aims of the section, so that the reader gains an overall picture of the contributions included.

Part I, E-Services in Public and Private Sectors, brings together different perspectives and underscores the need for enhanced collaboration between service providers and users (customers and citizens). Part II, Organizational Change and Impact of ICT, highlights the major challenges implied by the management and implementation of change vis-à-vis technical modifications. Part III, Information and Knowledge Management, depicts the networked collaboration experienced by organizations in sharing information and knowledge. Part IV, IS Quality, Metrics and Impact, intends to assess the (measurable) actual costs and benefits of ICTs. Part V, IS Development and Design Methodologies, focuses on critical phases in IS design such as strategic planning, enterprise architecture development, and the transition from requirements to design. Part VI, Human-Computer Interaction, presents and discusses practices, methodologies, and techniques tackling different aspects of the interaction among humans, information and technology. Part VII, Information Systems, Innovation Transfer, and New Business Models, shows how advanced ICT tools offer a set of new possibilities to facilitate the use of open innovation approaches and of cooperative and decentralized models.

Part VIII, Accounting Management and Information Systems, delineates the strategic role of IS in accounting beyond its automation. Part IX, Business Intelligence Systems, their Strategic Role and Organizational Impacts, emphasizes the need to incorporate such systems into both the strategic thinking of organizations and their management of change. Part X, New Ways to Work and Interact via the Internet, portrays the dispersed interactions that are facilitated by web applications and their consequences on work activities and on social relationships. Part XI, ICT in Individual and Organizational Creativity Development, describes contributions and implications of the use of ICTs in creative processes and in the management of creative work. Part XII, IS, IT and Security, addresses the several aspects involved in information security, from the technical to the managerial. Part XIII, Enterprise System Adoption, is directed towards the several issues raised by the adoption of ERPs in organizations. Part XIV, ICT–IS as Enabling Technologies for the Development of Small and Medium Size Enterprises, explores the specific needs of smaller organizations, thus bringing to the forefront the limits of information systems research and practice still centered on ‘large technical systems’.

Any intellectual achievement is gained through the joint effort of several people: the authors, of course, and the people who have worked to collate and review the contributions and have written the introductions to the chapters. We are also grateful to Marco De Marco, the President of itAIS (www.itais.org), to all the members of the Organizing Committee of the itAIS 2010 Conference, and to the staff of Parthenope University and of CeRSI (the Research Centre on Information Systems at LUISS Guido Carli University). That event was indeed the beginning of the process that led to this publication.
Contents
Part I
E-Services in Public and Private Sectors
Inter-organizational e-Services from a SME Perspective: A Case Study on e-Invoicing
R. Naggi and P.L. Agostini

E-Services Governance in Public and Private Sectors: A Destination Management Organization Perspective
F.M. Go and M. Trunfio

Intelligent Transport Systems: How to Manage a Research in a New Field for IS
T. Federici, V. Albano, A.M. Braccini, E. D’Atri, and A. Sansonetti

Operational Innovation: From Principles to Methodology
M. Della Bordella, A. Ravarini, F.Y. Wu, and R. Liu

Public Participation in Environmental Decision-Making: The Case of PPGIS
Paola Floreddu, Francesca Cabiddu, and Daniela Pettinao

Single Sign-On in Cloud Computing Scenarios: A Research Proposal
S. Za, E. D’Atri, and A. Resca

Part II
Organizational Change and Impact of ICT
The Italian Electronic Public Administration Market Place: Small Firm Participation and Satisfaction
R. Adinolfi, P. Adinolfi, and M. Marra
The Role of ICT Demand and Supply Governance: A Large Event Organization Perspective
F.M. Go and R.J. Israels

Driving IS Value Creation by Knowledge Capturing: Theoretical Aspects and Empirical Evidences
R.P. Dameri, C.R. Sabroux, and Ines Saad

The Impact of Using an ERP System on Organizational Processes and Individual Employees
A. Spano and B. Bellò

Assessing the Business Value of RFId Systems: Evidences from the Analysis of Successful Projects
G. Ferrando, F. Pigni, C. Quetti, and S. Astuti

Part III
Information and Knowledge Management
A Non Parametric Approach to the Outlier Detection in Spatio–Temporal Data Analysis
Alessia Albanese and Alfredo Petrosino

Thinking Structurally Helps Business Intelligence Design
Claudia Diamantini and Domenico Potena

A Semantic Framework for Collaborative Enterprise Knowledge Mashup
D. Bianchini, V. De Antonellis, and M. Melchiori

Similarity-Based Classification of Microdata
S. Castano, A. Ferrara, S. Montanelli, and G. Varese

The Value of Business Metadata: Structuring the Benefits in a Business Intelligence Context
D. Stock and R. Winter

Online Advertising Using Linguistic Knowledge
E. D’Avanzo, T. Kuflik, and A. Elia

Part IV
IS Quality, Metrics and Impact
Green Information Systems for Sustainable IT
C. Cappiello, M. Fugini, B. Pernici, and P. Plebani
The Evaluation of IS Investment Returns: The RFI Case
Alessio Maria Braccini, Angela Perego, and Marco De Marco

Part V
Systemic Approaches to Information Systems Development and Design Methodologies
Legal Issues in eGovernment Services Planning
G. Viscusi and C. Batini

From Strategic to Conceptual Information Modelling: A Method and a Case Study
G. Motta and G. Pignatelli

Use Case Double Tracing Linking Business Modeling to Software Development
G. Paolone, P. Di Felice, G. Liguori, G. Cestra, and E. Clementini

Part VI
Human-Computer Interaction
A Customizable Glanceable Peripheral Display for Monitoring and Accessing Information from Multiple Channels
D. Angelucci, A. Cardinali, and L. Tarantino

A Dialogue Interface for Investigating Human Activities in Surveillance Videos
V. Deufemia, M. Giordano, G. Polese, and G. Tortora

The Effect of a Dynamic User Model on a Customizable Mobile GIS Application
L. Paolino, M. Romano, M. Sebillo, G. Tortora, and G. Vitiello

Simulating Embryo-Transfer Through a Haptic Device
A.F. Abate, M. Nappi, and S. Ricciardi

Interactive Task Management System Development Based on Semantic Orchestration of Web Services
B.R. Barricelli, P. Mussio, M. Padula, A. Piccinno, P.L. Scala, and S. Valtolina

An Integrated Environment to Design and Evaluate Web Interfaces
R. Cassino and M. Tucci

A Crawljax Based Approach to Exploit Traditional Accessibility Evaluation Tools for AJAX Applications
F. Ferrucci, F. Sarro, D. Ronca, and S. Abrahao
A Mobile Augmented Reality System Supporting Co-Located Content Sharing and Displaying
A. De Lucia, R. Francese, and I. Passero

Enhancing the Motivational Affordance of Human–Computer Interfaces in a Cross-Cultural Setting
C. Schneider and J. Valacich

Metric Pictures: Source Code Images for Visualization, Analysis and Elaboration
S. Murad, I. Passero, and R. Francese

Part VII
Information Systems, Innovation Transfer, and New Business Models
Strategy and Experience in Technology Transfer of the ICT-SUD Competence Center
C. Luciano Mallamaci and Domenico Saccà

A Case of Successful Technology Transfer in Southern Italy, in the ICT: The Pole of Excellence in Learning and Knowledge
M. Gaeta and R. Piscopo

Logic-Based Technologies for e-Tourism: The iTravel System
Marco Manna, Francesco Ricca, and Lucia Saccà

Managing Creativity and Innovation in Web 2.0: Lead Users as the Active Element of Idea Generation
R. Consoli

Part VIII
Accounting Information Systems
Open-Book Accounting and Accounting Information Systems in Cooperative Relationships
A. Scaletti and S. Pisano

The AIS Compliance with Law: An Interpretative Framework for Italian Listed Companies
K. Corsi and D. Mancini

The Mandatory Change of AIS: A Theoretical Framework of the Behaviour of Italian Research Institutions
D. Mancini, C. Ferruzzi, and M. De Angelis
Part IX
Business Intelligence Systems, Their Strategic Role and Organizational Impacts
Enabling Factors for SaaS Business Intelligence Adoption: A Theoretical Framework Proposal
Antonella Ferrari, Cecilia Rossignoli, and Alessandro Zardini

Relationships Between ERP and Business Intelligence: An Empirical Research on Two Different Upgrade Approaches
C. Caserio

Patent-Based R&D Strategies: The Case of STMicroelectronics’ Lab-on-Chip
Alberto Di Minin, Daniela Baglieri, Fabrizio Cesaroni, and Andrea Piccaluga

Part X
New Ways to Work and Interact Via Internet
Trust and Conflict in Virtual Teams: An Exploratory Study
L. Varriale and P. Briganti

Virtual Environment and Collaborative Work: The Role of Relationship Quality in Facilitating Individual Creativity
Rocco Agrifoglio and Concetta Metallo

Crowdsourcing and SMEs: Opportunities and Challenges
R. Maiolini and R. Naggi

Open Innovation and Crowdsourcing: The Case of Mulino Bianco
Manuel Castriotta and Maria Chiara Di Guardo

Relational Networks for the Open Innovation in the Italian Public Administration
A. Capriglione, N. Casalino, and M. Draoli

Learning and Knowledge Sharing in Virtual Communities of Practice: A Case Study
Federico Alvino, Rocco Agrifoglio, Concetta Metallo, and Luigi Lepore

Part XI
ICT in Individual and Organizational Creativity Development
Internet and Innovative Knowledge Evaluation Processes: New Directions for Scientific Creativity?
Pier Franco Camussone, Roberta Cuel, and Diego Ponte
Creativity at Work and Weblogs: Opportunities and Obstacles
M. Cortini and G. Scaratti

Part XII
IS, IT and Security
A Business Aware Information Security Risk Analysis Method
M. Sadok and P. Spagnoletti

Mobile Information Warfare: A Countermeasure to Privacy Leaks Based on SecureMyDroid
A. Grillo, A. Lentini, and G. Me

A Prototype for Risk Prevention and Management in Working Environments
M.G. Fugini, C. Raibulet, and F. Ramoni

The Role of Extraordinary Creativity in Organizational Response to Digital Security Threats
Maurizio Cavallari

Part XIII
Enterprise Systems Adoption
The Use of Information Technology for Supply Chain Management by Chinese Companies
Liam Doyle and Jiahong Wang

Care and Enterprise Systems: An Archeology of Case Management
F. Cabitza and G. Viscusi

Part XIV
ICT–IS as Enabling Technologies for the Development of Small and Medium Size Enterprises
Recognising the Challenge: How to Realise the Potential Benefits of ICT Use in SMEs?
P.M. Bednar and C. Welch

Understanding the ICT Adoption Process in Small and Medium Enterprises (SMEs)
R. Naggi

Second Life and Enterprise Simulation in SMEs’ Start Up of Fashion Sector: The Cases ETNI, KK Personal Robe and NFP
L. Tampieri
Part I
E-services in Public and Private Sectors
M. De Marco and J. vom Brocke
More and more services (e.g. information, interaction, transaction, and support) are (or could be) provided via electronic networks today. The main channel of e-service delivery is the Internet, but other channels, such as call centers, public kiosks, mobile phones, and television, may also play important roles, especially in integrated and multichannel solutions. This section focuses on such e-services as a promising reference model for business and private service organizations.

To gain a deeper understanding of e-services, different disciplinary approaches are essential, in order to make multi-disciplinary integration possible. In fact, whilst computer science and engineering are concerned with the development and provision of such services, economic and organization studies approaches are needed to investigate value-related issues, cost factors, service quality, process management, and so on. As a consequence, in this stream of studies, technical issues of infrastructure integration, service-oriented architectures and Enterprise Application Integration (EAI) overlap with the search for new business models and new quality models. Moreover, neither technical nor organizational e-services issues can be effectively addressed when investigation is limited to the boundaries of a single organization. Many e-services, in fact, imply inter-organizational process integration; and, in most cases, the relationships between e-service providers and final users imply collaboration processes which may need developing and improving.

This field of studies aims at understanding all the phenomena related to e-services, both when the private sector is the supply side and when the public sector is the provider. As a consequence, many questions arise, involving the differences between e-business and e-government. Researchers are encouraged to investigate the pros and cons of addressing public sector e-services with a private sector perspective. What are the specific (service) needs of public services users (citizens and businesses) in comparison with private service customers’ needs? Who are the stakeholders in public settings and what stakeholder theory should be developed for public e-services? What are the related emerging economic models for public e-services, given that generating revenues is not the main driver in the public sector?

In any case, both in the private sector (e-business) and in the public sector (e-government), the challenges which e-services studies must face are numerous. For example, there are economic challenges, related to cost affordability, cost-benefit
analysis, and value-related issues. There are issues and challenges more directly related to the social sciences, such as e-readiness, the digital divide, and the integration of different actors in e-services design and implementation. There are issues where psychological approaches are also needed, such as topics related to usability and user interface, user acceptance, trust, relationship management, and service experience. Ethical issues also have an important role: security- and privacy-related topics are perceived as more and more important for e-services success. Organizational and management issues related to e-services span from quality and evaluation models to the definition of new organizational processes, structures and skills, and from new forms of leadership to emerging public-private partnerships. Technical issues involve topics such as interoperability standards and frameworks, electronic invoicing, web services, service-oriented architectures, data management systems, and content management systems.

All these issues and challenges make e-services a cutting-edge, stimulating field of studies. This section presents contributions from multiple perspectives. Theoretical issues and empirical evidence developed in specific service areas (e.g. health care, tourism, government, banking), in processes (e.g. procurement, invoicing, payments), and in public or private environments constitute an ample research background to draw upon and to investigate.
Inter-organizational e-Services from a SME Perspective: A Case Study on e-Invoicing

R. Naggi and P.L. Agostini

R. Naggi: Department of Economics and Business Administration, LUISS Guido Carli, Rome, Italy, e-mail: [email protected]
P.L. Agostini: Dipartimento di Scienze dell’Economia e della Gestione Aziendale, Università Cattolica del Sacro Cuore, Milan, Italy, e-mail: [email protected]
Abstract. Adoption of inter-organizational e-services like e-Invoicing is not a simple task for SMEs. This work is an exploratory attempt to understand such complexity. Through the analysis of a case study the paper further points out that external pressures might induce SMEs to adopt e-services not matching their needs. New and probably underestimated questions arise: can pressures by trading counterparties generate market distortions and hidden inefficiencies also in e-service adoption? The paper will derive some preliminary conclusions and will propose directions for future research on the topic.
Introduction

In January 2007 the European Commission presented an Action Plan [1] with the goal of reducing administrative burdens for businesses in Europe by 25% by 2012. An outstanding emerging topic is the optimisation of administrative flows that are still based on paper documents. A recent study [2] underlines in particular how replacing paper-based processes is “relevant not just for exchanges between businesses (supply chain optimisation), but also for company-internal processes”. It is self-evident that the two aspects are strictly connected, the one producing the input-output documents that feed the other, and that we are dealing with Inter-Organizational Systems (IOS). Theoretical and empirical researchers have devoted
considerable attention to IOS since the 1980s. Although the body of literature both on the antecedents and the organizational consequences of IOS is vast, new directions of research have been suggested in order to develop “theories that are more compatible with technologies in the post-[point-to-point] EDI era” ([3], p. 509) and, in particular, with the e-service approach. In this line of reasoning, we propose a preliminary analysis of Electronic Invoicing procedures – that is, sending/receiving invoices without using a paper support – and the related theme of lawful e-archiving in the European scenario. Both of them are connected with the wider theme of the digitization of documents – also referred to as “dematerialization”. The theme has been attracting high and growing attention in the public and in the private sector in the last decades – the first EU directive on the matter was issued in 2001, while the EU EDI Recommendation dates back to 1994 – since invoices are among the most numerous and pervasive documents in B2G and B2B exchange of information. In the 2001/115/EC and 2006/112/EC Directives, in the studies of official Working Groups [4] and in the literature [5, 6], third-party (i.e. specialised providers’) e-invoicing and e-archiving services are regarded as crucial to allow a widespread adoption of new and more efficient administrative processes. Therefore e-invoicing and (lawful) e-archiving services are among the most important and relevant e-services.

Two main reasons motivate our research. First, the topic is, to our knowledge, understudied in the academic literature. Second, despite being heavily promoted by Public Institutions, the organizational implications of e-Invoicing and lawful e-archiving, especially in a SME perspective, are only occasionally analysed: what are the organizational entailments of shifting from paper-based to digital processes in the invoicing domain? Which organizational functions are (or should be) involved in this change process? How can e-services help enterprises to face all of these problems? And, finally, are they always helpful for SMEs or might they induce new market distortions?

This work is an exploratory attempt to understand the specific organizational implications and complexities of e-invoicing adoption, how they have been analysed in the scarce available academic literature, and what the possible lines of enquiry might be. To do this, the paper is structured as follows: first we will outline what e-invoicing is and what the economic system aims to achieve through its diffusion; then, we will shift the perspective to the point of view of a medium Italian enterprise, through an exploratory case study to observe the adoption of e-invoicing using the e-service supplied by one of the leading providers in Italy. The case is particularly interesting: in the exploration we unexpectedly met the opportunity to study also the problems arising in interfacing the original project with similar services adopted by a large customer of the enterprise under study. Through the analysis of the case study we will be able to isolate the organizational implications and challenges and to compare the different approaches towards SMEs of the two e-service providers. Finally, we will derive some preliminary conclusions and directions for future research.
e-Invoicing and e-Archiving

Extant literature assumes invoicing dematerialisation as the necessary step towards a complete integration of the delivery and the payment cycles [6, 7], allowing enterprises to considerably improve efficiency in Financial Supply Chain management [5]. Diminishing the administrative burden, making workflows more efficient, and obtaining transparency are just some of the goals the private and public sectors want to achieve. A widespread adoption of electronic invoicing could significantly reduce supply chain costs, by 243 billion EUR across Europe [8], but – despite an absolute convergence of opinions and expectations – the European market still carries more than 28 billion invoices per year, of which over 90% are still on paper [6, 8].

Given the quite evident benefits of implementing e-invoicing procedures, why is their diffusion still slow? A first consideration underlines the complexity of B2B transactions, as they involve manifold participants and complex processes, thus creating a long, intricate value chain [9]. These include procurement, agreement administration, financing, insurance, credit ratings, shipment validation, order matching, payment authorization, remittance matching, and general ledger accounting. Furthermore, B2B transactions are more likely to be disputed than B2C ones. Also, only large enterprises get economies of scale [5]. Other main problems affecting a pervasive diffusion of e-Invoicing and e-archiving processes among SMEs are [4, 10]: the diverse interpretations of the legislation; the continuing differences in national regulatory requirements, even within the EU; and the lack of a common international standard for layout and data elements. Legner and Wende [6] argue that “this low penetration can be explained by ‘excess inertia’ or ‘start-up problems’ typical of e-business scenarios in which positive network externalities prevail”.

The complexity of B2B transactions and the major differences among e-invoice formats (PDF, txt, UN/EDIFACT, SAP IDOC, XML industry standards, own formats, etc.), transmission channels (FTP, e-mail, portals, point-to-point, etc.) and national legislative requirements (qualified electronic signature, advanced signature, no signature, e-archiving compliance) generate a many-to-many matrix of relations in the exchange of invoices among trading partners. Such complexity is hardly manageable by big companies and, without the support of specialised outsourcers, absolutely overwhelming for SMEs.

It is also important to underline that, in juridical terms, the expression “electronic invoicing” unifies under a common label lawful procedures that are heterogeneous even within the EU scenario. The main legal requirement for the invoice sender is to obtain the receiver’s consent to issue invoices through an electronic transmission. The reason is that the receiver might not have the instruments to receive and, mainly, to archive and store (which is compulsory, since it is not allowed to print the received file) the electronic invoice. In scenarios characterized by a prominent presence of SMEs this is a major problem. The Italian Government foresaw the question and in 2004 – the first country in the EU to do so, while France did something similar in 2007 – introduced a second kind of electronic invoice: when the
customer does not agree to receive e-invoices, the sender is anyway allowed to generate and archive them electronically and to send them in the way the receiver requires: on paper, or through electronic devices. The receiver will then print the documents for lawful archiving and storage. That is why in Italy we distinguish between two kinds of e-invoice: the “symmetrical” one (electronic for both counterparties) and the “asymmetrical” one (electronic only on the sender side). Asymmetrical e-invoicing does not allow a full integration of systems; nevertheless, it allows full digitization of the sender’s invoice-issuing processes. The Italian approach has been effective in at least fostering an increasing diffusion of asymmetrical e-invoicing, while symmetrical e-invoicing is virtually nonexistent [11].
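To make the many-to-many matrix argument above concrete, the following minimal sketch (our own illustration, not drawn from the cited studies; the dimension values simply reuse the enumerations in the text) counts the worst-case adapter combinations a sender faces with point-to-point exchange, versus routing everything through a specialised provider acting as a hub:

```python
from itertools import product

# Dimensions of the invoice-exchange matrix enumerated in the text.
formats = ["PDF", "txt", "UN/EDIFACT", "SAP IDOC", "XML"]
channels = ["FTP", "e-mail", "portal", "point-to-point"]
legal_regimes = ["qualified signature", "advanced signature",
                 "no signature", "e-archiving compliance"]

# Point-to-point: in the worst case a sender must support one adapter per
# (format, channel, legal regime) combination demanded by its receivers.
worst_case = len(list(product(formats, channels, legal_regimes)))
print(f"worst-case adapters without an intermediary: {worst_case}")  # 80

# Provider-as-hub: the sender maintains a single interface to the provider,
# which performs the fan-out to each receiver's required combination.
print("adapters with a specialised provider: 1")
```

Even this toy count (80 combinations against a single interface) illustrates why the text regards specialised outsourcers as practically indispensable for SMEs.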
The Case Study

The contemporary nature of the phenomenon under study has led the authors towards an exploratory and qualitative research design. The case study approach [12] seems particularly suited where the theory in the area is not well developed [13]. The aim is to provide a description of the organizational challenges faced by the organization in the adoption process. To do so, we have selected a case where the switch to e-invoicing has been performed through an external provider, as foreseen in widely accepted adoption models. Both the focal firm and the provider have requested anonymity: we will call them respectively “ALPHA” and “BETA”. Data collection has been performed through interviews with key informants within the focal firm, along with participation in meetings. Triangulation of evidence was achieved by examining available documentation. Finally, BETA’s managers were also interviewed.

ALPHA is a leading Italian medium-sized enterprise in the field of machine tools. ALPHA is facing the current economic crisis: its turnover has halved during the last 3 years (from 100 to 50 million euros). ALPHA’s supply chain structure is quite complex. On the upstream side, the products are made of multiple components, which entail a very intricate sourcing network with dozens of small, frequently changing suppliers and a few big multinational suppliers of standard machine components. On the downstream side, ALPHA serves customers of all sizes in Italy and abroad with customized products. In order to improve coordination with suppliers and customers and to obtain real-time business intelligence, ALPHA has developed a complex in-house supply chain and inventory management system, integrating it with the accounting system. This effort has allowed substantial advances in terms of efficiency and effectiveness and, according to its managers, has proven to be fundamental in gaining competitiveness.

About a year before the authors began this research, ALPHA started evaluating the possibility of digitizing also the invoices generated by the supply chain flows of materials and goods. The possibility of choosing the symmetrical e-invoicing option was rejected immediately, because it was clear that the many little suppliers were neither interested in, nor prepared for, sending and storing e-invoices. On the other side, the recipients of the invoices were
characterized by differences in terms of size, stability of the relation, number of invoices received from ALPHA, country of origin (which means different legislations), and preparation or interest in receiving invoices electronically. ALPHA therefore opted quite straightforwardly for an asymmetrical invoice-issuing model, whereby the real decision to be made was the classic make-or-buy one.

An internal pre-analysis phase was therefore launched, yielding a number of main requirements and critical issues. No deficiencies were detected per se in the IT infrastructure and resources, or in the managerial and legal competences available to ALPHA. Nevertheless, e-invoicing processes turned out to imply peculiar problematic aspects, especially linked to the necessity of storing documents securely (and according to the law) in the long run. This made the in-house option less desirable, and the main e-invoicing providers were therefore contacted. After analysing their offers carefully, the “buy” option turned out to be preferable. The costs of the services were in fact convenient, especially considering that the outsourcing solution would avoid creating a dedicated infrastructure, consisting of specific:

– Software for automatically appending a digital signature to the documents (see the sketch after this list)
– Software to support multi-channel and multi-format sending of invoices
– Servers to archive the documents securely for a period of at least 10 years
– Resources for monitoring the rapidly evolving legal framework
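As a rough illustration of the first infrastructure item, the sketch below appends a detached digital signature to an invoice file using Python’s `cryptography` package. This is a minimal sketch under strong simplifying assumptions: the key and file paths are hypothetical, and a legally valid signature under the Italian e-invoicing rules would require a qualified certificate and a standard signature envelope (e.g. CAdES/XAdES), which this example does not implement.

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Hypothetical inputs: a PEM-encoded RSA private key and an invoice file.
with open("signer_key.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)

with open("invoice_0001.xml", "rb") as f:
    invoice_bytes = f.read()

# Produce a detached RSA/SHA-256 signature over the invoice contents.
signature = private_key.sign(
    invoice_bytes,
    padding.PKCS1v15(),
    hashes.SHA256(),
)

# Store the signature alongside the archived document.
with open("invoice_0001.xml.sig", "wb") as f:
    f.write(signature)
```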
All of these aspects were available through the providers, with a break-even point for personalization expenses of less than 1 year. The criteria on which the selection of the provider was based were the following: reliability and traceability of the technological infrastructure; competencies of the legal staff; competency and rapidity of response in managing the integration between the systems of ALPHA (based on SAP, with its well-known rigidities) and of the provider; modularity of the service (with the possibility of subsequent integration with further functionalities); and, last but not least, the degree to which the service would impact on existing processes (one of the interviewees used the term “not-intrusive” model). A medium-sized Italian provider (BETA) was eventually selected and the project was launched in 3 months (a very short time if compared to the twelve foreseen for the in-house hypothesis). It is worth noting that ALPHA had a clear awareness of the multidisciplinary nature of the project, so that all the following competencies were involved in all phases of the project: organizational, accounting, legal, logistics, ICT, and HR. According to the interviews collected within ALPHA, the main perceived organizational results can be summed up as:

– Savings on the costs of paper, mailing, printers, maintenance, and errors
– The possibility to move human resources from administrative tasks to core business activities
– Improvement of response time in the supply chain (upstream and downstream)

A very interesting aspect encountered in the case exploration is that, when ALPHA had just started the implementation of the project with the selected outsourcer BETA, an outstanding foreign customer of ALPHA (GAMMA) asked to
receive its e-invoices through another (foreign) provider (DELTA), used by GAMMA to manage a full, symmetrical e-invoicing system. Obviously, the aim of GAMMA was to integrate all its own administrative flows. DELTA’s offer (since the service had to be paid for by ALPHA) included the possibility of adopting its services also for ALPHA’s own invoicing processes. The important aspects, for our study purposes, were that, in ALPHA’s opinion:

– The services offered by DELTA were not flexible enough to match ALPHA’s needs for not-intrusive solutions.
– The customisation needs of DELTA’s standard service seemed underestimated.
– There were problems in matching ALPHA’s standard invoicing data with the data needed by DELTA’s service.
– The complex juridical aspects of Italian rules on e-invoicing and legal e-archiving seemed to be undervalued.
– Fees and prices were very high in comparison with the Italian average charges.
– Finally, the number of invoices issued towards that customer was very low (but the amount invoiced was not!).

Obviously, the main issue for ALPHA was to safeguard the relationship with its important customer. But, for our purposes, the main observation is that the integration with DELTA’s service (for the invoicing towards that particular customer – while obviously ALPHA refused to use DELTA’s service for its own needs) was expensive and complicated, causing a duplication of procedures. In other terms, invoicing costs towards that customer would be largely increased, not diminished. The problem was ultimately solved by BETA, which succeeded in integrating its system with the features needed to manage the doubled invoicing flow: but ALPHA’s costs were increased and it was further obliged to perform manual data input.
Discussion

Our exploratory case study has unexpectedly raised some further questions concerning e-services and the perspective with which they are implemented and offered. BETA and DELTA offer e-services in the same field of application and virtually to the same target (BETA manages processes also for large firms). Nevertheless, BETA was established and grew in a typical SME business scenario, the Italian one, where enterprises are not able to impose – or even simply to propose – a full integration of invoicing systems either to suppliers or, more obviously, to customers. Suppliers are either too big and powerful, or too small (not sufficiently sized to achieve advantages from document “dematerialization” or to manage it). Besides, in such a scenario, ERPs are far from being standardized. In other terms, BETA – although serving also large enterprises – has been able to develop its e-services from the perspective of a SME; therefore flexibility, modularity, unobtrusiveness, and vertical knowledge of regulation problems (in a civil law scenario) have been implemented in a low-cost service. These assumptions are
embedded in the offered e-service, and it was precisely this shared vision that made the service proposed by BETA successful for ALPHA. On the contrary, DELTA seems to have implemented its service starting from a large-enterprise perspective. At least on this occasion, DELTA has replicated the behaviour of its huge customers. The service is rigid; it aims at standardising the processes of little suppliers to the customer’s advantage; it is expensive; and its juridical features aim to support cross-border e-invoicing, simplifying the national legal requirements for e-archiving.
Conclusions

E-invoicing and (lawful) e-archiving services are among the most important and pervasive e-services. Besides, both services have implications in the private and in the public sector. Hence, exploring their actual implementation might be an occasion to contribute to the outlining of new business models.

Robey et al. [3], in their review of the IOS adoption literature, identify three streams of research: factors influencing IOS adoption; the impact of IOS on governance over economic transactions; and the organizational consequences of IOS. In the e-Invoicing field, both the available literature and the reports produced by working groups or institutions concentrate mainly on the first research stream and tend to highlight what factors are leading to or inhibiting adoption. Typical drivers are efficiency, effectiveness and competitive position. The latter is particularly worth focusing on for our analysis. In a pre-e-service era, Morrell and Ezingeard [14] underline how competitive pressure and imposition by trading partners push SMEs to IOS adoption even if they are not prepared to gain full advantage from it. Chen and Williams [15] show how SMEs’ efficiency might even be reduced if external pressures are uncontrolled.

In the case study we have observed the same problems also concerning e-services. In some way, the providers tended to replicate and play the same role as their clients. Questions therefore arise: are e-services less “egalitarian” than they are assumed to be? Might they induce new and unexpected market distortions and hidden inefficiencies? We have underlined how ALPHA was obliged to pay for two services instead of one, in order to comply with the requirements of its customer GAMMA. What if other important customers of ALPHA adopt other providers? The theme gains even more relevance if considered in a public (national or supra-national) policy perspective [16].

The main limitation of the paper is inherent in its exploratory nature. However, it is hopefully a first step in the direction suggested by Robey ([3], p. 509) in his call for theories that are more coherent with contemporary phenomena. The authors intend to incrementally complete the research in various stages. First, some additional case studies concerning SMEs with supply chains similar to ALPHA’s will be developed. Different industries will be analysed for comparison. The authors also plan to control for a possible effect of the national context by developing some case
studies abroad. Further interviews with relevant players and experts will also be conducted in parallel, in order to iteratively fine-tune the research strategy. The qualitative results are expected to inductively support the authors in building a solid theoretical framework [13].
References

1. European Commission (2007) Action Programme for Reducing Administrative Burdens in the European Union - COM(2007) 23 final, Brussels.
2. The Sectoral e-Business Watch (2008) The European e-Business Report 2008, 6th Synthesis Report of the Sectoral e-Business Watch, H. Selhofer, et al., Editors, Brussels.
3. Robey, D., G. Im, and J.D. Wareham (2008) Theoretical Foundations of Empirical Research on Interorganizational Systems: Assessing Past Contributions and Guiding Future Directions. Journal of the Association for Information Systems. 9(9): p. 497–518.
4. CEN/ISSS (2003) Report and Recommendations of CEN/ISSS e-Invoicing Focus Group on Standards and Developments on electronic invoicing relating to VAT Directive 2001/115/EC – Final, Brussels.
5. Fairchild, A. and R. Peterson (2003) Value Positions for Financial Institutions in Electronic Bill Presentment and Payment (EBPP), in Proceedings of the 36th Hawaii International Conference on System Sciences (HICSS’03), R. Sprague, Editor. p. 10.
6. Legner, C. and K. Wende (2006) Electronic bill presentment and payment, in Proceedings of the Fourteenth European Conference on Information Systems, J. Ljungberg and M. Andersson, Editors, Göteborg. p. 2229–2240.
7. Furst, K., W. Lang, and D. Nolle (1998) Technological Innovation in Banking and Payments: Industry Trends and Implications for Banks. Quarterly Journal. 17(3): p. 23–31.
8. European Commission Informal Task Force on e-Invoicing (2007) European Electronic Invoicing (EEI) Final Report.
9. CEBP - NACHA (2001) Business-to-Business EIPP: Presentment Models and Payment Options.
10. Tanner, C. and R. Wölfle (2005) Elektronische Rechnungsstellung zwischen Unternehmen, Fachhochschule Basel Nordwestschweiz, Institut für angewandte Betriebsökonomie, Basel.
11. Politecnico di Milano – Dipartimento di Ingegneria Gestionale (2010) La Fatturazione Elettronica in Italia: reportage dal campo.
12. Yin, R.K. (2003) Case Study Research: Design and Methods, Thousand Oaks, Sage.
13. Eisenhardt, K. (1989) Building theories from case study research. Academy of Management Review: p. 532–550.
14. Morrell, M. and J. Ezingeard (2002) Revisiting adoption factors of inter-organisational information systems in SMEs. Logistics Information Management. 15(1): p. 46–57.
15. Chen, J.C. and B.C. Williams (1998) The impact of electronic data interchange (EDI) on SMEs: summary of eight British case studies. Journal of Small Business Management. 36(4): p. 68–72.
16. Arendsen, R. and T.M. van Engers (2004) Reduction of the Administrative Burden: An e-Government Perspective, in Electronic Government, R. Traunmüller, Editor. Springer, Berlin / Heidelberg. p. 200–206.
E-Services Governance in Public and Private Sectors: A Destination Management Organization Perspective

F.M. Go and M. Trunfio

F.M. Go: Marketing Management Department, Erasmus University, Rotterdam, The Netherlands, e-mail: [email protected]
M. Trunfio: Management Studies Department, University of Naples “Parthenope”, Naples, Italy, e-mail: [email protected]
Abstract. In today’s “wired world” the public and private sectors face competing pressures of price rises and scarcity of “territory”. So far, the public and private sector knowledge domains have largely developed separately. A Destination Management Organization perspective can accommodate the production facilities and e-services governance to represent the interests of both the public and private sectors. The notion of short-term lets of the territory must be assessed against perceived outside threats, such as food scarcity, that require self-sufficiency to protect the long-term interests of both public and private sector stakeholders. This paper develops an e-services “interactive” governance model to bridge gaps through trustworthy relations in the context of decision making by network stakeholders. Subsequently, it applies this model to the Trentino case study to examine conceptual constructs based on embedded governance as a vehicle to balance heritage and innovation and knowledge dissemination. It concludes by summarizing the advantages and disadvantages of sharing information within a destination management organization context amongst the public and private sectors, as a step towards “reclaiming the narrative of the commons”.
Introduction

No organization can be an island in today’s “wired world.” The emerging scarcity of resources [1] and time puts additional pressure on destination management organization (DMO) decision makers to understand not only their size (i.e., scale economies) but also the transaction costs of information, which, in turn, determine the mechanism of organizational governance [2]. Foucault [3] coined the
F.M. Go, Marketing Management Department, Erasmus University, Rotterdam, The Netherlands, e-mail: [email protected]
M. Trunfio, Management Studies Department, University of Naples "Parthenope", Naples, Italy, e-mail: [email protected]
A. D'Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_2, © Springer-Verlag Berlin Heidelberg 2011
term "governmentality" to mean the strategies both of the organizational governance of those at the top and of the self-governance of those below. In an increasingly globalizing world, the literature focuses attention on the issue of localization. The term "territory" is used here to capture the intersection of several trends, the most significant of which is the growing politicization of resources. The latter is the result of declining supplies of energy, food and water, of the privatization of energy and food production and of water services, and of the growing political influence of the bioregionalism movement [4]. The food price rise of 2007–2008 served as a wake-up call for politicians to ensure that everyone has enough to eat. Subsequently it has shaded into "food self-sufficiency", i.e. a grow-your-own-food movement. In fact, self-sufficiency has become a common policy goal in many countries [5]. It also "fuels" the prospects for localization and green politics, and serves as a significant issue that arises exogenously and impacts the "real world" of local tourism [6–18]. On the territorial scale, tourism and agriculture involve two broad areas of expertise: the fields of tourism marketing, including destination organization management, and of agricultural management, rural area planning, and environment. The two are often administratively isolated from one another, resulting in 'contradictions and conflicts' whilst linked in remit and areas of conduct [19]. External pressures, including commodity price increases and financial instability, require an understanding of tensions and conflicts in territorial decision making. In turn, decentralization and the asymmetry of information press information brokers to treat information as a common resource.
Two Types of Perspective on DMO Analysis
The tourism marketing management literature has focused, in part, on the Destination Management Organization (DMO) model. It distinguishes two main perspectives for analyzing the DMO: first, the perspective of the interactive network management approach and, second, the governance perspective. The DMO enables the coordination of decision making of public and private stakeholders along a vertical scale (supra-national, national, regional and local level) and along horizontal and diagonal scales [15–17, 20–22]. The governance approach promises a more stakeholder-centric coordination mechanism and a reduction of transaction costs [23]. Because the tourism field involves public and private sector stakeholders, who are administratively isolated from one another and often in pursuit of different goals, the risk of controversy and conflict is relatively high. However, the role of public authorities (political and administrative actors) remains rather under-developed in the literature [16, 17, 23]. Therefore, the application of a governance perspective in the tourism knowledge domain can be amply justified. Against the backdrop of governance theory, the present paper claims that "embedded governance" [24], based on the interaction approach [25] referred to in network theory, can foster effective relationships between local, regional and (supra)national stakeholders. This paper's main contribution is threefold: It develops an "interactive" and
"embedded" governance model [16, 25] to bridge gaps through trustworthy relations in the context of decision making by network stakeholders. It applies the model of embedded governance to the Trentino case study [26] with the aim of balancing heritage and innovation through knowledge dissemination. It pinpoints the advantages and disadvantages of sharing information amongst the public and private sectors as a step towards "reclaiming the narrative of the commons" [27]. So, the central issue is: How can the theoretical framework of e-services governance be applied in the public- and private-sector context, so as to integrate the different knowledge domains needed to facilitate knowledge sharing and communication transfer between social networks?
The Network Approach in Tourism Research
The network approach has been adopted in different research fields as a "pair of glasses" through which to analyse reality. In a recent study of tourism networks [28], network theory is applied using different approaches that can be traced back to the social network approach, the inter-organizational approach, the industrial network approach, the entrepreneurial network approach and the policy-making network approach. First, mathematical models fail to explain the complexity of tourism, particularly the dynamic network processes that involve creation, transformation, replication and the behaviour of its actors. Second, from a managerial perspective, the IMP (Industrial Marketing and Purchasing) Group network interaction approach is proposed for managing business networks [25]. Lemmetyinen and Go [29] identify some key success factors of networks: the ability to develop and implement informational, interpersonal and/or decisional roles; the ability to create joint knowledge and absorptive capacity; strong partnering capability; and orchestrating and visioning the network in a way that strengthens the actors' commitment to the brand ideology. Third, from a policy and regional studies perspective, networks have become an instrument to complement the hierarchical top-down approach of governance, reinforcing the horizontal interrelation between actors and favouring innovation. In this sense, Caalders [30] offers interactive approaches to local development along three models: first, the communicative model, grounded in legitimacy, emancipation/democracy, self-governance and the involvement of stakeholders in policymaking; second, the instrumental model, anchored in concepts such as quality/innovation, improvement in terms of content/rationality, network governance and the development of plans and policies; and third, the strategic model, which builds on concepts such as efficiency, effectiveness, public support, and networking to create support for policy decisions. Information systems and human resource management play an important role in network analysis and in developing the relational capabilities and competences that support knowledge and competitiveness [31]. Also, knowledge management affords a relevant perspective for understanding the evolution of New Public Management and of the hybrid public-private alliances that are built to cooperate and achieve a mutual knowledge sharing and learning agenda [32, 33].
Governance: Destination Organization Management Perspective
The relevance of governance is becoming a central topic for researchers and policy makers alike, around the world, in analyzing how countries, urban areas and rural areas guide and coordinate tourism strategy so as to converge the tactics of firms and institutions towards common goals. The literature refers to different approaches to tourism governance. Typically, they follow two rationales: first, the destination management approach and, second, the political-institutional hierarchic approach. The former positions the DMO as a meta-organizer with the express aim of balancing stakeholders' different interests through coordination and, where appropriate, of integrating their various perspectives within a coherent destination strategy [18, 34–37]. A recent study [18] shows the relationship between DMO services and the tourism firms that are part of selected networks. Regretfully, the marketing-oriented studies neglect the analysis of the coordination between the different hierarchy levels (regional, national, supra-national) of tourism development. Therefore, it would behove DMOs to apply the governance concept not only to a broader geographical catchment area, but also to relevant knowledge domains, including policy formulation, which in practice tend to develop largely independently from one another. The political-institutional hierarchic approach belies its assumed significance: in particular, the role of public authorities, i.e., political and administrative actors, in relation to private tourism actors remains a rather under-developed research domain [16, 17]. Recent studies indicate that local tourism development depends on the national governance model and policies [15–17]. In turn, these bear the influence of political and social ideology and are encoded in the legal system, in lawmaking applied at national and regional level, in the tourism domain and beyond. From the legal perspective, the structure of tourism governance results from the application of a top-down hierarchic approach supported by tourism legislation. The latter defines the extent to which tourist policies can be decentralized in different national contexts. The hierarchic approach describes the relationship between government and society and can be characterised as a vision of the future as a domain that can be known, managed and planned for. It expresses a rational-scientific approach towards systems planning and integrated development. Several studies [16, 20, 38] recognize the "bankruptcy" of top-down planning. The complexity of a globalized society requires the adoption of growth models beyond the regional planning domain, or the re-invention of the role of system planning and integrated development. Accordingly, clear governance features must be created and maintained to ensure the constant coordination of marketing strategy in response to tourism development needs at different decisional levels (national, regional, local). Owing to scarce resources, the optimal trade-off between central coordination (the collective interest) and the decentralization of power (supra/national interest) remains a controversial issue through which contradictions and conflicts can arise. Therefore, the main research challenge is to bring about the realization of an embedded subject of governance in a hierarchic model [24], designed so as to create interactive governance [16] that is both dynamic and
contextually sensitive, able to mobilize collaboration between actors, especially entrepreneurs and community members. The third rationality of embedded governance represents the convergence of two processes: top-down hierarchy (linear) and bottom-up democracy (non-linear). It is meant to create a vehicle for social innovation. Thus, embedded tourism governance represents a platform between political actors, business and community, designed to create sustainable development. This process must be supported by "collaborative and social inclusive consensus-building practices, designed to create three kinds of shared capital: social capital – trust, flows of communication and willingness to exchange ideas, intellectual capital – mutual understanding, and political capital – formal or informal agreements and implementation of projects" [10, 18]. In synthesis, the embedded governance of a tourism system has a number of priority roles: first, to understand stakeholders' aims; second, to develop a local culture of partnership; third, to create and support knowledge transfer; fourth, to define a participative/shared marketing strategy; fifth, to develop organization and marketing tools; sixth, to facilitate internal and external communication; seventh, to manage change; eighth, to support innovation; ninth, to coordinate the relationships between different actors; and, finally, to control divergent processes. Whilst the co-creation of value is an admirable principle, it can generate a governance dilemma owing to the complexity of managing network relations, the management of multiple modes of collaboration, the rapid change of the competitive environment, the need for rapid response and decentralization, and the need for flexibility and accountability [39].
E-Services Governance in Public and Private Sectors: Trentino Case
In this section the e-services public and private sector governance model is applied to the case study of Trentino S.p.a., drawing on previous empirical research [40], including legal documentation, annual reports, official documents and web sites (http://www.trentinospa.info, http://www.turismo.provincia.tn.it/osservatorio, http://www.visittrentino.it). It seeks to synthesize the relationships between networking, governance model, destination management, place branding, knowledge management and ICT in a manner that affords the balancing of local heritage and innovation, thereby preserving sustainable development and quality of life derived from territorial assets (agriculture, culture and industry). The Trentino S.p.a. model of centralized territorial governance is designed to integrate and coordinate networks, enterprises and institutions into a common place brand strategy. It draws on the process of converging top-down and bottom-up dynamics. The top-down process derives from a Provincial Law ("Disciplina della promozione turistica di Trento", n. 8/2002) through which the Autonomous Province of Trento reorganized the provincial tourism organization, creating Trentino S.p.a. (shares: 60% Autonomous Province of Trento, 40% Chamber of Commerce) with functions of governance of place (for its activity see Provincial
Deliberation n. 390/2002, "Linee guida del progetto di marketing territoriale del Trentino"). The bottom-up process expresses the legitimation and participation of local actors within the network context. Based on the embedded governance model [23], Trentino S.p.a. represents:
- a platform between different actors and networks: Chamber of Commerce, Province, Tourism Promotion Agencies, University of Trento, tourism consortiums, consortiums of Pro Loco, hotels associations, clubs of products (thematic networks), local firms, project groups, external actors, and other private and public actors;
- a filter of information coming from the European Union, the National Government and the Province of Trento (laws, funds, projects, etc.), from the market (defining and implementing Trentino's marketing and place branding strategy), and from lobbies and power coalitions, so as to reduce external variety and converge towards local competitiveness;
- a facilitator bridge for knowledge sharing and communication transfer between networks of public and private actors, or between single actors, creating trust and a high degree of integration of local services: all knowledge and strategies are shared and codified; in particular, the use of Trentino's brand is subject to the application of a brand manual and to a strong standardization of services;
- a balance between local heritage and social innovation: the place brand strategy is based on the local community values of equilibrium between tradition and innovation, environment and development, interiority and openness. The place brand is symbolized by a butterfly, which expresses equilibrium and quality of life.
The new website (http://www.visittrentino.it) introduces a myriad of innovations to communicate and sell the destination. The institutional web site of Trentino is the centre of the governance strategy and the support of the demand and supply relationship. In 2009 the new web site showed how technologies have become an important instrument for promotion and delivery, as part of the destination marketing strategy defined by Trentino S.p.a. [41]. In line with the Trentino S.p.a. strategy, and with the support of new technologies, the website has become the fundamental destination promotion and marketing tool, allowing: the reinforcement of a differentiated brand symbol; increased popularity on the internet; internet promotion towards new market targets; the introduction of geo-marketing; the transformation of clicks into requests for information and on-line reservations; enhanced stakeholder relations; and the monitoring of trends to support strategic decisions. The results of Visittrentino are summarized in Table 1.
Table 1 The results of Visittrentino

                                                              2008           2009            Variation %
On-line visits (visittrentino.it)                             3,998,973      4,669,729       +6.26%
Pages seen (visittrentino.it)                                 23,108,141     22,015,218      -4.73%
On-line visits (visittrentino + trentinospa + other sites)    4,604,231      5,723,120       +24.30%
Pages seen (visittrentino + trentinospa + other sites)        26,594,238     27,036,318      +1.66%
Requests by e-mail                                            86,227         85,793          -0.50%
Reservations                                                  2,068          3,395           +64.17%
Turnover                                                      € 951,606.33   € 1,502,548.71  +57.90%

Source: http://www.trentinospa.info
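The variation column is a plain year-over-year percentage change, (value2009 - value2008) / value2008. Purely as an illustration (not part of the original study), a minimal Python sketch recomputing a few rows of Table 1; the row labels are shortened here for readability:

```python
# Year-over-year variation as reported in Table 1: (v2009 - v2008) / v2008.
# Figures copied from Table 1; labels shortened for readability.
table1 = {
    "Pages seen (visittrentino.it)":      (23_108_141, 22_015_218),
    "On-line visits (all tourism sites)": (4_604_231, 5_723_120),
    "Requests by e-mail":                 (86_227, 85_793),
    "Reservations":                       (2_068, 3_395),
}

def variation(v2008: float, v2009: float) -> float:
    """Percentage change from 2008 to 2009."""
    return (v2009 - v2008) / v2008 * 100

for indicator, (v2008, v2009) in table1.items():
    print(f"{indicator}: {variation(v2008, v2009):+.2f}%")
# Prints -4.73%, +24.30%, -0.50%, +64.17%, matching the table.
```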
Conclusion
The Trentino S.p.a. case illustrates the practice of a centralized governance approach towards the territorial integration of E-Services Governance in Public and Private Sectors. It has taken a Destination Management Organization perspective to ease the formation of public-private networks aimed at reducing external variety and bringing about a degree of convergence of capabilities so as to raise local competitiveness. In this context, the E-Services model fulfils a crucial bridge function, facilitating knowledge sharing and communication transfer between the social networks that represent the public sector and the private sector. Furthermore, the E-Services model is innovative in that it connects common, territorial powers, thereby enhancing the DMO model, otherwise limited to serving tourism interests, and integrates technical infrastructure and local services, aimed at balancing the interests of local heritage and social innovation, through the creation of trustworthy relations.
References
1. Klare, M.T. (2002) Resource Wars: The New Landscape of Global Conflict, New York, Henry Holt & Co.
2. Williamson, O. (1996) The Mechanisms of Governance, Oxford, Oxford University Press.
3. Foucault, M. (1982) The Subject and Power, in H. Dreyfus and P. Rabinow, Michel Foucault: Beyond Structuralism and Hermeneutics, Brighton, Harvester: 208–226.
4. Kraft, M.E. and H. Fisk (1999) Toward Sustainable Communities: Transition and Transformations in Environmental Policy, Boston, MIT Press.
5. Anon (2009) How to feed the world, The Economist, November 21st: 13.
6. Bramwell, B. and B. Lane (2000) Tourism Collaboration and Partnerships: Politics, Practice and Sustainability, Clevedon, Channel View Publications.
7. Bramwell, B. and A. Sharman (1999) Collaboration in local tourism policymaking, Annals of Tourism Research, 26(2): 392–415.
8. Getz, D., D. Anderson and L. Sheehan (1998) Roles, issues, and strategies for convention and visitors' bureaux in destination planning and product development: a survey of Canadian bureaux, Tourism Management, 19(4): 331–340.
9. Gill, A. and P. Williams (1994) Managing growth in mountain tourism communities, Tourism Management, 15(3): 212–220.
10. Healey, P. (1996) Consensus-building across difficult divisions: new approaches to collaborative strategy making, Planning Practice and Research, 11(2): 207–216.
11. Jamal, T. and D. Getz (1995) Collaboration theory and community tourism planning, Annals of Tourism Research, 22(1): 186–204.
12. Ladkin, A. and A. Bertramini (2002) Collaborative tourism planning: a case study of Cusco, Peru, Current Issues in Tourism, 5(2): 71–93.
13. Mandell, M. (1999) The impact of collaborative efforts: changing the face of public policy through networks and network structures, Policy Studies Review, 16(1): 4–17.
14. Pforr, C. (2006) Tourism policy in the making: An Australian network study, Annals of Tourism Research, 33(1): 87–108.
15. Trunfio, M. (2008) Governance turistica e sistemi turistici locali. Modelli teorici ed evidenze empiriche in Italia, Turin, Giappichelli.
16. Kooiman, J. (2008) Interactive Governance and Governability: An Introduction, The Journal of Transdisciplinary Environmental Studies, 7(1).
17. Dinica, V. (2009) Governance for sustainable tourism: a comparison of international and Dutch visions, Journal of Sustainable Tourism, 17(5): 583–603.
18. D'Angella, F. and F.M. Go (2009) Tale of two cities' collaborative tourism marketing: Towards a theory of destination stakeholder assessment, Tourism Management, 30(3): 429–440.
19. Orbasli, A. (2000) Tourists in Historic Towns: Urban Conservation and Heritage Management, London, E & FN Spon.
20. Golinelli, C.M. (2002) Il territorio sistema vitale. Verso un modello di analisi, Turin, Giappichelli.
21. Golinelli, C.M., M. Trunfio and M. Liguori (2006) Governo e marketing del territorio, in AA.VV., Nuove tecnologie e modelli di e-business per le Piccole e Medie Imprese nel campo dell'ICT, Sinergie. Rapporti di ricerca, n. 23/2006 (2).
22. Petruzzellis, L. and M. Trunfio (2006) Caratteri e problematiche di governo dei sistemi turistici. Un possibile modello di sviluppo, Small Business, 1: 113–143.
23. Svensson, B., S. Nordin and A. Flagestad (2006) Destination governance and contemporary development models, in Lazzeretti, L. and C.S. Petrillo (eds.), Tourism Local Systems and Networking, Elsevier.
24. Go, F.M. and M. Trunfio (2010) Tourism Development after the Crisis: Coping with Global Imbalances and Contributing to the Millennium Goals, Reports 60th Congress AIEST.
25. Ford, D., L.E. Gadde, H. Hakansson and I. Snehota (2003) Managing Relationships, 2nd Ed., Chichester, Wiley.
26. Yin, R. (1994) Case Study Research: Design and Methods, Thousand Oaks, Sage.
27. Bollier, D. (2003) Silent Theft: The Private Plunder of Our Common Wealth, London, Routledge.
28. Lemmetyinen, A. (2010) The Coordination of Cooperation in Tourism Business Networks, Turku, Turku School of Economics (Dissertation).
29. Lemmetyinen, A. and F.M. Go (2008) The key capabilities required for managing tourism business networks, Tourism Management, 30(1): 97–116.
30. Caalders, J. (2003) Rural Tourism Development, Delft, Eburon.
31. Buonocore, F. and C. Metallo (2004) Tourist destination networks, relational capabilities and relationship builders: the central role of Information Systems and Human Resources Management, in Petrillo, C.S. and J. Swarbrooke (eds.), Networking and Partnership in Destination Development and Management, ATLAS International Conference, Naples, Enzo Albano Editore.
32. Hamel, G. (1991) Competition for competence and inter-partner learning within international strategic alliances, Strategic Management Journal, 12 (Special issue): 83–103.
33. Teece, D.J. (1992) Competition, cooperation, and innovation: Organizational arrangements for regimes of rapid technological progress, Journal of Economic Behavior & Organization, 18(1): 1–25.
34. Buhalis, D. (2000) Marketing the competitive destination of the future, Tourism Management, 21(1): 97–116.
35. Pechlaner, H. and K. Weiermair (2000) Destination Management. Fondamenti di marketing e gestione delle destinazioni turistiche, Milan, Touring University Press.
36. Franch, M. (2002) Destination Management. Governare il turismo tra locale e globale, Turin, Giappichelli.
37. Martini, U. (2005) Management dei sistemi territoriali. Gestione e marketing delle destinazioni turistiche, Turin, Giappichelli.
38. Richards, G. and D. Hall (2000) The community: A sustainable concept in tourism development?, in G. Richards and D. Hall (eds.), Tourism and Sustainable Community Development, London/New York, Routledge.
39. Prahalad, C.K. and V. Ramaswamy (2004) The Future of Competition: Co-Creating Unique Value with Customers, Cambridge, MA, Harvard Business School Press.
40. Trunfio, M. and M. Liguori (2006) Turismo e branding collettivo: il caso Trentino, in AA.VV., Nuove tecnologie e modelli di e-business per le Piccole e Medie Imprese nel campo dell'ICT, Sinergie. Rapporti di ricerca, n. 23/2006 (2).
41. Trunfio, M. (2010) Il marketing delle destinazioni turistiche: il caso Visittrentino, in Kotler, P., J. Bowen and J. Makens, Marketing del turismo, Pearson Education Italia.
Intelligent Transport Systems: How to Manage a Research in a New Field for IS T. Federici, V. Albano, A.M. Braccini, E. D’Atri, and A. Sansonetti
Abstract This paper sheds light on the management of a research project in a topic that is new for IS: Intelligent Transport Systems (ITS). It describes and discusses the methodology adopted for a survey designed by the authors and tested during recent research on ITS carried out on behalf of an Italian Ministry. The paper presents the first results of this research and draws some conclusions on the problems that have to be faced in order to successfully manage this type of research project and to build a common knowledge base on ITS.
Introduction
Intelligent Transport Systems (ITS) have been defined as "tomorrow's technology, infrastructure, and services, as well as the planning, operation, and control methods to be used for the transportation of persons and freight" [1]. In spite of that, the official definition remains the one given by the Commission of the European Union: "ITS mean applying Information and Communication Technologies (ICT) to transport. These Applications are being developed for different transport modes and for interaction between them (including interchange hubs)" [2]. At present several works exist on specific ITS systems [3–5], on available technologies [6, 7], or on possible fields of application [8, 9]. Other works retrace the history of these systems and outline the state of the art of these technologies both in Europe and in the rest of the world [10, 11]. Anyhow, all these studies come from the transportation engineering discipline; consequently, ITS have never been examined by the IS discipline. Moreover, no existing research can provide
T. Federici, University of Tuscia, Viterbo, Italy, e-mail: [email protected]
V. Albano, A.M. Braccini, E. D'Atri, and A. Sansonetti, CeRSI-LUISS Guido Carli, Rome, Italy, e-mail: [email protected]; [email protected]; [email protected]; [email protected]
A. D'Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_3, © Springer-Verlag Berlin Heidelberg 2011
a comprehensive picture of all the ITS initiatives carried out in a specific country, namely Italy. The present research was commissioned by the Italian Ministry of Infrastructures and Transportations in 2008 and was completed at the end of 2009. The original goal was the analysis of the informatics and organizational solutions adopted by a sample of ITS projects to be analyzed in depth. From the very beginning, the field of Italian ITS systems appeared only roughly defined: a complete map of the ITS projects applied on the Italian territory was not available, nor was their current state of progress known. It was therefore not possible to select a statistically valid sample for the analysis originally planned. This circumstance challenged the planned research and required a shift in the goal to be pursued. The primary objective of the research then became a survey – as wide as possible, but not necessarily exhaustive – of the ITS projects currently under way on the Italian territory, and the construction of a map of these projects on the basis of a set of parameters necessary to classify them meaningfully. The availability of comprehensive knowledge of previous experiences in the field, both experimental and applicative, is a preliminary condition for promoting new initiatives in the ITS sector, and for new, and deeper, research initiatives on the ITS topic.
Research Methodology
In the context depicted in the introduction, this research has followed an exploratory approach, both for the novelty of the ITS research field and for the blurred definition of its scope (as explained later in the text). The particularly wide area of interest, divided into several layers that are not univocally defined, first required a thorough definition of the scope of the research. First of all, the research team focused on the identification of a taxonomy of ITS systems that could be used as a guide to orient the design of the subsequent research activities and of the instruments to be applied in them. At present there are some taxonomies developed to classify ITS projects [12, 13], usually built on highly articulated structures that relate the scope of the systems to the technologies used [14]. For the needs of the present research, the taxonomy adopted has been taken from official documents [15, 16], since it was judged the clearest and most exhaustive, showing a clear distinction among the terms used in it. The taxonomy adopted is again based on the scope of the systems and is composed of the following categories:
- Traffic and mobility management and monitoring
- Information to customers
- Public transportation management
- Fleet and freight transport management
- Automatic payment systems
- Advanced control of vehicles for security in transports
- Emergencies and accidents management
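For readers who want to operationalize this classification, for example when tagging records in a project database, the seven categories can be encoded directly. The following Python sketch is our illustration, not part of the original study; the example project name is invented:

```python
from enum import Enum

class ITSCategory(Enum):
    """Scope-based ITS taxonomy adopted from [15, 16]."""
    TRAFFIC_MOBILITY = "Traffic and mobility management and monitoring"
    CUSTOMER_INFO = "Information to customers"
    PUBLIC_TRANSPORT = "Public transportation management"
    FLEET_FREIGHT = "Fleet and freight transport management"
    PAYMENT = "Automatic payment systems"
    VEHICLE_CONTROL = "Advanced control of vehicles for security in transports"
    EMERGENCY = "Emergencies and accidents management"

# A project may fall under more than one category.
project_tags = {
    "Urban traffic supervisor (hypothetical)": {
        ITSCategory.TRAFFIC_MOBILITY, ITSCategory.CUSTOMER_INFO}
}
```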
Regarding the boundaries of the research, projects have been included in the survey on the basis of the following selection criteria:
- Projects started or promoted since the year 2000.
- Projects with an experimental or a deployment aim.
- Projects with at least one Italian partner, or implemented (or to be implemented) on the Italian territory.
- Projects centred on the following transportation modalities (and their interconnections): car, rail, ship, plane.
- Projects funded by:
  - the European Union (the Research, Energy, Transport, Information Society and Media, and Enterprise and Industry funding programmes have been taken into consideration);
  - six central Italian administrations, chosen on the basis of the strategic role that ITS systems play in the policies for economic development, transport, security, and environment;
  - one or more of the 21 Italian regions;
  - the four most relevant municipalities in Italy: Roma, Milano, Torino, Napoli.
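These criteria translate naturally into an executable filter. The sketch below is hypothetical (the record fields and their names are ours, not the survey's actual data model):

```python
from dataclasses import dataclass

ELIGIBLE_MODES = {"car", "rail", "ship", "plane"}
ELIGIBLE_FUNDERS = {"EU", "central administration", "region", "municipality"}

@dataclass
class CandidateProject:            # hypothetical record, for illustration only
    start_year: int
    aim: str                       # "experimental" or "deployment"
    italian_partner: bool
    implemented_in_italy: bool
    modes: set
    funders: set

def is_in_scope(p: CandidateProject) -> bool:
    """Apply the selection criteria listed above."""
    return (p.start_year >= 2000
            and p.aim in {"experimental", "deployment"}
            and (p.italian_partner or p.implemented_in_italy)
            and bool(p.modes & ELIGIBLE_MODES)
            and bool(p.funders & ELIGIBLE_FUNDERS))
```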
The wide variety of sources from which ITS projects had to be selected required the adoption of two different research strategies:
1. One based on the information available on web sites
2. One based on the administration of an electronic survey
The first strategy involved two steps: (1) querying publicly accessible data banks with specific search keys to identify the details of the different projects, and (2) identifying, selecting, and analyzing project documents, in order to gather more detailed data (such as the typology of the project, the list of its proponents, and its scope). This strategy could be applied only to the sources made available by the European Commission; in no other case was it worth pursuing, because of the absence of search engines capable of recognizing ITS projects. The only exception to this rule is that of one central administration (MIUR: the Italian Ministry of Education, University and Research). In this case, to ensure a more precise search, given the limitations of the selection criteria offered by the two databases queried (Arianna and Me.Mo.Ri), the Directorate General for Research Grants was asked to fill in an electronic sheet with the fields necessary for the data collection. The second research strategy was based on an electronic survey sent to a selected sample of recipients in a position to provide meaningful data on the ITS projects promoted by their administrations. This strategy was used for the remaining central administrations to be included in the survey, for the Regions, and for the Municipalities.
Both the electronic survey and the queries on publicly available databases were targeted at obtaining the following details on the ITS projects:
- General details: name, abstract, website, and project type (research, development, deployment, . . .)
- Partnership: coordinator and other partners
- Activities: starting date, ending date, current state of progress, localization of the project
- Financial details: financial dimension of the project, dimension of the grant, funding sources
- Typology of the ITS system: scope, modality, and aims
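The five groups of fields map onto a flat record such as the one sketched below; the attribute names are ours, since the paper does not publish the actual questionnaire schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ITSProjectRecord:
    # General details
    name: str
    abstract: str = ""
    website: Optional[str] = None
    project_type: str = ""        # research, development, deployment, ...
    # Partnership
    coordinator: str = ""
    partners: list = field(default_factory=list)
    # Activities
    start_date: Optional[str] = None
    end_date: Optional[str] = None
    status: str = ""              # current state of progress
    localization: str = ""
    # Financial details
    total_budget: Optional[float] = None
    grant: Optional[float] = None
    funding_sources: list = field(default_factory=list)
    # Typology of the ITS system
    scope: str = ""
    modality: str = ""
    aims: str = ""
```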
Once ready, the survey was addressed to two recipients to test and validate its contents. The recipients of the survey have been identified by means of the institutional websites of regional and municipal administrations. In particular, the survey has been sent to the Directorates General (DG) responsible for the following sectors: transport, mobility, security, and economic development. Later on, the survey has also been extended to the Agencies for Mobility of both the Regions and the Municipalities (when available), or to other structures that, on the basis of the description of the activities performed, have been judged possible targets worth contacting. The data collection process has followed five steps (some of them iterated more than once):
l
l
l l
First contact (over phone) with the administration to: identify the recipient(s) of the survey, illustrate and clarify the data that have to be collected and the methodology to be used for this data collection, get some feedback regarding the availability of the recipient to fill in the survey and the expected amount of time he/she requested before returning the survey. Survey dispatch along with a brief description of the research and the instructions to fill it in. A final telephone contact to provide assistance (when required) and to find an agreement on the return date of the survey. A remind (via mail or telephone) in the case of delays in the responses. A final contact (via mail) to thank those who have sent the data back, to ask them to check the data they have submitted for completeness or integrity, and to ask them to inform the research team in the case they had new information on further ITS projects.
To support the data collection process, the following artefacts have been designed, realized, and used:
- An Access database containing all the details of the projects investigated.
- Software to manage the database of all the data gathered on the ITS projects (called "Banca dati sui Progetti ITS in Italia"), which allows users to update data already in the database, insert new data, and look up the data available in the database.
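The paper does not disclose the internals of the "Banca dati sui Progetti ITS in Italia" tool. Purely as an illustration of the three operations it is said to expose (insert, update, look up), here is a self-contained SQLite sketch with an invented, simplified schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # invented, simplified schema
conn.execute("""CREATE TABLE IF NOT EXISTS projects (
    name TEXT PRIMARY KEY, coordinator TEXT, status TEXT, category TEXT)""")

# Insert new data.
conn.execute("INSERT OR IGNORE INTO projects VALUES (?, ?, ?, ?)",
             ("Example ITS project", "Example coordinator", "running",
              "Traffic and mobility management and monitoring"))

# Update data already in the database.
conn.execute("UPDATE projects SET status = ? WHERE name = ?",
             ("closed", "Example ITS project"))

# Look up the data available in the database.
for name, status in conn.execute("SELECT name, status FROM projects"):
    print(name, status)

conn.commit()
conn.close()
```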
First Results
Following the first research strategy, 110 web pages have been queried, 2,100 ITS projects have been identified, and 175 have been selected. This large gap is mainly due to the inadequacy of the filters available in the search engines, which forced the research group first to use a broader set of search criteria and then to check all the results manually, in order to identify good projects and discard irrelevant ones. Following the second research strategy, instead, 71 different administrations, with an average of 2.3 DGs each, have been identified as potential recipients. During the contact process the number of recipient administrations grew to 76 (61 among regions and autonomous provinces, 9 municipalities, 6 ministries), with an average of 3 DGs each. In total the research group made 385 telephone calls and sent circa 300 e-mails: on average, 5.42 phone calls were made and 4.89 e-mails were sent per recipient. The number of contacts directly testifies to the difficulties faced in identifying the right recipient to whom the survey had to be addressed and to the delays in the return of the filled-in surveys. The inertia in the process can be attributed to the set of steps necessary to interact directly with the head of the identified administrations, or to the subsequent discovery of a possible recipient different from the one first identified. Besides the time necessary for the identification of the right recipient, the time required by the administrations to fill in the survey and send it back has also been quite long. It is nonetheless relevant to point out that, notwithstanding these difficulties, there was a significant willingness by the identified recipients to take part in the survey and to provide information (the response rate was quite high: 74%). This testifies that an activity to aggregate and disseminate information on this topic, for example by creating a specific professional community, should be considered worthwhile. At the end of the selection and data collection process, 83 projects have been identified, divided among the following sources:
- 34 projects from the survey addressed to regions and autonomous provinces
- 44 projects from the municipalities
- 5 projects from the Ministry of Infrastructures and Transportation
These projects add up to the 175 identified from European Commission sources and to the 76 selected from the project files sent by the MIUR (as detailed before), for a total of 334 projects analyzed.
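The counts are internally consistent, as a trivial check confirms:

```python
from_survey = 34 + 44 + 5        # regions/provinces + municipalities + Ministry
total = from_survey + 175 + 76   # + European Commission sources + MIUR files
assert (from_survey, total) == (83, 334)
```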
Discussion
Some general considerations regarding the application of the method stem from the research experience described.
A first outcome regards the field of ITS systems, which is, so far, not clearly described and identified. It is in fact quite difficult to identify, inside the administrations, the subjects with direct competence on the topic, since there are no specific responsibilities devoted to it. Added to this are the difficulties of the recipients in identifying what the acronym ITS, or the expression "Intelligent Transport Systems", exactly refers to. A second aspect worth mentioning is that the identification of information sources and the access to data are also difficult in this field. The research group noticed a certain degree of approximation and incompleteness in the information available on websites, even official ones. Moreover, even in the case of apparently complete catalogues, the subset of projects referable to the ITS domain is not always directly selectable. Finally, the data available on ITS projects are often scarce and incomplete: sometimes the only information available is the name of the project, the name of the coordinator, and a contact (usually without a telephone number or e-mail address). Frequently a project website, which could be a source for deepening the research, is not available. These considerations support the claim that the survey of ITS projects carried out during this research is not complete, and that the information gathered cannot be considered exhaustive, owing to wrong or misleading interpretations of ITS by the addressed recipients. In this regard, the definition of a compact taxonomy of ITS projects, like the one adopted in this work but possibly clearer and more coherent, appears to be a necessary step. Such a taxonomy should be properly disseminated among all potentially interested subjects, in order to create the shared knowledge base that could serve as a common vocabulary to ease mutual communication and understanding on this topic. It must also be noted that no single entity collects, organizes, and disseminates information on the numerous initiatives still running or already closed. This absence prevents the generation and diffusion of knowledge on a highly innovative field for the application of advanced technologies. The regular feeding of several knowledge sources, or even of one single catalogue of projects thoroughly designed and constantly updated, would be particularly useful, given that initiatives for the creation of ITS systems are at the same time attractive (for their novelty, their capabilities, and the potential funding available) and challenging, especially for everything concerning the organization and management of the services based on them. The availability of a patrimony of experiences, some possibly similar to a new one being designed, can act as an incentive to the diffusion of ITS systems and, at the same time, improve technological and organizational choices. The current research might then be considered a first step in this direction, bringing a patrimony of data, but also of witnesses and contacts, with whom it will be easier to plan future initiatives with analogous objectives.
Conclusions and Future Research Plans
This paper introduces an exploratory research effort devoted to ITS projects, illustrating the methodology designed, the difficulties encountered, and the choices made to face them. The topic of ITS systems is a research area that has so far been completely neglected in IS. The description of the method adopted, of the results obtained, and of the characteristics of this research area – in terms of available shareable knowledge and of specific problems – offers other IS researchers a knowledge base from which they can promote further and deeper investigations. Regarding the research described in this paper, the effort will proceed, first, through the elaboration and discussion of the data gathered on the ITS projects during this survey. Later, on the basis of the results of the discussion of these data, we plan to extend the identification of ITS projects and to deepen some aspects connected to their organization and to their use of information and communication technologies.
References
1. Crainic, T.G., Gendreau, M., Potvin, J.Y. (2009). Intelligent freight-transportation systems: Assessment and the contribution of operations research. Transportation Research Part C: Emerging Technologies, 17(6), 541–557, Elsevier.
2. Commission of the European Communities (2008). Action Plan for the Deployment of Intelligent Transport Systems in Europe. Resource document. Available at http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2008:0886:FIN:EN:PDF.
3. Sengupta, R., Rezaei, S., Shladover, S.E., Cody, D., Dickey, S., Krishnan, H. (2007). Cooperative collision warning systems: Concept definition and experimental implementation. Journal of Intelligent Transportation Systems, 11(3), 143–155.
4. Bohli, J.M., Hessler, A., Ugus, O., Westhoff, D. (2008). A secure and resilient WSN roadside architecture for intelligent transport systems. Proceedings of the First ACM Conference on Wireless Network Security, ACM, 161–171.
5. Arief, B., Blythe, P., Fairchild, R., Selvarajah, K., Tully, A. (2008). Integrating smartdust into intelligent transportation systems. 10th International Conference on Application of Advanced Technologies in Transportation, 27–31.
6. Hasan, M.K. (2010). A Framework for Intelligent Decision Support System for Traffic Congestion Management System, Scientific Research Publishing.
7. Gartner, N.H., Stamatiadis, C., Tarnoff, P.J. (1995). Development of Advanced Traffic Signal Control Strategies for Intelligent Transportation Systems: Multilevel Design. Transportation Research Record, 1494.
8. Bishop, R. (2000). A survey of Intelligent Vehicle Applications Worldwide. IEEE Intelligent Vehicles Symposium 2000, October 3–5, Dearborn (MI), USA.
9. Masaki, I. (1998). Machine-vision systems for intelligent transportation systems. IEEE Intelligent Systems, 24–31.
10. Figueiredo, L., Jesus, I., Machado, J.A.T., Ferreira, J.R., Martins de Carvalho, J.L. (2001). Towards the development of intelligent transportation systems. 4th IEEE Intelligent Transportation Systems Conference, Oakland (CA), 1207–1212.
11. Russo, F., Comi, A. (2004). A State of the Art on Urban Freight Distribution at European Scale. Presented at ECOMM 2004, Lyon; available from The European Conference on Mobility Management, http://www.epomm.org.
12. European Commission's Directorate-General for Energy and Transport, Transport Research Centre (2009). Intelligent Transport Systems, thematic research summary. Resource document. Available at http://www.transport-research.info/Upload/Documents/201002/20100215_125401_19359_TRS_IntelligentTransportSystems.pdf.
13. Spyropoulou, I., Karlaftis, M., Golias, J., Yannis, G., Penttinen, M. (2005). Intelligent transport systems today: a European perspective. European Transport Conference 2005, October 3–5, Strasbourg, France.
14. Research and Innovative Technology Administration (RITA) – U.S. Department of Transportation (2009). Taxonomy of Intelligent Transportation Systems Applications. Resource document. Available at http://www.itslessons.its.dot.gov/its/benecost.nsf/images/Reports/$File/Taxonomy.pdf.
15. European Commission, Energy and Transportation DG, Luxembourg (2003). Intelligent transport systems. Intelligence at the service of transport networks. Available at http://europa.eu.int/comm/transport/themes/network/english/its/pdf/its_brochure_2003_en.pdf.
16. Ministero delle Infrastrutture e dei Trasporti – Direzione Generale per la Programmazione (2003). Sistemi ITS – stato dell'arte.
Operational Innovation: From Principles to Methodology M. Della Bordella, A. Ravarini, F.Y. Wu, and R. Liu
Abstract The present research has the objective of discovering and understanding potential sources of Sustained Competitive Advantage (SCA) for companies, and of exploiting this potential in order to achieve and maintain competitive advantage through operational innovation, especially through the implementation of IT-dependent strategic initiatives. A new strategic analysis methodology is proposed and described in the paper. The concept of Business Artifact (BA), already introduced and used for business process modeling within the Model Driven Business Transformation (MDBT) framework, is the basic element of our methodology. The theoretical foundations of the work are provided by the Resource Based View (RBV) of the firm [Barney, J Manage 17(1):99–120, 1991] and by the Critical Success Factors (CSF) method [Rockart, Harv Bus Rev 57(2):81–93, 1979]. Considering that, by definition, each Business Artifact has a data model in which all the resources it needs and uses during its lifecycle are specified, we want to identify which Business Artifacts are strategically relevant for a company and prioritize them according to the Sustained Competitive Advantage they could provide. These key BAs should then be the target of any IT-dependent strategic initiative, which should include actions aimed at improving or transforming these BAs in order to achieve, maintain and exploit the company's competitive advantage.
M. Della Bordella and A. Ravarini, Università Carlo Cattaneo – LIUC, Cetic, C.so Matteotti 22, 21053 Castellanza (Varese), Italy, e-mail: [email protected]; [email protected]
F.Y. Wu and R. Liu, IBM Research, T.J. Watson Research Labs, 19 Skyline Drive, Hawthorne, NY 10598, USA, e-mail: [email protected]; [email protected]
A. D'Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_4, © Springer-Verlag Berlin Heidelberg 2011
Introduction
Operational innovation, according to Hammer, is defined as "the invention and deployment of new ways of doing the work; operational innovation is truly deep change affecting the very essence of a company and nevertheless is by nature disruptive and it should be concentrated in those activities with the greatest impact on an enterprise's strategic goals" [11]. In the last two decades Michael Hammer and several scholars proposed approaches that translate reengineering (and operational innovation) from theory to practice [5, 9–11]; in his 2004 HBR article, Hammer also provides some examples of operational innovation success stories and gives guidelines on how to achieve it in a company. Artifact-Centric Operational Modelling (ACOM) is a methodology – developed within a framework named Model Driven Business Transformation (MDBT) [3, 4] – that supports IT-enabled business transformations [14, 15]. ACOM specifies the modelling of a business process with an information-centric approach that allows the software solution supporting the modelled business process to be generated quasi-automatically [4, 13]. Contrary to the traditional, activity-centric approach to process modelling, ACOM enables time and money savings by rapidly providing a business analyst with a prototype representing the process and simulating its functioning [12], thus allowing the customer to start thinking about how to transform and improve that process. The ACOM approach has proved suitable and successful in addressing specific business objectives related to a process re-design or to an IT platform implementation [6]; in general, it is particularly suitable if a company already knows which processes it needs to transform. Problems arise, instead, when the company does not know exactly what its transformational objectives are before linking them to IT solutions: the Artifact-centric approach assumes that a company is able to identify which Business Artifacts make up its business and which ones should be transformed in order to fulfill strategic aims. Such an assumption is far from realistic: a large part of the IS literature about IT/IS strategic alignment deals precisely with the complexity of expressing strategic objectives in terms compliant with the design of the information system [1, 21]. ACOM is a powerful tool for achieving operational innovation, but it is too "operations-oriented" and has little visibility on a company's business; in order to effectively translate operational innovation from theory to practice, it needs to be integrated with a strategic analysis of the business that identifies which parts of the company need innovation. The present research aims at completing the ACOM approach by adding a "strategic layer". This layer consists of a methodology, complementary to ACOM, which drives the analysis of the business strategy and the identification of strategic priorities by using the same central concept as ACOM, i.e. the Business Artifact, but extending its scope to include the role of a Business Artifact within the business strategy.
Operational Innovation: From Principles to Methodology
31
The remainder of the paper is organized as follows: "Theoretical Background: The Artifact Centric Operational Modeling" briefly describes the ACOM approach; "Theoretical Background About Innovation and Strategy" provides the theoretical background of the work; "Description of the Methodology" describes the proposed methodology; and "Future Work" outlines the future work.
Theoretical Background: The Artifact Centric Operational Modeling
The Business Artifact-centric approach, unlike traditional business modeling methods, which often consider process modeling and data modeling separately, takes a unified approach by representing business processes as interacting business artifacts. Each business artifact is characterized by a self-contained information model and a streamlined lifecycle model. The lifecycle model consists of a collection of business activities that act on the business artifact, progressing towards the operational goal manifested by the business artifact. The information model includes the information needed in executing the activities. For example, in an account opening process that takes place in a bank, the data entity Arrangement is likely to be identified as a business artifact. Its lifecycle model describes business activities such as Identifying Customers, Proposing Arrangement, Accepting Arrangement, and Activating Arrangement. Each of these activities marks a significant milestone in the lifecycle of Arrangement. The information model of this business artifact contains data attributes of Arrangement, such as Customer ID and arrangement conditions, as well as other data artifacts, e.g., Proposal and Offer, that are created or modified in the context of arrangements. Model-Driven Business Transformation (MDBT), shown in Fig. 1, is a methodology and also a tool set for transforming business strategies into IT implementation, in order to achieve alignment between business and IT. MDBT contains a series of transformations. The first transformation extracts operational objectives from a strategy model and then defines business entities to manifest those operational objectives; accordingly, an operation model is created as a set of interacting business entities. The second transformation builds a composition model from the operation model; in the composition model, more application design details can be added, and MDBT provides a tool to make this transformation semi-automatic. The last transformation generates IT applications, also called implementation models, from the composition model. Clearly, in the MDBT methodology the starting point is a well-defined business strategy model from which business artifacts can be easily identified. However, business strategies often do not lend themselves to the identification of business artifacts.
[Fig. 1 The model driven business transformation framework. The figure shows four layered models linked by (semi-automatic) transformations. At the business level: a strategy model (executive; what the enterprise wants to do) connected through designs, objectives and KPIs to an operation model (LOB manager; how the enterprise does it). At the IT level: a composition model (IT architect; what the IT system needs to do) and an implementation model (IT developer; how the IT system does it), connected through performance metrics and measurements down to realization.]
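To make the Arrangement example concrete, the following Python sketch shows the two models the approach prescribes: an information model (the data the activities read and write) and a lifecycle model (milestone states with legal transitions). This is our illustration, not the actual ACOM/MDBT tooling:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class ArrangementState(Enum):
    """Lifecycle milestones of the Arrangement business artifact."""
    IDENTIFYING_CUSTOMER = auto()
    PROPOSED = auto()
    ACCEPTED = auto()
    ACTIVE = auto()

# Lifecycle model: the legal transitions between milestones.
TRANSITIONS = {
    ArrangementState.IDENTIFYING_CUSTOMER: {ArrangementState.PROPOSED},
    ArrangementState.PROPOSED: {ArrangementState.ACCEPTED},
    ArrangementState.ACCEPTED: {ArrangementState.ACTIVE},
}

@dataclass
class Arrangement:
    """Information model: data needed in executing the lifecycle activities."""
    customer_id: str
    conditions: dict = field(default_factory=dict)
    proposal: dict = field(default_factory=dict)  # created while proposing
    offer: dict = field(default_factory=dict)     # created/modified in context
    state: ArrangementState = ArrangementState.IDENTIFYING_CUSTOMER

    def advance(self, new_state: ArrangementState) -> None:
        """Move to the next milestone, rejecting illegal transitions."""
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state

# Usage: a = Arrangement("C-42"); a.advance(ArrangementState.PROPOSED)
```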
Theoretical Background About Innovation and Strategy
Before proposing a new strategic analysis methodology coherent with the Artifact-centric approach to process modeling, we analyzed some representative strategic analysis models and methods (such as Porter's five forces [17], Six Sigma, and the Component Business Model) in order to verify their compliance with our research aims. None of the reviewed strategic analysis approaches proved compliant with the concept of Business Artifacts [7]. Thus, we decided to design a new strategic analysis methodology by integrating the concept of BA with the Resource Based View of the firm and the Critical Success Factor method. The Resource Based View of the firm (RBV or RBT), proposed by Barney in 1991, has the objective of understanding how a company can achieve a Sustained Competitive Advantage (SCA) by implementing strategies that exploit internal strengths, through responding to environmental opportunities, while neutralizing external threats and avoiding internal weaknesses [2]. According to Barney, SCA can be achieved through firm resources. Ironically, the definition of firm resources is the main controversial issue of the RBT. A detailed literature review, performed on the most relevant contributions about RBV – especially considering, but not limited to, the leading IS strategy journals such as MIS Quarterly and Information Systems Research, and the leading journals in the field of management and strategy such as Sloan Management Review and Strategic Management Journal – and carried out basically without any time constraint, from Barney's studies (and even before) to the last year, led us to the conclusion that neither Barney nor later scholars have been able to agree on a common vision of the concept of resource. According to Barney's definition [2], firm resources "include all assets, capabilities, organizational processes, firm attributes, information, knowledge, etc. controlled by a firm that enable the firm to conceive of or implement strategies that improve
its efficiency and effectiveness". After this definition, several scholars distinguished between resources and capabilities and proposed several different definitions [8, 19]. Considering resources from Barney's perspective, they can be divided into subsets, and only a particular kind of resource is able to provide SCA. A resource that has the potential to provide SCA must be:
l
l
l
Valuable, when they enable a firm to conceive of or implement strategies that improve its efficiency and effectiveness. Rare, when they’re not possessed by a large number of competing or potentially competing firms Imperfectly imitable, if the firms which don’t possess these resources cannot obtain them Not substitutable, if there are no strategically equivalent resources that are them themselves not rare or imitable.
At first glance, Barney's definition of firm resource may appear coherent with the aim of the present work, because it is the one most adherent to the concept of resource contained in the data model of the Business Artifacts. In fact, in the data model of a Business Artifact, one may find the specification of physical assets that are consumed by tasks or role players who execute tasks, as well as the competences, skills and knowledge (which fall under the definition of capability) required for the lifecycle of that Artifact. On the other hand, one must consider that Business Artifacts use resources as they are processed (by activities), but at the same time (according to Barney's definition) Business Artifacts are themselves firm resources, because they encapsulate business processes that can implement strategies to improve efficiency and effectiveness [2]. Moreover, there can be other firm resources, such as the management team or the physical location, that may not be used by Business Artifacts. Given the broad and inherently ambiguous definition of firm resource, it is advisable to get back to the roots of the RBV and focus on what SCA is and what its sources are. According to Barney [2], a firm is said to have a sustained competitive advantage when it is implementing a value-creating strategy not simultaneously being implemented by any current or potential competitors and when these other firms are unable to duplicate the benefits of this strategy. Notably, there is much more agreement on the definition of SCA than on that of resource and, apart from some discussion about the sustainability and duration of the competitive advantage [20], the definition reported here is widely accepted among different authors and clear enough not to generate misunderstandings. On the basis of the general definition of SCA it is now possible to investigate the sources of SCA and the relation between Business Artifacts and SCA. For these reasons we found it very useful to introduce the Critical Success Factor (CSF) method as a means to link Business Artifacts and SCA. The CSF methodology, originally designed and developed by Rockart [18], has a long and successful tradition in the IS literature and especially in MIS planning.
According to Rockart's definition, Critical Success Factors are "the limited number of areas in which results, if they are satisfactory, will ensure successful competitive performance for the organization". Putting this definition in terms of Sustained Competitive Advantage, we can state that CSFs are those areas potentially able to generate SCA. This does not mean that a 1-to-1 relationship between CSFs and SCA exists, but rather that some CSFs with particular characteristics are the sources of SCA. We can sum up the advantages of involving the CSF method in our research in a few points:

- We are able to look directly at the sources of SCA without the abstraction of the firm resource (which has proved misleading).
- We can rely on a structured methodology for the identification of CSFs, proposed by Rockart, that has been tested successfully in several applications.
- The concepts of CSF and BA are similar, and the association between CSFs and Key Business Artifacts should be easier to perform than the one between SCA and Key Business Artifacts.

Therefore the CSF method allows us to link SCA and Business Artifacts, and eventually SCA will be derived from: (1) a Business Artifact, (2) a particular characteristic of the Business Artifact that puts the BA in the condition of providing SCA, or (3) other factors not directly related to any Business Artifact (this is the case of what Barney calls "historical conditions" and "social complexity").
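As a toy illustration of this selection logic (the data structures are ours; the published methodology is expressed in prose, not code), the sketch below filters a CSF list by the four conditions above and collects the Business Artifacts associated with the surviving CSFs:

```python
from dataclasses import dataclass

# Hypothetical sketch of the CSF -> SCA -> Business Artifact chain described
# above. Field names and the example CSFs (taken loosely from Rockart's
# illustration) are for demonstration only.

@dataclass
class CSF:
    name: str
    valuable: bool
    rare: bool
    imperfectly_imitable: bool
    not_substitutable: bool
    business_artifacts: list

    def provides_sca(self) -> bool:
        """A CSF is a source of SCA only if all four conditions hold."""
        return all([self.valuable, self.rare,
                    self.imperfectly_imitable, self.not_substitutable])

csfs = [
    CSF("Risk recognition in major bids and contracts",
        True, True, True, True, ["Contract", "Loss event", "Claim"]),
    CSF("Profit margin on jobs",
        True, False, False, True, ["Bid", "Order", "Profit forecast"]),
]

key_artifacts = sorted({a for c in csfs if c.provides_sca()
                        for a in c.business_artifacts})
print(key_artifacts)   # only artifacts behind SCA-generating CSFs survive
```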
Description of the Methodology

We can apply the outcomes of the discussion presented above to define a methodology integrating the ACOM at the strategic layer of the MDBT. The first step of this methodology is the application of the CSF method for the elicitation of the CSFs; within this paper we refer to Rockart's method, which consists of three steps: (1) an introductory workshop with executives to obtain their commitment and explain the methodology, (2) CSF interviews, designed specifically so that each person identifies the factors which are, in her opinion, critical both for herself and for the organization, and (3) a CSF focusing group, in charge of coming up with a list of CSFs coherent with the company's goals as well [18]. The second stage is the identification of the CSFs able to provide SCA: business experts familiar with the methodology, through a careful analysis of all the CSFs, should identify the ones that possess the four characteristics mentioned above, i.e. they must be valuable, rare, imperfectly imitable and not substitutable. Once identified, these CSFs need to be associated with the BAs.
Table 1 Association between CSFs and BAs

Critical success factor | Prime measure | Process involved / hypothetical business artifact
Risk recognition in major bids and contracts | Company's years of experience with similar products; "new" or "old" customers; prior customer relationship | Risk management / contract: deal; asset; invoice; loss event; claim
Profit margin on jobs | Bid profit margin as ratio of profit on similar jobs in this product line | Bidding process; profit forecast / backlog; bid; order; profit profile forecast

At this level it is important
to select the processes, and thus the Business Artifacts, involved in the CSFs. Using the very same example shown in Rockart's paper, we provide in Table 1 an example of BA identification starting from a CSF list. Notably, the output of the second phase is inevitably a partial picture of the business, as we identify only those BAs influencing the creation of SCA. At this strategic level it is not necessary to complete the picture by identifying all the BAs involved in the business of a company. The monitoring stage is the last phase of the methodology and operates in feedback. This latter stage has not been fully defined yet: although we have already performed a literature review on the topic [7], none of the reviewed methodologies seems adequately compatible with the ACOM approach.
Future Work

Future work in our study will first be directed at further defining and refining the proposed methodology; research should thus proceed through several steps. First, it is necessary to extend the review of the RBV and CSF theories, and especially of the related concept of dynamic capabilities, in order to develop a more accurate definition of the relationship between Business Artifacts and Sustained Competitive Advantage. Secondly, we will perform a theoretical investigation of which characteristics of a Business Artifact make an IT-dependent strategic initiative particularly effective in generating a sustained competitive advantage. A challenging issue is the identification of practical guidelines on how to design or redesign the key Business Artifacts in order to maximize the sustained competitive advantage they can provide to the company. Third, we need to design the performance measurement system for the Business Artifacts (e.g. according to the Balanced Scorecard approach, as mentioned above). Finally, the developed methodology will be applied and tested for validation in a real company case study.
References

1. Avison, D., Jones, J., Powell, P. and Wilson, D.N. (2004). "Using and Validating the Strategic Alignment Model", Journal of Strategic Information Systems, 13(3), pp. 223–246.
2. Barney, J. (1991). "Firm resources and sustained competitive advantage", Journal of Management, 17(1), pp. 99–120.
3. Bhattacharya, K., Guttman, R., Lymann, K., Heath III, F.F., Kumaran, S., Nandi, P., Wu, F.Y., Athma, P., Freiberg, C., Johannsen, L. and Staudt, A. (2005). "A model-driven approach to industrializing discovery processes in pharmaceutical research", IBM Systems Journal, 44(1), pp. 145–162.
4. Bhattacharya, K., Caswell, N.S., Kumaran, S., Nigam, A. and Wu, F.Y. (2007). "Artifact-centered operational modeling: lessons from customer engagements", IBM Systems Journal, 46(4), pp. 703–721.
5. Caron, J., Jarvenpaa, S. and Stoddard, D. (1994). "Business Reengineering at CIGNA Corporation: Experiences and Lessons Learned from the First Five Years", MIS Quarterly, 18(3).
6. Chao, T., Cohn, D., Flatgard, A., Hahn, A., Linehan, N., Nandi, P., Nigam, A., Pinel, F., Vergo, J. and Wu, F.Y. (2009). "Artifact-Based Transformation of IBM Global Financing", Proceedings of the 7th International Conference on Business Process Management, Ulm, Germany.
7. Della Bordella, M., Ravarini, A. and Liu, R. (2009). "Performance measurement and strategic analysis for Model Driven Business Transformation", 8th Workshop on e-Business, Phoenix.
8. Grant, R.M. (1991). "The resource-based theory of competitive advantage: implications for strategy formulation", California Management Review, 33(3), pp. 114–135.
9. Hammer, M. (1990). "Reengineering work: don't automate, obliterate", Harvard Business Review, 68(4), pp. 104–112.
10. Hammer, M. and Stanton, S. (1999). "How process enterprises really work", Harvard Business Review, 77(6), pp. 108–120.
11. Hammer, M. (2004). "Deep change: how operational innovation can transform your company", Harvard Business Review, 82(4).
12. Kumaran, S., Liu, R. and Wu, F.Y. (2008). "On the Duality of Information-Centric and Activity-Centric Models of Business Processes", Proceedings of the 20th International Conference on Advanced Information Systems Engineering (CAiSE'08), pp. 32–47.
13. Liu, R., Bhattacharya, K. and Wu, F.Y. (2007). "Modeling business contexture and behavior using Business Artifacts", Proceedings of the 19th International Conference on Advanced Information Systems Engineering (CAiSE'07).
14. Liu, R., Wu, F.Y., Patnaik, Y. and Kumaran, S. (2009). "Business Artifacts: An SOA Approach to Progressive Core Banking Renovation", 2009 IEEE International Conference on Services Computing, pp. 466–473.
15. Nigam, A. and Caswell, N.S. (2003). "Business Artifacts: An approach to operational specification", IBM Systems Journal, 42(3), pp. 428–445.
16. Piccoli, G. and Ives, B. (2005). "IT-Dependent Strategic Initiatives and Sustained Competitive Advantage: A Review and Synthesis of the Literature", MIS Quarterly, 29(4).
17. Porter, M. (1981). "The contributions of industrial organization to strategic management", Academy of Management Review, 6(4), pp. 609–620.
18. Rockart, J.F. (1979). "Chief executives define their own data needs", Harvard Business Review, 57(2), pp. 81–93.
19. Russo, M.V. and Fouts, P.A. (1997). "A resource-based perspective on corporate environmental performance and profitability", Academy of Management Journal, 40, pp. 534–559.
20. Wade, M. and Hulland, J. (2004). "Review: The Resource-Based View and Information Systems Research: Review, Extension, and Suggestions for Future Research", MIS Quarterly, 28(1), pp. 107–142.
21. Oh, W. and Pinsonneault, A. (2007). "On the Assessment of the Strategic Value of Information Technologies: Conceptual and Analytical Approaches", MIS Quarterly, 31(2), pp. 239–265.
Public Participation in Environmental Decision-Making: The Case of PPGIS Paola Floreddu, Francesca Cabiddu, and Daniela Pettinao
Abstract The Public Participation Geographic Information System (PPGIS) offers a special and potentially important means to facilitate public participation in planning and decision making. The major problem is the lack of evaluation methods to verify the effects of PPGIS on decision-making processes. To fill this gap, the objective of this ongoing research is to develop an analytical framework through which PPGIS initiatives can be evaluated.
Introduction

The notion of citizen participation in environmental public decision-making has been discussed extensively in the scientific literature. In particular, Wiedemann and Femers [1] introduce levels of public participation in the environmental scenario. Public participation is seen as the distribution of information to citizens who are concerned with environmental issues. The lowest rung of the ladder has been defined by the authors as the "right to be informed", while the uppermost may be identified as "the partnership of the public in the final decision-making". In this line of reasoning, Tulloch and Shapiro [2] have set out a framework for the classification and measurement of different levels of citizen participation in decisional processes regarding environmental issues. Technology brings a new element into this conceptual field [3]. ICT can be used to improve traditional ways of involving citizens in environmental decisions. Among the information technology tools used to actively involve citizens in environmental issues, the public participation geographic information system (PPGIS) has certainly had a key role. PPGIS pertains to the use of geographic information systems (GIS) to broaden public involvement in environmental decision-making [4–9].
P. Floreddu, F. Cabiddu, and D. Pettinao University of Cagliari, Cagliari, Italy e-mail:
[email protected],
[email protected],
[email protected]
This paper is organized as follows. We first examine the role of PPGIS in environmental decision making; we next describe our methods and measures; we then present the results.
Theoretical Framework

The term "PPGIS" was born in 1996 at a conference hosted by the NCGIA, whose subject was how to improve access to GIS among non-governmental organizations and individuals, especially those who have historically been under-represented in public policy making [5, 6, 10]. The major objective of PPGIS is the inclusion of local communities in spatial planning through a participative approach that uses geographical technologies to improve policy management [6, 7]. PPGIS embodies the broad notion that spatial visualization in GIS represents a unique opportunity to enhance citizen involvement in public environmental decisions [4, 9, 11]. Its use, in fact, facilitates the understanding of problems and allows interested parties to put their points of view directly on the maps [8]. Modern PPGIS applications include new ICT tools, such as chat, email, forums and blogs, which also allow bi-directional communication [5]. In recent years GIS technologies have become more open and accessible thanks to the rise of the Internet; this growth has facilitated the democratization of spatial information, making interaction with spatial data more familiar. The development of tools like Google Maps and Google Earth has greatly facilitated the participation of users in e-participatory processes, thanks to the ease and immediacy of the interaction they make possible [12]. Combining GIS and Internet technologies [5], Internet-based PPGIS allows the public to participate in the topics being discussed anywhere with web access, at any time. It has the potential to reach a much wider audience and allows public participation in the very early stages of the planning and decision-making process [7, 9]. To define a feasible framework capable of measuring the results obtained through the use of PPGIS, a variety of measurement indicators have been considered. In particular, we considered both the contributions provided by Rowe and Frewer [13] and the procedures outlined by Macintosh [14]. These are used to assess in what way new technologies have contributed to improving citizen involvement in the decisional process. This combined analysis has led to the definition of a framework made up of six criteria, summarized as: accessibility of resources; cost/benefit ratio; task definition; level of participation; influence; transparency.
The criterion of "accessibility of resources" has the objective of establishing whether, during the activation of a PPGIS project, participants had adequate access to the variety of resources that allowed them to fully use the PPGIS tools. It is divided into two components, "information" and "access to the competence of experts" [15]. The criterion of "cost/benefit ratio" evaluates the methods chosen for the advancement of the decisional process and verifies whether they have actually been capable of reaching the goals [16]. The "task definition" criterion indicates whether the public administration (PA) has clearly defined the nature, scope and modality of decision-making [16]. The "level of participation" criterion considers the extent of citizen involvement in the decision-making process [14]. The criterion of "influence" evaluates whether the results of the involvement process are capable of influencing the final decision [17]. The "transparency" criterion highlights whether the public is capable of monitoring the procedures and the outcome of the decisional process [16].
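As a sketch of how such a framework might be operationalised (our assumption; the study itself applies the criteria qualitatively), the snippet below scores hypothetical projects on the six criteria with a simple three-level scale:

```python
# Illustrative only: the six criteria come from the framework above, but the
# projects, the 0/1/2 scoring scale and the summary logic are our invention.

CRITERIA = [
    "accessibility of resources", "cost/benefit ratio", "task definition",
    "level of participation", "influence", "transparency",
]

# 0 = criterion not met, 1 = partially met, 2 = fully met
projects = {
    "Project A": dict.fromkeys(CRITERIA, 0),
    "Project B": dict.fromkeys(CRITERIA, 0),
}
projects["Project A"].update({"task definition": 2, "influence": 1})
projects["Project B"].update({"transparency": 2})

def summary(name: str) -> str:
    scores = projects[name]
    gaps = [c for c in CRITERIA if scores[c] == 0]
    return f"{name}: {sum(scores.values())}/{2 * len(CRITERIA)}, gaps: {gaps}"

for project in projects:
    print(summary(project))
```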
Methodology

The research design is a qualitative, multiple-case study, which is suited to situations where little is known about a phenomenon and where there can be little reliance on the literature [19]. Our primary data sources are semi-structured interviews conducted during 2009 with key informants in four local administrations: the Landscape Observatory of the Region of Puglia, the e21 projects of the Municipalities of Pavia and Vimercate, and the Geoblog of the Municipality of Canzo. All the projects chosen share the following basic characteristics: they involve PAs, they have the objective of involving citizens in environmental impact processes by using web tools, and all utilize maps and geo-charts. Interviews are based on a common set of questions designed to elicit information on the PPGIS project, and they were conducted with project managers, the people best informed on how the PPGIS was implemented. Questions are designed to elicit qualitative responses, and interviews were administered by email. In order to analyze the cases, we use Eisenhardt's method of within- and cross-case analysis [18]. Within-case analysis summarizes the data and develops preliminary findings; thus, we gain a richer understanding of the decision-making processes. Each PA allocated decision-making power to its citizens through the interaction with the PPGIS. The outcomes of the within-case analyses are compared and contrasted during the cross-case analysis to improve the rigor and quality of the results. Charts and tables were used to facilitate comparisons between cases [19].
Analysis of Results, Discussion and Conclusion

Data analysis, based on the administered questionnaires, shows different results across the dimensions which make up the present work. Some of the eight criteria identified in the framework are grouped into macro areas and displayed in tables to simplify the exposition and visualization.
In particular, the criteria "connectivity" and "accessibility of resources" aim to measure the PA's interest in promoting active citizen participation in decision making (Table 1). The criteria "cost/benefit ratio" and "task definition" convey the analysis of project objectives from an economic and an organizational point of view, respectively (Table 2). Finally, the criteria "level of participation" and "influence" express the measurement of effective inclusion and the possibility for citizens to have a genuine impact on decision making (Table 3). The dimension which evidenced the worst results is connectivity (Table 1). From the analysis of the answers provided, it is evident that the PAs attempted to improve access to the new ICT only partially. None of the promoting PAs provided specific funds for the purchase of computers. Furthermore, only in two projects (the Landscape Observatory and the e21 Vimercate project) were specific workstations put in place to allow citizens to easily access the internet. With regard to the criterion of accessibility of resources, the promoting administrations provided a variety of answers and were all capable of providing useful information to citizens, of both a general and a specific nature. The answer provided by the Region of Puglia (Landscape Observatory) was particularly interesting because participants were allowed to communicate directly with specialized personnel while online. Regarding objectives (Table 2), the cost/benefit ratio dimension determines whether the initiative can be considered effective by those who carried it out. This criterion establishes that only one project, e21 Pavia, did not accomplish the goals it set out with, because of "a lack of direct involvement by local body Directors and Administrators", as stated by the person interviewed. The criterion of task definition showed positive results in all administrations, because the objectives of the four projects had been communicated clearly and precisely, using a variety of channels. The two remaining criteria show different assessments. For the level of participation, it should be highlighted that all projects were promoted by the PAs; the lack of public initiatives promoting involvement in the online decisional process seems to highlight shortcomings in citizen involvement. As for the dimension of influence, results appear to be positive: three of the administrations utilized the results of the decisional processes obtained by using PPGIS.
Table 1 Criteria of accessibility of resources
- Information: Were informative meetings set up with the citizens before the activation of the website? Landscape Observatory: "Yes, they were essentially conference areas of the PPTR where the website was presented"; e21 Pavia: No; e21 Vimercate: No; Geoblog Canzo: Yes.
- Access to expert competence: Do participants have direct contact with competent personnel when interacting on the website? Landscape Observatory: "Yes, direct contacts with the site manager, with a political representative and two experts of the technical secretariat"; e21 Pavia: No; e21 Vimercate: "Yes, a moderator was present"; Geoblog Canzo: No.
Table 2 Criteria of effectiveness of the process
Criteria of ratio costs–benefits
- What was the objective set out by the promoting body as for the utilization of the web? Landscape Observatory: "Integrate knowledge with a bottom-up approach"; e21 Pavia: –; e21 Vimercate: "Facilitate participation through the use of computer technology tools"; Geoblog Canzo: "Maximum flexibility and reachability for the final user: the citizen".
- Was the objective reached? Landscape Observatory: "Yes, we believe objectives were reached"; e21 Pavia: No; e21 Vimercate: "Yes, we believe objectives were reached"; Geoblog Canzo: –.
- What are the reasons of success or failure of the project? What can they be led to? Landscape Observatory: "Success is due to decisive technical investments by the local body; not only the economical ones, but mainly those related to the sharing of goals among administrators, politicians and technicians"; e21 Pavia: "During the experimental phase the lack of direct involvement of local body directors and administrators was a drawback"; e21 Vimercate: "The project, which is still running, came to a halt during the year 2008 and has now started off again as support activities linked to the participated PGT"; Geoblog Canzo: "Maximum flexibility, partial anonymity and the chance to calmly reason at home on complex topics".
Criteria of definition of task
- Was the objective of the project communicated to citizens in a clear and precise way? Landscape Observatory: "We think it was"; e21 Pavia: "Yes, though a detailed informative campaign was not implemented"; e21 Vimercate: "Yes, it was"; Geoblog Canzo: "It was sufficiently".
- In what ways was the project objective made available to the public? Landscape Observatory: "Brochure; publication on regional newspapers; video projection at cinemas; publication dissemination at scientific conferences"; e21 Pavia: "With a press release and an information campaign"; e21 Vimercate: "Informative booklets disseminated at strategic sites in town; communication on the periodical 'Vimercate Today'"; Geoblog Canzo: "Advertising on the Municipality's events page on the internet, press release and by using local information tools".
Table 3 Criteria of citizen involvement
Criteria of level of participation
- What was the goal of citizen inclusion in sharing the choices of the promoting body? Landscape Observatory: "Their perception of the landscape in order to balance policies/actions/intervention projects based on widespread sharing of knowledge partnership were registered"; e21 Pavia: "Offer a concrete answer to commitments undertaken by underwriting the Aalborg Commitment"; e21 Vimercate: "Allow citizens and PA to get closer and guarantee transparency of public acts, listen to the needs of citizens"; Geoblog Canzo: "The use of the web, in order to evaluate the strategic choices that will be put into place by using planning tools".
- Are citizens actively involved in requiring that the promoting body activate the website? Landscape Observatory: "No, citizens ask to be able to access the information, not the opportunity to 'build' information actively"; e21 Pavia: No; e21 Vimercate: No; Geoblog Canzo: No.
Criteria of influence
- Were the results obtained by using this tool utilized by the PA? In what way? Landscape Observatory: "Yes, it was specified in the PPTRs"; e21 Pavia: "In the REA arrangement and in the VAS process of governmental territorial planning"; e21 Vimercate: No; Geoblog Canzo: "Yes, to reach the planned objective".
- Was there a clear position by the PA on the use of the results obtained from the website? Landscape Observatory: "Yes, and it is ongoing, and there is also a good level of awareness regarding the potential of such a tool by regional administration officials"; e21 Pavia: No; e21 Vimercate: "Yes, the forum was observed by administrators who provide answers to citizens and follow through on requests"; Geoblog Canzo: No.
Table 4 Criteria of transparency
- Are the decisional acts taken at public council meetings made easily available to citizens? Landscape Observatory: "Yes, if they regard acts dealing with the planning process"; e21 Pavia: "Yes, they are available on the website of the Municipality"; e21 Vimercate: No; Geoblog Canzo: Yes.
- Can citizens acquire information regarding the results of the participation? Landscape Observatory: "Yes, even though such results are strongly mediated by the PPTR editor"; e21 Pavia: No; e21 Vimercate: "On the website of the Municipality the deliberations of the council are made available to citizens"; Geoblog Canzo: Yes.
The e21 Vimercate project is the only one that has not contemplated the use of the results obtained from the participation process. With regard to the criterion of transparency (Table 4), the e21 Pavia project is the only one that evidenced a low level of evaluation: Pavia is the only PA that does not allow for the visibility and consultation of decisional acts. The objective of the present work was to develop an analytical framework through which PPGIS initiatives can be evaluated. Data analysis, obtained through the administration of questionnaires, demonstrated excellent results with regard to the definition of objectives and the dissemination of information (task definition), while good results were reached in terms of accessibility of resources, cost/benefit ratio and transparency. On the other hand, the dimension of participation proved to have insufficient results, and the dimension with the worst results (as shown in Table 1) was connectivity. The projects analyzed demonstrate sufficient but not excellent results, and it may be highlighted that PPGIS systems do not yet show elevated levels of inclusion at this stage.
References

1. Wiedemann P.M., Femers S. (1993), Public participation in waste management decision making: Analysis and management of conflict, Journal of Hazardous Materials, 33: 355–368
2. Tulloch D.L., Shapiro T. (2003), The intersection of data access and public participation: Impacting GIS users' success?, URISA Journal, 15(2): 55–60
3. Holden S.H. (2003), The Evolution of Information Technology Management at the Federal Level: Implications for Public Administration, in Garson G.D. (ed.) Public Information Technology: Policy and Management Issues, Hershey, PA, Idea Group Publishing
4. Carver S. (2003), The Future of Participatory Approaches Using Geographic Information: developing a research agenda for the 21st Century, URISA Journal, 15(1): 61–71
5. Hansen H.S., Prosperi D.C. (2005), Citizen participation and internet GIS: Some recent advances, Computers, Environment and Urban Systems, 9(6): 617–629
6. Sieber R. (2006), Public participation geographic information systems: A literature review and framework, Annals of the Association of American Geographers, 96(3): 491–507
7. Jankowski P. (2009), Toward participatory geographic information systems for community-based environmental decision making, Journal of Environmental Management, 90(6): 1966–1971
8. Schlossberg M., Shuford E. (2003), Delineating "Public" and "Participation" in PPGIS, URISA Journal, 16(2): 15–26
9. Kingston R., Carver S., Evans A., Turton I. (2000), Web-based public participation geographical information systems: an aid to local environmental decision making, Computers, Environment and Urban Systems, 24(1): 109–125
10. Obermeyer N. (1998), PPGIS: The Evolution of Public Participation GIS, Cartography and GIS, 25: 65–66
11. Peng Z.R. (2001), Internet GIS for public participation, Environment and Planning B, 28: 889–905
12. Dunn C. (2007), Participatory GIS: a people's GIS?, Progress in Human Geography, 31(5): 616–637
13. Rowe G., Frewer L.J. (2000), Public participation methods: A framework for evaluation, Science, Technology and Human Values, 25(1): 3–29
14. Macintosh A. (2004), Characterizing E-Participation in Policy-Making, Proceedings of the 37th Annual Hawaii International Conference on System Sciences, January 5–8, 2004, Big Island, Hawaii
15. Laituri M. (2003), The Issue of Access: An Assessment Guide for Evaluating Public Participation Geographic Information Science Case Studies, URISA Journal, 15(2): 25–32
16. Rowe G., Marsh R., Frewer L. (2004), Evaluation of a deliberative conference, Science, Technology, and Human Values, 29(1): 88–121
17. Rowe G., Frewer L.J. (2004), Evaluating public participation exercises: A research agenda, Science, Technology, and Human Values, 29(4): 512–557
18. Eisenhardt K. (1989), Building theories from case study research, Academy of Management Review, 14(4): 532–550
19. Miles M., Huberman A.M. (1984), Qualitative Data Analysis, Beverly Hills, CA, Sage
Single Sign-On in Cloud Computing Scenarios: A Research Proposal S. Za, E. D’Atri, and A. Resca
Abstract Cloud computing and Software as a Service infrastructures are becoming important factors in e-commerce and e-business processes. Users may simultaneously access different e-services supplied by several providers. An efficient approach to authenticating and authorizing users is needed to avoid problems of trust and redundant procedures. In this paper we focus on the main approaches to managing Authentication and Authorization Infrastructures (AAIs): federated, centralized and cloud-based. We then discuss some related critical issues in cloud computing and SaaS contexts and highlight possible future research.
Introduction

Cloud Computing was defined in [1] as "both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a Cloud." The Software as a Service approach gives people the possibility of a ubiquitous relationship with different applications and business services, and of accessing the "cloud" on demand from everywhere in the world [2]. The cloud will be composed of different e-services from several providers, and every time people access them they have to complete an authentication procedure: as the number of providers increases, so does the number of authentication procedures.
S. Za and A. Resca CeRSI, LUISS GUIDO CARLI University, Roma, Italy e-mail:
[email protected];
[email protected] E. D’Atri ITHUM srl, Roma, Italy e-mail:
[email protected]
Giving personal authentication data to these services, whether for social or business reasons, involves problems regarding the security of personal data and the trust relationship with the provider. Olden considered these as digital (or online) relationships because "IT influences the institutional and social trust concept. Additionally to this occurs also the concept of technological trust (trust in technology)" [4]. The trust relationship with the service provider becomes a critical aspect to be considered each time a user authenticates, as highlighted in [3], where this issue is considered in terms of risk management: "What is important is risk management, the sister, the dual of trust management. And because risk management makes money, it drives the security world from here on out". Considering this, and how important a valid Authentication and Authorization Infrastructure (AAI) is in e-commerce contexts [5], our work focuses on the concept of authentication; we then examine two different approaches to managing this infrastructure. In particular, our focus is on the Single Sign-On (SSO) feature provided by the AAI. Finally, we present how authentication systems can be involved in a cloud computing context and which questions are worth investigating in further work.
Possible Solutions in Authentication Management

Organizations around the world protect access to sensitive or important information using Digital Rights Management (DRM) technology [6]. Authentication plays a key role in forming the basis for enforcing access control rules, i.e. for determining what someone is allowed to do (read a document, use an application, etc.); for this reason the system must first ascertain who that individual is. Technically, we speak of "Subjects": the term refers to an entity, typically a user or a device, that needs to authenticate itself in order to be allowed to access a resource. Subjects, then, interact with authentication systems of various types and from various sources. An authentication type is the method the Subject uses to authenticate itself (e.g., supplying a user ID and a password). An authentication source is the authority that controls the authentication data and protocol. Authentication takes place both within an organization and among multiple organizations. Even within an organization, there may be multiple sources. However, traditional authentication systems generally presume a single authentication source and type. An example would be Kerberos [7], where the source is a trusted Key Distribution Center (KDC) and the type is user IDs with passwords. In a Public Key Infrastructure (PKI) [8] the source is the Certification Authority (CA) and the type is challenge/response. While both Kerberos and PKI permit multiple authentication sources, these sources must be closely coupled. Often, complex trust relationships must be established and maintained between authentication sources. This may lead to authentication solutions that are operationally infeasible and economically cost-prohibitive. Another security problem of many current web and internet applications is that they offer individual solutions for the login procedure and user management, so the
users have to register with each application and then manually log in to each one of them. This redundancy in the input of user data is not only less user friendly; it also presents an important security risk, namely the fact that users are forced to remember a large number of username and password combinations. A study by the company RSA (RSA 2006) shows that 36% of experienced internet users have 6–15 passwords, and 18% of them even more than 15. From these numbers, it is obvious that it is difficult to manage such a large amount of user data in an efficient way. In this situation, users have the tendency to use simple passwords, to write them down, or simply to use the same password everywhere. The purpose of Authentication and Authorisation Infrastructures (AAIs) is to provide an authentication system designed to resolve such problems [9]. AAIs are a standardized method to authenticate users and to allow them to access distributed web content from several web providers. In the context of e-services and e-business, it often happens that a group of organizations decides to cooperate for a common purpose. For example, each organization in the group provides one or more services to the others; their respective employees use these services after the authentication and authorization procedure performed by means of an AAI. After a successful authentication, each user can access specific services if he is authorized to use them. In these situations, a first decision to be made is the choice between a central and a federated infrastructural environment. The advantages of the federated environment will be shown by means of test scenarios, without considering the cost factor. If the group decides to use a central AAI to control access to one or more services, they need to decide on the provider of these services. One scenario is that the provider is part of the group; another is that the service is provided by an external company, probably specialized in this. In both cases it is necessary to create a climate of trust between the trustee (who manages the identity information) and the truster (the companies using the service). The scenario becomes more complex if a company from the group decides to participate in a different group as well. In this case, the company has to provide another organization managing a central server with its identity information, resulting in a new trust relationship. On the other hand, if the group opts for the federated AAI, each company manages its own identity information, so it is not crucial to establish a high level of trust within the group. Such a group of organizations is defined as a "circle of trust," in which every participant can act as Service Provider (SP), Identity Provider (IdP), or both. Furthermore, each party can easily join a different group because it remains the owner of its identity information. An example of technical and business standards and guidelines allowing the deployment of meaningful web services can be found in the Liberty Alliance documentation. The Liberty Alliance Project1 was formed to foster the development of standards and specifications implementing federated identity systems based on products and technologies that support the Liberty protocols. In the next section we show more details about the federated AAI that can be used in cloud contexts. Then we describe another
1 http://www.projectliberty.org/ and http://kantarainitiative.org/.
solution for managing several service access credentials through the use of a single cloud-based identity service providing SSO functionality.
A Federated AAI: Liberty Alliance Project

According to our hypothesis, federated identity management systems represent a possible architectural solution to change the way consumers, businesses, and governments think about online authentication. The term "federated" refers to multiple authentication types and sources as well. The purpose of this solution is to establish the rules that let different authentication systems work together, not only on a technological but also on a policy level. In this scenario, the issues relate to the assignment of trust levels for a credentialing system, to determining rules for issuing credentials, and to creating a process for assessing the trustworthiness of credentials. With these rules in place, disparate systems should be able to share authentication data and to rely on data provided by other systems. For instance, when a user wants to log into a bank or credit card website, an outside organization could, based on a digital signature, guarantee that the user at the keyboard is indeed who he claims to be. In order to understand this architecture, some new concepts must be introduced. The "federation", also referred to as a "circle" or "fabric" of trust, is a group of organizations which establish trusted relationships with one another and have pertinent agreements in place regarding how to interact with each other and manage user identities (Fig. 1). Once a user has been authenticated by one of the identity providers in a circle of trust, that individual can be easily recognized and can take part in targeted services from other service providers (SPs) within that circle of trust. In the proposed federated architecture three main actors are involved. We use the term "Subjects" to identify: the Identity Providers (IdP, where the user's registration data reside), the Service Providers (SP, each providing one or more services) and the User Agent (the user application that communicates with the IdP or SP, i.e. the web browser). When a user signs in to the circle of trust, his own IdP creates a "handle" and sends it to the user agent. This handle is held by the user agent until the next logout and is accepted by any IdP or SP belonging to the circle of trust. Every time the user tries to access a trusted SP, the user agent submits the user handle to the SP. Then, the SP communicates with the user's IdP in order to obtain the user's credentials (without any further user login operation). Finally, when a user signs out from any SP, the related IdP is notified of the logout and sends a logout message to all the other SPs in which the user has been logged in. The user handles used in each session are stored in order to avoid duplicates. Such an architecture allows users to sign in only once during a session (SSO) and makes them able to interact with any SP or IdP in the circle of trust without any other login operation until the next logout. Moreover, the user's registration data are held only by the IdP chosen by each user, with obvious advantages in terms of privacy concerns.
Fig. 1 Circle of trust
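The flow just described can be made concrete with a toy simulation: one IdP issues an opaque session handle, any SP in the circle resolves it back through its trust relationship with the IdP, and logout invalidates it everywhere. This is a didactic sketch under our own simplifications, not the Liberty Alliance wire protocol (which exchanges signed SAML messages rather than Python calls).

```python
import secrets
from typing import Optional

# Toy simulation of the circle-of-trust SSO flow described above; names and
# data are illustrative, and real deployments exchange signed assertions.

class IdentityProvider:
    def __init__(self):
        self.users = {"alice": {"name": "Alice"}}   # registration data
        self.sessions = {}                          # handle -> user id

    def sign_in(self, user_id: str) -> str:
        handle = secrets.token_hex(8)               # fresh opaque handle
        self.sessions[handle] = user_id
        return handle                               # held by the user agent

    def resolve(self, handle: str) -> Optional[dict]:
        uid = self.sessions.get(handle)
        return self.users.get(uid) if uid else None

    def sign_out(self, handle: str) -> None:
        self.sessions.pop(handle, None)   # a real IdP also notifies the SPs

class ServiceProvider:
    def __init__(self, idp: IdentityProvider):
        self.idp = idp                    # trust relationship in the circle

    def access(self, handle: str) -> str:
        user = self.idp.resolve(handle)   # credentials fetched from the IdP
        return f"welcome {user['name']}" if user else "access denied"

idp = IdentityProvider()
sp1, sp2 = ServiceProvider(idp), ServiceProvider(idp)
h = idp.sign_in("alice")                  # single sign-on...
print(sp1.access(h), "|", sp2.access(h))  # ...valid at every SP in the circle
idp.sign_out(h)
print(sp1.access(h))                      # "access denied" after logout
```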
Cloud-Based AAI: OneLogin

We are seeing much more discussion on the topic of single sign-on for SaaS (Software-as-a-Service) environments [10, 11]. The issue is becoming more important as security emerges as a top concern for companies considering a move to cloud-based environments. OneLogin2 is a company that offers a cloud-based single sign-on service, a solution that also allows small and mid-sized companies to enjoy the same level of security that is usually the prerogative only of large companies. Further, this kind of solution does not need to deploy security methods that employ SAML (Security Assertion Markup Language, an XML-based standard for exchanging authentication and authorization data between security domains), such as the Liberty Alliance standard, which is expensive to deploy. In this case, the user needs only to create a OneLogin account, store all his credentials for accessing several web services, and finally install the OneLogin plug-in in his browser (the user agent defined previously) to obtain the single sign-on functionality. In this way, authentication procedures such as the traditional username/password process are bypassed. In fact, users need to provide only once their own OneLogin
2 http://www.onelogin.com/.
credential, directly on the login page provided by the system. This means that, using the plug-in, accesses to web services take place without providing any username/password authentication, because the OneLogin plug-in does it automatically on the user's behalf. OneLogin-like infrastructures, in some sense, sit in the cloud. In other words, these infrastructures become an instrument to enable access to web services in situations in which dedicated servers and related personnel are not required.
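The pattern behind such plug-ins can be sketched as a credential vault: one master login unlocks per-site credentials, which are then replayed on the user's behalf. The API below is invented for illustration and is not OneLogin's actual interface; a real product would keep the store encrypted and verify the master secret with a hashed, constant-time check.

```python
# Illustrative vault, not OneLogin's real API: plaintext storage and a direct
# string comparison are acceptable only in a toy example.

class CredentialVault:
    def __init__(self, master_user: str, master_secret: str):
        self._master = (master_user, master_secret)
        self._store = {}              # site -> (username, password)
        self._unlocked = False

    def unlock(self, user: str, secret: str) -> bool:
        """The one and only interactive login (the 'single sign-on')."""
        self._unlocked = (user, secret) == self._master
        return self._unlocked

    def save(self, site: str, username: str, password: str) -> None:
        self._store[site] = (username, password)

    def login(self, site: str) -> tuple:
        """What the browser plug-in replays into the site's login form."""
        if not self._unlocked:
            raise PermissionError("vault locked: master login required")
        return self._store[site]

vault = CredentialVault("alice", "one-strong-passphrase")
vault.unlock("alice", "one-strong-passphrase")
vault.save("mail.example.com", "alice42", "p4ssw0rd")
print(vault.login("mail.example.com"))   # no per-site password typing
```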
Discussion

Above, two completely different AAI infrastructures (federated systems and OneLogin-like systems) have been considered as entities that equally allow single sign-on functionality. Both of them simplify access to distributed web content on several web providers and, at a superficial view, they actually work in the same way. However, the inherent nature of these two infrastructures is poles apart. OneLogin-like systems are entities that manage identities: the several usernames/passwords or other forms of qualifications that each of us uses while surfing the web are piled up and activated whenever the user's surfing requires it. Further, users decide which of these qualifications will be assigned to these systems and which ones will be managed autonomously. In the case of centralized and federated systems nothing of the kind occurs. Users are not involved at all in identification management. Rather, they can be completely unaware of shifts among different software systems while surfing the web. As shown above, this is due to the fact that a group of organizations agrees to share web services, web content and identification management in order to simplify access to them. At the basis of federated authentication systems there are reciprocal agreements to club together in this respect. Further, these authentication systems have a larger potential in comparison with OneLogin-like ones. Let us consider access to Wi-Fi networks. Each of us experiences that, wherever we are, switching on a laptop and checking whether Wi-Fi connections are available, several possibilities are at hand. It goes without saying that, in this situation, a redundant service is provided. However, to connect to these networks, appropriate qualifications are required. What would happen if Wi-Fi service providers took advantage of federated authentication systems? Further, let us think about the universities of a specific country, for instance. Nowadays, more or less all of them are equipped with Wi-Fi systems and provide similar services to students. But what would happen if they decided to share an authentication system? Suddenly, undergraduate and graduate students would have the possibility to take advantage of broadband connections in all universities of the country offering this kind of service. At least from a technical perspective this is not a big issue: a couple of solutions have been introduced above, and they have been available on the market for several years. The question is why federated authentication systems are not so widespread in spite of the advantages that can be obtained. Here, the security issue emerges as the main obstacle. The point is: on what basis do users connect to the
network in question if authentication procedures are not directly managed and controlled? On this issue further research is required. It becomes fundamental to study which security issues have been overcome where federated authentication systems have been introduced successfully and which issues, on the other hand, are still at stake when users' authentication is delegated to other organizations.
The AAI in a Cloud Computing Context

Even though federated authentication systems, on the one hand, and OneLogin-like systems, on the other, are completely different in nature, this does not mean that they cannot be used in combination. For example, the qualifications for accessing the former can be entrusted to the latter. This means that users have in their hands the possibility to entrust systems such as OneLogin with access to a series of web services and web contents regrouped through federated (or also centralized) systems. Obviously, all of this has consequences. The combination of these two kinds of authentication systems considerably facilitates seamless access to electronic services. In contrast, it requires handing over sensitive qualifications to a third party (the OneLogin system, for example), and for this reason the usual issue comes out: security. In this respect too, further research would be beneficial in order to investigate how such issues can actually be faced so as to favour access to distributed web content and web services while considering, at the same time, security concerns. So far, centralized and federated authentication systems have been considered indistinctly even though, as mentioned above, they differ significantly. In centralized systems, one of the members of the inter-organizational group manages users' qualifications for all other members. In federated systems, each member has custody of the qualifications of its own users and, on the basis of a circle of trust, these are considered valid by the other members. In this regard, a comparative study could be useful to examine the pros and cons of the two solutions. At first glance, the latter seems more apt to manage security issues: the fact that each organization has the possibility to supervise its own users' qualifications can represent a compromise between direct control and the delegation of electronic identification management to a third party. The literature on cloud computing outlines an alternative way to manage hardware and software systems. Due to the development of the internet, applications and databases are accessible from all over the world. However, at the basis of this way of reasoning there is a single entity that has decided to outsource these activities rather than in-source them. The evolution of AAIs, and in particular of centralized and federated systems, has changed the scenario, as now co-sourcing also becomes possible. In this case, not only Software as a Service (SaaS) but also Identity as a Service (IDaaS) [12] represents an alternative way of managing identification. Moreover, this type of management can enable further forms of inter-organizational collaboration around web services and web content.
Conclusion and Future Steps

Our objective is to introduce some suggestions on the development of the use of information technology from a specific perspective: web accessibility. In this respect, SSO potentialities have been outlined. Both centralized and federated AAIs, on the one hand, and OneLogin-like systems, on the other, represent instruments that can actually outline a significantly different scenario for surfing the web, and it is not only a question of moving from one application to another seamlessly. Cloud computing can be seen from a new angle, as it is not only the result of an outsourcing process by a specific entity: co-sourcing also becomes possible, and new forms of inter-organizational collaboration can be figured out. In further research, we intend to investigate the security implications, including trust and risk management, related to the adoption of the AAIs mentioned above. Then, we will try to discover where, why, and in which contexts (i.e. personal or business) some architectures are used more than others.
References

1. Armbrust, M., Fox, A., Griffith, R., Joseph, A.D., Katz, R.H., Konwinski, A., Lee, G., Patterson, D.A., Rabkin, A., Stoica, I., Zaharia, M. (2009). Above the clouds: A Berkeley view of cloud computing. EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2009-28.
2. Buyya, R., Yeo, C.S., Venugopal, S., Broberg, J., Brandic, I. (2009). Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility. Future Generation Computer Systems, 25(6), Elsevier.
3. Geer, D. (1998). Risk management is where the money is. Forum on Risks to the Public in Computers and Related Systems, ACM Committee on Computers and Public Policy, 20(6).
4. Olden, M., Za, S. (2010). Biometric authentication and authorization infrastructures in trusted intra-organizational relationships. In Management of the Interconnected World, D'Atri et al. (Eds.), ISBN: 978-3-7908-2403-2, Springer.
5. Lopez, J., Oppliger, R., Pernul, G. (2004). Authentication and authorization infrastructures (AAIs): a comparative survey. Computers & Security, 23(7), 578–590.
6. Rosenblatt, B., Trippe, B., Mooney, S. (2001). Digital Rights Management: Business and Technology. Hungry Minds/John Wiley and Sons, New York.
7. Kohl, J., Neuman, C. (1993). The Kerberos Network Authentication Service (V5), RFC 1510, DDN Network Information Center, 10 September 1993.
8. Ford, W., Baum, M. (1998). Secure Electronic Commerce, Prentice Hall.
9. Schläger, C., Sojer, M., Muschall, B., Pernul, G. (2006). Attribute-Based Authentication and Authorisation Infrastructures for E-Commerce Providers, pp. 132–141, Springer-Verlag.
10. Lewis, K.D., Lewis, J.E. (2009). Web Single Sign-On Authentication using SAML. International Journal of Computer Science Issues, IJCSI 2, 41–48.
11. Cser, A., Penn, J. (2008). Identity Management Market Forecast: 2007 To 2014. Forrester.
12. Villavicencio, F. (2010). Approaches to IDaaS for Enterprise Identity Management. http://identropy.com/blog/bid/29428/Approaches-to-IDaaS-for-Enterprise-Identity-Management (accessed June 27, 2010).
Part II
Organizational Change and Impact of ICT

F. Pennarola and M. Sorrentino
Information and Communication Technologies (ICT) absorb a dominant share of an organization’s total capital investments. Organizations expect to use ICT platforms to run new processes, innovate products and services, gain higher responsiveness, and implement new corporate environments aimed at transforming their internal structures into better achieving organizations. One of the most challenging tasks faced by managers is the effective implementation of ICT, since it requires people to understand, absorb, and adapt to new requirements. It is often said that people love progress but hate change. Therefore, the ultimate impact of ICT is mediated by a number of factors, many of which require an in-depth understanding of the organizational context and human behavior. The six papers presented in the following pages discuss a broad spectrum of organizational and technical issues, and provide perspectives from different settings and countries. They also demonstrate the fundamental importance of exploring the transformational role of ICT for the development of knowledge and concrete lines of action for organizations and their managers. The reader will find an overview of these contributions below. The paper by Paola Adinolfi, Mita Marra and Raffaele Adinolfi: “The Italian Electronic Public Administration Market Place: small firm participation and satisfaction” reconstructs the route taken by the reform of public procurement in Italy and shows the results of a survey aimed at analyzing the level of satisfaction of small/medium enterprises (SME) participating in the national e-marketplace. The paper reveals that the companies using this electronic platform are still very few and suggests how the Ministry of Economy could intervene to improve the level of acceptance and use of this tool. Frank Go and Ronald Israels in their paper titled “The role of ICT demand and supply governance: a large event organization perspective” address one of the most challenging tasks faced by managers today, namely the implementation of ICT in a Large Event Organization (LEO). Through the application of the Demand and Supply Governance Model the authors suggest that the appropriate use of ICT in a complex organized context involving numerous stakeholders can help to manage the ‘bright side’ of an LEO and prevent and reduce the impacts of its ‘dark side’.
54
F. Pennarola and M. Sorrentino
In their article "Driving IS value creation by knowledge capturing. Theoretical aspects and empirical evidences" Camille Rosenthal-Sabroux, Renata Dameri and Ines Saad focus on the concept of IS value deriving from business process change. The starting point of the analysis is based on the three factors indicated by Davenport (Integrate, Optimize and Informate), to which the authors add a fourth element as yet barely taken into account, i.e. Identify knowledge. The case study of a major Italian industrial group undergoing a vast project of organizational change is used to discuss the implications deriving from the authors' analytical proposal. In their paper "The impact of using an ERP system on organizational processes and individual employees", Alessandro Spano and Benedetta Bellò examine the implementation of ERP systems in the public sector. The purpose of this qualitative research is to gain an understanding of the role of ERP in a local administration, namely the Sardinia Regional Council. The three focus groups conducted by the authors enable them to affirm that the user organization should consider the following relevant issues as a whole to improve ERP effectiveness: system introduction planning, and organizational and technical aspects. The constructs to emerge from the focus groups and the relationships that interlace the various elements will be the object of a successive study in which the authors will attempt to quantify the weighting of each factor. Giulia Ferrando, Federico Pigni, Cristina Quetti and Samuele Astuti are the authors of the article "Assessing the business value of RFId systems: evidences from the analysis of successful projects", in which they develop a general model to frame RFId business value on the basis of the objectives of the investment, the results achieved, and the effects of contextual moderating factors. The authors draw on this model to analyze 64 successful projects. The research demonstrates the relationship between the use of RFId and business process performance. Further, the study suggests that the assessment of RFId business value requires a holistic approach that also takes into account the intangible benefits that accompany the adoption of this ICT solution.
The Italian Electronic Public Administration Market Place: Small Firm Participation and Satisfaction R. Adinolfi, P. Adinolfi, and M. Marra
Abstract The paper reconstructs the path taken by the reform of public procurement in Italy, which has gradually evolved from a concentrated and centralized market to an open and accessible one. Despite the development of the Electronic Public Administration Market Place (MEPA), information regarding its performance is scant, and no collected data on firm satisfaction are available. The paper discusses the role Consip, a public company owned by the Ministry of Economy and Finance, has played (and continues to play) in guiding the decentralization of public e-procurement. At the same time, it presents the results of a sample investigation aimed at analysing the level of satisfaction of small/medium enterprises (SMEs) participating in the MEPA.
Introduction

In Italy, the transformation of government procurement began in 2000 with the model developed by Consip SpA1 (Public Information Services Agency) for all public agencies across the nation. Consip, a public company owned by the Ministry
1 SpA stands for Società per Azioni (joint-stock company). This is a legal form used for private companies, which has been extended also to public companies (owned totally or partially by the state).
R. Adinolfi Department of Business Studies, University of Salerno, Corso V. Emanuele n. 143, 80122 Salerno, Italy e-mail:
[email protected] P. Adinolfi Department of Business Studies, University of Salerno, Fisciano, Salerno, Italy e-mail:
[email protected] M. Marra Italy's National Research Council, Institute for the Study of Mediterranean Societies, Via P. Castellino 111, 80129 Naples, Italy e-mail:
[email protected]
of Economy and Finance, set up both the IT platform and the operational procedures to carry out acquisition processes at the national level. By negotiating the best conditions in terms of price and quality for nationally required supplies, the system aimed both to tap economies of scale and to avoid fragmentation, waste, corruption, and hidden public spending. The aim of this paper is to reconstruct the path traced by the public procurement reform in Italy (resulting in the gradual evolution from a concentrated and centralized market to an open and accessible one) and to assess the degree of satisfaction on the part of SMEs. The paper is divided into three parts. Part 1 uncovers the theoretical underpinnings of Italy's current public procurement reform efforts, which unfold along parallel and, at times, cross-purpose directions. It also presents the rationale of the study and its analytic framework and methodology. Part 2, updating a previous survey, empirically analyzes how the centralized public e-procurement model was designed and implemented, and highlights both the results achieved and the perceptions of local public agencies and vendors. Part 3 outlines the results of a survey concerning the degree of satisfaction of a sample of SMEs participating in the Electronic Public Administration Market Place (MEPA) and highlights the latter's relative strengths and weaknesses.
Part 1: Theoretical Framework and Research Design

In Italy, procurement transformation has thus far co-existed with two very different, parallel – and at times cross-purpose – approaches. One (centralized) focuses on tightening the controls on spending [1–3]. The other (decentralized), by contrast, attends more to the qualifications and capacities of employees, promoting efforts to decentralize procurement decisions while infusing technology, changing assessment practices and developing networks and partnerships [4]. In this perspective, e-procurement is a source of innovation spurring behavioral change within organizations [5, 6]. The Information Technology (IT) literature in public administration has dealt mainly with the adoption, use, and management of IT and systems, as well as with their productivity implications. The literature addresses not only IT-enabled reinvention and government reforms, but also the description, assessment, and management of web-based e-government projects. In particular, the business literature on e-procurement examines the determinants of e-procurement and its relations to forms of e-commerce. However, this body of literature deals very little with the institutional, organizational, and political factors affecting a public e-procurement system. There is also a lack of information on the performance of on-line markets and on the degree of satisfaction of participating enterprises. Our paper delineates the regulatory framework supporting the new public procurement system, and examines the procedures centrally developed by Consip for e-procurement. In addition, it investigates the level of satisfaction achieved on
the part of SMEs participating in the MEPA. The following research questions are addressed:
1. What are the main strengths and weaknesses of a centralized system of public procurement?
2. What is the overall level of satisfaction achieved by the enterprises which use the MEPA?
3. What are the features of the MEPA which are considered relevant by SMEs?
4. What is the level of satisfaction reported relative to the various features of the MEPA?
With regard to the first issue, a previous survey [7, 8] was updated. The analytical framework adopted in this study integrates the analysis of external, institutional and regulatory factors with management capacities and strategies for e-procurement development and effective use. The research focus is on the specific institutional and organizational setting in which Consip operates. The study highlights key aspects of e-procurement practices as codified by national laws, and as resulting from Consip's management capacity to respond to the needs of purchasing agencies. As regards the other issues, to assess customer satisfaction and define the overall level of satisfaction and the efficiency gap, we use a composite index that evaluates perceptions, in terms of importance and satisfaction, of a number of service characteristics of the on-line market.
Methodology

This study builds on a case-oriented approach [9]. Research data were collected through (1) 22 semi-structured interviews with samples of informants; (2) the analysis of official documents; (3) a social science literature review; and (4) 100 web-based structured questionnaires submitted to the sales managers of SMEs, randomly chosen from among those authorized for the MEPA. The semi-structured interviews were devised to gather opinions and perceptions on how nation-wide changes in procurement procedures have been designed and implemented by Consip. The interviews described and analyzed the three major factors expected to affect both the development and the implementation of government procurement, that is: external factors, internal factors, and performance. Interviews were conducted between May 2009 and October 2009 with five samples of informants in order to triangulate different perspectives, perceptions, opinions, and descriptions [9]. The structured questionnaires were put to the staff responsible for sales in 100 small/medium-size firms, a random sample of enterprises stratified by sector of activity. The remit of the questionnaires was to gather data on the level of satisfaction recorded on the part of SMEs, on the features of the MEPA services and on possible margins for improvement.
Part 2: Small Firm Participation in the e-Procurement Market

Consip was created in 1997, under the then D'Alema government, as a public company 100 percent owned by the Italian central government, whose mission was to design and manage IT within the Ministry of Economy and Finance (MEF). In 2001, under the Berlusconi government, the Consip mission shifted to a policy of rationalizing public spending for goods and services. The Budget for 2000 (Law n. 488, 1999) launched the Italian government's Public Spending Rationalization Program (PSRP), which aimed to reduce public spending for acquiring goods and services. The PSRP also aimed to accrue savings in public expenditure through National Framework Contracts (NFCs). The true revolution occurred with the 2003 Budget, which made it mandatory for all public agencies at all levels of government to use NFCs in order to rationalize their purchasing processes. All purchasing entities were obliged to procure through the electronic catalogue whenever the goods and services they required were listed therein [10]. The mandatory compliance with Consip's directives caused much contention both within public administration and among private vendors. Disgruntled local dealers, who had handled previous public agency purchases, reinforced the scepticism of purchasing departments. Vendors decried the potential risk of excessive centralization of acquisition processes, overly stringent bidding procedures, and lack of competition, particularly for SMEs distributed throughout Italy. In their eyes, this was a case of "unfair competition". Allegations of market concentration, as well as of the crowding out of SMEs, mounted among small businesses, business organizations and specific sectors of public administration particularly resistant to giving up their traditional discretionary powers.
Opening (Up) the e-Procurement Market

Under the pressure of lobbying by small firms and reluctant purchasing agencies, the government lifted mandatory compliance for all public agencies, with the exception of Ministries at state level. In August 2003, the bulk of the public sector was set free to autonomously negotiate acquisition contracts, provided that contract conditions and prices were more favorable than those applied in NFCs. The lifting of mandatory compliance with regard to NFCs was received with mixed feelings. While local agencies welcomed their renewed freedom in procurement processes shared with small firms, Consip found itself having to thoroughly rethink its own strategy through a renewed legislative definition of its official mandate.2 Under the media spotlight, Consip began to change its whole approach to
2 Based on interviews with CONSIP high-level decision makers.
Table 1 MEPA figures
Years   Turnover   Transactions   On-line vendors   References
2004       8.3         3,147            309            113,207
2005      29.8         9,677            597            190,484
2006      38.2        11,468            868            226,748
2007      83.6        28,173          1,156            332,465
2008     172.2        63,245          2,088            540,000
2009     230.6        72,796          3,027          1,331,915
e-procurement, addressing the demands emerging from individual agencies or groups of public agencies, and involving business organizations in the negotiation of NFCs. Subsequently, Consip concentrated its efforts on the development of the MEPA to favour, on the one hand, the decentralization of purchases by local public administrations and to allow, on the other, SMEs to access the market. Three main actors operate in the MEPA:
1. CONSIP: guarantees the technological support for purchases and the respect of the MEPA purchasing procedures, without charging fees (CONSIP is a non-profit broker).
2. Vendors: authorized enterprises having the requisites set out in the tender; in full autonomy they define their commercial strategies and bargain with the Public Administration within the regulated framework.
3. Public bodies: autonomous purchasers of goods and services.
Public bodies (PB) can purchase goods and services on the MEPA by means of two alternative tools: the Direct Order (DO) and the Request for Quotation (RFQ). The DO allows the PB to buy directly from the e-catalogue at a prefixed (i.e., posted) price. The RFQ is a competitive selection procedure through which the PB solicits a specific group of suppliers to submit a tender. Responding suppliers provide both a price quotation and the details of technical/quality improvements when required. The contract is awarded to the best price–quality combination without using an explicit, that is, publicly announced, scoring rule. Thus PBs have some discretionary power in awarding RFQs, which carry relatively more added value [11]. Table 1 highlights the growth of the MEPA in the period 2004–2009.
Part 3: The Degree of Satisfaction Reported by SMEs

Despite the marked development of the electronic market, there are no empirical studies measuring the degree of corporate satisfaction or identifying the characteristics of the MEPA services considered most relevant. The present investigation attempts to fill this gap and thus to provide useful indications for Consip and, more generally, for policy makers and those interested in the contribution technologies can make to businesses.
Table 2 General satisfaction, by firm size (employees) and business sector (Likert scale 1–5)
          ICT    Office materials   Services   Health materials   Other
Micro     4          3.8               4.3          3.6            3.8
Small     4          3.9               4.2          3.7            4
Medium    3.7        4.3               3.8          4.1            3.9
Average   3.9        4                 4.1          3.8            3.9

Table 3 Efficiency gap relative to the potential advantages of using MEPA
Ranking   Items                                                                Efficiency gap (%)
I         Selling cost reduction                                               59.30
II        Development of human resource capital                                56.20
III       Major visibility with respect to the range of Public Bodies (PBs)    44.00
IV        B2G introduction in addition to existing B2B and B2C                 34.70
V         Extending the platform of potential buyers                           31.60
The level of satisfaction is generally positive (Table 2). Taking into account the different business sectors: in the sectors characterized by higher product standardization (office and health materials), the bigger the firm size, the higher the level of satisfaction (positive correlation); in the business sectors characterized by a lower degree of standardization (ICT and services), the bigger the firm size, the lower the satisfaction level (negative correlation). The efficiency gap is a composite index, which includes the evaluations expressed by the interviewees in terms of importance and satisfaction concerning a proposed range of services. The interviewees ranked the following topics: selling cost reduction (due to the broadening of the potential customer base, lower intermediary costs and an inexpensive digital platform); greater visibility with regard to the range of Public Bodies (PBs); B2G introduction in addition to existing B2B and B2C; development of human resource capital; and extension of the platform of potential buyers. The efficiency gap [12, 13] is linked to two simple indices, relating respectively to perceived importance and satisfaction concerning the various features of the services. In terms of numerical representation, the inefficiency value for each feature is identified by measuring the distance between the top value (10) and the perceived satisfaction value and multiplying it by the importance value:

Efficiency gap = (10 − satisfaction degree) × importance level

The larger the resulting percentage value, the lower the perceived efficiency (i.e., the larger the efficiency gap). Table 3 shows the efficiency gap relative to the potential advantages of using the MEPA. The most appreciated features of the MEPA are linked to the potential expansion of the market, while the benefits linked to the reduction of selling costs appear less relevant. The efficiency gap was also calculated for the principal features of the IT platform, using the models of Ghose and Dou [14] and Wilhite [15], which demonstrated the link between the quality of interaction and commercial success in the on-line environment.
Table 4 Efficiency gap findings relative to MEPA platform features
Ranking   Items                                                                Efficiency gap (%)
I     Education (provides useful information about MEPA, you can learn a lot about MEPA, the site answers my question about the work MEPA does, it would be easy to explain the work of MEPA to someone else)   54.30
II    Empowerment (the site makes me feel that I can make a difference, provides me with ideas for possible actions, provides me with ways in which I can take action, the site sanctions my taking action)   52.10
III   Interaction (the site is easy/simple to navigate, has an uncomplicated interface to encourage dialogue, has developed a community, includes interesting links, offers ways to help by using MEPA)   46.12
IV    Customization (platform is easy to tailor to my own needs, it offers me several ways to keep in touch, it is easy to tailor the content of the site)   38.75
V     Accountability (I'm confident that transactions are secure, the site makes it clear how my personal data will be used)   38.20
VI    Accessibility (the site offers different ways to give support, it was easy to use MEPA, the site provides customization (tailoring) for disadvantaged people)   32.12
Table 4 shows the efficiency gap findings relative to the MEPA platform features. As can be seen, while accountability and accessibility are highly valued, the technological platform could be improved by paying more attention to content and to its capacity to provide the user with information on the platform.
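As a minimal illustration of the index, the following Python sketch recomputes an efficiency gap. The satisfaction and importance scores below are hypothetical placeholders, since the paper reports only the resulting percentages; reading both scores on a 0–10 scale, so that the product falls in [0, 100] and can be read directly as a percentage, is an assumption consistent with the formula's top value of 10.

```python
# Sketch of the efficiency-gap index; scores below are hypothetical.

def efficiency_gap(satisfaction: float, importance: float) -> float:
    """Efficiency gap = (10 - satisfaction degree) x importance level."""
    return (10.0 - satisfaction) * importance

# Hypothetical survey means for two of the Table 3 items (not the paper's
# raw data, which is not reported):
items = {
    "Selling cost reduction": (4.1, 10.0),
    "Extending the platform of potential buyers": (6.8, 9.9),
}
for name, (sat, imp) in items.items():
    print(f"{name}: {efficiency_gap(sat, imp):.2f}%")
```

With these placeholder scores the first item yields a gap near 59%, matching the order of magnitude reported in Table 3: a highly important feature with modest satisfaction produces a large gap.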
Conclusions

The Italian central government has launched a radical, government-wide intervention to restructure acquisition processes, based on strong political commitment, long-term vision, and strategic management capacities. Set up as a public company devised to avoid red tape, Consip operates outside administrative rules and regulations in ways that achieve higher worker dedication; it combines IT and project management skills, and is more client-sensitive and customized. However, such reform-sustaining programs are difficult to design and implement successfully unless they are backed up by local initiatives. Under the pressure of lobbying by small firms and reluctant purchasing agencies, the government lifted the mandatory compliance set for all public agencies, with the exception of Ministries at state level. In August 2003, the bulk of the public sector was set free to negotiate acquisition contracts autonomously, provided that contract conditions and prices were more favorable than those applied in NFCs. Consip consequently developed the MEPA, a virtual market for public bodies and certified suppliers open to small and medium size firms.
Our paper shows that, despite the marked development of the MEPA, still very few enterprises use it. However, those enterprises report, on average, a high level of satisfaction, mainly in terms of opportunities for potentially expanding their markets. Indeed, the effectiveness of virtual markets depends on the number and level of activity of the enterprises involved. In this respect, the paper highlights significant margins for improvement. To increase the number of firms accessing the MEPA, Consip should improve its communication on the one hand and its technological platform on the other. Our research findings concerning the level of satisfaction on the part of SMEs operating through the MEPA could be a useful tool for policy makers involved in the development of the MEPA. By highlighting the features most interesting to SMEs, they offer guidance for communication and promotion initiatives and provide useful indications on potential improvements of the technological platform. In particular, it has emerged that while variables such as platform user-friendliness, accountability and interaction are acceptable, others such as education and empowerment need improving. For a fuller picture of the MEPA, the analysis should also be extended to large(r) firms.
References

1. Hirschheim R.A. (1992) Information systems epistemology: an historical perspective. In Galliers R. (ed.) Information Systems Research: Issues, Methods and Practical Guidelines. London: Blackwell Scientific Publications, pp. 28–60.
2. Henderson J.C. and Lee S. (1992) Managing I/S design teams: a control theories perspective. Management Science, 38, pp. 757–777.
3. Cox A., Lonsdale C., Watson G. and Farmery R. (2004) Collaboration and competition: the economics of selecting appropriate governance structures for buyer–supplier relationships. In Scott C. and Thurston W.E. (eds.) Collaboration in Context. University of Calgary, 2003/4.
4. Pettigrew A.M. and Fenton M. (eds.) (2000) The Innovating Organization. London: Sage Publications.
5. Cheema G.S. and Rondinelli D.A. (eds.) (1983) Decentralization and Development: Policy Implementation in Developing Countries. Beverly Hills: Sage.
6. Bovaird T. (2006) Developing new forms of partnership with the market in the procurement of public services. Public Administration, 84(1), pp. 81–102.
7. Marra M. (2004) Innovation in e-procurement: the Italian experience. The IBM Center for the Business of Government, Washington, DC, USA.
8. Marra M. (2008) Centralizzazione e innovazione tecnologica nella riforma degli acquisti della PA: un bilancio. Mercato Concorrenza Regole, IX(3), December 2007, pp. 487–516.
9. Ragin C.C. (1987) The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies. Berkeley: University of California Press.
10. Ministry of Economy and Finance (2004) Programma di razionalizzazione degli acquisti di beni e servizi per le Pubbliche Amministrazioni, 2003. Report to Parliament, Rome.
11. Consip (2009) Annual Report. Rome, Italy.
12. Broggi D. (2006) Un prontuario per i "policy makers". Impresa e Stato, n. 7, Milan.
13. Martini A. and Sisti M. (2009) Valutare il successo delle politiche pubbliche. Il Mulino, Bologna.
14. Ghose S. and Dou W. (1998) Interactive functions and their impacts on the appeal of Internet presence sites. Journal of Advertising Research, 38(2), 29–43.
15. Wilhite R. (2003) When was your website's last performance appraisal? Management Quarterly, 44(2), 2–15.
The Role of ICT Demand and Supply Governance: A Large Event Organization Perspective F.M. Go and R.J. Israels
Abstract This paper addresses one of the most challenging tasks managers face, namely to implement information and communication technologies (ICT) effectively in a Large Event Organization (LEO), a special type of project organization. Such a process requires people to absorb and implement new requirements, which in turn demands an understanding of an organizational context characterized by "uncertainty" and ambiguous human behavior. Through the application of the Demand and Supply Governance (DSG) model, which has been tested in the products manufacturing industry (e.g. Supply Chain Management) and in steady-state ICT organizations, we test its potential for application in the Large Event Organization (LEO) context, with its many sponsors and public-sector transport and tourism stakeholders. The results of our empirical investigation of DSG applied in a LEO afford a theoretical framework for understanding ICT-related management: its characteristics, dilemmas, enablers and inhibitors. The study findings indicate that systematic data combination and division contribute to the potential for improving financial, resilience, reliability and security needs; special attention should be paid to preventing, for example, total information blackouts during LEO staging.
Introduction

This paper uses a literature review to explore a variety of contemporary forms of organizational structures, including ICT Demand–Supply Governance (DSG), in relation to a set of criteria: objectives, ownership, geographic location, technology
F.M. Go Centre for Tourism Management, Rotterdam School of Management, Erasmus University Rotterdam, Rotterdam, The Netherlands e-mail:
[email protected] R.J. Israels Quint Wellington Redwood, Amsterdam, The Netherlands e-mail:
[email protected]
deployed, for the purpose of detecting potential patterns of conduct. Our claim is that an ICT DSG "governance" model underpinned by the interaction approach [1] can help reconcile the risk and uncertainty experienced by autonomous stakeholders in the process of sharing "information as a common pool resource". So far, it has proven hard to develop a comprehensive theory of change management that enables effective support for the proper implementation of ICT in an inter-organizational context. This paper's main contribution is threefold: first, it develops an "interactive" ICT DSG model applied to Large Event Organizations (LEOs) to aid the bridging of gaps (e.g., cross-cultural, infrastructural and governance distance) between network stakeholders; second, it applies this model to case studies; and, third, it pinpoints advantages and disadvantages of sharing information and ICT infrastructure for LEOs. It builds on network theory [1], particularly the conditions of "uncertainty" and "ambiguity" that can be seen as outcomes of the practice of big corporations after they "unbundled" themselves into business components [2]. Especially when multiple stakeholders are involved, whose backgrounds, objectives and "jargon" may differ and thereby possibly impede proper information exchange (of the spoken word, images and other data), this raises a formidable challenge: how can we manage the appropriate ICT solutions to support the inter-organizational processes needed to turn a LEO into a big success?
Large Event Organizations

A LEO is a special type of project organization [3] with the following characteristics, which also represent the main issues to solve:
1. The issue of large scale, as evidenced by a voluminous audience (on site >50,000 and via AV media >1 million) and the large budget (>€10 million) involved in staging the LEO.
2. The issue of time–space compression, as evidenced by the LEO's short life cycle in real life.

Fig. 2 An example of component semantic descriptor (only fragments survive, e.g., a categories element containing "Mapping" and an event element with address="selectedCoordinates")
can be partially alleviated in the case of SA-REST or SAWSDL specifications. In such cases, wrappers are made available to extract the semantic annotations of operations and I/Os and include them in the semantic descriptors. Moreover, we will consider the development of tools to support the semantic annotation of component descriptions.4 The semantic descriptor MapViewer_SD is shown in Fig. 2. Categories are attached to MapViewer_SD through a categories tag. Each operation (resp., event) in the descriptor refers to the corresponding operation (resp., event) in the mashup component through the address attribute; for example, the operation in MapViewer_SD refers to the operation show in the component API, while the event refers to the selectedCoordinates event in the component. The semanticReference attribute is used to annotate operations, their I/Os and the event output parameters with concepts from reference ontologies; for instance, the show operation and its first input are annotated with the concepts showLocation and Address taken from the "http://localhost:8080/Travel.owl" ontology.
4 See for example semantic annotation tools available in Kepler (http://kepler-project.org).
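For illustration only, the following Python sketch rebuilds a MapViewer-like descriptor using the elements and attributes the text mentions (a categories tag, the address attribute, the semanticReference attribute). The exact schema, element nesting and concept names are assumptions, not the authors' specification; the ontology URL is the one cited in the text.

```python
# Illustrative sketch of a component semantic descriptor (hypothetical schema).
import xml.etree.ElementTree as ET

ONTO = "http://localhost:8080/Travel.owl"  # reference ontology cited in the text

sd = ET.Element("semanticDescriptor", name="MapViewer_SD")
ET.SubElement(sd, "categories").text = "Mapping"

# An operation whose 'address' points to the 'show' operation of the component
# API, annotated with an ontology concept via 'semanticReference'.
op = ET.SubElement(sd, "operation", address="show",
                   semanticReference=f"{ONTO}#showLocation")
ET.SubElement(op, "input", semanticReference=f"{ONTO}#Address")

# An event whose 'address' points to the component's selectedCoordinates event.
ET.SubElement(sd, "event", address="selectedCoordinates",
              semanticReference=f"{ONTO}#Location")

print(ET.tostring(sd, encoding="unicode"))
```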
The Mashup Ontology

In our approach, component semantic descriptors are organized in a mashup ontology, to better support collaborative enterprise mashup. In the mashup ontology, descriptors are related in two ways: (a) semantic descriptors SDi and SDj of components which provide complementary functionalities and can be wired in the final mashup application are connected through a functional coupling link; (b) semantic descriptors SDi and SDj of components which perform the same or similar functionalities are connected through a functional similarity link. To identify coupling or similarity links (resp.), semantic matching techniques and algorithms can be used. In particular, on the basis of our previous work on Web service matching and discovery [10], we have defined the coupling degree coefficient and the functional similarity degree coefficient (resp.), shown in Fig. 3 and based on Dice's coefficient.

Fig. 3 Coupling and functional similarity degree

$$\mathrm{Coupl}_{EvOp}(ev_i, op_j) = \frac{\sum_{h,k} \mathrm{CAff}(out_h, in_k)}{|OUT_{ev}(ev_i)|} \in [0,1] \qquad \mathrm{Coupl}_{IO}(SD_i, SD_j) = \frac{\sum_{i,j} \mathrm{Coupl}_{EvOp}(ev_i, op_j)}{|EV(SD_i)|} \in [0,1]$$

$$\mathrm{Sim}_{IO}(SD_R, SD_C) = \frac{\sum_{i,j} \mathrm{CAff}(in_i^R, in_j^C)}{|IN(SD_R)|} + \frac{\sum_{h,k} \mathrm{CAff}(out_h^R, out_k^C)}{|OUT(SD_R)|} + \frac{\sum_{l,m} \mathrm{CAff}(op_l^R, op_m^C)}{|OP(SD_R)|} \in [0,3]$$

These coefficients are based on the computation of the concept affinity CAff() between pairs of, respectively, (a) operation names, (b) I/O names and (c) event output names used in the semantic descriptors to be matched. Concept affinity has been extensively defined in [10]. Here we simply state that it is based on both a terminological (domain-independent) matching relying on WordNet [11] and a semantic (domain-dependent) matching relying on ontology knowledge. SimIO(SDR, SDC) is computed to quantify how much SDC provides at least the operations and I/Os required in SDR, no matter whether SDC provides additional operations and I/Os. SimIO(SDR, SDC) is obtained by computing the concept affinity of, respectively, the inputs, outputs and operation names of SDC w.r.t. those of SDR. In particular, SimIO(SDR, SDC) equals 3 if every input (respectively, output, operation) of SDR has a corresponding input (respectively, output, operation) in SDC. The SimIO value is then normalized to [0..1]. The similarity coefficient is asymmetric. In fact, according to a symmetric similarity measure, the additional
operations and I/Os that are in SDC but not in SDR would reduce the similarity value even if the required operations and I/Os are found in SDC. CouplIO(SDi, SDj) is obtained by computing the values of the event–operation coupling coefficients CouplEvOp(evi, opj) for all pairs of events/operations. CouplEvOp(evi, opj) is obtained by computing the concept affinity of the outputs of the event with respect to the inputs of the operation; the sum of the resulting values is then normalized to the range [0..1]. This coefficient is equal to 1 if every output of the event has a semantically equivalent input in the operation. CouplIO(SDi, SDj) is in turn obtained by summing up the values of CouplEvOp(evi, opj) for all pairs of events/operations and normalizing the result to the range [0..1]. CouplIO(SDi, SDj) is asymmetric: it equals 1 if every event ev in SDi has a corresponding operation op in SDj and, in particular, every output of ev has a corresponding input in op, no matter whether SDj provides additional operations. The mashup ontology can be seen as a graph, where nodes are component semantic descriptors and directed edges represent similarity or coupling between descriptors. Directed edges are due to the asymmetry of the coupling and similarity coefficients. Formally, the mashup ontology MO is defined as ⟨D, E_S, E_C, f_S, f_C⟩, where D is the set of component semantic descriptors; ⟨SDi, SDj⟩ ∈ E_S iff SimIO(SDi, SDj) ≥ δ; ⟨SDi, SDj⟩ ∈ E_C iff CouplIO(SDi, SDj) ≥ θ; f_S : E_S → [0..1] is a function that associates similarity values with edges in E_S; and f_C : E_C → [0..1] is a function that associates coupling values with edges in E_C. The thresholds δ and θ are experimentally set.
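To make the construction above concrete, the following Python sketch is one illustrative reading of the definitions, not the authors' implementation: CAff() is stubbed as exact string matching (the real concept affinity combines WordNet-based and ontology-based matching [10]), descriptors are plain dictionaries, the normalized sums are interpreted as best-match-per-required-name, and the threshold defaults stand in for the experimentally set δ and θ.

```python
# Sketch of the mashup-ontology graph construction (illustrative reading).
from itertools import permutations

def caff(a, b):
    return 1.0 if a == b else 0.0  # stand-in for the concept affinity CAff()

def _facet(required, candidate):
    # Best match per required name, normalized by the number of required names.
    if not required:
        return 0.0
    return sum(max((caff(r, c) for c in candidate), default=0.0)
               for r in required) / len(required)

def sim_io(sd_r, sd_c):
    # SimIO in [0,3] (inputs + outputs + operation names), rescaled to [0,1].
    return (_facet(sd_r["inputs"], sd_c["inputs"])
            + _facet(sd_r["outputs"], sd_c["outputs"])
            + _facet(sd_r["ops"], sd_c["ops"])) / 3.0

def coupl_io(sd_i, sd_j):
    # CouplIO in [0,1]: event outputs of SD_i matched against inputs of SD_j.
    if not sd_i["events"]:
        return 0.0
    return sum(_facet(outs, sd_j["inputs"])
               for outs in sd_i["events"].values()) / len(sd_i["events"])

def build_mashup_ontology(descriptors, delta=0.6, theta=0.6):
    # MO = <D, E_S, E_C, f_S, f_C>: keep directed edges above the thresholds;
    # f_S and f_C are represented here as edge -> value dictionaries.
    f_s, f_c = {}, {}
    for i, j in permutations(descriptors, 2):
        s = sim_io(descriptors[i], descriptors[j])
        if s >= delta:
            f_s[(i, j)] = s
        c = coupl_io(descriptors[i], descriptors[j])
        if c >= theta:
            f_c[(i, j)] = c
    return f_s, f_c
```

Both coefficients are asymmetric by construction, since each is normalized only by the requesting descriptor's elements, which is what makes the resulting edges directed.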
Collaborative Enterprise Mashup

The need to combine and aggregate multiple components (data and/or application logic) provided by third parties is particularly relevant for building situational applications in enterprises that exploit this form of collaboration. Such enterprises are therefore very interested in mashup construction by composing sharable, ready-to-use components. In our approach, the Mashup Ontology can be exploited for searching, finding and suggesting suitable components to be used in mashup applications. The mashup designer (i.e., the consumer) starts by specifying a request SDR for a component in terms of desired categories, operations and I/Os. A set of components SDi is proposed which present a high similarity with the requested one and such that at least a category in SDR is equivalent to or subsumed by a category in SDi. Components are ranked with respect to their SimIO values. Once the consumer selects one of the proposed components, additional components are suggested according to similarity and coupling criteria: (a) components that are similar to the selected one (the consumer can choose to substitute the initial component with one of the proposed ones); (b) components that can be coupled with the already selected ones during mashup composition. Each time the consumer changes
and selects another component, the Mashup Ontology is exploited to suggest the two sets of suitable components.
Conclusions

In this paper, we proposed a semantic framework for mashup component selection and suggestion for composition in the context of collaborative enterprise mashup. Mashup components are semantically described and organized according to similarity and coupling criteria, and effective (semi-)automatic design techniques have been proposed. The framework is intended to be used in an integrated way with mashup engines, which provide the functionality to generate the final mashup application. Future efforts will be devoted to extending the model with (a) additional facets [2] to refine and improve the component search; (b) other kinds of knowledge about components and mashups, such as collective knowledge [7]; and (c) additional kinds of compatibility between components (such as type compatibility). Moreover, the framework will be tested on real case scenarios.
References

1. Hoyer, V. and Stanoevska-Slabeva, K. (2009) Towards a Reference Model for Grassroots Enterprise Mashup Environments, 17th European Conference on Information Systems (ECIS).
2. Gomadam, K., Ranabahu, A., Nagarajan, M., Sheth, A.P. and Verma, K. (2008) A Faceted Classification Based Approach to Search and Rank Web APIs, 6th IEEE Int. Conference on Web Services (ICWS08).
3. Daniel, F., Casati, F., Benatallah, B. and Shan, M.C. (2009) Hosted Universal Composition: Models, Languages and Infrastructure in mashArt, 28th Int. Conference on Conceptual Modeling (ER09), pages 428–443.
4. Lathem, J., Gomadam, K. and Sheth, A. (2007) SA-REST and (S)mashup: Adding Semantics to RESTful Services, IEEE Int. Conference on Semantic Computing, pages 469–476.
5. Abiteboul, S., Greenshpan, O. and Milo, T. (2008) Modeling the Mashup Space, Workshops on Web Information and Data Management, pages 87–94.
6. Ngu, A.H.H., Carlson, M.P., Sheng, Q.Z. and Paik, H.Y. (2010) Semantic-Based Mashup of Composite Applications, IEEE Trans. on Services Computing, vol. 3, no. 1.
7. Greenshpan, O., Milo, T. and Polyzotis, N. (2009) Autocompletion for Mashups, 35th Int. Conference on Very Large DataBases (VLDB09), pages 538–549.
8. Riabov, A.V., Boillet, E., Feblowitz, M.D., Liu, Z. and Ranganathan, A. (2008) Wishful Search: Interactive Composition of Data Mashups, WWW08 Int. Conference, pages 775–784.
9. Elmeleegy, H., Ivan, A., Akkiraju, R. and Goodwin, R. (2008) MashupAdvisor: A Recommendation Tool for Mashup Development, 6th Int. Conference on Web Services (ICWS08), pages 337–344.
10. Bianchini, D., De Antonellis, V. and Melchiori, M. (2008) Flexible Semantic-Based Service Matchmaking and Discovery, World Wide Web Journal, 11(2):227–251.
11. Fellbaum, C. (1998) WordNet: An Electronic Lexical Database, MIT Press.
Similarity-Based Classification of Microdata S. Castano, A. Ferrara, S. Montanelli, and G. Varese
Abstract In this paper, we propose a similarity-based approach for microdata classification based on tagging, matching and clouding techniques. The goal is to construct entity-centric microdata clouds where similar microdata items can be properly arranged to highlight their relevance with respect to a selected target entity according to different notions of relevance defined in the paper. An application example is provided, based on a microdata collection extracted from a real microblogging system.
Introduction

The increasing popularity of Web 2.0 and related user-centered services, like news publishing, social networks, and microblogging systems, has led to the availability of a huge bulk of microdata that are mostly characterized by short textual descriptions with poor metadata and a basic structure [1]. Microdata have become an essential source of information, sometimes the only one, for answering users' queries about specific events/topics of interest, with the goal of providing, for example, subjective information reflecting users' opinions/preferences. In this direction, existing work is mainly focused on defining techniques and applications for microdata search and retrieval, with a special focus on news search engines (e.g., http://www.bloglines.com, http://megite.com, http://spinn3r.com). However, solutions for microdata organization and classification are still at a preliminary level of development [2–6]. In this paper, we propose a similarity-based approach to microdata classification. The approach is based on tagging techniques to automatically extract tag equipments with terminological relationships from microdata items. Techniques for microdata matching are then used to evaluate the level of similarity between microdata. Such techniques have been conceived to exploit as much as possible the quantity of information carried by microdata tag equipments. Similar
S. Castano, A. Ferrara, S. Montanelli, and G. Varese Dipartimento di Informatica e Comunicazione, Università degli Studi di Milano, Milano, Italy e-mail:
[email protected];
[email protected];
[email protected]; [email protected]
microdata items are then arranged in microdata clouds around a selected entity of interest, by relying on clouding techniques based on the notion of relevance. Relevance captures the "importance" of a microdata item within a cloud, by distinguishing, also in a visual way, how prominent microdata item(s) are with respect to the cloud entity. An application of the proposed approach to cloud a collection of microdata items extracted from the Twitter microblogging system is also discussed.
The Proposed Approach

Microdata represent a popular way of communicating among people based on fast, short, ready-to-consume news and information that are composed according to a variety of formats and distributed using a variety of communication means, including the web, but also email and SMS. An important feature of microdata is that they include not only contents generated from official information sources such as newspapers and broadcasters but also the so-called user-generated content, as it can be derived from microblogging and other similar kinds of information sources. In order to support acquisition and management of a variety of microdata formats (e.g., RSS, Atom, SMTP/MIME, Twitter), in [7] we presented a reference meta-model based on the notion of microdata item, providing a structured representation of the different featuring properties of a single microdata, like its title and textual content. Given a collection of microdata items, our approach to microdata classification is organized in the following three phases (see Fig. 1):
– Microdata tagging, where the tags featuring the content of each microdata item in the collection are extracted and organized through text analysis techniques. The result of this phase is a set of Tag Equivalence Classes (TEC) interconnected by terminological relationships.
– Microdata matching, where microdata matching techniques are used to evaluate the level of semantic affinity of the microdata items by exploiting the TEC classes previously defined on the considered collection. The result of this step is a Microdata Similarity Graph (MSG) denoting the similarity values detected between pairs of microdata items.
– Microdata clouding, where the graph MSG is exploited to build up microdata clouds. A microdata cloud originates from MSG in an entity-centric way, by starting from an item selected as the cloud centroid, and by recursively inserting items that are adjacent nodes in MSG according to cohesion and distance criteria.

Fig. 1 A similarity-based approach to microdata classification and clouding

Table 1 A portion of the considered collection of Twitter posts
mdi 6251   "You're not John Locke and you are insulting a great man by wearing his face" – Jack #LOST
mdi 5930   "Jack Shephard was the William Henry Harrison of island protectors."
mdi 5941   RT @xxxx "I hope someone does for you what you just did for me." ~John Locke #quote to Jack Shephard via #LOST
mdi 6050   "Speed Painting the Lost finale – John Locke vs. Jack Shephard – http://tinyurl.com/2g2p9a8"
mdi 6222   "@xxxx John Locke, Flocke, and Terry O'Quinn. Never loved a Character(s) more"
mdi 6231   "Thought I just saw John Locke walking around the university. LOST overload! #fb"
mdi 6234   "'I looked right into the eyes of the Island, and what I saw was beautiful'. John Locke"
mdi 6238   "#lostfinale 'I have looked into the eye of this island, and what I saw was beautiful.' – John Locke"
mdi 6245   "Thomas Hobbes, John Locke wrote about 'laws of nature.' Today @xxxx weighs in: 'hot women don't date ugly guys unless they are rich'"
Running example: To show an application of the techniques for tagging, matching, and clouding of microdata items, we consider an application example based on a collection of 6,000 posts extracted from the Twitter social network system (http://www.twitter.com) containing "John Locke" or "Lost TV series". A portion of this microdata collection is shown in Table 1, where a unique mdi identifier is assigned to each microdata item. Among the items of Table 1, we note that most of the microdata comment on "John Locke" with respect to his role in the Lost TV series, but microdata items referring to the English philosopher are also retrieved (e.g., mdi 6245). We will discuss the application of our techniques to classify this collection of 6,000 microdata items with the goal of building a microdata cloud around the entity "John Locke sayings" for the Lost TV series.
Tagging Microdata Items

The goal of this first step is to associate a Tag Equipment TE(mdi_i) with each microdata item mdi_i, listing the relevant terms (i.e., tags) featuring the textual content of the microdata item itself. Microdata items are submitted to a tagging process, where terms featuring their textual content are extracted and stop-words (e.g., articles, conjunctions, prepositions) and special characters/symbols (e.g., #, @) are discarded through conventional text analysis techniques. The tags belonging to the tag equipments of the considered collection of microdata items are then
organized into Tag Equivalence Classes (TEC). An equivalence class tec ∈ TEC is defined as a set of pairs {(t_i, f_i)}, where t_i is a tag appearing in at least one tag equipment and f_i is the frequency of t_i in all the tag equipments of the whole collection of microdata items. The tags belonging to a tec are characterized by the same lemma. For tags appearing in WordNet, the tec lemma coincides with the WordNet entry. For the other tags, conventional lemmatization techniques are used to determine the lemma and the appropriate tec. A tec contains the possible variants of the corresponding tec lemma, such as the "ing" form for verbs and the plural forms for nouns, meaning that usually few different terms populate an equivalence class. The tecs in TEC are linked to each other through the Synonymy (SYN), HyperonymyOf/HyponymyOf (BT/NT), HolonymyOf/MeronymyOf (RT), and InstanceOf/HasInstance (IS) relations, on the basis of the WordNet relations holding between the corresponding tec lemmas and related synsets.
Example: Considering the microdata item mdi 6251 shown in Table 1, the tag equipment extracted is TE(mdi 6251) = {you, john, locke, you, insulting, great, man, wearing, face, jack, lost}. Examples of the tecs resulting from the analysis of the 6,000 microdata items extracted from Twitter are shown in Fig. 2.

Fig. 2 Example of tag equivalence classes
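The tagging phase can be sketched as follows in Python. The stop-word list and the suffix-stripping lemmatizer are toy stand-ins for the conventional text analysis and the WordNet-based lemmatization described above, so the resulting equipments differ slightly from the paper's example.

```python
# Toy sketch of microdata tagging and TEC construction.
import re
from collections import Counter, defaultdict

STOP_WORDS = {"you", "are", "and", "the", "for", "not", "his", "was", "what"}

def tag_equipment(text):
    # Lowercase, drop #/@ and punctuation, then remove stop-words/short tokens.
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if len(t) > 2 and t not in STOP_WORDS]

def lemma(tag):
    # Stand-in lemmatizer: strips "ing"/plural suffixes; the approach uses
    # WordNet entries where available.
    for suffix in ("ing", "s"):
        if tag.endswith(suffix) and len(tag) > len(suffix) + 2:
            return tag[: -len(suffix)]
    return tag

def build_tecs(items):
    # TEC: lemma -> {tag: frequency over all tag equipments}.
    freq = Counter(t for text in items.values() for t in tag_equipment(text))
    tecs = defaultdict(dict)
    for tag, f in freq.items():
        tecs[lemma(tag)][tag] = f
    return dict(tecs)

posts = {"mdi 6251": "You're not John Locke and you are insulting a great "
                     "man by wearing his face - Jack #LOST"}
print(tag_equipment(posts["mdi 6251"]))
# -> ['john', 'locke', 'insulting', 'great', 'man', 'wearing', 'face',
#     'jack', 'lost']
```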
Matching Microdata Items

In order to match microdata, we have to deal with the fact that microdata items are basically poorly structured chunks of information, mainly consisting of short textual contents. Thus, for matching, we exploit the tag equipments and equivalence classes associated with microdata items, by properly taking into account both the meaning and the social nature of microdata. Given a pair of microdata items mdi_i and mdi_j, the function SA(mdi_i, mdi_j) ∈ [0, 1] is defined to calculate the level of semantic affinity holding between mdi_i and mdi_j, which is proportional to the number of matching tags in the tag equipments TE(mdi_i) and TE(mdi_j), as follows:

$$SA(mdi_i, mdi_j) = \frac{2 \cdot |\{(t_k, t_z) : t_k \sim t_z\}|}{|TE(mdi_i)| + |TE(mdi_j)|}$$

In order to say that a pair of tags t_k ∈ TE(mdi_i) and t_z ∈ TE(mdi_j) matches, denoted as t_k ∼ t_z, the following condition must hold:

$$t_k \sim t_z \iff sim(t_k, t_z) \geq th \;\wedge\; sim(t_k, t_z) \geq sim(t_k, t_l),\ \forall t_l \in TE(mdi_j)$$

where sim(t_k, t_z) ∈ [0, 1] is a value denoting the degree of similarity between t_k and t_z, and th ∈ (0, 1] is a threshold setting the minimum level of similarity required to consider two tags as matching tags. To evaluate the tag similarity sim, we do not limit ourselves to string matching functions, but aim to capture the meaning and the social nature of microdata and tag information. In particular, for considering the meaning of a tag t we refer to the semantic relations holding between the tec(s) associated with t, each associated with a strength s(R) ∈ [0, 1], with s(SYN) ≥ s(BT/NT) ≥ s(IS) ≥ s(RT). Values of s(R) are determined experimentally and express the implication of R for similarity. For assessing the social nature of t we introduce a notion of "quantity of information" carried by t. The quantity of information is expressed by a weight w_t that captures the popularity of the tag t in the tag equipments collection:

$$w_t = \frac{1}{1 - \log(f/F)}$$

where f is the frequency of occurrence of t in the collection T of all the tags used in the tag equipments of all the microdata items and F is the frequency of the most frequent tag in T. On this basis, the tag similarity coefficient sim(t_k, t_z) is calculated as follows:

$$sim(t_k, t_z) = K \cdot s(R) + (1 - K) \cdot \frac{w_{t_k} + w_{t_z}}{2}$$

where s(R) is the strength of the strongest semantic relation R holding between the tecs of t_k and the tecs of t_z, and K ∈ [0, 1] is a weight denoting the relative importance/impact to be assigned to semantic relations and quantity of information in measuring sim(t_k, t_z), respectively. As a result, a Microdata Similarity Graph MSG = ⟨MD, E⟩ is defined, where MD is a set of graph nodes, each one representing a microdata item, and E is a set of labeled edges, each one denoting the level of semantic affinity between a pair of microdata items.
Example: Consider the items mdi 6251 and mdi 5941 of Table 1, with TE(mdi 6251) = {john, locke, insulting, . . .} and TE(mdi 5941) = {. . ., hope, someone, john, . . .}. In the example, we use s(SYN) = 1.0, s(BT/NT) = 0.8, s(IS) = 0.7, s(RT) = 0.5. For matching mdi 6251 and mdi 5941, we take into account each of the tags of mdi 6251 and we find the corresponding best matching tag in mdi 5941 according to their sim coefficient. By using a threshold th = 0.5, which is sufficient in this example to cut off poorly relevant results of tag matching, we obtain a semantic affinity SA(mdi 6251, mdi 5941) = 0.47. The resulting MSG graph for the microdata items of Table 1 is shown in Fig. 3. As an example of tag similarity evaluation, we take into account the tag "john" in the tag equipment of mdi 6251. The best match for the tag "john" of mdi 6251 is the tag "john" of mdi 5941, since they coincide. However, assuming to use K = 0.5
that allows to balance the impact of semantic relations and quantity of information, we have sim("john", "john") = 0.5 · 1 + 0.5 · 0.45 = 0.73, since the quantity of information carried by the tag "john" is equal to 0.45.

Fig. 3 Example of MSG graph for the microdata in Table 1
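A compact Python sketch of these computations follows, under stated assumptions: the strongest-relation lookup is stubbed to SYN for identical tags (the approach derives it from the WordNet relations between tecs), the s(R), K and th values are those of the example above, the frequencies are hypothetical, and the weight formula is the reconstruction given in the text.

```python
# Sketch of tag similarity and semantic affinity (assumptions as stated).
import math

S = {"SYN": 1.0, "BT/NT": 0.8, "IS": 0.7, "RT": 0.5}  # example strengths
K, TH = 0.5, 0.5

def weight(tag, freq, F):
    # w_t = 1 / (1 - log(f/F)): 1 for the most frequent tag, smaller for rarer ones.
    return 1.0 / (1.0 - math.log(freq.get(tag, 1) / F))

def sim(tk, tz, freq, F):
    s_r = S["SYN"] if tk == tz else 0.0  # stubbed strongest-relation lookup
    return K * s_r + (1 - K) * (weight(tk, freq, F) + weight(tz, freq, F)) / 2

def semantic_affinity(te_i, te_j, freq, F):
    # SA: Dice-style ratio of tags in te_i whose best match in te_j passes TH.
    matches = sum(
        1 for tk in te_i
        if max((sim(tk, tz, freq, F) for tz in te_j), default=0.0) >= TH)
    return 2 * matches / (len(te_i) + len(te_j))

# Hypothetical frequencies chosen so that w("john") ~ 0.45, as in the example:
freq = {"john": 295, "locke": 1000, "lost": 730}
F = max(freq.values())
print(round(weight("john", freq, F), 2))        # -> 0.45
print(round(sim("john", "john", freq, F), 2))   # -> 0.73
```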
Clouding Microdata Items

Given a microdata similarity graph MSG, the entity-centric construction of a microdata cloud MDC(mdi_c) starts with the specification of the target entity, that is, a label describing the event/topic purpose of the cloud, and with the choice of the cloud centroid mdi_c, that is, the microdata item that most prominently represents the cloud entity. Moreover, the following parameters need to be specified for constructing MDC(mdi_c) out of the MSG graph:
– Cloud cohesion, chs(MDC(mdi_c)) ∈ [0, 1]: the minimum level of semantic affinity SA required for a microdata item to be included into MDC(mdi_c).
– Cloud depth, dpt(MDC(mdi_c)) ≥ 1: the maximum path length in MSG allowed between the node mdi_c and any microdata item node to be included into MDC(mdi_c).
Initially, the cloud MDC(mdi_c) only contains the centroid mdi_c. The construction of the microdata cloud MDC(mdi_c) is articulated in the following two steps.
Selection of the microdata items: The microdata cloud MDC(mdi_c) coincides with the MSG graph portion including the microdata items that satisfy the cohesion and depth parameters. Starting from the cloud centroid mdi_c, the microdata items of MSG are recursively traversed within a distance d ≤ dpt(MDC(mdi_c)) from mdi_c. Each considered microdata item mdi_i ∈ MD is inserted in MDC(mdi_c) iff SA(mdi_c, mdi_i) ≥ chs(MDC(mdi_c)).
Computation of the microdata relevance: Each item mdi_i ∈ MDC(mdi_c) is characterized by a relevance rel(mdi_i, MDC(mdi_c)) value, which denotes the
importance of mdi_i with respect to the other items in the cloud MDC(mdi_c). To calculate the relevance value of a microdata item, different criteria can be used according to the different notions of importance that should be emphasized through the cloud. For instance, we can envisage a relevance-by-centrality criterion, where rel(mdi_i, MDC(mdi_c)) is in direct ratio with the number of incoming edges connecting mdi_i with the other items of the cloud. A relevance-by-provenance criterion can also be considered, where rel(mdi_i, MDC(mdi_c)) is determined on the basis of the level of reliability/trust of the provenance datasource from which mdi_i has been acquired. Moreover, the "quantity of information" associated with the tags in the equipment of a microdata item mdi_i can be exploited to determine a relevance-by-popularity value rel(mdi_i, MDC(mdi_c)) expressing the importance of mdi_i according to the frequency of the tags therein contained.
Example: In Fig. 4, the microdata cloud around the entity "John Locke sayings" is shown. This cloud is built from the MSG graph of Fig. 3 by selecting the microdata item mdi 6251 as the centroid of the cloud and by setting chs(MDC(mdi 6251)) = 0.4 and dpt(MDC(mdi 6251)) = 3. In this example, the size of the microdata items is determined by their associated relevance values, calculated with a relevance-by-centrality criterion according to the number of incoming connections.

Fig. 4 Example of microdata cloud around the entity "John Locke sayings"
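One plausible reading of the two construction steps is sketched below in Python. The MSG is modeled as a dictionary of weighted adjacency maps, SA values between the centroid and candidate items are looked up from it (absent edges count as 0) since the text leaves this lookup implicit, and the traversal is breadth-first within the depth bound.

```python
# Sketch of entity-centric cloud construction (one plausible reading).
from collections import deque

def build_cloud(msg, centroid, chs=0.4, dpt=3):
    # msg: {node: {neighbor: SA value}}; returns the set of cloud members.
    def sa(a, b):
        return msg.get(a, {}).get(b, 0.0)

    cloud = {centroid}          # the cloud initially contains the centroid
    visited = {centroid}
    frontier = deque([(centroid, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == dpt:        # respect the cloud-depth bound
            continue
        for neighbor in msg.get(node, {}):
            if neighbor in visited:
                continue
            visited.add(neighbor)
            if sa(centroid, neighbor) >= chs:  # cohesion test
                cloud.add(neighbor)
            frontier.append((neighbor, depth + 1))
    return cloud

def relevance_by_centrality(msg, cloud):
    # Relevance-by-centrality: count incoming cloud-internal edges per item.
    return {n: sum(1 for m in cloud if m != n and n in msg.get(m, {}))
            for n in cloud}
```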
Concluding Remarks

In this paper, we presented a similarity-based approach to microdata classification based on tag-oriented matching techniques that exploit the semantic and social information carried by microdata tag equipments. The definition of a "quantity of information" measure for microdata matching and of a relevance value for arranging
similar microdata in entity-centric clouds are the main contributions of our approach with respect to the initial work existing in the literature about structured data clouding [8–10]. Ongoing and future work concerns the formal definition of the classification framework and its application and experimentation on real microdata and their combination with conventional data. Our goal is to evaluate the effectiveness of microdata clouds in performing searches over a collection of microdata and to compare our results with existing keyword-based microdata engines.
References

1. Koutrika G, Bercovitz B, Ikeda R, Kaliszan F, Liou H, Zadeh Z, Garcia-Molina H (2009) Social Systems: Can We Do More Than Just Poke Friends?, Proc. of the 4th Biennial Conference on Innovative Data Systems Research, Asilomar, CA, USA.
2. Bergamaschi S, Guerra F, Orsini M, Sartori C, Vincini M (2007) RELEVANTNews: a Semantic News Feed Aggregator, Proc. of the Workshop on Semantic Web Applications and Perspectives, Bari, Italy.
3. Li X, Yan J, Deng Z, Ji L, Fan W, Zhang B, Chen Z (2007) A Novel Clustering-Based RSS Aggregator, Proc. of the 16th Int. Conference on World Wide Web, Banff, Alberta, Canada.
4. Radev D, Otterbacher J, Winkel A, Blair-Goldensohn S (2005) NewsInEssence: Summarizing Online News Topics, Communications of the ACM 48(10): 95–98.
5. Gulli A (2005) The Anatomy of a News Search Engine, Proc. of the 14th Int. Conference on World Wide Web, Chiba, Japan.
6. Das A, Datar M, Garg A, Rajaram S (2007) Google News Personalization: Scalable Online Collaborative Filtering, Proc. of the 16th Int. Conference on World Wide Web, Banff, Alberta, Canada.
7. Castano S, Ferrara A, Montanelli S, Varese G (2010) Matching Micro-Data, Proc. of the 18th Italian Symposium on Advanced Database Systems, Rimini, Italy.
8. Koutrika G, Zadeh Z, Garcia-Molina H (2009) Data Clouds: Summarizing Keyword Search Results over Structured Data, Proc. of the 12th Int. Conference on Extending Database Technology: Advances in Database Technology, Saint Petersburg, Russia.
9. Hernandez M, Falconer S, Storey M, Carini S, Sim I (2008) Synchronized Tag Clouds for Exploring Semi-Structured Clinical Trial Data, Proc. of the Conference of the Center for Advanced Studies on Collaborative Research, Richmond Hill, Ontario, Canada.
10. Kuo B, Hentrich T, Good B, Wilkinson M (2007) Tag Clouds for Summarizing Web Search Results, Proc. of the 16th Int. Conference on World Wide Web, Banff, Alberta, Canada.
The Value of Business Metadata: Structuring the Benefits in a Business Intelligence Context D. Stock and R. Winter
Abstract Business metadata (BM) plays a crucial role in increasing the data quality of information systems (IS), especially in terms of data believability, ease of understanding, and accessibility. Despite its importance, BM is primarily discussed from a technical perspective, while its business value is scarcely addressed. Therefore, this article aims to contribute to the further development of existing research by providing a conceptual framework of qualitative and quantitative benefits. A financial service provider case is presented that demonstrates how this conceptual framework has successfully been applied in a two-stage cost-benefit analysis.
Introduction

Motivation and Objectives

Over the last years, "making better use of information" has gained importance and now ranks among the top five priorities of IT executives [1]. This trend is linked to the prevailing significance of Business Intelligence (BI), where data quality is a crucial factor for the perceived net benefits to the end-user [1–3]. In this context, the scope of data quality is not limited to factual dimensions like data accuracy and completeness, but also addresses individual-related dimensions like data believability, ease of understanding, and accessibility [4, 5]. In particular for the individual-related dimensions, business metadata (BM) plays an important role in increasing data quality and therefore the acceptance of BI systems [6, 7].
D. Stock and R. Winter Institute of Information Management, University of St. Gallen, St. Gallen, Switzerland e-mail:
[email protected];
[email protected]
Despite its increasing relevance for practitioners [8], academic literature lacks an explicit discussion regarding the benefits of BM [9, 10]. In general, the discussions of benefits of BM remain rather abstract. For example, Foshay et al. [6] substantiate the positive effect of BM on the overall usage of BI systems, Fisher et al. [11] show that BM does influence decision outcomes, and Even et al. [12] examine the impact of BM on the believability of information sources. Therefore, this article contributes to a structured analysis of the benefits of BM by proposing a framework of qualitative and quantitative benefit dimensions. This framework can be applied in a pragmatic cost-benefit analysis of respective BM solutions.
Research Methodology

This article applies the design research paradigm, whose aim is utility. According to Hevner et al. [13] and March and Smith [14], the outcomes of a construction process under the design research paradigm can be classified as constructs, models, methods, and instantiations. Several reference models for this process have been proposed [e.g., 13–15]. The most recent, by Peffers et al. [15], specifies six phases: "identify problem and motivate", "define objectives of a solution", "design and development", "demonstration", "evaluation", and "communication". In this article the "design and development centered approach" is applied to introduce a conceptual framework (model) of BM benefits. We therefore demonstrate the applicability and utility of our artifact in a single case and postpone a comprehensive evaluation to future research.
Business Metadata

"Data about data" has become the most widespread definition of metadata. Since this definition is utterly imprecise, we adopt the definition of Dempsey and Heery [16]: "Metadata is data associated with objects which relieves their potential users of having full advance knowledge of their existence or characteristics". This definition highlights that the scope of metadata is a matter of perspective, particularly concerning the "objects" in focus. In the case of a library, any electronic data on books (e.g., title and publisher) is considered metadata. In the case of BI, the focus lies on data and the associated systems. Metadata therefore comprises definitions (e.g., column or row headers), detailed descriptions, quality indicators (e.g., completeness of data), and many more. BM is the sub-category of metadata that is used primarily by the business side, whereas technical metadata is used by IT [6, 17, 18]. It should be noted that although the two sets are collectively exhaustive, they are not disjoint. This means that metadata can carry both business and technical relevance (e.g., functional
descriptions of information services for better business-IT alignment). In the literature, seven categories of BM can be distinguished [6, 9, 19]:

1. Definitional metadata – What do I have and what does it mean?
2. Data quality – Is the quality of data appropriate for its intended use?
3. Navigational metadata – Where can I find the data I need?
4. Process metadata (also lineage metadata) – Where did this data originate from and what has been done to it?
5. Usage metadata – How frequently is a specific data set/report requested and what user profiles can be derived?
6. Audit metadata – Who owns the data and who is granted access?
7. Annotations (semi-structured comments) – Which additional circumstances or information do I need to consider when reading this data?
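For illustration, these seven categories can be thought of as fields of a single business-glossary entry. The following sketch is ours, not part of the cited frameworks; all class and field names are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical business-glossary entry with one field per BM category
# listed above; all names are illustrative, not a cited schema.
@dataclass
class GlossaryEntry:
    term: str                          # definitional: what do I have?
    definition: str                    # definitional: what does it mean?
    quality_indicators: dict = field(default_factory=dict)   # data quality
    location: Optional[str] = None     # navigational: where can I find it?
    lineage: List[str] = field(default_factory=list)         # process/lineage
    usage_count: int = 0               # usage: how often is it requested?
    owner: Optional[str] = None        # audit: who owns the data?
    annotations: List[str] = field(default_factory=list)     # semi-structured comments

entry = GlossaryEntry(
    term="net exposure",
    definition="Outstanding amount minus the value of pledged collateral",
    quality_indicators={"completeness": 0.97},
    location="DWH.RISK.EXPOSURE",
    lineage=["core banking system", "collateral valuation job"],
    owner="credit risk department",
)
print(entry.term, "->", entry.definition)
```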
Derivation of Benefit Dimensions

The derivation consisted of three successive steps for identifying the qualitative and quantitative benefits of business metadata. First, we screened the body of knowledge to collect empirically proven or mentioned cause-effect relations. Second, the definitions and examples of business metadata were used to analytically derive additional benefits. Finally, the qualitative benefits were explored in specific use cases to identify quantifiable measures. In general, the generation and management of metadata serve two purposes. First, metadata is necessary to minimize the effort of developing and administering BI systems. Second, it is used to improve the extraction of information from BI systems [16, 18, 20, 21]. In order to identify the benefit potential of BM, we examine these two aspects separately.
Development and Administration of BI Systems

Since this article focuses on BM, we include only the business-related aspects of the development and administration of BI systems. The Data Management Association (DAMA), which has published a comprehensive framework for data management in practice, names three activities the business side is operationally responsible for [22]: requirements engineering, data quality management, and data security management. Requirements engineering comprises activities to identify, analyze, and define requirements [22–24]. Usage metadata increases the transparency of data usage through access statistics and user profiles [9, 10, 21]. Definitional, quality, and navigational metadata can be used to increase the level of data reuse by analyzing the current data inventory for reusable elements [21]. Overall, higher transparency
of data usage and increased data reuse result in lower maintenance and development costs by phasing out unused reports and avoiding redundancy. Going beyond mere reactive action, data quality management works as a proactive and preventive concept, characterized by a continuous cycle of activities to define, measure, analyze, and improve data quality [22]. Here, data quality metadata is beneficial in two ways. On the one hand, transparency of data quality is increased through quality indicators. On the other hand, the level of automation in measuring and improving data quality can be raised by defining business rules for data validation and/or cleansing [6, 7, 9, 10, 21]. Additionally, process metadata can be used to improve the traceability of data issues through root-cause and impact analysis along the data transformation process [6, 7, 9, 10]. Overall, transparency, automation, and traceability contribute to lower data cleansing costs and better decision making by proactively and efficiently managing data quality. Data security management comprises activities to develop and execute security policies in order to meet internal and regulatory requirements [22, 25]. Here, audit metadata increases the transparency of compliance and ensures the traceability of compliance issues through audit logs [9, 10]. Overall, transparency and traceability of compliance reduce regulatory fines by proactively managing privacy protection and confidentiality.
Extraction of Information from BI Systems

Within systems theory, information is defined as data within a certain context, whereas data itself has no meaning beyond pure existence [26]. BM describes the context of data by providing additional information (e.g., definitions and applied transformation rules). The benefits of BM in the context of information extraction are therefore closely related to the usage dimensions of data quality: ease of understanding, interpretability, believability, and accessibility [4, 27]. Ease of understanding evaluates to what extent information is clear, readable, and easily understood. Here, definitional metadata can be used to enforce a unique terminology and communication language within the enterprise by eliminating terminological defects [21, 28, 29]. Ease of understanding therefore increases the acceptance and usage of BI systems [6, 7] and/or results in less need for first-level support. From an information producer perspective, a unique terminology also increases data quality by fostering consistent data entry. Interpretability evaluates to what extent information is interpretable in the light of individual belief, judgment, and circumstances. Definitional and quality metadata in particular are necessary to assess the information's fitness for use [11, 30]. In addition, annotations are a means of pointing out recent events through structured comments. From an information producer perspective, annotations also increase flexibility during information entry. Better interpretability results in better decision making [11, 30].
Fig. 1 Qualitative and quantitative benefits of BM
Believability evaluates to what degree the information is trustworthy. Since BI systems are often regarded as black boxes, process and quality metadata help to increase transparency along the information value chain [12, 31] by specifying source systems, applied transformation rules, and quality restrictions. Higher believability not only increases the acceptance and usage of BI systems [6, 7], but also contributes to better decision making [12, 31]. Accessibility evaluates whether the information in the BI system is retrievable and available. Regarding retrievability, definitional and navigational metadata facilitate locating existing information by providing an index or ontology of the information available within the BI system [28]. Additionally, usage metadata can be employed to derive "related reading" proposals from user profiles. Regarding availability, usage metadata can be evaluated in order to adapt the availability of information to consumer demand. Better accessibility results in lower search costs [28] and/or less need for first-level support. Figure 1 summarizes the identified qualitative and quantitative benefits of BM.
Application within Credit Risk Management

Initial Situation

EUFSP is a European financial services provider that offers leasing and structured finance products in Central and Eastern Europe. Recently, EUFSP has suffered from several data quality issues due to inconsistencies in the definitions of internal and
regulatory performance indicators. This led to poor decisions at management level, high process costs for information retrieval, and compliance risk. The situation required immediate action, since the level of inconsistency was likely to increase. Therefore, the implementation of a central business glossary for fostering a unique company-wide terminology was evaluated.
Business Case for a Central Business Glossary

In order to assess the benefits of a central business glossary, the conceptual framework proposed herein was applied in a two-stage approach. Since the quantification of business benefits is difficult in general, we pre-validated the use of a central business glossary by evaluating it against the qualitative benefit dimensions in the "information extraction" domain. A Likert scale was used to assess the as-is situation along the qualitative benefit dimensions and compare it with the development expected after the implementation of a central business glossary (see Fig. 2). In step two, EUFSP quantified the benefit dimensions with the biggest improvement potential: "ease of understanding" and "accessibility". In particular, cost savings in information retrieval and better decision making were estimated. Since EUFSP's core business is evaluating credit risk, better decision making was examined in terms of credit loss savings. The running costs of maintaining a central business glossary were approximated by the personnel costs of a responsible data quality manager. In total, the cost-benefit analysis identified a potential of 70,500 Euros/year. Figure 3 summarizes the business case. In the end, the introduction of a business glossary at EUFSP was assessed to be beneficial. The next stage of the project will be the evaluation of tools (e.g., "ASG-metaGlossary" and "SAP metapedia") for the implementation. In addition to the maintenance of the BM (e.g., providing role-based workflow support), the focus
Fig. 2 Pre-assessment of improvement potential by introducing a business glossary
Fig. 3 Business case for introducing a business glossary at EUFSP
will lie on the integration into the BI front ends. Typically, the user will access the metadata through dedicated wizard functions, a mouse-over function, or report-specific catalogues.
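The arithmetic behind the second stage is simple: quantified benefits minus the running costs of the glossary. The sketch below reproduces only this structure; the individual figures are invented placeholders (the actual breakdown appears only in Fig. 3), chosen so that they sum to the reported 70,500 Euros/year:

```python
# Structure of the second-stage cost-benefit calculation at EUFSP.
# All individual figures are invented placeholders (the real breakdown is
# in Fig. 3); they are chosen to reproduce the reported 70,500 EUR/year.
benefits = {
    "cost savings in information retrieval": 60_000,   # placeholder, EUR/year
    "credit loss savings (better decisions)": 70_500,  # placeholder, EUR/year
}
running_costs = {
    "data quality manager (personnel costs)": 60_000,  # placeholder, EUR/year
}

net_potential = sum(benefits.values()) - sum(running_costs.values())
print(f"Net potential: {net_potential:,} EUR/year")    # -> 70,500 EUR/year
```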
Discussion and Conclusion

In this article we derived a conceptual framework of the qualitative and quantitative benefits of BM. This framework was applied in a two-stage cost-benefit analysis at a European financial service provider. This first application of the conceptual framework proved successful. First of all, the framework structured the discussion of the possible benefits of BM in general and of a central business glossary in particular. Second, the pre-assessment along the qualitative dimensions focused the quantification of the expected benefits, which saved time in arriving at a recommendation during the economic feasibility study. Nevertheless, a single case is not enough to prove applicability and usefulness in every possible situation. We will therefore seek further opportunities to apply the framework in the two-stage approach introduced herein and discuss our findings with experts. A promising alliance with a vendor of metadata solutions will assist us in achieving this.
References 1. Luftman, J.N., Kempaiah, R., Rigoni, E.H.: Key Issues for IT Executives 2008. MISQ Executive 8, pp. 151–159 (2009) 2. Sabherwal, R., Jeyaraj, A., Chowa, C.: Information System Success: Individual and Organizational Determinants. Management Science 52, pp. 1849–1864 (2006)
140
D. Stock and R. Winter
3. Wixom, B.H., Watson, H.J.: An Empirical Investigation of the Factors Affecting Data Warehousing Success. MIS Quarterly 25, pp. 17–41 (2001) 4. Wang, R.Y., Strong, D.M.: Beyond Accuracy: What Data Quality Means to Data Consumers. Journal of Management Information Systems 12, pp. 5–34 (1996) 5. Jarke, M., Lenzerini, M., Vassiliou, Y., Vassiliadis, P.: Fundamentals of Data Warehouses. Springer, Heidelberg (2000) 6. Foshay, N., Mukherjee, A., Taylor, A.: Does Data Warehouse End-User Metadata Add Value? Communications of the ACM 50, pp. 70–77 (2007) 7. Foshay, N.: The Influence of End-user Metadata on User Attitudes toward, and Use of, a Data Warehouse. IBM, Somers (2005) 8. Schulze, K.-D., Besbak, U., Dinter, B., Overmeyer, A., Schulz-Sacharow, C., Stenzel, E.: Business Intelligence-Studie 2009. Steria Mummert Consulting AG, Hamburg (2009) 9. Shankaranarayanan, G., Even, A.: Managing Metadata in Data Warehouses: Pitfalls and Possibilities. Communications of the Association for Information Systems 14, pp. 247–274 (2004) 10. Shankaranarayanan, G., Even, A.: The Metadata Enigma. Communications of the ACM 49, pp. 88–94 (2006) 11. Fisher, C.W., Chengalur-Smith, I., Ballou, D.P.: The Impact of Experience and Time on the Use of Data Quality Information in Decision Making. Information Systems Research 14, pp. 170–188 (2003) 12. Even, A., Shankaranarayanan, G., Watts, S.: Enhancing Decision Making with Process Metadata: Theoretical Framework, Research Tool, and Exploratory Examination. In: Proceedings of the 39th Hawaii International Conference on System Sciences, pp. 1–10. IEEE Computer Society, Los Alamitos (2006) 13. Hevner, A.R., March, S.T., Park, J., Ram, S.: Design Science in Information Systems Research. MIS Quarterly 28, pp. 75–105 (2004) 14. March, S.T., Smith, G.F.: Design and Natural Science Research on Information Technology. Decision Support Systems 15, pp. 251–266 (1995) 15. Peffers, K., Tuunanen, T., Gengler, C.E., Rossi, M., Hui, W., Virtanen, V., Bragge, J.: The Design Science Research Process: A Model for Producing and Presenting Information Systems Research. In: Proceedings of the First International Conference on Design Science Research in Information Systems and Technology, pp. 83–106 (2006) 16. Dempsey, L., Heery, R.: Metadata: A Current View of Practice and Issues. Journal of Documentation 54, pp. 145–172 (1998) 17. Tannenbaum, A.: Metadata Solutions: Using Metamodels, Repositories, XML, and Enterprise Portals to Generate Information on Demand. Addison-Wesley, Boston (2002) 18. Marco, D.: Building and Managing the Meta Data Repository. John Wiley & Sons, New York (2000) 19. Müller, R., Stöhr, T., Rahm, E.: An Integrative and Uniform Model for Metadata Management in Data Warehousing Environments. In: Proceedings of the International Workshop on Design and Management of Data Warehouses, pp. 12–28. ACM Press, New York (1999) 20. Bauer, A., Günzel, H.: Data Warehouse Systeme – Architektur, Entwicklung, Anwendung. dpunkt.verlag, Heidelberg (2004) 21. Vaduva, A., Vetterli, T.: Metadata Management for Data Warehousing: An Overview. International Journal of Cooperative Information Systems 10, pp. 273–298 (2001) 22. DAMA: The DAMA Guide to the Data Management Body of Knowledge. Technics Publications, New Jersey (2009) 23. Darke, P., Shanks, G.: Stakeholder Viewpoints in Requirements Definition: A Framework for Understanding Viewpoint Development Approaches. Requirements Engineering 1, pp. 88–105 (1996) 24. Leite, J.C.S.P., Freeman, P.A.: Requirements Validation Through Viewpoint Resolution. IEEE Transactions on Software Engineering 17, pp. 1253–1269 (1991)
25. Whitman, M.R., Mattord, H.H.: Principles of Information Security. Course Technology (2007) 26. Ackoff, R.L.: From Data to Wisdom. Journal of Applied Systems Analysis 16, pp. 3–9 (1989) 27. Wand, Y., Wang, R.Y.: Anchoring Data Quality Dimensions in Ontological Foundations. Communications of the ACM 39, pp. 86–95 (1996) 28. Hüner, K.M., Otto, B.: The Effect of Using a Semantic Wiki for Metadata Management: A Controlled Experiment. In: Proceedings of the 42nd Hawaii International Conference on System Sciences, pp. 1–9. IEEE Computer Society, Los Alamitos (2009) 29. Stock, D., Gubler, P.: A Data Model for Terminology Management of Analytical Information. In: Proceedings of the European Students Workshop on Information Systems, pp. 1–14 (2009) 30. Chengalur-Smith, I., Ballou, D.P., Pazer, H.L.: The Impact of Data Quality Information on Decision Making. IEEE Transactions on Knowledge and Data Engineering 11, pp. 853–864 (1999) 31. Shankaranarayanan, G., Watts-Sussman, S.: A Relevant Believable Approach for Data Quality Assessment. In: Proceedings of the MIT International Conference on Information Quality, pp. 178–189 (2003)
Online Advertising Using Linguistic Knowledge E. D’Avanzo, T. Kuflik, and A. Elia
Abstract Pay-per-click advertising is one of the most widely used forms of online advertising today. However, the top-ranking keywords are extremely costly. Since search terms exhibit a "long tail" behaviour, the tail may be exploited for a more cost-effective way of selecting the right keywords, achieving similar traffic while reducing cost considerably. This paper proposes a methodology that exploits linguistic knowledge to identify cost-effective bid keywords in the long-tail distribution. The experiments show that these keywords are highly relevant (90% average precision) and better targeted than those suggested by other methods, while reducing the cost of an ad campaign.
Introduction

Pay-per-click advertising is a common form of online advertising today, a $25B+ industry [1] with more than $10B spent on textual advertising, i.e., textual ads – the short commercial messages displayed alongside Web search results (sponsored-search advertising) or on third-party Web sites (content-match advertising). Since the cost of the top position in the list of ads depends chiefly on the keywords selected, search engines let advertisers bid against each other in auction-like bidding in order to gain the highest ad placement positions on search result pages for specific search keywords. For example, running the Google traffic estimator with a query for "flights" (as done by [2]), the first ad rank for the term "flights" costs around 1.42€ per click, whereas the first position for the term "direct flights to" costs 0.05€ per click. Even if the latter does not produce as
E. D'Avanzo and A. Elia
Dipartimento di Scienze della Comunicazione, Università degli Studi di Salerno, Fisciano (Salerno), Italy
e-mail: [email protected]; [email protected]
T. Kuflik
Department of Management Information Systems, University of Haifa, Haifa, Israel
e-mail: [email protected]
E. D’Avanzo et al. Keywords vs. Global Searches Frequency
fli gh ts ch ea pf lig ch ht ea s p ai rli ne fli gh lo ts w co st fli gh ts bu dg et fli gh ts fli gh tf ar es lo w co st fli gh t
fli gh t
in es ai rl
ai rli ne
ai rli ne s
gh ts
fli
et s ic k
bm ib ab y
fli gh ts
gh tt fli
je t2
ai rli ne
pa ci fic
ca th ay
un ite d
ai rli ne s
4.000.000,00 3.500.000,00 3.000.000,00 2.500.000,00 2.000.000,00 1.500.000,00 1.000.000,00 500.000,00 0,00
Fig. 1 A graph representing a keyword search distribution with a few high-traffic keywords and a number of low-traffic ones behaving in a long-tail style [3]
much traffic as the former (590 vs. more than 2 million global monthly searches), it is more economical, and may be much better targeted, since "flights" is an extremely generic term. Figure 1 shows the monthly global search distribution for the "flights" seed keyword, found by two tools discussed later in the paper. The graph behaves in a long-tail style [3], with a few high-traffic keywords and many low-traffic ones. Depending on the traffic obtained, bidding on a large number of these low-traffic keywords may add up to the level produced by a popular keyword such as the former in the example above, at a fraction of the cost. Moreover, the traffic is better targeted and will typically result in a better clicks-per-sale rate. Since advertisers usually aim at increasing their business volume, it is desirable to have the largest possible set of keywords that are relevant to the product or service being promoted [1]. Otherwise, users are unlikely to click on the ad, or if they click, they are unlikely to purchase the product or the service being offered. In fact, an ad campaign may involve a number of landing pages, and finding the right keywords turns out to be quite laborious even for a small seed set [4]. This has caused the emergence of commercial tools that create bid keyword sets directly from the landing page, such as Google KeywordToolExternal and Google sktool. The process of constructing a set of keywords is mostly manual: it requires an advertiser to define one or more "seed" keywords (a manual, subjective selection) and get related bid keywords and, possibly, additional information such as expected query volumes and costs, which are supplied by freely available tools such as freekeywords by Wordtracker.com, keyword tool external by Google, and search advertising by Microsoft.com. The techniques employed by these tools range from meta-tag spiders and iterative query expansion to proximity-based searches and advertiser-log mining. For example, Wordtracker employs a meta-tag spider that queries search engines for seed keywords, then extracts meta-tag words from highly ranked websites, exploiting search engine optimization techniques [2]. Proximity-based tools, on the other hand, issue queries to a search engine to get highly ranked web pages for the seed keyword, later expanding the seed with words found in its proximity [2]. Google AdWords, when searching for the keyword "flights", shows other keywords searched by other advertisers that looked for "flights", exploiting co-occurrence relationships in advertiser query logs. These techniques have drawbacks: even if Google AdWords provides a large number of keywords, they are not always relevant to the landing page. These keywords are only the most
frequent in advertiser search logs, with the chance of being expensive because of their popularity. Moreover, these tools do not consider semantic aspects, and techniques based on query logs fail to explore new words not frequently correlated with query-log data. This work proposes a linguistics-based approach for an easier and better way of selecting the most cost-effective bid keywords, taking the long-tail phenomenon into account. The results were compared with common approaches taken by [1, 2] over a standard set of ads, achieving encouraging results.
Related Work

Ravi et al. [1], having analyzed a large real-life corpus of ads, found that as many as 96% of the ads had at least one associated bid phrase not present in the related landing page; starting from the landing page is therefore a challenging task since, no matter how long the promoted product description is, it is unlikely to include many synonymous phrases, let alone other perfectly relevant but rare queries. Extractive methods that only consider the words and phrases explicitly mentioned in the given description are thus inherently limited. The authors of [1] proposed a two-phase approach. First, candidate bid phrases are generated by a number of methods, including a mono-lingual translation model capable of generating phrases not contained within the text of the input, i.e., the textual context of the landing page, as well as previously "unseen" phrases. Second, the candidates are ranked in a probabilistic framework using both a "translation model", to identify relevant phrases, and a bid phrase language model, to identify well-formed phrases. The translation model, a well-known technique in the machine translation literature [5], gives the translation probabilities from bid phrases to landing pages, learned using a parallel corpus, so that it is capable of generating phrases not contained within the text of the input. Its main goal is to bridge the vocabulary mismatch in order to give credit to words in a bid phrase that are relevant to the landing page but do not appear as part of it. The language model, instead, characterizes whether a phrase is likely to be a valid bid phrase. Thanks to this model, the phrase "car rental" will be preferred over the phrase "car on at" [1]. It is based on the intuition that advertisers, to increase the chance of their ads being shown and clicked, choose bid phrases matching popular queries. Search query logs are therefore good sources of well-formed bid phrases. In particular, the authors [1], assuming that web queries are short, used a bigram model in order to capture most of the useful co-occurrence information. To build the model for their experiments, Ravi et al. [1] used a large-scale Web search query log. Every bid phrase associated with a given ad becomes a "labeled" instance pair made of a landing page and a bid phrase. The authors evaluated their methodology by employing two well-studied measures used in information retrieval and natural language processing, namely the normalized edit distance and a recall-based measure. The former determines the similarity of two strings by computing partial sub-string matches instead of an absolute match, whereas
the latter, implemented in the ROUGE system [6], evaluates the quality of a candidate bid phrase against all relevant "gold bid phrases" (provided by the advertiser) and not just the best matching one. In terms of ROUGE scores the methodology outperformed the other systems (i.e., the baseline and the content-match system), obtaining a score ranging from 0.27 to 0.29. For the normalized edit distance, the methodology scored between 0.68 and 0.70, appreciably lower than the content-match system, which scored between 0.78 and 0.83, and somewhat higher than the baseline, which scored between 0.66 and 0.75. According to the authors, the good performance in terms of ROUGE score can be attributed to the combined use of the language model and the translation model, which generate candidates present on the landing page as well as unseen ones, enriching the candidate pool. Moreover, they claim that the language model generates well-formed phrases, while the translation model ensures high relevance. However, they do not provide any explanation for the poor performance in terms of the normalized edit distance. Joshi and Motwani [2] proposed TermsNet, a technique that leverages search engines to determine the relevance between terms, capturing their semantic relationships as a directed graph. In their experiments, each term received two ratings: relevance and nonobviousness. A relevance rating of relevant/irrelevant was provided by five graduate students familiar with the requirements of this technique. A nonobvious term, in turn, is one that does not contain the seed keyword or its variants (sharing a common stem). The benchmark queries (e.g., flights) were run on TermsNet and other tools (i.e., AdWords Specific Word Matches, AdWords Additional Keywords, Overture Keyword Selection Tool, Meta-Tag Spider, and the Related-Keywords list from Metacrawler). Each technique was evaluated using average precision, average recall, and average nonobviousness. The first metric, defined as the ratio of the number of relevant keywords retrieved to the number of keywords retrieved, captures the goodness of a technique in terms of the fraction of relevant results returned. The second metric, the proportion of relevant keywords that were retrieved out of all relevant keywords available, is problematic since the total number of relevant keywords is unknown. The authors approximate this as the size of the union of relevant results from all techniques. Though imperfect in the absolute sense because of this approximation, recall is still useful. Finally, the third metric, average nonobviousness, is the proportion of nonobvious words out of the retrieved relevant words. All three metrics were calculated for each query and the respective results were averaged for each technique. TermsNet obtained 0.78, 0.58, and 0.91 respectively, outperforming Meta-tags, which obtained 0.48, 0.12, and 0.56. It outperformed Ad-Broad in average precision (0.63) and average recall (0.20) but scored lower for nonobviousness (1.0). It underperformed MetaCrawler with respect to average precision (0.91) and average recall (0.91) but scored better for nonobviousness (0.74). TermsNet underperformed Ad-spec with respect to average precision (1.0) but outperformed it with respect to average recall (0.25) and nonobviousness rating (0.0). It underperformed Overture with respect to average precision (1.0) but outperformed it with respect to average recall (0.20) and nonobviousness rating (0.0).
According to the authors, AdWords and Overture return only queries containing the seed term, so that all suggested keywords are
relevant but too obvious. Meta-tags may or may not contain highly relevant terms, doing well in recall but underperforming in precision and nonobviousness [2]. Metacrawler's keywords are usually highly relevant and reasonably nonobvious too, but fewer results are returned, hence low recall. TermsNet captures relevance very well, probably because of its use of semantic relationships and co-occurrence. It has a relatively high recall, as it tends to give a fair chance to all terms in the underlying graph – the larger the underlying graph, the greater the recall. Nonobviousness is correctly captured too, because the technique exploits the incoming links to a term. These results are not directly comparable with those obtained with the previous approach [1], due to both different metrics and different datasets.
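The three metrics can be made concrete with a small sketch. Everything below is our illustration of the definitions above, assuming human relevance judgments are given; the crude suffix-stripping function merely stands in for a real stemmer such as Porter's:

```python
# Toy version of the three evaluation metrics described above.
def stem(word: str) -> str:
    """Crude suffix stripper standing in for a Porter stemmer."""
    for suffix in ("ing", "es", "s", "ed"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def is_obvious(term: str, seed: str) -> bool:
    """A term is 'obvious' if any of its words shares a stem with the seed."""
    return any(stem(w) == stem(seed) for w in term.split())

def evaluate(retrieved, judged_relevant, all_relevant, seed):
    rel = [t for t in retrieved if t in judged_relevant]
    precision = len(rel) / len(retrieved)
    recall = len(rel) / len(all_relevant)   # union of all techniques' relevant results
    nonobviousness = sum(not is_obvious(t, seed) for t in rel) / len(rel)
    return precision, recall, nonobviousness

retrieved = ["cheap flights", "airline tickets", "budget airlines"]
judged = {"cheap flights", "airline tickets", "budget airlines"}
universe = judged | {"low cost carriers", "last minute fares"}
print(evaluate(retrieved, judged, universe, "flights"))  # -> (1.0, 0.6, 0.666...)
```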
The Linguistic Knowledge Approach

The techniques discussed in the Related Work section try to fix some of the drawbacks that we highlighted in the Introduction and that, in general, afflict commercial keyword suggestion tools. For instance, [1] proposed a methodology able to generate relevant and well-formed phrases; such phrases, moreover, seem to match readers' natural scanning behaviour when searching or browsing [7], with the effect of alleviating their cognitive overload as well. Furthermore, Deane [8] demonstrated that these phrases are chiefly located in the tail of a long-tail distribution. On the whole, this empirical evidence supported our working hypothesis of employing linguistic knowledge to identify keywords for online advertising purposes. We experimented with LAKE (Linguistic Analysis Knowledge Extractor), a keyword extraction system applying supervised learning and linguistic processing [9]. LAKE chooses candidate bid keywords from a set of landing pages; candidates are sequences of Part-of-Speech (PoS) tags containing Multiword Expressions (ME) and Named Entities (NE). The system has three main components: a Linguistic Pre-Processor, a Candidate Phrase Extractor, and a Candidate Phrase Scorer. Every document is analyzed by the Linguistic Pre-Processor in three consecutive steps: PoS analysis, ME recognition, and NE recognition. Once all the uni-grams, bi-grams, tri-grams, and four-grams are extracted by the linguistic pre-processor, they are filtered using predefined patterns. The result of this process is a set of bid keywords that may represent the landing page. Candidate bid keywords are then scored in order to select the most appropriate phrases as representative of the original text. The score is based on a combination of TF-IDF and first occurrence, i.e., the distance of the candidate phrase from the beginning of the document in which it appears.
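As a rough sketch of the scoring step (assuming candidates have already been extracted and PoS-filtered), TF-IDF can be combined with the first-occurrence position as follows; the exact combination rule used by LAKE is not specified here, so the multiplicative weighting below is an assumption:

```python
import math

# Rough sketch of candidate scoring: TF-IDF combined with the position of
# the candidate's first occurrence; the combination rule is an assumption.
def score_candidates(candidates, document, corpus):
    n_docs = len(corpus)
    scores = {}
    for phrase in candidates:
        tf = document.count(phrase)
        df = sum(1 for doc in corpus if phrase in doc)
        idf = math.log(n_docs / (1 + df))
        first_pos = document.find(phrase) / max(len(document), 1)  # 0 = start
        scores[phrase] = tf * idf * (1.0 - first_pos)  # earlier = higher
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

page = "cheap flights to rome. book cheap flights and low cost flights online."
corpus = [page, "hotels in rome city centre", "car rental deals in rome"]
print(score_candidates(["cheap flights", "low cost flights"], page, corpus))
```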
Preliminary Experiments

The absence of a common dataset for experimentation, as well as the lack of commonly agreed metrics, forced us to construct specific experiments to compare our method with those suggested by prior research.
One experiment compared LAKE with the methodology of [1], which used a mixture of language and translation models. Following [1], we sampled data from the Yahoo! ad corpus, which contains a set of ads with advertiser-specified bid phrases. A total of 30,000 pages were collected, sampling five URLs per domain to avoid any bias from advertisers with a large number of ads (e.g., Antiques, Cameras and Photo, Computer and Networking, Health & Beauty, and so forth). The data (i.e., landing pages) were then randomly split into a training set of 20,000 and a test set of 10,000 pages. To reproduce the same experimental conditions as [1], a series of pre-processing steps (such as HTML parsing, page cleaning, and so forth) were applied to the landing pages. Results are comparable with those obtained by [1]. The normalized edit distance results range from 0.67 to 0.69 (slightly lower than the results of [1], which were between 0.68 and 0.70), while the ROUGE results, ranging from 0.28 to 0.30, are comparable with those obtained by [1] (0.27 to 0.29). A second experiment compared LAKE with the approach proposed by Joshi and Motwani [2]. Their evaluation was reproduced by running an input set of 8,000 terms picked randomly from query logs about three topics popular among advertisers (travel, car rental, and mortgage). Keyword suggestion results were obtained for 100 benchmark queries using LAKE. Following [2], each keyword suggestion was given two ratings (i.e., relevance and nonobviousness). The relevance rating was provided by five master students from the Communication Science department at the University of Salerno, who have a strong background in marketing and the other web-related disciplines required for this technique. The nonobviousness rating, defined for a term not containing the seed keyword or its variants (sharing a common stem), was computed automatically with a Porter stemmer that marks off nonobvious words, without employing human evaluators. LAKE outperformed TermsNet in precision, obtaining an average precision of 0.90 (vs. 0.58), but scored lower in recall, with an average recall of 0.60 (vs. 0.91), while for average nonobviousness it obtained the same result of 0.91. However, in our case precision is much more important than recall, because it means we obtained more relevant keywords, increasing the clicks-per-sale rate [1]. Table 1 shows the top results for the "flights" sample query from the two methodologies, LAKE and TermsNet. For each methodology the table reports the cost-per-click (CPC) and the estimated clicks per day obtained by issuing (automatically, through an API) the Google Traffic Estimator. The column "Estimated cost/day" contains the daily cost obtained by multiplying the previous two values. As Table 1 shows, even if most of the keywords extracted by LAKE belong to the tail of the long-tail distribution (ranging from the keyword "flights airlines" to the keyword "low cost flight" in Fig. 1), they enjoy a high rate of estimated clicks per day at a cheaper cost, whereas the keywords obtained by TermsNet have a higher cost per day because they occupy the high-frequency positions (i.e., the head) of the long-tail distribution (ranging from the keyword "united airlines" to the keyword "bmibaby" in Fig. 1). Based on the data in Table 1, the average cost per click using the TermsNet method is 1.84€, which is higher than the 1.50€ of LAKE.
Moreover, the average number of estimated clicks per day using LAKE terms is higher than the average estimated number achieved using TermsNet terms (103.22 vs. 88.85).
Table 1 Comparison of LAKE and TermsNet

Long tail rank | Keyword | CPC | Monthly global searches | Estimated clicks/day | Estimated cost/day | Method
1 | united airlines | €4.59 | 3,350,000 | 1 | €4.59 | TermsNet
2 | cathay pacific | €0.70 | 1,220,000 | 140 | €98.00 | TermsNet
3 | jet2 | €0.70 | 1,000,000 | 1.1 | €0.77 | TermsNet
4 | airline flights | €1.62 | 550,000 | 209 | €338.58 | TermsNet
5 | flight tickets | €1.24 | 550,000 | 181 | €224.44 | TermsNet
6 | bmibaby | €2.20 | 450,000 | 1 | €2.20 | TermsNet
7 | flights airlines | €1.69 | 368,000 | 146 | €246.74 | LAKE
8 | airline flight | €1.36 | 368,000 | 217 | €295.12 | LAKE
9 | airlines flights | €1.46 | 368,000 | 146 | €213.16 | LAKE
10 | cheapflights | €1.59 | 368,000 | 174 | €276.66 | LAKE
11 | cheap airline flights | €1.92 | 201,000 | 58 | €111.36 | LAKE
12 | low cost flights | €1.49 | 135,000 | 76 | €113.24 | LAKE
13 | budget flights | €1.30 | 90,500 | 47 | €61.10 | LAKE
14 | flight fares | €1.06 | 60,500 | 24 | €25.44 | LAKE
15 | low cost flight | €1.64 | 40,500 | 41 | €67.24 | LAKE
In fact, using the complete lists of keywords obtained, given the same budget of about 1,000.00€, we obtain 787 clicks per day with the LAKE system versus 657 clicks per day with TermsNet, and the traffic is better targeted, as demonstrated by the high average precision.
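The budget comparison can be reproduced with a few lines of arithmetic over Table 1-style data; note that the rows below are only the table's subset, while the 787 vs. 657 figures in the text were computed over the complete keyword lists, so the totals will differ:

```python
# Clicks obtainable for a fixed budget, assuming the budget buys clicks at
# each method's blended CPC (total cost / total clicks over its keywords).
# Rows are the Table 1 subset only; the paper's 787 vs. 657 figures were
# computed over the complete keyword lists.
lake = [(1.69, 146), (1.36, 217), (1.46, 146), (1.59, 174), (1.92, 58),
        (1.49, 76), (1.30, 47), (1.06, 24), (1.64, 41)]        # (CPC EUR, clicks/day)
termsnet = [(4.59, 1), (0.70, 140), (0.70, 1.1), (1.62, 209),
            (1.24, 181), (2.20, 1)]

def clicks_for_budget(keywords, budget_eur):
    total_cost = sum(cpc * clicks for cpc, clicks in keywords)  # EUR/day
    total_clicks = sum(clicks for _, clicks in keywords)
    return budget_eur * total_clicks / total_cost

for name, kws in (("LAKE", lake), ("TermsNet", termsnet)):
    print(f"{name}: {clicks_for_budget(kws, 1000):.0f} clicks for 1,000 EUR")
```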
Conclusion

This paper presented a methodology that exploits linguistic knowledge to identify bid keywords in the long-tail distribution. Experimental evaluation showed that these keywords are highly relevant and better targeted compared to the results achieved by recent research prototypes. The practical implication is that the proposed approach reduces the cost of an ad campaign. Even though LAKE appears to be suitable for bid keyword suggestion, future improvements are planned, including the use of a Latent Semantic Kernel, as addressed by [10], to generate keywords not present in the original page in order to increase recall.
References 1. Ravi, S., Broder, A., Gabrilovich, E., Josifovski, V., Pandey, S., and Pang, B. (2010) Automatic generation of bid phrases for online advertising. In Proceedings of the Third ACM international Conference on Web Search and Data Mining (New York, New York, USA, February 04–06, 2010). WSDM ’10. ACM, New York, NY, 341–350.
2. Joshi, A. and Motwani, R. (2006). Keyword Generation for Search Engine Advertising. In Proceedings of the Sixth IEEE International Conference on Data Mining – Workshops (December 18–22, 2006). ICDMW. IEEE Computer Society, Washington, DC, 490–496. 3. Fuxman, A., Tsaparas, P., Achan, K., and Agrawal, R. 2008. Using the wisdom of the crowds for keyword generation. In Proceedings of the 17th International Conference on World Wide Web (Beijing, China, April 21–25, 2008). WWW '08. ACM, New York, NY, 61–70. 4. Broder, A. Z., Ciccolo, P., Fontoura, M., Gabrilovich, E., Josifovski, V., and Riedel, L. 2008. Search advertising using web relevance feedback. In Proceedings of the 17th ACM Conference on Information and Knowledge Management (Napa Valley, California, USA, October 26–30, 2008). CIKM '08. ACM, New York, NY, 1013–1022. 5. Brown, P. F., Pietra, V. J., Pietra, S. A., and Mercer, R. L. 1993. The mathematics of statistical machine translation: parameter estimation. Comput. Linguist. 19, 2 (Jun. 1993), 263–311. 6. Lin, C. ROUGE: A package for automatic evaluation of summaries. In Proc. of the Workshop on Text Summarization Branches Out, ACL (WAS), 2004. 7. Harper, S. and Patel, N. 2005. Gist summaries for visually impaired surfers. In Proceedings of the 7th International ACM SIGACCESS Conference on Computers and Accessibility (Baltimore, MD, USA, October 09–12, 2005). Assets '05. ACM, New York, NY, 90–97. 8. Deane, P. 2005. A nonparametric method for extraction of candidate phrasal terms. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (Ann Arbor, Michigan, June 25–30, 2005). Association for Computational Linguistics, Morristown, NJ, 605–613. 9. A. Elia, E. D'Avanzo, T. Kuflik, G. Catapano, M. Gruber. An Online Linguistic Journalism Agency – Starting Up Project. LangTech 2008, pp 106–109, Roma, Italy, February 28–29, 2008. 10. D'Avanzo, E., Gliozzo, A. M., Strapparava, C. Automatic Acquisition of Domain Information for Lexical Concepts. In Proceedings of the second MEANING workshop, Trento, Italy, February 2005.
Part IV
IS Quality, Metrics and Impact C. Francalanci and A. Ravarini
The two papers presented in this section represent original research contributions on the measurable impacts of information systems within organizations. While it is widely recognized that information technology impacts companies along multiple organizational dimensions, the assessment of the actual costs and benefits of information systems raises a number of research questions that are still largely unanswered. What are the real costs of key IS projects? For example, is IT a green technology? What are the tangible benefits delivered by IT, and what evidence exists on the measurable impacts of these benefits, both at an organizational and at an industry level? The section provides a systematic view of the state of the art on these questions and offers insights into the methodologies and techniques that can be applied to assess the quality of modern information systems.
Green Information Systems for Sustainable IT C. Cappiello, M. Fugini, B. Pernici, and P. Plebani
Abstract We present the approach to green information systems adopted in the Green Active Management of Energy in IT Service centres (GAMES) project. The goal of GAMES is to develop methodologies, models, and tools to reduce the environmental impact of information systems at all levels, from applications and services to physical machines and IT plants. This paper focuses on models and methods for the analysis and reduction of the energy consumption associated with applications, which are made energy-aware through annotations and Green Performance Indicators (GPI).
Introduction

While the focus of research in IT has been on obtaining ever more high-performing and reliable systems, the analysis of the impact of Information Systems (ISs) from the point of view of energy consumption has been lacking. Research activity mainly focuses on power management in large data centres or on the technical characteristics of devices from the point of view of power consumption [1]. In this paper, we give an overview of the approach to green IS studied in the Green Active Management of Energy in IT Service centres (GAMES) EU project. GAMES [2] considers the environmental impact of the resources involved in the whole life cycle of ISs in organizations, from design to run time and maintenance. The goal of GAMES is to develop methodologies, models, and tools to reduce the environmental impact of such systems, reducing the energy consumption and energy losses of the IS, from applications and services to physical machines and IT plants. The focus is on methods to analyze energy consumption at the application level, which eventually lead to actions that can be undertaken to save energy, such as redundancy elimination or the disposal of unused services. The method is centred around a design-analysis-adaptation-monitoring cycle for energy awareness, using annotations about
C. Cappiello, M. Fugini, B. Pernici, and P. Plebani
Dip. Elettronica e Informazione, Politecnico di Milano, piazza da Vinci, 32, I-20133 Milano, Italy
e-mail: [email protected]; [email protected]; [email protected]; [email protected]
how an application can run so as to save energy. To this aim, we propose to enrich applications with annotations regarding energy consumption. Such annotations are coupled with Green Performance Indicators (GPI), which measure how green an application is, analogously to the role played by KPI in indicating the various performance indexes of a system [3]. If energy leakage is observed (through monitoring), the methodology allows one to (partially) remove energy losses by, for example, reducing redundancies in data and processes, using memory in slow mode, or substituting expensive services.
GAMES Approach to Green Information Systems

GAMES proposes guidelines for designing and managing service-based ISs from the perspective of energy awareness. The approach focuses on the following two aspects:
(a) Co-design of energy-aware ISs and their underlying services and IT service centre architectures in order to satisfy user, context, and Quality of Service (QoS) requirements, addressing energy efficiency and controlling emissions. This is carried out through the definition of suitable GPI able to evaluate if and to what extent a given service and workload configuration will affect carbon footprint levels.
(b) Run-time management of IT service centre energy efficiency, which exploits the adaptive behavior of the system at run time, both at the service/application and IT architecture levels, considering the interactions of these two aspects in an overall unifying vision.
The integrated approach in GAMES focuses on IT use and management, and on an energy-saving design and management of the application and data resources of the IS. The project relies on Web services technology, which is suitable to support adaptivity to different system states and needs in the face of energy-saving policies. GAMES defines a green lifecycle for the development of adaptive, self-healing, and self-managing application systems able to reduce energy consumption. Figure 1 shows the GAMES phases: analysis, to set up GAMES-enabled applications, also using an Energy Practice Knowledge Base (EPKB in Fig. 1); design and evolution, to develop and maintain such applications over the long term; adaptation, which, at run time, adjusts energy consumption and, at design time, provides enhanced energy awareness; and monitoring, which observes running applications from the energy-consumption viewpoint. The phases comply with the MAPE stages of a self-adaptive system.1 The phases are handled by the GAMES architecture, composed of a Design-Time Environment, a Run-Time Environment, and the ESMI (Energy Sensing and Monitoring Infrastructure) layer [2]. All the tools (sensors, power meters, etc.) dealing with the physical machines
1 M. Salehie, L. Tahvildari, "Self-adaptive software: Landscape and research challenges", ACM Transactions on Autonomous and Adaptive Systems, ISSN 1556-4665, 2009.
Fig. 1 GAMES phases for energy-aware applications design, execution and management
and devices belonging to the infrastructure level of GAMES monitor the IT infrastructure and the environment where the applications run; they are assumed to exist in advance and are not implemented in the project. Monitoring detects, through GPI, application issues that reveal energy consumption. In order to enable run-time adaptation, the GAMES energy-aware application co-design methodology is employed in the design phase (supported by the Design-Time Environment tools of the architecture). Evolution occurs when GPI and all relevant observed data are consolidated so that applications can be deeply modified. A set of models is provided describing how the system should react in case some GPI or QoS indexes are not satisfied at run time. The EPKB of annotations, GPI, and design models is constructed and continuously updated for co-design. Besides functional and non-functional descriptions, annotations also include adaptation strategies. ESMI supports the collection of run-time energy/performance data and their analysis, both to adapt at run time and to correct the design, thus leading to the evolution of GAMES-enabled applications. The EPKB stores the energy/performance knowledge obtained by executing mining algorithms on the historical data collected by the ESMI monitoring tools. Such data (context data) refer to IT infrastructure energy/performance data, environmental data, and the state of the GAMES-enabled application running on the service centre. At run time, context data are used to take adaptation decisions using a context model, which is instantiated at run time with the context data captured by the ESMI tools. The context model instances are processed to determine the service centre's energy/performance state and to infer new context information relevant for the decision process.
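The run-time side of this cycle can be pictured as a conventional MAPE loop. The skeleton below is purely illustrative: the sensor reading, GPI thresholds, and adaptation actions are stubs, not GAMES code:

```python
# Illustrative MAPE-style loop for a GAMES-like run-time environment;
# the sensor reading, GPI thresholds, and adaptation actions are stubs.
def monitor():
    """Stub: would read context data from the ESMI sensing layer."""
    return {"cpu_usage": 0.15, "response_time_s": 0.8}

def analyze(context, gpi_thresholds):
    """Return the names of indicators whose observed value violates its range."""
    return [gpi for gpi, (key, low, high) in gpi_thresholds.items()
            if not low <= context[key] <= high]

def plan(violations):
    """Stub: map violated indicators to adaptation strategies."""
    return ["consolidate workload onto fewer machines"] if "cpu_efficiency" in violations else []

def execute(actions):
    for action in actions:
        print("adapting:", action)

gpi_thresholds = {
    "cpu_efficiency": ("cpu_usage", 0.3, 0.8),        # GPI: keep CPUs busy enough
    "qos_response":   ("response_time_s", 0.0, 2.0),  # QoS index, for the trade-off
}
execute(plan(analyze(monitor(), gpi_thresholds)))
```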
GAMES-enabled applications are defined as applications composed of activities specified in terms of their functional and non-functional requirements. The functional requirements of a given application or activity can be fulfilled by a set of services that run on virtual machines. At run time, instances of processes, activities, and services are defined in terms of their execution states. The non-functional requirements of main concern here are the energy-related ones, expressed through GPI. Other non-functional requirements are those related to QoS. While, for instance, data or service redundancy is needed to ensure a given QoS index (e.g., reliability) [4] of a system in operation, such redundancy can potentially be eliminated or reduced to save energy. Thus, data replicated in many archives, or services dimensioned for given response time requirements, can be dismissed if monitoring reveals energy losses and the application can still meet its QoS requirements with fewer resources. Energy saving vs. QoS is to be treated in a trade-off analysis (see analysis in Fig. 1).
Methodology for Green Applications

Starting from the GAMES approach, the methodology we are studying considers that an application, or a service, performs in a "green way" if it delivers its expected results while saving energy, consuming, for instance, less processor and/or less memory and storage and/or less I/O, fewer data, fewer services, and fewer application resources, in accordance with the user's QoS requirements. First, a methodology towards green services has to evaluate factors such as the intensity of use of the processor, memory, storage, and I/O peripherals, as well as the application flow given by the application structure and its activities/services and data. For example, the factors related to energy consumption are:
– Activities/services need IT resources and consume power
– Data are read/written on storage and transferred to I/O
– Data objects (volatile) are read/written in memory
– The application has a structure with a workflow defined at design time, plus additional information (e.g., branching and failure probabilities)
– QoS parameters are provided (e.g., response time, performance, duration, costs, data accuracy, security)
To design energy-aware, adaptive applications we use annotations and GPI as illustrated in the following.
Annotations

Annotations characterize applications with respect to energy consumption, so that designers, by observing several runs of the same application in different cases through different instantiations, can annotate the application with structure-dependent information,
such as the data and activities used, the number of accesses to the databases, and the dimensions of the data exchanged in transactions, together with data regarding how much machine resource the process needs. Accurately and dynamically monitoring energy usage patterns in applications is a first requirement for more efficient energy management in future runs of the application, starting from monitored energy usage data. Annotations can be used by application engineers to inform demand-side management systems in future runs, to estimate future demands, and to drive the evolution of the application (see Fig. 1) through application energy profiling (see Fig. 2). An example of an annotation is the branching probability P(b) associated with every outgoing edge of a control activity, representing the probability of executing the corresponding branch. For instance, the branching probabilities associated with an xor control node can be P(b) = 0.8 – true branch – and P(b) = 0.2 – false branch. Although determining P(b) can be a hard and time-consuming task (see [4]), branching probabilities are useful for energy consumption computation: if it is known that, after the xor, the probability of executing the false branch is low, the machine where the service on the false branch is made available can stay turned off or run in idle mode most of the time. Other annotations regard the failure rates of the services enacting activities. Service substitution can be energy consuming, but it can prevent the energy losses of an application stuck waiting for an unavailable service. In our approach, the use of design-time annotations aims at collecting information for designers for the energy-aware design of activities (e.g., lowering the failure probabilities) and at feeding the EPKB of Fig. 1 with relevant data about energy consumption. Such data can undergo process mining, to make the activities self-adaptive with respect to energy consumption. Energy leakage in application executions is detected by comparing similar applications and finding out that, using a different (e.g., a less
Fig. 2 GAMES annotation for energy-aware applications
processing-intensive) service, the activity can be executed with the same functional results, an acceptable response time, and less energy consumption. In summary, through annotations, an application can be made adaptive with respect to energy consumption if the amount of needed resources can be adapted on the basis of energy and QoS requirements. Strategies can have different degrees of complexity: from the substitution of a single activity to the re-design of the whole application. Strategies can also affect infrastructure elements (such as consolidation or dynamic power management). Adaptivity regards how an application can be adapted to run in different modes (e.g., low processor usage, slow disk, etc.). If the infrastructure is wasting, say, processor utilization (the CPU is under-occupied for a process due to over-dimensioned allocation), then we have an energy leakage. In general, we have to determine which of the four parameters (processor, memory, storage, I/O) is actually wasting energy and then adapt the process running mode to avoid the leakage. For example, if an application can also run in "less data storage" mode while still respecting the QoS requirements in terms of response time, we can adapt it to use less data storage and to clean redundant data. The methodology also has to foresee a set of strategies for detecting, correcting, and adapting, at run time, the elements of an application that are wasting energy. Let us give an example where adaptation is performed at run time. Example: Strategy of Adaptation through Substitution of Activities. Substitution can be used when running activities are considered definitively unavailable or temporarily inappropriate because they violate some energy or QoS constraint. To complete the application execution, a substitution strategy allows changing the activity by finding a service that provides the same operations. Suppose that several equivalent candidate activities are available. In such a case, energy and QoS constraints can drive the service selection. In case of a substitution due to service failure, we aim at spending the same or lower energy than a correct execution would have consumed:

E(a_i) ≥ E(a_i)_tf + E(a_sub) + E(a-new_ij)

where E(a_i)_tf is the energy consumed by a_i up to the failure time tf, E(a_sub) is the energy spent in the operations performed for the substitution (e.g., compensation of the failed activity), and E(a-new_ij) is the energy associated with the execution of the j-th substitute activity.
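A minimal sketch of this substitution check, together with the use of branching-probability annotations in an expected-energy estimate; the energy figures and function names are invented for illustration:

```python
# Substitution is acceptable (energy-wise) only if the energy already spent
# up to the failure, plus the substitution overhead, plus the substitute's
# own consumption does not exceed the energy of a correct run of a_i.
def substitution_is_acceptable(e_ai, e_ai_until_failure, e_sub, e_new_ij):
    return e_ai >= e_ai_until_failure + e_sub + e_new_ij

# Expected energy of an xor split, using the branching-probability
# annotations P(b) = 0.8 / 0.2 from the example above (energies invented).
def expected_branch_energy(p_true, e_true_branch, e_false_branch):
    return p_true * e_true_branch + (1 - p_true) * e_false_branch

print(substitution_is_acceptable(e_ai=100, e_ai_until_failure=40,
                                 e_sub=10, e_new_ij=45))                   # True
print(expected_branch_energy(0.8, e_true_branch=50, e_false_branch=120))  # 64.0
```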
Green Performance Indicators

Along with the usual definition of an application in terms of activities, data flow/control flow, and KPI, a GAMES-enabled application is also defined in terms of GPI and of the adaptation strategies to be enacted in case the objectives of one or more GPI are not fulfilled. GPI can be related either to the whole service centre, as traditionally
considered in the literature, and/or to a specific application, since the target of GAMES is also to realize green applications, and it is therefore important to be able to assess the greenness of an application. GPI are a measurement of the index of greenness of an IT system. GPI define the inherent and IT-configuration-dependent green properties of an application and/or IT service centre environment. We consider indicators relevant to the execution of an application and its environment that allow assessing its greenness, independently of factors related to the physical centre where the applications are executed. More precisely, these indicators are derived from variables that are monitored in the system and indicate the energy consumption, energy efficiency, energy-saving possibilities, and all energy-related factors within an application and within its development and execution environment. In an IT service centre, most performance indicators are grouped as either KPI or GPI. For example, the total cost of an application (including design, coding, and deployment) is a KPI. Such indicators have no direct relationship with greenness. Conversely, the Data Center Infrastructure Efficiency (DCiE) is a GPI related to the energy consumption and greenness of an application and an IT service centre. However, some KPI can impact GPI. In GAMES, resource usage (e.g., CPU usage or Space, Watts and Performance – SWaP) is a GPI, since it characterizes the resource usage of an application and its environment. Therefore, we consider the GPI-KPI relationship as an intersection. In [5] we propose GPI layered into operational, tactical, and strategic [6] levels, covering all aspects of the application lifecycle. At the strategic level, GPI drive high-level decisions about system organization in terms of the human resources used, the impact on the environment, the outsourcing of non-core services, and guidelines according to eco-related laws and regulations such as the EU Code of Conduct for Data Centres 2010,2 Energystar,3 the United Nations Global Compact,4 and the Scottish Environmental Protection Agency.5 GPI at the tactical level denote how the application will consume less energy if its development is enhanced through mature platforms and improvement of system quality in terms of service delivery to customers. At the operational level, we define GPI for monitoring the usage of IT resources [5, 7].
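For concreteness, two of the indicators named above can be computed directly from monitored values: DCiE is conventionally the ratio of IT equipment power to total facility power, and SWaP relates performance to the product of space and power. The sample figures below are invented:

```python
# DCiE and SWaP computed from monitored values (sample figures invented).
def dcie(it_equipment_power_kw, total_facility_power_kw):
    """Data Center Infrastructure Efficiency, as a percentage."""
    return 100.0 * it_equipment_power_kw / total_facility_power_kw

def swap(performance, space_rack_units, power_watts):
    """Space, Watts and Performance: performance per unit of space x power."""
    return performance / (space_rack_units * power_watts)

print(f"DCiE: {dcie(450, 820):.1f}%")       # ~54.9% of facility power reaches IT
print(f"SWaP: {swap(12_000, 4, 900):.2f}")  # benchmark units per (rack unit x W)
```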
Concluding Remarks

We have presented the GAMES approach, which enables the evaluation of energy consumption in applications by using annotations and GPI for energy-awareness. The approach considers both design-time and run-time strategies to monitor, analyze, and adapt
2 EU Code of Conduct for Data Centres 2010. http://re.jrc.ec.europa.eu/energyefficiency/html/standby_initiative_data_centers.htm.
3 Energystar. http://www.energystar.gov/.
4 United Nations Global Compact. http://www.unglobalcompact.org/.
5 Scottish Environmental Protection Agency. http://www.sepa.org.uk/.
the applications' energy consumption by means of a co-design approach that considers the application structure, its execution, and the IT platform where the execution occurs, thus including various factors of potential energy waste. We are currently refining (in line with proposals in the literature such as [8, 9]) both the GPI, by defining metrics for resource (CPU, memory, disk, and so on) usage evaluation, and the metrics regarding the application lifecycle, such as quality factors of the employed development platform, data redundancies, and possible savings in consumables and environmental resources needed by the applications. We are developing a prototype that performs data mining on application executions to derive information about energy consumption and hence adjust resource usage at design and run time to achieve adaptation.

Acknowledgements This work is supported by the GAMES project6 and partly funded by the European Commission's IST activity of the 7th Framework Program, under Contract n. ICT-248514. The opinions expressed in this work are those of the authors and not necessarily of the European Commission. The Commission is not liable for any use that may be made of the information in this work.
References
1. Chase, J.S., Anderson, D.C., Thakar, P.N., Vahdat, A.M., and Doyle, R.P. (2001). Managing Energy and Server Resources in Hosting Centres, SIGOPS Oper. Syst. Rev., 35(5), 103–116.
2. Bertoncini, M., Pernici, B., Salomie, I., and Wesner, S., GAMES: Green Active Management of Energy in IT Service Centers, CAiSE Forum, Hammamet, Tunisia, June 2010.
3. David, F., Strategic Management, Merrill Publishing Company, 1989.
4. Zeng, L., Benatallah, B., Dumas, M., Kalagnanam, J., and Sheng, Q., Quality Driven Web Services Composition, In Proc. 12th WWW Conference, 2003.
5. Fugini, M.G., Gangadharan, G.R., and Pernici, B., Designing and Managing Sustainable IT Service Systems, APMS 2010 Intl. Conf. on Competitive and Sustainable Manufacturing, Products and Services, Cernobbio, Italy, October 2010.
6. van Bon, J., de Jong, A., Kolthof, A., Pieper, M., Rozemeijer, E., Tjassing, R., van der Veen, A., and Verheijen, T., IT Service Management - An Introduction, Van Haren Publishing, 2007.
7. Brown, D., and Reams, C., Toward Energy-Efficient Computing, Comm. of the ACM, 53(3), March 2010.
8. Sekiguchi, S., Itoh, S., Sato, M., and Nakamura, H., Service Aware Metric for Energy Efficiency in Green Data Centers, 2009.
9. Nie, Z., Jiang, Z., Liu, J., and Yang, H., Performance Analysis of Generalized Well-formed Workflow, 8th IEEE/ACIS Int. Conf. on Computer and Information Science (ICIS 2009), Shanghai, June 2009, 666–671.
6 http://www.green-datacenters.eu/.
The Evaluation of IS Investment Returns: The RFI Case Alessio Maria Braccini, Angela Perego, and Marco De Marco
Abstract Today, CIOs and IS departments in general are struggling to find a framework to evaluate the performance and return of their IS investments. Notwithstanding a long research tradition on the business value impact of IS, the identification of the returns on IS investments is still an open issue. Even though a consistent body of literature has examined the problem over more than 20 years, IS business value research has so far produced a plethora of theoretical contributions with few practical applications. Starting from the assumption that real-world experiences differ from theoretical explanations, and with the intent to contribute to the IS business value research field by bringing evidence from practice, this paper presents a case of an IS Performance Management System used to assess the value delivered by IT in RFI (Rete Ferroviaria Italiana), the manager of the Italian railroad infrastructure.
Introduction

The assessment of the real contribution of IT resources and Information Systems (IS) to the firm, in terms of business value, is the core of a wide and intense debate that has engaged, and still engages, both academics and practitioners for years. Interest in the debate has increased even though the conclusions of several studies in this area have not been able to confute Robert Solow's famous remark: "we see
A.M. Braccini, Universita` LUISS Guido Carli, Rome, Italy; e-mail: [email protected]
A. Perego, SDA Bocconi, Milan, Italy; e-mail: [email protected]
M. De Marco, Universita` Cattolica del Sacro Cuore, Milan, Italy; e-mail: [email protected]
computers everywhere except in the productivity statistics" [1], nor Nicholas Carr's affirmation: "IT doesn't matter" [2]. Starting from that, several researchers have tried to examine the relationship between IT/IS investments and the business value they are supposed to deliver. Despite these efforts, the identification of measures to assess the efficiency of IT/IS investments in terms of value is still an open issue. This lack of knowledge increases the difficulties firms face in evaluating the value performance of IT/IS. In many cases firms implement IT/IS Performance Management Systems (PMS) even though they cannot appropriately evaluate the results in economic terms. Confirming this, a survey by Gartner [3] reveals that PMS is a high priority for CIOs, but at least half of the companies that implement a PMS in the next two years will fail to realize its full benefits. In light of the depicted scenario, this research paper contributes a description of an IT/IS PMS applied in a large Italian firm, together with an analysis of the factors leading to its successful implementation. The structure of the paper is as follows: the next section analyzes relevant literature on the topic addressed in this paper; the research design is then described, immediately followed by the case description; the case is discussed in a further section; some final remarks regarding the findings, the limitations, and future research plans conclude the paper.
Research Background

The contribution that IT/IS investments can deliver to the organization in terms of business value is a research stream that, in spite of more than 20 years of studies, has still not generated enough consensus among contributions [4–7]. A relevant milestone in this research stream is the identification of the so-called "IT productivity paradox" by Brynjolfsson [8], suggesting that traditional productivity measures may not be appropriate to estimate the value outcomes of IT/IS investments. After Brynjolfsson, several other researchers have tried to examine the relationship between investments in IT/IS and their effect on organizational performance. A plethora of different research methodologies has been adopted in this research stream, which also shows contributions coming from several disciplines such as economics, strategy, accounting, operational research and, of course, information systems [9]. Among the approaches tried by researchers, the theory of production has been particularly useful in conceptualizing the process of production and providing empirical specifications enabling an estimation of the economic impact of IS [10]. Researchers have also employed growth accounting [11], consumer theory [12, 13], data envelopment analysis [14], conversion effectiveness [15, 16], and divergent theoretical frameworks [4]. Melville et al. [9], by means of a theoretical model, identified that investigations on IT Business Value have been carried out at three different levels of analysis. They call these levels (from the broader to the narrower): macro environment, competitive
environment, and focal firm. The macro environment roughly corresponds to investigations performed at the level of a whole economy. The competitive environment corresponds to the industry, while the focal firm focuses only on a single firm, or a part of it. The impact of IT resources in terms of Business Value has proven to be different at the three levels. In brief, it can be said that "the more detailed the level of analysis, the better the chance to detect the impact, if any, of a given technology" [17, p. 275]. Many economy-level studies [18, 19] observed a negative relationship between technology-related variables and performance. At the industry level, on the other hand, the results are mixed, with some studies documenting a positive impact of IS [20, 21] while other studies detect no significant advantage from IS investments [22, 23]. Finally, at the more detailed firm level, many studies present results that indicate a positive relationship between technology and performance [24–30]. A good summary of the current state of the art of research on IT Business Value is provided by Kohli and Grover [6] who, on the basis of an extensive literature review, affirm that: (1) IT does create value: a consistent mass of studies demonstrates a relationship between IT and some aspect of firm value; (2) IT creates value under certain conditions: to produce value, IT must be part of a business value creation process where it interacts with other IT and organizational resources; (3) IT-based value manifests itself in many ways: studies have shown that IT value can reveal itself in several ways, such as productivity improvements, business process improvements, profitability, consumer surplus, or advantages in the supply chain; (4) IT-based value could be latent: there exists a difference between creating value and creating differential value; (5) there are numerous factors mediating IT and value: there might be latencies in the manifestation of IT value; (6) causality for IT value is elusive: it is difficult to fully capture and properly attribute the value generated by IT investments. Notwithstanding these efforts, research in the IT Business Value field is far from a conclusion. Among several different necessities, there is the need for methodologies or tools to justify investments in IT on a rigorous basis, and not only in an emotional and/or empathic way. There is therefore the need for studies that propose practically applicable frameworks and methodologies, something that has also been identified as a limitation of several previous studies [31]. This paper attempts to contribute to filling this gap by describing the way Rete Ferroviaria Italiana (RFI) approached the evaluation of IT/IS investment returns.
Research Design

The method used for the analysis is a case study, which is useful to examine a phenomenon in its natural setting. The unit of analysis is the company Rete Ferroviaria Italiana (RFI), which is part of the Ferrovie dello Stato group and is the manager of the Italian railroad infrastructure. RFI is responsible for the tracks, the train stations, and the installations of the Italian railroad infrastructure. RFI ensures access to
the Italian railroad network, manages the investments for the upgrading and improvement of railway lines and installations, performs maintenance, ensures safe circulation on the whole network, and develops and deploys the technology for the systems and materials it uses in its daily activities. The RFI case is analyzed with the aim of identifying key factors that can contribute to a successful implementation of a methodology to assess IT/IS investment returns in terms of value. Therefore, the focus of the analysis is on the way RFI implemented an IT PMS and on the actions it performed to be successful.
Case Description

In the last few years, the rise in IT expenditures in RFI has put pressure on the IT department to evaluate IT investment returns and demonstrate the IT contribution to business value. With the aim of answering this request from the top management, the CIO launched a project targeted at developing a system that can support RFI in evaluating IT investments and in ensuring that the organization realizes optimal value from IT-enabled business investments at an affordable cost, with a known and acceptable level of risk. The project started at the end of 2009 and its focus was both on the investment decision-making process and on its execution. As a matter of fact, the new system should enable RFI to: (1) increase the understanding and transparency of costs, risks, and benefits; (2) increase the probability of selecting investments with potentially high returns; (3) reduce the risk of failure; (4) reduce the likelihood of unexpected events relative to IT cost and delivery. In order to support the decision-making process, RFI applied a methodology called Economic Value Creation (EVC), which evaluates the IT investment return by calculating: (1) Financial Metrics; (2) Benefit Metrics; (3) Cost Metrics; (4) Risk Metrics. The Financial Metrics consist of the best-known and most widespread ones: Return on Investment (ROI), Payback Period, Net Present Value (NPV), Internal Rate of Return (IRR), and Value Added. The calculation of the Benefit Metrics starts with the mapping of the initiative's features and capabilities to operational benefits. Then the analysis of the operational causes and effects is necessary to determine the means of quantification of each benefit. Finally, each benefit is linked to a measure with its calculation algorithm. Benefits considered in the methodology are both qualitative (non-cash benefits) and quantitative (cash benefits). In particular, cash benefits are divided into four profit-impact types: Cost Reduction, Cost Avoidance, Revenue Increase, and Revenue Protection. The Cost Metrics assess the capital and non-capital expenses, as well as the initial and the ongoing expenses. Finally, to assess the Project Risk, the previous metrics are calculated in three different scenarios: worst case, most likely, and best case. This allows executives to examine the forecasted results in terms of various "what if" scenarios.
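For illustration, the following Python sketch (our own, not RFI's EVC tool) computes three of the financial metrics used in the methodology for a single cash-flow scenario; the figures are invented, and the computation could be repeated for the best, most likely, and worst cases.

```python
def npv(rate, cash_flows):
    """Net Present Value; cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def simple_roi(total_benefits, total_costs):
    """Simple ROI = (benefits - costs) / costs."""
    return (total_benefits - total_costs) / total_costs

def payback_months(monthly_net_flows, initial_outlay):
    """Months until cumulative net flows recover the outlay; None if never."""
    cumulative = 0.0
    for month, flow in enumerate(monthly_net_flows, start=1):
        cumulative += flow
        if cumulative >= initial_outlay:
            return month
    return None

# Illustrative "most likely" scenario (made-up figures, not RFI's)
flows = [-6_000_000] + [900_000] * 10            # initial cost, then yearly net benefits
print(f"NPV: {npv(0.08, flows):,.0f} EUR")
print(f"ROI: {simple_roi(9_000_000, 6_000_000):.0%}")
print(f"Payback: {payback_months([160_000] * 48, 6_000_000)} months")
```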
Table 1 Metrics of P2 Project

| Metric | Best case | Most likely case | Worst case |
|---|---|---|---|
| Simple ROI | 121% | 50% | 22% |
| Payback | 19 Months | 38 Months | None |
| NPV (net present value) | €4,561,801 | €1,767,015 | €1,730,009 |
| IRR (internal rate of return) | 79% | 46% | N/A |
| Added value | €6,312,494 | €2,594,894 | €1,876,279 |
| Risk of not making investment | €0,00 | €0,00 | €0,00 |
| TBO (total benefit of ownership) | €16,704,000 | €11,088,000 | €5,472,000 |
| Cash only benefits | €0,00 | €0,00 | €0,00 |
| TCO (total cost of ownership) | €5,528,176 | €6,028,176 | €6,528,176 |
| Cumulative cash flow | €6,689,494 | €3,017,894 | €1,405,279 |
In order to verify the effectiveness and applicability of the methodology, the project team applied it to two projects: an ongoing project started in 2002 (the P1 Project), and a newly started project (the P2 Project). The P1 Project attempts to optimize and improve the current regulation system through the centralization of information related to infrastructure management and the rationalization of local resources. The aim of the P2 Project, instead, is to support Timetable Planning through the implementation of a simulation system integrated with the RFI infrastructure database. The project team also involved executive representatives, who were thus able to obtain a better understanding of the assumptions, data, and formulas used to calculate each benefit and cost of the initiative. Table 1 shows, as an example, the results of the application of the methodology to the P2 Project. The difference between the best and worst case gives a measure of the project risk and of the ability to steer towards the best case under certain conditions, exploiting opportunities during the project life cycle or the transition into operations. The project metrics therefore provide a tool for project risk management, according to the assumptions on which the projected cases are based. The second part of the initiative focused on the monitoring of project execution, and RFI was especially interested in: (1) comparing actual project performance with the project management plan; (2) assessing performance to determine whether any corrective or preventive actions were necessary; (3) monitoring project risks to make sure that they were identified and that appropriate response plans were being executed; (4) maintaining an accurate, timely information base concerning project outputs; (5) providing information to support status reporting, progress measurement, and forecasting; (6) providing forecasts to update current costs; (7) monitoring the implementation of approved changes when and as they occur. The result of this part is the Project Monitoring Dashboard, which provides RFI with six key indicators to check whether: (1) cost is under control; (2) the schedule is under control; (3) scope is managed; (4) quality is managed; (5) actions are monitored; (6) risks are under control. The Project Monitoring Dashboard therefore provides RFI with information on issues and on the actions to perform in order to maximize the expected benefits, according to the project business case.
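The six dashboard indicators lend themselves to a simple traffic-light aggregation; the sketch below is our own illustration of such a check (field names included), not RFI's actual dashboard.

```python
# Hypothetical status record for one project; field names are ours.
dashboard = {
    "cost_under_control": True,
    "schedule_under_control": False,
    "scope_managed": True,
    "quality_managed": True,
    "actions_monitored": True,
    "risks_under_control": True,
}

alerts = [name for name, ok in dashboard.items() if not ok]
status = "GREEN" if not alerts else ("YELLOW" if len(alerts) == 1 else "RED")
print(status, alerts)  # YELLOW ['schedule_under_control']
```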
The initiative finished in April 2010, and the high satisfaction with its results has driven RFI to start studying how to turn this application into a strategic tool for managing the IT Project Portfolio.
Discussion

The experience of the RFI case shows that the need for an effective method to evaluate the benefits delivered by IT/IS investments in terms of business value emerges when the amount of these investments grows in size, so that the top management starts to wonder what the actual return on them is. As a result of this need, in the RFI case the company started to address the problem by adopting an evaluation perspective, trying to identify an approach suitable to evaluate the possible benefits achievable from its IT/IS investments. The application of the methodology described in the previous section for the evaluation of the benefits of IT/IS investments has been judged by RFI itself to be a success. The method identified has been applied to two projects, with two main aims: first, to demonstrate the feasibility of the approach, and second, to identify the possible informative outcomes stemming from the performance measurement system to be implemented. One of the critical elements that emerged in the RFI case, and that contributed to the success of the initiative, has been the full commitment of both IT and top management since the very beginning of the initiative. As a matter of fact, both the IT side and the business side were involved in the definition of the methodology to measure quantitative and qualitative benefits from the IT investments. By doing so, they developed a sense of responsibility for the final success of the initiative. Another relevant element of the RFI case is the approach followed during the implementation of the performance evaluation system. From the description of the case, two steps can be identified: a first step where RFI aimed just at assessing the benefits of the IT investments, and a second step where the Project Monitoring Dashboard was developed to monitor ongoing progress. From this point of view, the RFI initiative is not just an ex-ante or ex-post evaluation exercise, but a managerial system that provides information regarding IT/IS investments ex-ante (in terms of expected benefits), during the project (thanks to the usage of the Project Monitoring Dashboard), and ex-post.
Conclusion

This paper describes a case of the implementation of a PMS in RFI, a large Italian company managing the Italian railway infrastructure. The paper describes a practical implementation of a PMS that is suitable to help the company identify the returns, in terms of value, of IT/IS investments.
The case described in this paper has indicated the following key aspects as necessary for a successful implementation of a performance measurement system: (1) the awareness of the importance of evaluating and assessing the return on IT/IS investments in terms of value; (2) the commitment of both the IT management and the top management; (3) the pilot approach, testing the feasibility of the initiative and the informative potential of the methodology to be adopted before a full-scale implementation; and (4) the involvement of the business side in the evaluation of IT/IS investments. The methodology adopted to evaluate the benefits, in terms of value, of the IT investments in these projects has so far been implemented at the level of a pilot experience. As already mentioned, the success of these pilot experiences convinced RFI to implement this methodology as an operational tool to support IT/IS investment planning and portfolio management. Since this step has not been completed, this could be a partial limitation of the current research. To overcome this limitation, and to deepen the understanding of the case, further research activities will be planned in the near future.
References
1. Solow R.M. (1987) We'd better watch out, New York Times Book Review.
2. Carr N. (2003). IT doesn't matter, Harvard Business Review, 81(5): 41–49.
3. Gartner (2009). EXP Worldwide Survey of More than 1,500 CIOs Shows IT Spending to Be Flat in 2009. Online at http://www.gartner.com/it/page.jsp?id=855612, Accessed 4.21.2010.
4. Oh W., Pinsonneault A. (2007). On the assessment of the strategic value of information technologies: conceptual and analytical approaches. MIS Quarterly, 31(2): 239–265.
5. Tallon P.P. (2007) Does IT pay to focus? An analysis of IT Business Value under single and multi-focused business strategies, Journal of Strategic Information Systems, 16(3): 278–300.
6. Kohli R., Grover V. (2008). Business Value of IT: an essay on expanding research directions to keep up with the times, Journal of the Association for Information Systems, 9(1): 23–39.
7. Scheepers H., Scheepers R. (2008). A process-focused decision framework for analyzing the Business Value potential of IT investments, Information Systems Frontiers, 10(3): 321–330.
8. Brynjolfsson E. (1993). The Productivity Paradox of IT, Communications of the ACM, 36(12): 66–77.
9. Melville N., Kraemer K.L., Gurbaxani V. (2004). Review - Information Technology and organizational performance: an integrative model of IT Business Value. MIS Quarterly, 28(2): 283–322.
10. Brynjolfsson E., Hitt L. (1995). Information Technology as a Factor of Production: The Role of Differences Among Firms, Economics of Innovation and New Technology, 3(4): 183–200.
11. Brynjolfsson E., Hitt L. (2003). Computing Productivity: Firm-Level Evidence, Review of Economics and Statistics, 85(4): 793–808.
12. Brynjolfsson E. (1996) The Contribution of Information Technology to Consumer Welfare, Information Systems Research, 7(3): 281–300.
13. Hitt L., Brynjolfsson E. (1996). Paradox Lost? Firm-Level Evidence on the Returns to Information Systems Spending, Management Science, 42(4): 541–558.
14. Lee B., Barua A. (1999). An Integrated Assessment of Productivity and Efficiency Impacts of Information Technology Investments: Old Data, New Analysis and Evidence, Journal of Productivity Analysis, 12(1): 21–43.
15. Weill P. (1992). The relationship between investment in information technology and firm performance: A study of the valve manufacturing sector, Information Systems Research, 3(4): 307–333.
16. Soh C., Markus M.L. (1995). How IT creates business value: a process theory synthesis, In Ariav G., Beath C., DeGross J.I., Hoyer R., Kemerer C.F. (Eds), Proceedings of the 16th International Conference on Information Systems, ACM, Amsterdam, pp. 29–42.
17. Kohli R., Devaraj S. (2003). Measuring information technology payoff: A meta-analysis of structural variables in firm-level empirical research. Information Systems Research, 17(3): 198–227.
18. Roach S. (1987). America's technology dilemma: A profile of the information economy. Special Economic Study, Morgan Stanley, New York.
19. Morrison C., Berndt E. (1991). Assessing the productivity of information technology equipment in U.S. manufacturing industries. National Bureau of Economic Research working paper no. 3582, Washington, D.C.
20. Kelley M. (1994). Productivity and information technology: The elusive connection. Management Science, 40(11): 1406–1425.
21. Siegel D., Griliches Z. (1992). Purchased services, outsourcing, computers, and productivity in manufacturing. In Griliches Z. (ed.) Output Measurement in the Service Sectors. University of Chicago Press, Chicago, 429–458.
22. Berndt E., Morrison C. (1995). High-Tech Capital Formation and Economic Performance in U.S. Manufacturing Industries: An Exploratory Analysis, Journal of Econometrics, 65: 9–43.
23. Koski H. (1999). The implications of network use, production network externalities and public networking programmes for firm's productivity. Research Policy, 28(4): 423–439.
24. Diewert E.W., Smith A.M. (1994). Productivity measurement for a distribution firm. National Bureau of Economic Research working paper no. 4812, Washington, D.C.
25. Hitt L., Brynjolfsson E. (1995). Productivity, business profitability, and consumer surplus: Three different measures of information technology value. MIS Quarterly, 20(2): 121–142.
26. Dewan S., Min C. (1997). The Substitution of Information Technology for Other Factors of Production: A Firm Level Analysis, Management Science, 43(12): 1660–1675.
27. Menon N.M., Lee B., Eldenburg L. (2000). Productivity of information systems in the healthcare industry. Information Systems Research, 11(1): 83–92.
28. Devaraj S., Kohli R. (2000). Information technology payoff in the healthcare industry: A longitudinal study. Journal of Management Information Systems, 16(4): 41–67.
29. Lee H., Choi B. (2003) Knowledge Management Enablers, Processes, and Organizational Performance: An Integrative View and Empirical Examination, Journal of Management Information Systems, 20(1): 179–228.
30. Aral S., Brynjolfsson E., Wu D.J. (2006). Which Came First, IT or Productivity? The Virtuous Cycle of Investment and Use in Enterprise Systems, Twenty-Seventh International Conference on Information Systems, 1819–1840.
31. Leem C.S., Yoon C.Y., Park S.K. (2004). A process-centered IT ROI analysis with a case study, Information Systems Frontiers, 6(4): 369–383.
Part V
Systemic Approaches to Information Systems Development and Design Methodologies B. Pernici and A. Carugati
Information systems development and design methodologies cover the information system development life cycle from its early inception to its realization and use. In the information systems area, the focus is mainly on the initial phases of information system development and design, starting from the initial strategic planning, through the phases of requirements collection and analysis, to the design of the enterprise information architecture. The aim of this section is to present recent research results developed in the Italian context. The three papers of this section focus on the phases of strategic planning, enterprise architecture development, and requirements elicitation, and on the transition from requirements to design.

The first paper, on legal issues in eGovernment services planning, considers the influence of strategic planning on the development of enterprise architectures, targeting in particular the domain of service provisioning in e-government. Within the eG4M framework, developed by the authors to provide guidelines for the definition of appropriate models for public administration enterprise architectures, the contribution of the paper lies in considering legal issues in strategic planning.

The second paper analyzes the transition from strategic to conceptual information modelling. The paper proposes a methodology for the elicitation and modelling of strategic information requirements. A framework for elicitation is proposed to identify information classes in enterprises, which the authors validate in a real case study. The contribution of the paper is towards a systematic mapping of strategic information entities onto conceptual entities in entity-relationship schemas.

The third paper is on use case double tracing, linking business modelling to software development in a strictly model-driven engineering approach. The work focuses on linking business modelling and system modelling. Based on an extension of use cases in UML to support business modelling activities, the authors propose a "double tracing" mechanism to establish links between business requirements and the software solution to be developed.

In conclusion, this section includes original and promising research results towards a systematic development of information systems, based on model-driven approaches. Particular attention is paid to providing design guidelines of general applicability and to tracking design decisions. The approaches, validated in real case studies, lay a basis for innovative directions in information systems development and design methodologies.
Legal Issues in eGovernment Services Planning G. Viscusi and C. Batini
Abstract Planning activities are a relevant instrument to carry out sustainable and valuable eGovernment initiatives. The set of expertise needed for the design of eGovernment systems ranges from social to legal, economic, organizational, and technological perspectives, which have to be faced within a single framework. The aim of the eG4M framework is to bring out these issues with an integrated approach. In this paper we focus in particular on legal issues in the strategic planning phase, aiming to show their relevance for the choice of appropriate solutions in terms of legal framework and enterprise architecture for service provision. The paper provides an example of application of the framework based on experiences carried out in Italy.
Introduction

In recent years, we have witnessed a new phase in eGovernment called transformational government [1], focused on the reuse of available solutions and systems and on what in the private sector is defined as enterprise architecture. The focus on back-office improvement and enterprise architecture is considered the strategic way to create more efficient and customer-centric public services. The change of focus towards back-office processes calls for (1) a modelling activity encompassing the alignment between the technological facets of information systems and the organizational, economic, and legal facets (among others); (2) flexible and modular methodologies allowing the planning activity to be adapted to different social and legal contexts. The aim of the eG4M framework [2] is to bring out these issues with an integrated approach to strategic and operational planning. The proposed framework aims to provide guidelines which consider the relevance of the definition of appropriate models for public administration enterprise architecture. These guidelines aim to
G. Viscusi and C. Batini
Department of Informatics, Systems and Communication (DISCo), University of Milano-Bicocca, Milano, Italy
e-mail: [email protected]; [email protected]
support IT strategy alignment analyses in the public sector and the modelling of the evolution of public sector information systems on the basis of compliance with existing laws and rules. The paper focuses on considering legal issues in strategic planning by providing an example of the application of two steps of the eG4M framework, i.e. the state reconstruction and assessment steps, to real experiences carried out in Italy.
Related Work

Rules have a relevant role in social activity, and consequently in eGovernment initiatives involving and impacting institutions and the related social environment. As pointed out in [3], institutions are nested and coevolve together with their linkages, where routinization and repetition, as rule-based actions for their social construction, can be the source of change when related to the reinterpretation of rule-based roles. Thus, a major issue in eGovernment planning is to provide access to the systems of rules of the institutions involved, in order to improve their interpretation and to focus on the constraints they introduce in the planning of further initiatives. In the state of the art, various types of rules have been proposed on the basis of their orientation, namely rules as solution-guiding mechanisms, rules as behaviour-controlling mechanisms, or rules as behaviour-constraining mechanisms [4]. In particular, rules are relevant in system design methodologies aiming to analyze and provide solutions to soft problems [5], such as the ones impacting eGovernment initiatives as institutional technology-based interventions. In structuration theory, structures as resources and rules mediate social action through three modalities, i.e. facilities, norms, and interpretative schemes [6–8]; the latter, through their instantiation by social actors, enact a reconstitution of the resources and rules that structure social action. A structure, indeed, is a relational effect that recursively generates and reproduces itself. Following Giddens [6], we distinguish between rules of social life applied in the enactment/reproduction of social practices and formulated rules. The latter characterize a bureaucracy, such as a public administration, where laws are both rules and resources that define roles of action through the attribution of power and the imposition of duties [9]. Indeed, a legal system can be considered as a system of rules [9], where rules can be classified in terms of primary rules, which express rules of conduct, and secondary rules, which define the roles of the civil servants who have to administer the rules of conduct [9]. A complementary distinction is made between regulative norms, which describe obligations, prohibitions, and permissions, and constitutive norms, which regulate the creation of institutional facts like property or marriage, as well as the modification of the normative system itself [10]; the latter are related to secondary rules. Furthermore, laws have an impact on the effectiveness of investments in Information and Communication Technologies (ICTs), on the redesign of administrative procedures defined
by laws, and on service provision processes. Among the tools aiming to help governments assess the impact of regulation, Regulatory Impact Analysis (RIA) has been widely adopted in OECD countries, also in the context of eGovernment initiatives, in particular to reduce the regulatory burden [11, 12]. A key feature of RIA is the consideration of the potential economic impact of regulatory proposals [13–15]. Nevertheless, in the state of the art the impact of laws on eGovernment project planning has seldom been investigated. The eG4M framework aims to consider legal issues in the planning of eGovernment initiatives by means of a methodology composed of two main phases: (1) strategic planning, and (2) operational planning. The strategic planning phase is composed of four main steps: (1) eGovernment vision elicitation, (2) state reconstruction, (3) assessment, and (4) definition of priority services and value targets (the latter introducing the operational planning phase). In the following we apply the methodology to an example related to the provision of change-of-residency services by Italian public administrations; the application is limited to the state reconstruction and assessment steps of the methodology. The aim is to show how the methodology supports (1) the identification of the main rules which govern service provision, and (2) the choice of the technological or legal solutions that have to be developed.
Reconstruction of the Context of Intervention

The state reconstruction step provides the analyst with a clear and structured representation of the overall context of intervention. The first activity of the state reconstruction considers the services to be developed together with the related laws. Laws usually provide an abstract specification of the administrative processes; in particular, the public administrations involved in service provision are defined. The analysis shows the different roles by law of the public administrations involved in service provision, together with the ownership of the official registries and archives. Furthermore, the results provide a first representation of the up-to-date status of the legal framework, considering strategic issues for the provision of electronic services, such as laws on digital archives, on the legal validity of digital documents, on the exchange of electronic data between administrations, and on advanced support for authentication in the access to public administration web sites or desks (such as, e.g., smart cards, digital signatures, etc.). Considering in particular the certification of residency, it is worth noting that in the Italian case (1) the Municipality is the owner of the registry office (Law 1228/1954); (2) the Ministry of the Interior is in charge of the national record of the registry office (Law 470/1988); (3) electronic data and documents have legal validity by law (Law 59/97). Other relevant laws are related to driving licence provision, influencing other services; these laws establish (1) the creation and ownership of digital archives and
registries (in the example, digital archives for vehicles, civil status, and a national record of the registry office); (2) the obligation for local public administrations to exchange data in electronic format. A relevant issue concerns the public administration which by law has the ownership of the service provision; in order to choose the most suitable eGovernment initiative, a preliminary requirement is to represent the flows of information for service provision and the related ownership. To this end, we have to detail the roles of the involved organizations for the different types of information exchanged. They are: (1) governs, namely the public administration controls the management of information in service provision, assuring the correctness and accountability of the procedure, and maintaining and preserving the data used in the information flows; (2) certifies, i.e. the public administration is responsible by law for the certification and provision of the related information; (3) provides, i.e. the public administration, or a delegated private actor, physically provides the services and the related information; (4) uses, i.e. the public administration (or other actors) uses the information to accomplish further activities related to service provision. The representation of the types of information exchanged and of the roles of the involved organizations highlights governance constraints in service provision, both at the technological and at the organizational level. In our example, the Ministry of Finance certifies information both for residency and for health card provision, and co-governs with the Ministry of Health the health card provision information flow, whereas the Ministry of the Interior governs the residency information flow. Consequently, the analyses carried out in state reconstruction suggest a common initiative for the certification of residency and health card provision, where the overall governance is led by the Ministry of Finance. Another interesting analysis concerns the ownership of databases and shows, for each type of information in the example, (1) the database in which the information is represented, and (2) the public administration owning the database. It is worth noting that the by-law relationships between different databases can be retrieved from previous analyses, where, e.g., Law 1228/1954 establishes the institution by the Ministry of the Interior of the national record of the registry office, and Decree 437/1999 establishes the role of the Ministry of Finance for the electronic health card and the ownership of health data by regional/local health authorities. The state reconstruction shows the degree of complexity of the governance of databases and data flows in the case of the hypothesized design of a common initiative for the certification of residency and health card provision. In the following, we discuss the assessment step.
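To make the role analysis concrete, the sketch below (our own illustration, not part of eG4M; the matrix encodes the role assignments described above) records which administration holds which role on each information flow and retrieves the governance of a flow.

```python
# Role matrix: (administration, information flow) -> set of roles held.
roles = {
    ("Ministry of Finance", "residency"): {"certifies"},
    ("Ministry of Finance", "health card"): {"certifies", "governs"},
    ("Ministry of Health", "health card"): {"governs"},
    ("Ministry of the Interior", "residency"): {"governs"},
    ("Municipality", "residency"): {"provides"},
}

def governance_of(flow):
    """Administrations that govern a given information flow."""
    return [adm for (adm, f), held in roles.items() if f == flow and "governs" in held]

print(governance_of("health card"))  # ['Ministry of Finance', 'Ministry of Health']
```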
Assessment of the Legal Framework

The assessment step aims to evaluate the potential impact of the current legal framework on eGovernment initiatives; in particular, the assessment considers (1) the impact of the current legal framework, (2) the coherence of ICTs with the legal framework, and (3) the quality of the legal framework. We now discuss each of the above issues. In order to express the impact at the organizational and technological level, we define five possibilities:
- Enables technological change, when an existing law facilitates technological innovation. For example, a law that explicitly mentions wrapper technologies for the reuse of legacy systems.
- Enables organizational change, when an existing law facilitates organizational innovation. For example, in the UK the Cabinet Office's 2002 privacy and data sharing report added data-sharing gateway clauses to a number of laws, making data available to various agencies [16].
- Brakes the technology, when an existing law inhibits technology and consequently has to be cancelled/modified. An example is an internal rule of an organization that forbids the use of a Publish and Subscribe technology.
- Bounds the organization, when existing laws constrain design choices in the architecture of processes. For example, a law does not allow the sharing of data between administrations.
- Innovates, when a law has to be introduced in order to enable technological/organizational change.
Moreover, the impact evaluation considers the concrete level of enforcement of the law: e.g., the loose enforcement of a good enabling law leads to a negative effect on the considered dimensions. In Table 1 we show the impact analysis for the example. It is worth noting that the loose enforcement of the laws for digital signature is a critical issue, requiring innovation at the technological and organizational level. The impact of laws is only one side of the problem; the other side considers the coherence of the currently adopted technology with existing laws and its operating status. In order to analyse the relationship between the current legal framework and the existing technologies to be adopted in eGovernment initiatives, we define a new matrix, shown in Table 2. In Table 2, digital signature is a relevant technology for (1) Law 59/97, which introduces the legal validity of electronic documents, and (2) Decree 396/2000 with the force of law, which introduces the obligation for local public administrations to exchange data in electronic format through the national public network. Nevertheless, digital signature technology is not really operating, and this issue can be influenced by the loose enforcement of the laws for digital signature resulting from Table 1. Taking these issues into account, the quality assessment of the legal framework focuses on the quality dimensions that influence the enforcement of the laws on digital signature; in the example, the enforcement of these laws influences the application of digital signature technology at the technological level.
Table 1 Types of impact of laws and their enforcement status

| Legal framework | Organizational impact | Technological impact | Enforcement status |
|---|---|---|---|
| Law 59/97 | Enables | Enables | Strong |
| Decree 437/1999 | Bounds | Enables | Strong |
| Laws for digital signature | Innovates | Innovates | Loose |
| Decree with the force of law 396/2000 | Enables | Enables | Strong |
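A possible machine-readable reading of Table 1 is sketched below (our own illustration, not part of eG4M): laws typed as innovating, or enabling laws with loose enforcement, are flagged for attention.

```python
# Hypothetical encoding of Table 1; field names are ours, not eG4M's.
laws = [
    {"law": "Law 59/97", "org": "Enables", "tech": "Enables", "enforcement": "Strong"},
    {"law": "Decree 437/1999", "org": "Bounds", "tech": "Enables", "enforcement": "Strong"},
    {"law": "Laws for digital signature", "org": "Innovates", "tech": "Innovates", "enforcement": "Loose"},
    {"law": "Decree with the force of law 396/2000", "org": "Enables", "tech": "Enables", "enforcement": "Strong"},
]

for entry in laws:
    innovation_needed = "Innovates" in (entry["org"], entry["tech"])
    enabling_but_loose = entry["enforcement"] == "Loose" and "Enables" in (entry["org"], entry["tech"])
    if innovation_needed or enabling_but_loose:
        print(f"Attention required: {entry['law']}")
```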
Table 2 Laws vs. enabling technologies (AS-IS)

| Legal framework | Digital signature technologies | Centralized DBMS | Distributed DBMS | Publish and subscribe | Channel technology |
|---|---|---|---|---|---|
| Law 59/97 | Relevant for law: Yes; Operating: No | Relevant for law: Yes; Operating: Yes | – | – | – |
| Decree with the force of law 396/2000 | Relevant for law: Yes; Operating: No | – | Relevant for law: Yes; Operating: No | Relevant for law: Yes; Operating: No | Relevant for law: Yes; Operating: No |
| Decree 437/1999 | – | – | Relevant for law: Yes; Operating: No | Relevant for law: Yes; Operating: No | Relevant for law: Yes; Operating: No |
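The Table 2 analysis can likewise be read mechanically; the sketch below (ours, not part of eG4M) flags the critical combinations where a technology is relevant for a law but not operating.

```python
# Hypothetical encoding of (part of) Table 2; structure is ours, not eG4M's.
coherence = {
    ("Law 59/97", "Digital signature"): (True, False),           # (relevant, operating)
    ("Law 59/97", "Centralized DBMS"): (True, True),
    ("Decree 396/2000", "Digital signature"): (True, False),
    ("Decree 396/2000", "Distributed DBMS"): (True, False),
    ("Decree 396/2000", "Publish and subscribe"): (True, False),
    ("Decree 437/1999", "Publish and subscribe"): (True, False),
}

# A technology relevant for a law but not operating signals a coherence gap.
for (law, tech), (relevant, operating) in coherence.items():
    if relevant and not operating:
        print(f"Coherence gap: {tech} required by {law} but not operating")
```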
Qualities considered in the methodology concern services, organization/processes, legal framework, ICT infrastructure, and data layers; they belong to four general categories. For clarity we refer to services as an example (see also [2]):
- Efficiency is the ratio between the output related to a service provision and the amount of resources required to produce the output.
- Effectiveness is the closeness of the provided service to the user's expectations and needs.
- Accessibility is the ease of service request and use in terms of the available resources, and the user-friendliness of the interactions.
- Accountability is the assumption of responsibility for actions, products, decisions, and policies of the administration. It includes the obligation to report, explain, and be answerable for the resulting consequences of service provision.
As for the legal framework, quality dimensions can be defined for the whole legal framework, for specific laws or parts of laws, or for a set of laws referring to a specific domain. We now focus on two of the quality dimensions for the legal framework, namely completeness and accountability. With reference to completeness, we assign a "low" level to the legal framework, due to the already noted scarcity in the definition of rules for digital signature. The level of completeness can be improved by enriching the legal framework with:
l l
A law which introduces the legal validity of electronic documents with legal signature. A law which defines rules and guidelines for the digital signature. A decree with the force of law that introduces the obligation for local public administrations to exchange data in electronic format through the national public network by adopting the digital signature.
Furthermore, we also assign a "low" level to accountability, since the considered Law 59/97 and the Decree with the force of law 396/2000 do not define the public administration(s) or agencies that have responsibility for the control of the legal requirements, the validity of the information flows, and of
the data/documents exchanged. Higher levels of accountability can be achieved, e.g., by assigning the responsibility for the enforcement of the laws on data exchange to a central agency for the legal validity of electronic documents and the digital signature, which would also provide the standard requirements for the adoption of the digital signature by public administrations. The legal framework can be improved by introducing general rules on digital signature and certification services; the new rules must be enacted together with new technical rules defining the guidelines for their enforcement. The new technical rules substitute and complete previous ones. Indeed, the improvement of the legal framework enabled by the adoption of the digital signature allows innovation at the technological and organizational level in initiatives such as, for example, the redesign of record office processes.
Conclusion and Future Work

In this paper we have discussed how the eG4M framework considers legal issues in the strategic planning of eGovernment initiatives. We have discussed the different steps of the related methodology by means of an example referring to the Italian legal framework and eGovernment service provision. The methodology supports the identification of the current legal impacts on ICT adoption and the definition of guidelines for improvement solutions, on the basis of the available knowledge of the legal framework of the considered country. The complexity of law analysis has required a discussion of quality dimensions for the assessment of the legal framework, considering their relationships with the different types of impact of laws and their enforcement status. In future work we will deepen these issues in order to provide a richer set of metrics for the considered quality dimensions. Furthermore, we aim to adopt the methodology in more complex contexts, such as cross-border contexts, in the planning of eGovernment initiatives involving two or more different legal frameworks that have to be coordinated, and whose technological and organizational impact consequently has to be considered under a unified perspective.
References 1. Irani, Z., Love, P.E.D., Jones, S. (2008) Learning lessons from evaluating eGovernment: Reflective case experiences that support transformational government, The Journal of Strategic Information Systems, 17: 155–164. 2. Viscusi, G., Batini, C., Mecella, M. (forthcoming 2010) Information Systems for eGovernment: a quality of service perspective, Springer, Berlin-Heidelberg 3. March, J.G., Olsen, J.P. (1998) The Institutional Dynamics of International Political Orders, International Organization, 52: 943–969
4. Gil-Garcia, J.R., Martinez-Moyano, I.J. (2007) Understanding the evolution of e-government: The influence of systems of rules on public sector dynamics, Government Information Quarterly, 24: 266–290.
5. Checkland, P., Scholes, J. (1990) Soft Systems Methodology in Action, Wiley, Chichester.
6. Giddens, A. (1984) The Constitution of Society: Outline of the Theory of Structuration, University of California Press, Berkeley, CA.
7. Jones, M.R., Karsten, H. (2008) Giddens's Structuration Theory and Information Systems Research, MIS Quarterly, 32: 127–157.
8. Orlikowski, W. (1992) The Duality of Technology: Rethinking the Concept of Technology in Organizations, Organization Science, 3: 398–427.
9. Hart, H.L.A. (1961) The Concept of Law, Clarendon Press, Oxford.
10. Searle, J. (1995) The Construction of Social Reality, The Free Press, New York.
11. OECD (2002) Regulatory Policies in OECD Countries - From Interventionism to Regulatory Governance, OECD.
12. PriceWaterhouseCoopers (2005) Regulatory Burden: Reduction and Measurement Initiatives, PriceWaterhouseCoopers for Industry Canada.
13. Hahn, R.W., Burnett, J.K., Chan, Y.-H.I., Mader, E.A., Moyle, P.R. (2000) Assessing the Quality of Regulatory Impact Analyses: The Failure of Agencies to Comply with Executive Order 12,866, The Harvard Journal of Law and Public Policy, 23.
14. Malyshev, N.A. (2006) Regulatory Policy: OECD Experience and Evidence, Oxford Review of Economic Policy, 22: 274–299.
15. Rodrigo, D., Andrés-Amo, P. (2008) Building an Institutional Framework for Regulatory Impact Analysis (RIA) - Version 1.1, Regulatory Policy Division, Directorate for Public Governance and Territorial Development, OECD.
16. Evans-Pughe, C. (2006) Share and Share Alike, Engineering & Technology.
From Strategic to Conceptual Information Modelling: A Method and a Case Study G. Motta and G. Pignatelli
Abstract This paper presents a method that models Enterprise Information Architecture with the objective of integrating overall Enterprise Architecture modeling frameworks such as TOGAF. The method, called SIRE (Strategic Information Requirements Elicitation), includes the elicitation and modeling of strategic information requirements, which are one abstraction level higher than the traditional conceptual level. Elicitation is based on a framework that identifies information classes in enterprises by using two logical categories, Information Domains and Information Types. Specifically, the paper considers the method by which SIRE schemata are mapped and transformed into Entity Relationship schemata, using a set of predefined rules and an open source tool. The method is validated by a real-life case study. The novelty of the approach is its universality. Furthermore, it shortens times and is very easily understood by user managers.
Background: Modeling Techniques for Enterprise Information Architecture

Enterprise information is traditionally modelled on two abstraction levels, logical and conceptual. The former is represented by relational models [2], the latter by Entity Relationship (ER) for databases and by the Dimensional Fact Model (DFM) [3] for data warehouses. Each abstraction level targets a specific community: the logical level, closer to implementation, targets Database Administrators (DBAs) and implementation engineers, while the conceptual level, more abstract and semantically richer, targets analysts. However, even conceptual models fall short of addressing the overall enterprise information architecture, which is key for IT strategic planning. We call this additional level the "strategic level" and the related information requirements "strategic
G. Motta and G. Pignatelli
Department of Informatics and Systems, University of Pavia, Pavia, Italy
e-mail: [email protected]; [email protected]
information requirements". At this level, we assume that modelling should provide a compact language that can (a) be understood by user managers, (b) define a normative schema of enterprise information, and (c) be translated/mapped into a standard conceptual model, such as ER. Strategic information modelling has been addressed by different techniques. The forerunner, Business Systems Planning (BSP), very popular in the 1980s [5], associates data classes and processes in a grid that shows which process uses which data. Later, Information Strategy Planning (ISP) [7] integrated different information models, such as BSP, ER, and Data Flow Diagrams (DFD), in order to provide an almost seamless approach to information engineering. However, these traditional techniques do not provide a normative schema. Within a general perspective, the strategic level is addressed by TOGAF in its Enterprise Information Architecture [6]. However, TOGAF intentionally does not provide a normative model but only steps. In the same general perspective, the issue of the schema of enterprise information has been discussed by some research on Enterprise Information Integration (EII). This is a bottom-up approach, which has the purpose of combining information from diverse sources into a unified format [1, 4]. However, to date no normative modelling has been developed. Within industry-oriented frameworks, strategic information modelling has been considered by the Enhanced Telecom Operations Map (eTOM) [10] through the Shared Information Data model (SID). SID offers a normative paradigm for shared information/data, based on the concepts of Aggregated Business Entities (ABE) and Attributes [10, 11]. An ABE is an information of interest to the business, while Attributes are facts that describe the Business Entity. Specifically, an ABE "is a well-defined set of information and operations that characterize a highly cohesive, loosely coupled set of business entities". The concept of ABE fits many requirements of a strategic information model. However, since it is bound to the telecommunications industry, it does not provide a universal approach to identify entities. In the domain of ERP (Enterprise Resource Planning), ARIS (Architecture of Integrated Information Systems) provides a normative approach at the strategic level [9]. However, ARIS is a proprietary approach, rather focused on the SAP software platform. Given the above state of the art, we have developed a specific technique, called Strategic Information Requirements Elicitation (SIRE) [8], which defines a normative schema of enterprise information and uses a language easily understood by user managers, thus satisfying requirements (a) and (b) stated at the beginning of this section. Furthermore, the normative schema is universal and not proprietary, thus overcoming the limits of industry frameworks. In order to satisfy requirement (c), i.e. translation/mapping into a standard conceptual model such as ER, we here illustrate a simple method and tool, backed by a case study. SIRE contains a universal catalogue of enterprise Strategic Information Entities (SIE), shown in Table 1. Each SIE results from crossing Information Types and Information Domains. Information Types reflect the nature of information, which may be structural (Master Data), describe events (Transaction Data), or define computed indicators (Analysis Data). In turn, Information Domains describe the
From Strategic to Conceptual Information Modelling: A Method and a Case Study
181
Table 1 SIRE catalogue of strategic information entities Information type Master data Transaction data Analysis data Information domain Stakeholders Law Competitor Customer Supplier Broker Shareholder Resources Personnel Plants Raw materials Cash Context Structure Project Region Output Process Product Service
universe about which information recorded and are conceptually similar to SID’s ABEs. Trough a sequence of steps, SIEs are tailored to an individual enterprise. Potentially, SIRE offers a flexible approach that can be incorporated in methodological framework as TOGAF.
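As a concrete illustration of how the catalogue is generated, the following minimal Java sketch (our own illustration; the SireCatalogue class and its names are hypothetical and not part of the SIRE tooling) enumerates candidate SIEs as the Cartesian product of Information Domains and Information Types; the later steps of selection and customization would operate on this list.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the SIRE catalogue: each Strategic
// Information Entity (SIE) results from crossing an Information Type
// with an Information Domain (names follow Table 1).
enum InformationType { MASTER_DATA, TRANSACTION_DATA, ANALYSIS_DATA }

record Sie(String domain, InformationType type) { }

public class SireCatalogue {
    static final String[] DOMAINS = {
        "Law", "Competitor", "Customer", "Supplier", "Broker", "Shareholder",
        "Personnel", "Plants", "Raw materials", "Cash",
        "Structure", "Project", "Region",
        "Process", "Product", "Service"
    };

    // Enumerate candidate SIEs as the Cartesian product of domains and
    // types; an analyst then selects and customizes a subset.
    public static List<Sie> standardCatalogue() {
        List<Sie> sies = new ArrayList<>();
        for (String domain : DOMAINS)
            for (InformationType type : InformationType.values())
                sies.add(new Sie(domain, type));
        return sies;
    }
}
```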
The Mapping Method

Our work intends to illustrate an approach to map the strategic SIEs onto conceptual ER entities. In order to obtain a viable conceptual schema from the initial catalogue of SIEs, we have defined the steps summarized in Table 2, which shows the input, output and activities of each step. The starting point is the catalogue of standard SIEs, from which the analyst identifies the entities of interest (Selected SIEs), which are further refined into Customized SIEs. The key point is a model-to-model transformation by which SIEs are transformed into the Entities and Relationships of the classic ER model. This is accomplished by step (3), while the subsequent steps (4) and (5) refine the schemata. Step (3) is named “Coarse Mapping” because it maps SIEs onto the ER model in a rather rough way that therefore needs to be improved afterwards. The rules used for this mapping are listed in Table 3. The ER schema resulting from step (3) is made of “Information Islands”, each associating a Master Entity with its related Transaction Entities. The name “island” highlights that only master-to-transaction links are considered. The resulting schema is clearly not semantically satisfactory, because the missing cross-domain relationships, such as Customer-Product, are often critical.
Table 2 Mapping steps

Step 1 – Select strategic information entities (SIEs)
  Input: Catalogue of standard SIEs
  Output: Selected SIEs
  Activities: Define the scope of analysis; select SIEs and add properties

Step 2 – Customize and refine SIEs
  Input: Selected SIEs
  Output: Customized SIEs
  Activities: Creation/specialization/decomposition of SIEs

Step 3 – Coarse map
  Input: Customized SIEs
  Output: Conceptual Information Islands
  Activities: Link strategic master data to strategic transaction data

Step 4 – Link Information Islands
  Input: Conceptual Information Islands
  Output: Conceptual linked entities
  Activities: Link different Information Islands

Step 5 – Refine ER schema
  Input: Conceptual linked entities
  Output: Refined conceptual entities
  Activities: Creation/specialization/decomposition of conceptual entities; creation of new relationships as needed
Table 3 Mapping algorithm of SIE to ER model

SIRE model                      ER model
Specialization                  Enhanced ER specialization (overlap or disjoint)
Decomposition                   Compound ER or compound/complex attribute
Property of master data         Entity type or attributes
Property of transaction data    Entity type and relationship type
Property of analysis data       Calculated attributes
Step (4) adds cross-domain relationships and obtains a more cohesive ER schema. No specific rule applies here: the enhancement is based on the domain expertise of the analyst. Step (5) refines the ER schema by adding relationships, by specializing or decomposing entities and attributes, and by aggregating or generalizing entities. This last step enhances the ER schema by injecting deeper domain competence and eventually produces the “Refined Entities”.
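As an illustration of the coarse mapping of step (3) and of the island structure it produces, the sketch below (a minimal example with names of our own choosing, not the authors' tool) turns a master SIE into an ER entity and each related transaction SIE into an entity linked to its master:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of step (3), "Coarse Mapping": a master SIE becomes
// an ER entity; each related transaction SIE becomes an entity connected
// to the master by a relationship. The result is an Information Island.
public class CoarseMapper {

    record Entity(String name) { }
    record Relationship(Entity master, Entity transaction) { }
    record Island(Entity master, List<Relationship> links) { }

    public static Island mapIsland(String masterSie, List<String> transactionSies) {
        Entity master = new Entity(masterSie);
        List<Relationship> links = new ArrayList<>();
        for (String t : transactionSies) {
            // Only master-to-transaction links are created at this stage;
            // cross-domain relationships are added later, in step (4).
            links.add(new Relationship(master, new Entity(t)));
        }
        return new Island(master, links);
    }

    public static void main(String[] args) {
        Island customer = mapIsland("Tenant Master Data",
            List.of("Lease agreement", "Lease payments", "Lease renewals"));
        System.out.println(customer);
    }
}
```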
A Case Study in a Large Municipality

The Housing Division of an Italian Municipality provides housing services to citizens. Estate management is outsourced to various Real Estate Management Organizations (REMOs). A regional law has stated new rules for renting: rents must balance the value of the apartments and the overall condition of tenants, measured by the Index of Equivalent Economic Situation (IEES). The Municipality wants to appraise the impact of the new rules on functional requirements. Actually, the new rules would heavily impact the IT systems of REMOs, as they (a) impact the allocation of housing units by changing granting rules and rent
computation rules, (b) change DB schemas and require new software functions, and (c) change the templates and information content of reporting and billing processes. Let us consider the use of the SIRE method in this case. The first steps concern the customization of the standard grid. After a cycle of interviews with Municipality and REMO managers, the standard grid was customized (Table 4). For instance, the Information Domain “Customer” has been specialized into the three domains “Tenant”, “Household” and “Broker”. The same approach holds for the Information Types. The rows written inside the grid boxes list candidate aggregate attributes of the related customized SIE. The customized grid is used to define the normative data schema through the mapping algorithm discussed above. For the sake of simplicity, we consider here only the Customer and Plants domains. By the mapping algorithm we obtain the two Information Islands shown in Fig. 1, which represent the Customer and the Unit domains, respectively. A subsequent step is to link the master data of Information Islands belonging to different Information Domains: for instance, each tenant rents a unit, a deal involves a unit, and so on. Through this step we obtain the ER schema stemming from the Information Islands, as shown in Figs. 2 and 3.
Table 4 Customized SIRE grid for the Housing Division of the Municipality

Stakeholder / Law
  Master data: Allocation rules, Priority rules
  Transaction data (events): Periodic check of eligibility conditions
  Transaction data (certifications): Compliance status

Customer / Tenant
  Master data: Tenant master data
  Transaction data (events): Lease agreement, Lease payments, Payment delays, Lease renewals, Lease adjustments, Lease abatements, Lease increases, Lease appeal

Customer / Household
  Master data: Household master data
  Transaction data (events): Declared IEES
  Transaction data (certifications): Certified IEES

Customer / Broker (REM)
  Master data: REM master data

Resources / Plants (Unit)
  Master data: Unit master data
  Transaction data (events): Ordinary maintenance, Extraordinary maintenance, Valorisation and rationalization actions, Renovation actions, Architectural barrier and environmental improvements, Utilities, Services
Fig. 1 Information Islands related to the Customer and Unit domains
Fig. 2 Link between Information Islands
Tool

In order to support analysts, we have designed an Eclipse-based tool that enables the creation of well-formed SIRE models and related ER schemas. The development of such a tool addresses the scarcity of tools for mapping strategic information requirements onto conceptual models and, more generally, for conceptual modelling. In fact, Eclipse plug-in central provides 28 plug-ins for relational modelling, such as CLAY MARK II, ERMaster, AMATERAS ERD and Mogwai ERDesigner, but no conceptual modelling tool. The Data Tools Platform (DTP) project (http://www.eclipse.org/datatools/) is a powerful Eclipse project, but it produces only relational schemas. The commercial DATABASE VISUAL ARCHITECT (http://www.visual-paradigm.com/) provides the design of conceptual models, but it is not open and it does not map strategic requirements onto the conceptual level. Our tool is based on the SIRE and ER meta-models, which have been developed with the Graphical Modelling Framework (GMF, http://www.eclipse.org/gmf/) provided with the Eclipse platform. The tool is shown in Fig. 4.

Fig. 3 Ultimate ER schema

Fig. 4 A screenshot of the tool displaying the customized strategic entities
Conclusions

We have illustrated a technique that enables the design of a complete and consistent Enterprise Information Architecture from the strategic to the conceptual level. The approach is based on robust models: strategic modelling is based on a normative framework (SIRE, [8]) that generalizes concepts extensively tested by eTOM SID [12], while the conceptual level uses the universally known ER notation. The approach is complete, as it provides simple rules to map the strategic level onto the conceptual level, and it is supported by an open source tool based on the widespread Eclipse platform.
The SIRE methodology fits the requirements and fulfils the objectives of TOGAF Data Architecture (Phase C of the Architecture Development Method – ADM [10]), which states that “the identified Data Type must be understandable by stakeholders, complete and consistent, stable”. Future developments include coverage extension, extended validation and tool improvement. Coverage will be extended by a bottom-up mapping, from the conceptual to the strategic level. This mapping could help IT management to extract a strategic view from current heterogeneous and diverse databases. A closely related research direction is to structure a strategic information architecture from unstructured text documents (e.g. manuals, organization charts, interviews and the like). Finally, tool improvement will include the integration of the conceptual modelling tool with logical modelling tools, as well as guidance in model-to-model transformation.
References

1. Bernstein, P. A., Haas, L. M. (2008) “Information integration in the enterprise”. Communications of the ACM, 51(9), 72–79. DOI: http://doi.acm.org/10.1145/1378727.1378745
2. Elmasri, R., Navathe, S. (2004) Fundamentals of Database Systems, Fourth edition, Pearson Education
3. Golfarelli, M., Maio, D., Rizzi, S. (1998) Conceptual Design of Data Warehouses from E/R Schema. Proceedings of the Thirty-First Annual Hawaii International Conference on System Sciences, Volume 7, p. 334, January 6–9, 1998
4. Halevy, A. Y., Ashish, N., Bitton, D., Carey, M., Draper, D., Pollock, J., Rosenthal, A., Sikka, V. (2005) “Enterprise information integration: successes, challenges and controversies”. Proceedings of the 2005 ACM SIGMOD International Conference on Management of Data (Baltimore, Maryland, June 14–16, 2005), SIGMOD ’05, ACM, New York, NY, 778–787. DOI: http://doi.acm.org/10.1145/1066157.1066246
5. IBM (1975) Business Systems Planning, GE 20-0257-1
6. Josey, A., Harrison, R. (2009) TOGAF Version 9: A Pocket Guide, Van Haren Publishing
7. Martin, J. (1990) Information Engineering, Prentice Hall, New York
8. Motta, G., Pignatelli, G. (2008) Strategic Modelling of Enterprise Information Requirements. Proceedings of the 10th International Conference on Enterprise Information Systems (Barcelona, Spain, June 12–16, 2008)
9. Scheer, A.-W. (2000) ARIS – Business Process Modelling, 3rd edition, Springer, Berlin
10. The Open Group (2009) TOGAF Version 9. The Open Group Architecture Framework. ISBN 978-90-8753-230-7, Document Number G091
11. TMForum (2003) Shared Information/Data (SID) Model – Concepts, Principles, and Domains – GB922, July 2003
12. TMForum (2005) Enhanced Telecom Operations Map (eTOM), The Business Process Framework – GB921, November 2005
Use Case Double Tracing Linking Business Modeling to Software Development G. Paolone, P. Di Felice, G. Liguori, G. Cestra, and E. Clementini
Abstract Use cases are recommended as a powerful tool for carrying applications from requirements analysis to design. In this contribution, we start from a recent software methodology that has been modified to pursue a strictly model-driven engineering approach. The work focuses on the relevant elements of use cases in UML modeling, adapted and extended to support business modeling activities. Specifically, we introduce the idea of performing a “double tracing” between business modeling and system modeling: in this way, a strong link between business requirements and the software solution to be developed is established.
G. Paolone, P. Di Felice, G. Liguori, G. Cestra, and E. Clementini
Department of Electrical and Information Engineering, University of L’Aquila, L’Aquila, Italy
e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]

Introduction

An information system is the technological image of a business system [1]. The key to the success of an IT project is therefore its faithfulness to the enterprise environment. This is the only way corporate users can find in the application the same modus operandi of their own function [2]: each actor plays, within the organization, a set of use cases, and does so regardless of automation. In fact, the biggest innovation brought by the use case construct introduced by Jacobson [3] is that it exists in the business system independently of the automation process: the designer’s task is therefore to dig out the software application’s use cases from the analysis of the business system. “A use case is a description of a set of sequences of actions, including variants, that a system performs to yield an observable result, which is valuable for an actor” [3]. Today, use cases are widely used in the modeling and development of software applications [4, 5]. Business modeling is a well-known set of activities altogether committed to use case specification. In turn, a model is a simplified view of a complex reality, suitable for creating abstractions and thus allowing one to eliminate irrelevant details and focus
on one or more important aspects at a time. Business models enable a common understanding and facilitate discussion among different stakeholders [2, 6, 7]. In previous papers, we proposed a use case-centred methodology, based on an iterative and incremental approach that proceeds through refinements, whose most important feature is a smooth continuity between business modeling [8], conceptual analysis [9], design and implementation [10]. The methodological process is use case driven, since the use case artefact can be found in the business and system models, though represented by different stereotypes, and is also present in the application code. The methodology is structured in the four distinct layers sketched in Fig. 2a. The top two use case analysis layers (Business Use Case – BUC and Business Use Case Realization – BUCR) are related to business modeling, and their scope is to create a complete representation of the enterprise reality. The bottom two layers (UC and UCR), instead, are related to system modeling, that is, the modeling of the software system. Figure 2a also shows the relationship of the four layers to the Computation Independent Model and the Platform Independent Model. In essence, the methodology allows us to represent both the business and the system models. Its adoption brought benefits from a software engineering point of view and with respect to the reduction of project development time [8]. The methodology is supported by a Java-based framework made up of a group of structures that establish a one-to-one relation between analysis model entities and code elements (e.g., a UCR becomes a Java class). Unfortunately, the adoption of this methodology in large industrial projects highlighted two limits, concerning unambiguous business modeling and the transformation of the business model into the system model. Both limits are related to the modeling of the behavioural aspect of the system. The first limit is due to the adoption of the top–down approach, while the second relates to the use of the RUP stereotype business use case. In this paper, we propose a solution to these problems by introducing a variation in the use of the UML package construct in the business modeling phase and, at the same time, a double tracing linking business modeling to software development. The automation of software systems from business process specifications is a significant topic in software engineering. In particular, there has been much research interest in use case modeling, especially regarding how use cases stem from business modeling. Common approaches for identifying use cases employ business process analysis [6] and activity diagrams [7]; both adopt the BPM notation, from which UML artefacts are generated. Our paper proposes another way of tackling the same problem, by using UML as the common language for both communities of software and business engineers. We thus reevaluate our previous methodology with respect to the adoption of the top–down approach, which introduces a degree of subjectivity during the modeling of enterprise processes and consequently restricts the possibility of automatic model transformations, in contrast with the guidelines of the Model Driven Architecture (MDA) paradigm. This is an important drawback, because MDA is considered necessary to manage system complexity and to build well-structured
information systems [11]. The problem of automatic model transformation is thus a compelling need. Special attention must be paid to what we call double tracing, i.e. the trace operation involving both business modeling layers against the system modeling layers. This operation allows us to transform the business model into the system model, and creates a strong link between business modeling and software development with regard to the behavioural aspect of the system.
Limit of the Current Approach

The methodology proposed in [8–10] has a limit caused by the application of the top–down approach, which involves a degree of subjectivity regarding the level of abstraction chosen at each layer and also regarding use case definition at the business and system levels. Thus, it is perfectly possible for two designers to produce UML diagrams at different levels of detail to represent the same business reality and consequent software system, without having the means to prove the correctness of one solution with respect to the other. Unfortunately, this lack of a fixed level of detail is not compatible with the MDA approach, whose aim is to automatically transform a given model into another one by using a finite set of values and rules that produce a unique result. In a methodological approach that works with stepwise refinements through four distinct layers, it is unlikely that a set of rules could be identified allowing a unique model transformation. In fact, this burden is often left to the skills and cleverness of the business analyst and software designer. This is the main cause of the already mentioned subjectivity in the methodology, which consequently loses formal soundness. In [12] an example showing this limit is given. A further limit of the previous approach is that the BUC is improperly used, since it should represent an interaction with the business goals at a high level of abstraction. Instead of defining a single interaction mode between an actor and the system, the BUC as used in [8] defines a large enterprise business area, too often associated with a wide range of actor classes. The limit of the current methodology concerns the behavioural aspect of the system (the use case model) and not the structural aspect (class diagrams). In fact, in the class model defined in the system modeling phase, we can find all the business object classes discovered during the business modeling phase that, during the trace operation (the passage from business modeling to system modeling), have been tagged as necessary for system automation: the same classes are also present in the coded model. During an automatic transformation process, it is therefore possible, for the structural aspect of the system, to uniquely identify the object classes that need to be created starting from the business model. This is not possible for the behavioural aspect, where the continuous refinement process does not permit the identification of rules for a unique mapping.
The New Approach

The new approach continues to view the enterprise as a system that can be divided into subsystems represented by UML diagrams [8], both in the business and system representations: the difference with the previous approach lies in the relation between the two models. In business modeling, the analyst uses the UML package artefact (of UML v.2.1.2) to group elements and provide a namespace for them. According to the RUP guidelines [13], the organization unit and business system stereotypes (Fig. 1) are commonly used to represent the enterprise and its parts (usually named areas, departments, divisions and so on), respectively. This kind of modeling has proved to be quite effective, since it relies on abstraction to describe the enterprise reality. A RUP best practice starts business modeling from the detection of the organization unit related to the IT project and then continues with the definition of its business systems. In turn, each business system is composed of a variable number of subsystems, organized in a nested structure of degree k (k ≥ 1). The new proposal consists in adding, at the business system modeling layer, an extra level k+1 which takes the place of the BUC layer of our previous methodology (Fig. 2). This allows a higher level of abstraction in the definition of BUCs and BUCRs, removing the second limit found in [8–10] and discussed previously. In this way, we obtain a more thorough representation of the action sequences taken by the actors inside the enterprise, independently of automation. Another advantage of the new approach is that we do not need a further refinement at the beginning of system modeling, but can immediately proceed to a trace operation. Nevertheless, the analyst retains the possibility to introduce technological use cases that could not have been discovered during business modeling. Our methodology keeps the well-known advantages of the top–down approach, made possible by a new use of the package artefact with the business system stereotype. In summary, the proposed approach makes UC discovery easier and guides us in their representation in the system model. Partitioning the enterprise system into k+1 levels allows us to employ the use case construct in its original meaning [10]. The trace operation does not require any refinement, and the system use cases coincide with those of the business.
Fig. 1 The RUP elements of business modeling: the organization unit and business system stereotypes
Fig. 2 The previous (a) and the proposed (b) methodology
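To fix ideas, the nested business-system structure described above can be pictured as a small containment tree. The following sketch is our own illustration (class and method names are hypothetical and not part of the methodology's tooling), showing business systems nesting down to level k, with the extra level k+1 hosting the BUCs:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the nested package structure: business systems
// nest to degree k inside an organization unit, and the proposed extra
// level k+1 is the one hosting the BUCs.
public class BusinessSystem {
    private final String name;
    private final int level;  // nesting degree within the organization unit
    private final List<BusinessSystem> subsystems = new ArrayList<>();
    private final List<String> bucs = new ArrayList<>();  // populated at level k+1

    public BusinessSystem(String name, int level) {
        this.name = name;
        this.level = level;
    }

    public BusinessSystem addSubsystem(String subName) {
        BusinessSystem child = new BusinessSystem(subName, level + 1);
        subsystems.add(child);
        return child;
    }

    public void addBuc(String bucName) { bucs.add(bucName); }

    public static void main(String[] args) {
        // Anticipates the paper's example: Administration (level k) contains
        // Documental Management (level k+1), which hosts the BUCs.
        BusinessSystem administration = new BusinessSystem("Administration", 1);
        BusinessSystem documental = administration.addSubsystem("Documental Management");
        documental.addBuc("Document Acquisition");
        documental.addBuc("Sender");
    }
}
```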
Double Tracing

Our solution (Fig. 2b) is to execute business analysis through business modeling and analysis modeling as discussed in [8]: the business modeling layers remain the same as in the original proposal, as does the realize process that links them, although with different levels of abstraction. During the system analysis phase, a trace operation of both business modeling layers to the system modeling layers is executed (double tracing). In this way, we can transfer all the BUCs and BUCRs to the system perspective, where they become the UCs and UCRs, respectively. The logic behind the trace remains unaltered: in the system view, only the UCs that will be automated are taken into consideration. In the context of an MDA process for enterprise automation, it is imperative to identify the UCs and UCRs that define the interaction between end-users and the software
system following the pre-existing workflows and communication between business actors. The double tracing allows the business model to be transformed into the system model in total continuity: BUCs are transformed one-to-one into UCs, and BUCRs become the UCRs, with the addition of technological UCRs (if any). The realize process at the business modeling level, as well as all the relations among the UCs (i.e., extend, include and generalization), are traced one-to-one into the system modeling. This result represents a good starting point in support of a model-driven process, because it prevents the creation of different software models for the same business model. In practical terms, this guarantees that different workgroups would produce identical software systems to automate the enterprise system, because the BUCs, which are the starting point of the process, exist in the enterprise system independently of its automation. Thanks to the framework introduced in [10], each UCR becomes a Java class. This creates a strong link between business modeling and software development with regard to the behavioural aspect of the system.
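To make this last point concrete, a UCR carried through the double tracing might be rendered along the following lines; this is a minimal sketch with names of our own choosing, not the actual structures of the framework in [10]:

```java
// Minimal sketch of the one-to-one mapping from a Use Case Realization
// (UCR) to a Java class; class and method names are illustrative and do
// not reproduce the actual framework of [10].
public abstract class UseCaseRealization {
    private final String tracedFromBucr;  // the BUCR this UCR was traced from

    protected UseCaseRealization(String tracedFromBucr) {
        this.tracedFromBucr = tracedFromBucr;
    }

    // The behavioural content of the UCR: the sequence of actions the
    // software system performs on behalf of the actor.
    public abstract void execute();

    public String tracedFromBucr() { return tracedFromBucr; }
}

// Example: a business UCR traced one-to-one into the system model.
class InternalDocumentAcquisition extends UseCaseRealization {
    InternalDocumentAcquisition() { super("Internal Document Acquisition"); }

    @Override
    public void execute() {
        // placeholder for the automated acquisition workflow
        System.out.println("Acquiring an internal document...");
    }
}
```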
An Example

This section proposes an example applied to a real-life document management project for a bank. Figure 3 shows the organization unit Bank and one of its business systems, Administration. According to our previous approach, document management would be modeled as a BUC (in [12] called Documental Management), while by applying the new approach we model it as a business system that contains the interactions between actors and system. Figure 4 shows the corresponding business goals diagram. The organization unit Bank contains the business system Administration, which can be considered as the generic level k.
Fig. 3 The organization unit (Bank) and a business system (Administration)
Fig. 4 The business goals diagram (goals: fast retrieval of documents; filing of documents)

Fig. 5 The BUCs of the Documental Management

Fig. 6 The BUCRs diagram
At level k+1 we add a business system called Documental Management. The BUCs of this system are Building Localization, Document Acquisition, Document Distribution, Document Filing, Document Validation and Sender (Fig. 5).
Fig. 7 The UC trace diagram
For each BUC we can identify the related BUCRs. For example, the realizations of Document Acquisition are Internal Document Acquisition and Document From Supplier, while those of Sender are Supplier, Person and Enterprise, as shown in Fig. 6. The trace of BUCs into UCs and of BUCRs into UCRs can be done at the same time. The only UCRs that can be added to the discovered use cases are those related to automation or technological elements of the system. Figure 7 shows the trace of the BUC Document Acquisition. As can be noticed, the trace process does not require any refinement. The trace is applied to all the artefacts of the business model at the same time. After this phase, we obtain a system model that is identical to the business model, except for the addition of the UCR LinkFile.
References

1. Y. Baghdadi (2002) Web-Based Interactions Support for Information Systems. Informing Science: Designing Information Systems, Volume 5, No 2.
2. X. Zhao, Y. Zou, J. Hawkins and B. Madapusi (2007) A Business Process Driven Approach for Generating E-Commerce User Interfaces, MoDELS Conference, Nashville, TN, pp. 256–270.
3. G. Booch, I. Jacobson, J. Rumbaugh (1999) The Unified Modeling Language User Guide. Addison-Wesley.
4. L. Zelinka, V. Vranić (2009) A Configurable UML Based Use Case Modeling Metamodel. First IEEE Eastern European Conference on the Engineering of Computer Based Systems.
5. J. Duan (2009) An approach for modeling business application using refined use case. International Colloquium on Computing, Communication, Control, and Management.
6. A. Rodríguez, E. Fernández-Medina, M. Piattini (2008) Towards Obtaining Analysis-Level Class and Use Case Diagrams from Business Process Models. ER Workshops.
7. S. Štolfa, I. Vondrák (2004) A Description of Business Process Modeling as a Tool for Definition of Requirements Specification. 12th System Integration, pp. 463–469.
8. G. Paolone, G. Liguori, E. Clementini (2008) A methodology for building enterprise Web 2.0 Applications, MITIP, Prague, Czech Republic.
9. G. Paolone, G. Liguori, E. Clementini (2008) Design and Development of web 2.0 Applications, itAIS, Paris, France.
10. G. Paolone, G. Liguori, G. Cestra, E. Clementini (2009) Web 2.0 Applications: model-driven tools and design, itAIS, Costa Smeralda, Italy.
11. N. Sukaviriya, S. Mani, V. Sinha (2009) Reflection of a Year Long Model-Driven Business and UI Modeling Development, INTERACT, Part II, LNCS 5727, pp. 749–762.
12. G. Paolone, P. Di Felice, G. Liguori, G. Cestra, E. Clementini (2010) A Business Use Case driven methodology: a step forward, ENASE, Athens, Greece.
13. P. Kruchten (2003) The Rational Unified Process: An Introduction, Second Edition. Addison-Wesley.
Part VI
Human Computer Interaction

G. Tortora and G. Vitiello
Traditional Human Computer Interaction (HCI) topics, such as user-centred system design, usability engineering, accessibility, and information visualization, are important to Management Information Systems, as they influence technology usage in business, managerial, organizational, and cultural contexts. As the user base of business interactive systems expands from IT experts to consumers of different types, including elderly, young and special-needs people, who access services and information via the Web, new and exciting HCI research topics have emerged dealing with broader aspects of the interaction, such as designing to improve the overall user experience, favouring social connections and supporting collaboration. Moreover, the introduction of advanced interactive devices and technology is drawing researchers’ attention towards innovative methods and processes for interaction design, modeling and evaluation, which take full account of the potential of modern multimodal user interfaces. In line with the general HCI research trends, the present section includes a selection of ten papers that discuss practices, methodologies, and techniques tackling different aspects of the interaction among humans, information and technology. A first group of four papers focuses on the design of advanced user interfaces supporting the target users in their everyday activities. The first paper, by Daniela Angelucci, Annalisa Cardinali and Laura Tarantino, entitled “A Customizable Glanceable Peripheral Display for Monitoring and Accessing Information from Multiple Channels”, describes the design/evaluation process that led to a customizable glanceable peripheral display able to aggregate notifications from multiple sources (e.g., email, news, weather forecast) with different severity levels. The second paper, entitled “A Dialogue Interface for Investigating Human Activities in Surveillance Videos”, by Vincenzo Deufemia, Massimiliano Giordano, Giuseppe Polese, and Genoveffa Tortora, presents a dialogue interface for investigating human activities in surveillance videos. The interface exploits the information computed by the recognition system to support users (security operators) in the information-seeking process, by means of a question-answering model. In the third paper, entitled “The effect of a dynamic user model on a customizable mobile GIS application”, Luca Paolino, Marco Romano, Monica Sebillo, Genoveffa Tortora and Giuliana Vitiello analyze the role that a dynamic user model may play in simplifying query formulation and solving in an existing audio-visual map
interaction technique conceived for mobile devices. The fourth paper, entitled “Simulating Embryo-Transfer Through a Haptic Device”, by Andrea F. Abate, Michele Nappi and Stefano Ricciardi, describes a visual-haptic training system that helps simulate Embryo Transfer (ET), an important phase of the In Vitro Fertilization process. The system is based on a virtual replica of the anatomy and of the tool involved in ET, and exploits a haptic device to position the virtual catheter in the target location. A second group includes three papers focusing on end user development issues. In the first paper, entitled “Interactive Task Management System Development Based on Semantic Orchestration of Web Services”, Barbara R. Barricelli, Antonio Piccinno, Piero Mussio, Stefano Valtolina, Marco Padula and Paolo L. Scala discuss the emerging issue of allowing end users, who lack a technical background in development activities, to adapt and shape the software artifacts they use. The authors propose a simplified service composition approach, which abstracts this process from any unnecessary technical complexity. The approach is discussed on a case study concerning the design of a Task Management System that supports the activities of workflow designers of an Italian research and certification institution. The second paper, by Rosanna Cassino and Maurizio Tucci, entitled “An Integrated Environment to Design and Evaluate Web Interface”, presents a tool to design, implement and evaluate web interfaces, which builds upon an integrated development methodology to generate the HTML pages of a web site that respect some usability metrics before the application is released and tested by canonical testing techniques. The third paper, entitled “A Crawljax Based Approach to Exploit Traditional Accessibility Evaluation Tools for AJAX Applications”, by Filomena Ferrucci, Davide Ronca, Federica Sarro and Silvia Abrahao, presents an innovative Crawljax-based technique to automatically evaluate the accessibility of AJAX applications. As a case study, the accessibility evaluation of Google Search and AskAlexia has been performed, and the results are discussed in the paper. A third group of two papers concerns computer-mediated human-to-human interaction. In the first paper, entitled “A Mobile Augmented Reality system supporting co-located Content Sharing and Displaying”, by Rita Francese, Andrea De Lucia and Ignazio Passero, the authors present a mobile application, named SmartMeeting, aiming at supporting co-located content sharing and displaying for small groups, based on location-aware technologies, Augmented Reality and 3D interfaces. In the second paper, entitled “Enhancing the Motivational Affordance of Human-Computer Interfaces in a Cross-Cultural Setting”, Christoph Schneider and Joseph S. Valacich present a study meant to analyze relevant aspects of human-computer interface design that should be considered for group collaboration environments, in order to overcome performance-inhibiting factors typical of cross-cultural settings. The last paper in this chapter, entitled “Metric Pictures: Source Code Images for Visualization, Analysis and Elaboration”, by Rita Francese, Sharefa Murad, and Ignazio Passero, proposes the adoption of principles and practices typical of image analysis and elaboration to enhance traditional software metric evaluation and visualization techniques, highlighting the beneficial effects on software comprehension and maintainability.
A Customizable Glanceable Peripheral Display for Monitoring and Accessing Information from Multiple Channels D. Angelucci, A. Cardinali, and L. Tarantino
Abstract Nowadays the availability of virtually infinite information sources over the web makes information overload a severe problem to be addressed by tools able to aggregate and deliver information from selected channels in a personalized way. The support provided by portals (e.g., iGoogle) forces users to abandon primary tasks to monitor useful information. Feed readers are sometimes based on peripheral notifications that do not interfere with primary tasks but are often mostly textual. In this paper we present a customizable glanceable peripheral display able to aggregate notifications from multiple sources (e.g., email, news, weather forecast) as well as to provide quick access to information sources. The design is based on state-of-the-art guidelines and on preliminary usability studies conducted at mockup level both on the abstract model and on its realization.
D. Angelucci
Istituto di Analisi dei Sistemi ed Informatica, Consiglio Nazionale delle Ricerche (IASI-CNR), Roma, Italy
e-mail: [email protected]

A. Cardinali and L. Tarantino
Dipartimento di Ingegneria Elettrica e dell’Informazione, Università degli Studi dell’Aquila, L’Aquila, Italy
e-mail: [email protected]; [email protected]

Introduction

Nowadays the availability of virtually infinite information sources over the web makes information overload a severe multifaceted problem that has to be addressed by tools helping users to cope with its different dimensions: information growth and diversity, human-to-information interaction, and interference with ordinary working activities. As pointed out in [5], “information overload is as much a problem of information diversity, or clutter, as of its quantity”. The paper, reporting on a test conducted with workers in more than 1,000 large organizations, underlines that when asked which was worse – the quantity of information they have to deal
with or its diversity – for respondents diversity won. Furthermore, for those who suffered the most from information overload, information diversity scored even higher. Though the test deals with the general case of a mix of paper and digital information, we believe that the lessons learned from it are a good starting point also for the digital-information-only case we are coping with. Among the solutions respondents suggested, there is a central repository to aggregate information in one place and make it more accessible. Tools and web services have recently been proposed to aggregate information from web sources and RSS feeds, like iGoogle, GoogleReader, FriendFeed and Gregarius (gregarius.net), which, however, force users to switch from their primary tasks to the tool to check whether useful information has arrived. An example of a glanceable peripheral notification summarizer is Scope [13], based on a circular display divided into sectors presenting diverse types of notifications, leaving the initiative primarily to the users. However, studies have proved that when users are given the ability to negotiate the receipt of notifications they tend to postpone them indefinitely [9], with a resulting inability to get the right information at the right time. Furthermore, tests have shown that users trained on primary tasks without interruptions perform very badly on the same tasks when interrupted, suggesting that a moderate interruption rate is less disruptive in the long run [10]. Given all these results, our approach aims at a personalizable glanceable peripheral system able to unify in a single place notifications from multiple sources, classified by type (e.g., email, news, traffic information) and at different severity levels, so as to interrupt users only when the information requires it, as well as to provide quick access to the information sources. The system is the result of several design steps, accompanied by specific usability tests focused on different aspects of the problem, which we discuss throughout the following sections of the paper.
A Simple Notification Display

Our research originated in a specific application domain, namely fault notification in telecommunication networks to solicit technical intervention. After an analysis of the virtues and flaws of traditional systems, we designed a system based on a glanceable peripheral display using a visual coding technique and a transition policy such that low severity alarms are associated with a few data conveyed in a subliminal way, whereas urgent alarms are associated with notifications requiring focal attention and technical intervention. The result is an unobtrusive application that distracts users only if the severity requires it [2, 4], and it fixes the basic characteristics on which generalizations and subsequent versions were built.
The First Design Step: Dealing with Single Notifications

The notification component is an in-desktop peripheral display located in the bottom-right corner of the monitor, outside the visual focus. The display occupies a rectangular area small enough to be displayed on hand-held mobile devices (studies have shown that small displays result in fast identification of changing information [8]). To ensure glanceability, the visual coding technique is based on visual variables; icons and background colors are used to convey alarm class and severity (results of glanceability tests indicate them as the two most popular visual properties [7]). Color is associated with redundant coding to ensure correct interpretation by users with color vision deficiencies. The rectangular area of the display is partitioned into three sub-areas: an upper bar for temporal data, a bottom bar for user data, and a middle area split into a synthetic component, conveying alarm severity, and a detailed component, visualizing alarm descriptions. The application domain severity levels (cleared, warning, minor, major, critical) are mapped onto three abstract schemata in which the detailed component gets progressively denser with information as the alarm severity increases (Fig. 1a depicts the case of a “major alarm”). Interruption design is based on animations that make the display move from the visual periphery to foveal vision. Following studies in human attention, we mapped the alarm severity levels onto the change-blind, make-aware, interrupt and demand-attention notification levels [6]. Correspondingly, display transitions are based on slow-motion, discrete-update, flashing, and flashing-until-action, thus achieving a system behavior ranging through change-blind, ambient and alerting displays, depending on alarm criticalness.
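A minimal sketch of this severity-to-notification mapping (our own illustration in Java, not the system's actual code; in particular, the pairing of cleared and warning below the attention threshold is our assumption) could look as follows:

```java
// Illustrative mapping of alarm severities to notification levels and
// display transitions, following the design described above; the enum
// constants and their pairing are a sketch, not the system's actual code.
enum Severity { CLEARED, WARNING, MINOR, MAJOR, CRITICAL }

enum NotificationLevel {
    CHANGE_BLIND("slow-motion"),
    MAKE_AWARE("discrete-update"),
    INTERRUPT("flashing"),
    DEMAND_ATTENTION("flashing-until-action");

    final String transition;  // the display transition used at this level
    NotificationLevel(String transition) { this.transition = transition; }
}

final class NotificationPolicy {
    // Five severities map onto four notification levels; we assume here
    // that cleared and warning both stay below the attention threshold.
    static NotificationLevel levelFor(Severity s) {
        return switch (s) {
            case CLEARED, WARNING -> NotificationLevel.CHANGE_BLIND;
            case MINOR            -> NotificationLevel.MAKE_AWARE;
            case MAJOR            -> NotificationLevel.INTERRUPT;
            case CRITICAL         -> NotificationLevel.DEMAND_ATTENTION;
        };
    }
}
```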
The Second Design Step: Dealing with Multiple Notifications

To manage situations in which multiple simultaneous faults occur, we redefined the synthetic component to make it a glanceable overview of the overall situation.
Fig. 1 The basic display: (a) single notification, (b) multiple notifications
The overview provides quick comprehension of the number of alarms, their severity and their nature. “Simultaneousness” is intended in a broader sense than “arriving at the same time”: wrt notification purposes, we consider simultaneous those alarms that are “active at the same time” (i.e., for which no acknowledgment from operators has been received yet). Broadly speaking, the synthetic component is split into a number of portions related to the number of incoming alarms (see Fig. 1b; we refer to [2] for details). This solution is efficient when the number of simultaneous alarms remains below a given threshold, since the discernibility of individual alarm severity may be jeopardized if the width of the individual areas in the synthetic component gets too small, for two reasons: on the one hand, size is a dissociative visual variable that, for low values, affects the perception of colors; on the other hand, heavy simultaneity increases the probability that contiguous colors do not contrast enough. To identify the readability threshold and explore possible improvements, we performed a usability study at mockup level based on sample configurations of working cases. Ten experienced users of PCs with Microsoft Windows (an appropriate sample for a general purpose study [11]) were presented with a series of 40 screenshots (covering a broad range of credible scenarios), each displayed for a few seconds to evaluate what users grasp at a glance. After each screenshot, users were asked to report the number of alarms in the synthetic component for each color. The results of the test suggest a threshold of about 10–15, depending on whether the alarms originate in the same network element or in different elements [1]. Beyond the threshold, less than 50% of users provided correct answers.
Towards Customization to Other Application Domains

Though the initial goal was domain specific, the analysis of critical data led us to a design exhibiting a level of abstraction that permits applicability in different contexts as well, provided that the information to be notified satisfies a few basic characteristics (classification along the dimensions of class and severity, and description as tuples of values). However, while the threshold on simultaneity was not a problem for the networks we had to deal with (given their simple topology and number of elements), the scalability analysis suggests modifying some design choices to obtain a broader applicability and move towards a customizable tool. Our goal is a system notifying alerts or updates from different information channels, each potentially generating a high number of notifications. Given the scalability analysis results, the aspects that may impact display efficacy include the number of different channels and the update rate of individual channels. Furthermore, it would be appropriate to foresee hierarchization mechanisms, not only to mirror the hierarchical organization of some real application domains (e.g., news organized by topics) but also as a mechanism that breaks complex cases down into smaller, more manageable pieces wrt notification (e.g., traffic information on Italian highways may be dispatched on a region-based criterion).
Our first redesign step was hence aimed at studying new notification mechanisms for simultaneous alerts, oriented to information classification and hierarchization, generic enough and orthogonal, so that they can be selected and possibly combined to adapt the display to the specific channels to be dispatched. Due to space limitations, rather than providing a complete formal discussion (given in [1]), we restrict ourselves here to presenting two basic cases and a possible combination of them.
Introducing New Basic Mechanisms

The approach is based on the following assumptions: (1) to retain the previous design choices wrt synthetic component partitioning, (2) to organize the “information space” into simpler subsets with a low probability of exceeding the threshold, (3) to visualize the status of one subset at a time, and (4) to provide mechanisms to switch among subset statuses. As to assumption (4), the two chosen basic mechanisms are horizontal scrolling and display tabbing, each exhibiting pros and cons. Two sample scenarios are shown in Fig. 2. The display on the left monitors highway traffic: regions with incoming news are treated separately and shown in sequence, each presenting simultaneous news through the customary synthetic component partition. Arrows at the extremes of the synthetic component act as affordances for its behavior. The advantage is the absence of a limit on the number of individual regions, and hence the potential for notifying arbitrary numbers of simultaneous alerts; the disadvantages are the loss of the global overview and the lack of direct access to individual regions. The second solution is illustrated by Fig. 2b, showing notifications of incoming email organized in categories (e.g., inbox, spam, work, etc.) represented by tabs. This solution allows us to regain a glanceable global overview and guarantees direct access to individual categories. The cost to pay is an upper bound on the number of categories due to tab space limitations.
Fig. 2 Handling multiple notifications: (a) horizontal scrolling, (b) tabbing
Fig. 3 Combining horizontal scrolling and tabbing
Combining the Two Mechanisms

Both mechanisms allow us to deal with information spaces organized in categories. Furthermore, thanks to their natural orthogonality, they can be combined to deal with more complex information spaces, as illustrated by Fig. 3, which refers to a multichannel scenario in which individual channels are associated with tabs and are further structured into subcategories handled by horizontal scrolling of the display. The combined approach was evaluated by a test based on a dual-task scenario. The 15 participants (heterogeneous wrt age, sex and skill) were asked to read a text while simultaneously monitoring the peripheral display showing constantly changing news, traffic information, calendar alerts, stock information and incoming mail. At the end of the session, participants were given questions asking them to recall details of the text and of the information notified by the display. The test showed that the display did not affect the completion of the primary task (with a correctness rate of 100%). The correctness rate of the secondary task was above 80% for 66% of the questions, and around 60% for the remaining 33%. It is worth noticing, however, that critical information (on a red background) was always correctly recalled, and that the same behavior was also found in successive evaluation tests.
Towards a Personalizable Multichannel Notification Display

Given the promising results, our subsequent design step was aimed at exploiting the approach for designing a personalizable multichannel notification display. Notwithstanding the good test results, we decided to further revise the design in order to face two problems. Though formally correct, the abstractness of the display in Fig. 3 wrt category management may cause “similarity interference” (the problem
was discussed in [12], where the authors point out that when formatting visual displays for dynamically updating environments, the design has to make information highly distinctive across items in the display). In our case it is easy to recognize the currently displayed category, but no information other than the number and severity of incoming notifications is graspable at a glance for the other categories. Furthermore, the upper bound on the number of categories that users may decide to monitor would be a severe limitation for a broad applicability of the system. A unified solution for the two problems was found by redesigning the category area according to the basic ideas in Fig. 4. Categories are now visually represented by icons (Fig. 4a), which not only address similarity interference but also exploit perceptual sensory immediacy. Sensoriality is also exploited to provide a rough indication of the number N of simultaneous incoming notifications within a category: categories are associated not with a predefined icon but with a family of icons getting visually richer as the number of simultaneous alarms increases, as illustrated by Fig. 4b (the exact indication of N is provided by tagging the icon with N, as shown in Fig. 5a). In other words, this new category bar now combines most of the information previously split between the tabs and the synthetic component. Furthermore, as one may notice from the two side arrows in Fig. 4a, the upper bound on the number of categories is removed by making the category list scrollable.
Fig. 4 The new category bar: (a) its organization, and (b) an example of status sensitive icon
Fig. 5 The final design: (a) the notification component, and (b) the notification list
Now, since we aim at a system-initiated interaction, we need a mechanism to automatically cycle over the categories with incoming notifications. To this aim, we designed (and tested with users) four different solutions, which differ in the visual hints used to indicate the current category (tab metaphor vs. lens metaphor), in the relative movements of the categories and of the overlapping tab/lens, and in the scrolling direction of individual notifications within the detailed component of a single category (vertical vs. horizontal). A four-round dual-task experiment was conducted at mockup level, similar to the one discussed in “Towards Customization to Other Application Domains” (details are in [3]). Results again showed a lack of interference with primary tasks and the correct grasping of all critical information. Among the four proposed displays, the one with the best performance, also subjectively preferred by users, has a lens in a fixed position of the category bar, the category list scrolling from right to left, and the notification descriptions scrolling from bottom to top. The final design is illustrated by the sample display in Fig. 5a, where one may also note that the display has been enriched with a control palette on the left side (with icons for (1) pausing the automatic scrolling, (2) going directly to the notification source, and (3) deleting the notification from the list) and a scrollbar on the right side to interact directly with the notification list. Furthermore, the synthetic component has been replaced by two lightweight visual cues: a redundant coding for the notification criticalness, and an indication of the item position in the list of its category. This list can also be visualized on demand, to provide a quick overview and direct access to individual selected items (see Fig. 5b).
Conclusions and Future Work

We presented the design/evaluation process that led us to a glanceable notification aggregator prototype based on an interruption policy that makes the system work as a change-blind, ambient or alerting display depending on the criticalness of incoming notifications. It acts as an unobtrusive peripheral application that does not interfere with the user’s primary tasks, unless the alarm severity requires it. The system is implemented as a Java Swing application, according to a Presentation/Entity/Control architecture. The Control module is responsible for gathering information either from a local database (e.g., for Calendar alerts) through the MySQL JDBC driver, or from the Internet, using the JavaMail library for getting personal email via the IMAP protocol and the ROME RSS library for retrieving RSS feeds. The notification system is complemented by a back-office component allowing users to customize the aggregator [3]. It is possible, among other things, to add or modify categories, to specify RSS links, to fix category severities (low, medium, high) and to specify notification life length. Future research activities will focus, on the one hand, on deeper usability studies in real working settings and, on the other hand, on investigating intelligent mechanisms for dynamically adjusting notification criticalness.
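As an illustration of the feed-gathering role of the Control module, the following sketch polls an RSS channel with the ROME library; it is our own minimal example (using current ROME package coordinates rather than those available in 2011), and the feed URL is a placeholder:

```java
import java.net.URL;
import com.rometools.rome.feed.synd.SyndEntry;
import com.rometools.rome.feed.synd.SyndFeed;
import com.rometools.rome.io.SyndFeedInput;
import com.rometools.rome.io.XmlReader;

// Minimal sketch of polling one RSS channel with ROME; in the actual
// system, each entry would become a candidate notification for its
// category, with severity fixed by the back-office configuration.
public class RssChannelPoller {
    public static void main(String[] args) throws Exception {
        URL feedUrl = new URL("https://example.org/news/rss");  // placeholder feed
        SyndFeed feed = new SyndFeedInput().build(new XmlReader(feedUrl));
        for (SyndEntry entry : feed.getEntries()) {
            // Print channel title and item title for each incoming entry.
            System.out.printf("[%s] %s%n", feed.getTitle(), entry.getTitle());
        }
    }
}
```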
References

1. Angelucci, D. (2009) Un display periferico per la notifica di informazioni critiche: architettura e sviluppo del sistema. Dipartimento di Ingegneria Elettrica e dell’Informazione, Università degli Studi dell’Aquila, Master Thesis.
2. Angelucci, D., Di Paolo, S., Tarantino, L. (2009) Designing a glanceable peripheral display for severity based multiple alarm visualization. In L. Lo Bello & G. Iannizzotto (Eds.) Proc. of 2nd Int. Conf. on Human System Interaction, HSI’09, IEEE Catalog Number: CFP0921D-USB, ISBN: 978-1-4244-3960-7, Library of Congress: 2009900916, Track TT4.
3. Cardinali, A. (2010) Visualizzazione e notifica periferica di informazioni in contesto multisource: usabilità ed implementazione di un caso. Dipartimento di Ingegneria Elettrica e dell’Informazione, Università degli Studi dell’Aquila, Master Thesis.
4. Di Paolo, S. & Tarantino, L. (2009) A peripheral notification display for multiple alerts: design rationale. In A. D’Atri & D. Saccà (Eds.) Information Systems: People, Organizations, Institutions, and Technologies (pp. 521–528). Heidelberg: Physica-Verlag.
5. Gantz, J., Boyd, A., Dowling, S. (2009) Cutting the Clutter: Tackling Information Overload at the Source. http://www.xerox.com/assets/motion/corporate/pages/programs/information-overload/pdf/Xerox-white-paper-3-25.pdf
6. Matthews, T., Forlizzi, J., Rohrbach, S. (2006) Designing glanceable peripheral displays. EECS Department, University of California, Berkeley, Technical Report No. EECS-2006-113. http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-113.pdf
7. Matthews, T., Dey, A. K., Mankoff, J., Carter, S., Rattenbury, T. (2004) A toolkit for managing user attention in peripheral displays. In S. Feiner & J. A. Landay (Eds.), Proc. of the 17th Annual ACM Symposium on User Interface Software and Technology (pp. 247–256). Santa Fe: ACM.
8. McCrickard, D. S., Catrambone, R., Stasko, J. (2001) Evaluating animation in the periphery as a mechanism for maintaining awareness. In M. Hirose (Ed.) Proc. IFIP TC.13 Int. Conf. on Human Computer Interaction INTERACT’01 (pp. 148–156). Amsterdam: IOS Press.
9. McFarlane, D. (1999) Coordinating the interruption of people in human-computer interaction. In M. A. Sasse & C. Johnson (Eds.) Human Computer Interaction INTERACT’99 (pp. 295–303). Amsterdam: IOS Press.
10. Hess, S. M. & Detweiler, M. C. (1994) Training to reduce the disruptive effects of interruptions. Proc. of the Human Factors and Ergonomics Society 38th Annual Meeting (pp. 1173–1177). Santa Monica: Human Factors and Ergonomics Society.
11. Nielsen, J. (2000) Why You Only Need to Test With 5 Users. Alertbox, March 19, 2000. http://www.useit.com/alertbox/20000319.html
12. Rhodes, J. S., Benoit, G. E., Payne, D. G. (2000) Factors affecting memory for dynamically changing system parameters: implications for interface design. In Proc. of IEA 2000/HFES 2000 Congress (pp. 284–285). Santa Monica: Human Factors and Ergonomics Society.
13. van Dantzich, M., Robbins, D., Horvitz, E., Czerwinski, M. (2002) Scope: Providing Awareness of Multiple Notifications at a Glance. In M. De Marsico, S. Levialdi, E. Panizzi (Eds.) Proc. of the 6th Int. Working Conf. on Advanced Visual Interfaces AVI 2002 (pp. 157–166). New York: ACM Press.
A Dialogue Interface for Investigating Human Activities in Surveillance Videos
V. Deufemia, M. Giordano, G. Polese, and G. Tortora
Abstract In this paper we present a dialogue interface for investigating human activities in surveillance videos. The interface exploits the information computed by the recognition system to support users in the investigation process. The interaction dialogue is supported by a sketch language enabling users to easily specify various kinds of questions about both actions and states, as well as the nature of the response one wishes. The contribution of this research is twofold: (1) an intuitive interaction mechanism for surveillance video investigation, and (2) a novel question–answering model to support users during the information-seeking process.
Introduction
With the increasing need for security in today's society, surveillance systems have become of fundamental importance. Video cameras and monitors pervade buildings, factories, streets, and offices. Thus, video surveillance is a key tool for enabling security personnel to safely monitor complex and dangerous environments. However, even in simple environments, a video surveillance operator may face an enormous information overload: it is nearly impossible to monitor individual objects scattered across multiple views of the environment. It thus becomes vital to develop interfaces that make the investigation process over the overwhelming quantity of videos more intuitive and effective. In recent years, intelligent user interfaces (IUIs) have been investigated for multimedia applications, aiming to improve the efficiency, effectiveness, and naturalness of human-machine interaction by representing, reasoning, and acting on models of the user, domain, task, discourse, and media [9]. IUIs have to make the dialogue between the user and the system possible. Real interaction occurs when there is a need to ask for information during a computation. This need actually arises
V. Deufemia, M. Giordano, G. Polese, and G. Tortora Dipartimento di Matematica e Informatica, University of Salerno, Fisciano, Salerno, Italy e-mail:
[email protected];
[email protected];
[email protected];
[email protected] A. D’Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_24, # Springer-Verlag Berlin Heidelberg 2011
during the computation and cannot be shifted to the starting point of the computation process. This kind of interaction affects the computation, and only the interfaces able to realize it can be considered intelligent and able to manage the interaction between system and user. A problem arising in the application of this view is the need for a powerful language, like the natural language people use for dialoguing. Natural language interfaces are difficult to realize, as they pose hard problems related to natural language processing. The Question–Answering (Q/A) paradigm [13] is a suitable means to interact with video surveillance systems. Indeed, Q/A implements the investigative dialogue and supports guided investigation by foreseeing the user's actions. On the other hand, it is essential for interfaces to have human-like perception and interaction capabilities that can be exploited for effective human–computer interaction (HCI). In this paper we present a dialogue interface for investigating human activities in surveillance videos. The interface exploits the information computed by the recognition system to support users (security operators) in the investigation process. The interaction dialogue provides a sketch language enabling users to easily specify various kinds of questions about both actions and states, as well as the nature of the responses one wishes. The contribution of this research is twofold: (1) an intuitive interaction mechanism for surveillance video investigation, and (2) a question–answering model to support users during the information-seeking process.
Related Work
In the domain of video surveillance, much attention has been devoted to the problem of using visualization techniques for clustering and anomaly detection [2]. Little work has been devoted to the development of interfaces and interaction paradigms to support users in the investigation process. In recent years, many video retrieval frameworks for visual surveillance have been proposed [6]. They support various query mechanisms, because queries by keywords have limited expressive power. In particular, query-by-sketch mechanisms have been adopted to express queries such as "a vehicle moved in this way". An approach similar to the one presented in this paper has been developed by Katz et al. [7]. They integrate video and speech analysis to support question–answering about moving objects appearing in surveillance videos. Their prototype system, called Spot, analyzes objects and trajectories from surveillance footage and is able to interpret natural language queries such as "Show me all cars leaving the garage". Spot replies to such a query with a video clip showing only cars exiting the garage. In recent years, several video retrieval systems have been developed to assist the user in searching and finding video scenes. In particular, interactive video retrieval systems are becoming popular. They try to reduce the effect of the semantic gap, i.e., the difference between the low-level data representation of videos and the
higher-level concepts a user associates with videos. An important strategy to improve retrieval results is query reformulation, whereas strategies to identify relevant results are based on relevance feedback and interaction with the system. The system proposed in [1] combines relevance feedback and storyboard interfaces for shot-based video retrieval. The interaction dialogue proposed in our approach is a generalization of relevance feedback. Indeed, relevance questions are asked to capture the user's information need, whereas the question–answering process we propose implements a real dialogue between the user and the system, making the investigation process more effective.
A Video Understanding System Based on Conceptual Dependency
Video understanding aims to automatically recognize activities occurring in a complex environment observed through video cameras [5]. The goals of the proposed activity recognition system are to detect predefined violations observed in the input video streams and to answer specific queries about events that have already occurred in the archived video [3]. We exploit Artificial Intelligence techniques to enable the system to "understand" the events captured by the cameras. In particular, our approach is based on Schank's theory [11], a "non-logical" approach that has been widely used in natural language processing. Two main reasons led us to use this theory in video surveillance systems: first, the presence of well-studied primitives to represent details about actions; second, the possibility to use highly structured representations like scripts, which are a natural way to manage prototypical knowledge. Thus, we are able to associate different levels of meaning with a situation (the conceptualization, scene, and script levels), which allows us to deeply understand the current situation and to detect anomalies at different levels. In order to detect anomalies and to raise alert messages, the system tries to interpret a scene based on its knowledge about "normal" situations, using conceptual dependencies to describe single events and scripts for complex situations. Therefore, the proposed video-surveillance system is an intelligent system associating semantic representations with images. Figure 1 gives an overview of our video understanding system, which is composed of three main modules: detection and tracking of multiple objects, scene understanding, and reasoning.

Fig. 1 Overview of our system. (Diagram: web-cams and Internet images feed the object detection & tracking module; object information flows to the scene understanding module and expectations flow back; scene understanding draws on scripts and a knowledge base; the dialogue reasoning module raises alarms and exchanges questions and answers with the sketch-based interface.)

The module for tracking multiple objects is implemented using the codebook-based adaptive background subtraction algorithm proposed in [8]. We are concerned with tracking three kinds of objects: humans, vehicles, and packages. The reasoning module has two functions: understanding the situations that occur and managing the dialogue with the interface. The first task is accomplished
by the scene understanding module, whose aim is to associate a semantic representation with the content of the scenes. This module recognizes events and actions using the knowledge about standard events and situations stored in its knowledge base. In particular, the information on the tracked objects, i.e., trajectories and features (such as color, size, etc.), is synthesized by constructing conceptualizations, which are given in output to the next module. As an example, the following conceptualization expresses the fact that a given car moves from the garage entry to a parking place.
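The conceptualization itself appeared as a diagram in the original text, which was lost in this reproduction. A plausible rendering in Schank's conceptual dependency notation, using the PTRANS primitive (physical transfer of location), could look as follows; the concrete labels are our reconstruction, not the paper's:

```
car <=> PTRANS --o--> car --D--> (to: parking place, from: garage entry)
```

Here <=> links the actor to the primitive act, o marks the object case, and D the directive case giving source and destination.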
The scene understanding module also activates pertinent scripts and appropriate scenes from the script produced by the tracking module in order to identify possible anomalies. In particular, when a script is activated, the conceptualizations belonging to the scenes that might occur are sent to the tracking module, which then works in a predictive mode. To correctly understand the scene structure, we label various areas of the background, such as doors, elevators, ATMs, and so on. The conceptualizations are generated based on object properties and their interactions with these labeled background regions. The output of the understanding module is the set of scripts describing the occurred situations. The reasoning tasks of the scene understanding module are:
1. To understand events: the task of representing current events using the stored knowledge is accomplished both by reducing events to simple ones and by instantiating the objects in the conceptualization with actual data.
2. To reason about events: once an event has been interpreted using the existing knowledge, it is possible to make inferences and to fill in the information missing from occurred events.
The object detection and tracking module and the scene understanding module exchange information with each other: the first one passes information to be conceptualized to the second one and, in turn, the latter passes expectations to the first one. Expectations are events or actions that typically follow the last recognized event, making the low-level recognition of events easier. The task of managing the dialogue with the interface is carried out by the dialogue reasoning module, whose main task is to answer questions about occurred events. The function of this module will be treated in depth in the next section. The sketch-based user interface allows users to interact with the videos through a language that is natural and intuitive for the user and complete enough to support a dialogue between the user and the system.
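The exchange just described can be summarized by a minimal interface sketch; these Java interfaces are our illustration of the information flow, not the system's published API:

```java
import java.util.List;

// Sketch of the exchange described above: object information flows forward,
// expectations flow back so the tracker can work in predictive mode.
// All names here are illustrative assumptions.
interface ObjectDetectionAndTracking {
    List<String> trackedObjectInfo();                 // trajectories and features (color, size, ...)
    void receiveExpectations(List<String> expected);  // conceptualizations of events likely to follow
}

interface SceneUnderstanding {
    // Builds conceptualizations from tracked-object information and, when a
    // script is activated, returns the expected next events.
    List<String> understand(List<String> objectInfo);
}
```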
Dialogues for Investigation
The classical investigation process in multimedia retrieval is accomplished through the interactive search process shown in Fig. 2: the user repeatedly submits a query to the system based on the topic under investigation, the previous queries, and the results obtained so far.

Fig. 2 Interactive search: the human (re)formulates the query based on topic, query, and/or results

The introduction of relevance feedback has made it possible to improve this process. Relevance feedback is the method of reformulating and improving the original search request based on information from the user about the relevance of the retrieved data [10]. However, this method suffers from several limitations, such as the need to browse the results in order to give feedback to the system. If we think of both the user and the system as interrogative reasoners, the interface can be interpreted as an oracle for both: the system does not know who the user is, but it is sure that s/he tells the truth; analogously, the user trusts the system, believing that it tells the truth. Therefore, the interface represents the system for the user and, in turn, the user for the system.
Metaquestioning. Question–answering systems (henceforth Q/A) have the goal of finding and presenting answers to the questions the user asks. In these systems, the interface has the role of managing a common language between user and system, enabling the former to ask questions and the latter to provide answers. However, Q/A systems do not realize a real dialogue, because the user can only ask questions and the system can only answer. As observed by Driver in [4], it is possible and often desirable that a question be followed by another one, as in the following example:
q1 What happened yesterday?
q2 Would you like a short or a long response?
Question q2 in the previous example is called a metaquestion. Metaquestions occur between an inquirer (questioner) asking a first-order question and a responder (answerer/metaquestioner) answering through a metaquestion. The importance of metaquestions in the context of Q/A systems is due to the fact that they can be used to overcome obstacles to answering first-order questions and, hence, they have an active role in the Q/A process itself. In a sense, metaquestions can be seen as a generalization of feedback.
Metareasoning. Metaquestioning involves many features deriving from the fact that it is related to different research areas, such as dialogue theory, problem solving, and metareasoning. From the point of view of dialogue theory [12], metaquestioning can be seen as the general process underlying the information-seeking type of dialogue, whose goal is to exchange information and where the goals of user and system are to acquire information and to give information, respectively. According to this view, it turns out that metaquestioning is a version of the analytic method involving two reasoners: the user and the system.
Sketch-Based Dialogues for Human Activity Investigation
A problem arising in the development of dialogue interfaces is the need for a powerful language, like the natural languages people use for dialoguing. We propose the use of 2D sketches as a dialogue language for investigating human activities in surveillance videos. The language is not as versatile as natural languages, but it allows users to query the system in a natural way. A sketch language is formed by a set of sketch sentences over a set of shapes from a domain-specific alphabet. To support the question–answering process, the sketch language should allow users to specify:
1. The kind of object to be retrieved (the unknown) and the constraints on it (e.g., a person in a corner of the room).
2. Actions and states involving the scene objects (e.g., a person opening a door, a person waiting for the lift).
3. Temporal information on the events to be investigated (e.g., the time interval of a theft).
4. Elements of the metaknowledge (e.g., some properties of the response).
In the following, we describe the symbols composing the dialogues between users and the system; a possible internal representation of such questions is sketched below, after the description of the language symbols.
Language symbols. The user can associate a sketched symbol with each kind of object identified by the object detection algorithm, and will use it to refer to such objects in the context of questions. As an example, if the detection algorithm is able to categorize the mobile objects into people and packages, then during the specification of questions the user can refer to them by using the sketched symbols in Fig. 3. In case the objects involved in the question are part of the scene, the user can select them by encircling them with a hand-drawn circle.
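As an illustration of elements (1)-(4) above, the following sketch shows one plausible internal representation of a parsed sketch question. The paper does not specify its implementation, so every name here (SketchQuestion, ObjectKind, and so on) is an assumption of ours:

```java
import java.util.List;

// Illustrative-only representation of a parsed sketch question.
enum ObjectKind { PERSON, PACKAGE, VEHICLE, SCENE_ELEMENT }

/** A detected or sketched object taking part in a question. */
class SketchObject {
    ObjectKind kind;
    String label;          // e.g., "person#3", or a scene element circled by the user
}

/** One first-order question built from the sketch language. */
class SketchQuestion {
    SketchObject unknown;          // (1) what to retrieve, marked with "?" in the sketch
    String action;                 // (2) e.g., "picks-up", "leaves", "waits-for-lift"
    List<SketchObject> involved;   // (2) the other objects taking part in the action
    long fromMillis, toMillis;     // (3) time interval circled on the timeline
    String responseForm;           // (4) metaknowledge, e.g., "short" vs. "long" response
}
```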
Fig. 3 Sketched symbols of the objects detected in the video scene (a) and (b), and the sketched symbols representing the actions of a person (c) picking up and (d) leaving a package
Actions and states. Useful information during the investigation process lies in the actions involving the detected objects and in the states they may be in. As for relationships, they depend on the actions that the fact-generation algorithm is able to infer. As an example, the action "a person picks up a package" is described by the sketch in Fig. 3c, while the sketch in Fig. 3d describes the action of leaving a package.
Temporal information. Information regarding time intervals is specified by drawing a circle on the timeline at the bottom of the window.
Metaquestions. Sketches are also used by the system to represent questions for the user. We have defined sketch symbols for a set of metaquestions, such as "how long should the response be?" and "which is the path followed by the person?". The example in the next section shows some metaquestion symbols.
Unknown. The unknown of the question is indicated with a question mark on top of the sketched symbol. As an example, the unknown of the question in Fig. 4 is the walk path of the person enclosed with a circle.
The Sketch-Based Interface
We have built a prototype video surveillance system with a sketch-based interface answering questions about video surveillance footage taken in university offices, corridors, and halls. The scenes contain both persons and packages. A typical segment of the video footage shows persons leaving and entering offices, persons discussing in the hall, and persons putting down and picking up packages in the offices and corridors. Figure 4 shows the system interface. The main window contains the (background) image of the selected camera, on which the user can draw the sketch representing a question and the system can reply with another question. As said above, the timeline at the bottom of the image is used to specify the temporal information of the question, as we show in the following example. The frame at the bottom of the interface (Fig. 4b) contains the images obtained from the user's question. A storyboard containing the previous investigations is on the right of the interface (see Fig. 4c).
Fig. 4 Sketch-based interface showing (a) the (background) image of the selected camera, (b) the images resulting from a previous query, (c) the storyboard of the previous investigations
Conclusions
We have presented a novel interface for investigating human activities in surveillance videos. The interaction with the surveillance system consists of sketch dialogues allowing users to easily specify various kinds of questions about occurred events. The presented proposal contains two innovative solutions: the use of sketching for representing the dialogue between the user and the system, and the use of question–answering to support users in the investigation process.
References
1. Christel, M. G., and Yan, R. (2007) Merging Storyboard Strategies and Automatic Retrieval for Improving Interactive Video Search, in Proceedings of CIVR 2007, 486–493.
2. Davidson, I., and Ward, M. (2001) A Particle Visualization Framework for Clustering and Anomaly Detection, in Proceedings of the KDD Workshop on Visual Data Mining.
3. Deufemia, V., Giordano, M., Polese, G., and Vacca, M. (2007) A Conceptual Approach for Active Surveillance of Indoor Environments, in Proceedings of DMS'07, 45–50.
4. Driver, J. L. (1984) Metaquestions, Nous 18, 299–309.
5. Fusier, F., Valentin, V., Brémond, F., Thonnat, M., Borg, M., Thirde, D., and Ferryman, J. (2007) Video Understanding for Complex Activity Recognition, Journal of Machine Vision and Application 18: 167–188.
6. Hu, W., Xie, D., Fu, Z., Zeng, W., and Maybank, S. (2007) Semantic-Based Surveillance Video Retrieval, IEEE Trans. on Image Processing 16(4): 1168–1181.
7. Katz, B., Lin, J. J., Stauffer, C., and Grimson, W. E. L. (2003) Answering Questions about Moving Objects in Surveillance Videos, in Proceedings of the AAAI Spring Symposium on New Directions in Question Answering, 145–152.
8. Kim, K., Chalidabhongse, T. H., Harwood, D., and Davis, L. (2004) Background Modeling and Subtraction by Codebook Construction, in Proceedings of the IEEE International Conference on Image Processing, 3061–3064.
9. Maybury, M. T., and Wahlster, W. (1998) Intelligent User Interfaces: An Introduction, Readings in Intelligent User Interfaces, 1–14, Morgan Kaufmann.
10. Salton, G., Fox, E. A., and Voorhees, E. (1985) Advanced Feedback Methods in Information Retrieval, Journal of the American Society for Information Science 36(3): 200–210.
11. Schank, R. C., and Abelson, R. (1977) Scripts, Plans, Goals and Understanding, Lawrence Erlbaum Associates.
12. Walton, D. (2000) The Place of Dialogue Theory in Logic, Computer Science and Communication Studies, Synthese 123: 327–346.
13. Wisniewski, A. (1995) The Posing of Questions: Logical Foundations of Erotetic Inferences, Kluwer.
The Effect of a Dynamic User Model on a Customizable Mobile GIS Application
L. Paolino, M. Romano, M. Sebillo, G. Tortora, and G. Vitiello
Abstract In the present paper we analyze the role that a dynamic user model may play in simplifying the formulation and solving of queries in an existing audio-visual map interaction technique conceived for mobile devices. We have re-designed the system functionalities devoted to gaining summary information about off-screen data and to suggesting the best direction towards a target. We show that customizing a query on the basis of the current user profile may give the user the advantage of simpler queries and may avoid successive steps meant to refine the results.
Introduction
The goal of our recent research has been to design multimodal interaction techniques which take into account the usability requirements arising from the usage of specific mobile technology, such as mobile devices and PDAs. Framy was initially introduced as a visualization technique representing an appropriate trade-off between the zoom level needed to visualize the required features on a map and the amount of information which can be provided through a mobile application. It exploits the visualization of semi-transparent colored frames along the border of the device screen to provide information clues about different sectors of the off-screen space [4]. Subsequently, the aim of widening the system's accessibility, including its use within uncomfortable, low-light environments, led us to enhance Framy with alternative interaction modes, which exploit the tactile and auditory channels to convey the same information clues as those visualized on the frame sectors of the device interface [5]. In the present paper we describe the transformation of Framy into an adaptive user interface, which dynamically takes into account the user's preferences and needs to visualize clues about query results. The rapid diffusion of wireless technologies,
L. Paolino, M. Romano, M. Sebillo, G. Tortora, and G. Vitiello Dipartimento di Matematica e Informatica, Università di Salerno, Fisciano, Salerno, Italy e-mail:
[email protected];
[email protected];
[email protected];
[email protected]; gvitiello@ unisa.it
and mobile services meant to enhance people's experiences in their everyday activities, has gradually changed users' perception of Quality of Service (QoS), which has become a crucial issue for mobile service providers, designers, and developers. Discussions of QoS typically deal with system response time, availability, security, and throughput [3], but QoS can also be considered in terms of the quality of the user's experience [1]. In that respect, we have investigated the contribution that users' preferences and contextual needs may bring to the system performance, and we have accordingly re-designed the system functionalities devoted to both gaining summary information about off-screen data and suggesting the best direction towards a target. Customizing a query on the basis of the current user profile gives the user the advantage of simpler queries, avoiding successive steps meant to refine the query results. Moreover, the interaction with the small screen of a mobile device is improved, and a reduction of the device workload in determining the query results is also gained, due to the use of filters that notably cut the involved datasets. The remainder of the paper is organized as follows. In "Embedding a Dynamic User Model for Personalized Query Results in Framy" the dynamic user model is described and its integration in Framy is specified. "A Scenario Featuring the User Profile" describes a typical scenario, which illustrates the use of the adaptive version of the system in map navigation and feature search tasks. In "A Comparative Usability Study" we present a comparative usability study meant to evaluate satisfaction and efficiency with and without the personalization module. In "Concluding Remarks" we give some final remarks.
Embedding a Dynamic User Model for Personalized Query Results in Framy
The query results obtained through the Framy multimodal technique are based on the computed color/pitch intensity, which in turn depends on the expected output, e.g., distance and number. In order to also consider users' preferences and contextual needs, we introduce a new rank value in the formulation of the intensity function, which takes into account long-term as well as short-term user interests. In particular, we specify a formula which computes the total relevance between a POI a and a user model um in terms of user-oriented weights assigned to each item of interest. Such a formula is based on two components, the former related to the long-term interests, the latter related to the short-term ones.
As for the first component, it can be based on two frameworks, which consider a classification of the POI domains and characteristics mapped onto the user's general interests, respectively. Initially, when building a user profile, the system stores the score the user assigns to each domain as well as to its derived subcategories. For our purpose, we have considered a classification of 35 POI categories. When a user usr initially registers to the services, s/he is asked to assign a score S_CAT(usr) to each general category CAT, and a score S_SCi(usr) to each of its subcategories SC_i. For every domain, information concerning the scores assigned by the user to specific item subcategories is stored as a matrix, where rows correspond to the subcategories and columns correspond to users (see Table 1).

Table 1 User-specified scores for CAT item subcategories

SubCAT   usr1    ...   usrm
SC1      val11   ...   val1m
...      ...     ...   ...
SCn      valn1   ...   valnm

As each POI has a pre-assigned subcategory, selection with respect to this reference framework is immediate: each POI a is assigned the score associated with the corresponding specific category in the corresponding user profile. Thus, the relevance of a POI a for a user model um, classified as belonging to the subcategory SC_i, corresponds to the score assigned to SC_i by the user usr. Namely:

\[ \mathrm{Cat}_{um}(a) = S_{SC_i}(usr) \qquad (1) \]
A similarity value sim(a, CAT) is then computed between the POI a and the general category CAT to which SC_i belongs, by exploiting the cosine similarity formula for the vector space model. Consequently, the relevance between a POI a and all the general categories of a user model um is computed using the following formula:

\[ \mathrm{SubCat}_{um}(a) = \frac{\sum_{i=1}^{35} \mathrm{sim}(a, CAT_i)\, S_{CAT_i}(usr)}{\sum_{i=1}^{35} S_{CAT_i}(usr)} \qquad (2) \]
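Equations (2) and (3) rely on the standard cosine similarity of the vector space model, which, for two term-weight vectors x and y, reads (our restatement of a textbook formula, not spelled out in the paper):

\[ \mathrm{sim}(x, y) = \frac{x \cdot y}{\lVert x \rVert \, \lVert y \rVert} = \frac{\sum_j x_j y_j}{\sqrt{\sum_j x_j^2}\,\sqrt{\sum_j y_j^2}} \]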
The second reference framework for long-term interests further specializes the user profile. It is based on a set of user-specified keywords, which are weighted on the basis of their relevance for her/him. For each user usr, these keywords are stored as a term weight vector k_usr. Again, the relevance between the POI a and the keywords of a user model um is given by the cosine similarity of the vector space model as follows:

\[ \mathrm{Key}_{um}(a) = \mathrm{sim}(a, k_{usr}) \qquad (3) \]
Thus, the long-term interest LT(usr, a) of a user usr in a given POI a can be computed by combining formulas (1)–(3) as follows:

\[ LT(usr, a) = \frac{w_1\, \mathrm{Cat}_{um}(a) + w_2\, \mathrm{SubCat}_{um}(a) + w_3\, \mathrm{Key}_{um}(a)}{w_1 + w_2 + w_3} \]

where w_1, w_2, w_3 are the weights representing the importance assigned to the three relevance measures, referring to the specific category, the general categories, and the user's keywords, respectively. The value calculated on the basis of the long-term user interests is successively updated by combining it with a short-term rank, which
is based on the feedback the user provides during the exploration of the location, and with a proximity value, which considers the user's current position. Short-term interests are computed by means of the user's feedback about the most recent geographic area s/he has been visiting and the pieces of information s/he has gained meanwhile. That is to say, the user provides positive or negative feedback about the information s/he receives, and a set of representative terms t_i, with associated weights f_i, is extracted from it. This information is processed and the resulting value is a term weight vector t. Let k be the number of pieces of information most recently received by the user. Given a POI a, we define rat_i = sim(a, t_i) as the similarity degree between the POI and the i-th component of vector t. Then, we define the current short-term interests of that user as:

\[ r(a) = \frac{\sum_{i=1}^{k} f_i\, rat_i}{k} \]
Another element considered important to further personalize the user's dialogue with Framy is her/his current location. As a matter of fact, it has been observed that POIs related to easily reachable places may be more interesting than those far away. For this reason, we have decided to consider the proximity value PV(usr, a) of user usr from POI a, computed as the inverse of the distance between the user's current position and the closest location where a can be found. By combining long-term interests, short-term interests, and proximity values, the total relevance of a POI a to a user usr in our model is computed as follows:

\[ Int(usr, a) = \frac{w_1\, \mathrm{Cat}_{um}(a) + w_2\, \mathrm{SubCat}_{um}(a) + w_3\, \mathrm{Key}_{um}(a) + w_4\, r(a) + w_5\, PV(usr, a)}{w_1 + w_2 + w_3 + w_4 + w_5} \]

where w_4 and w_5 are the weights representing the importance given to the short-term interests and to the proximity value in the given model. The total relevance computed for each POI has been exploited to re-formulate the function g in Framy, as follows:

\[ g(U_i) = \frac{\sum_{j=1}^{m} Int(usr, a_j)}{m} \]
where m is the number of POIs in U_i, the off-screen region corresponding to Sec_i. Short-term interests, just like the current location, tend to correspond to temporary information needs, whose interest to the user wanes after the connection session. As an effect, the visualized intensity of Framy sectors will dynamically vary, even for the same user, better capturing her/his contextual requirements. It is worth mentioning that an approach similar to the one presented in this paper is suggested in [2]. There, the authors define an adaptive user profile model based on a memory model, but the work aims to offer users different Web contents based on their particular interests. Differently from our approach, it modifies the short- and long-term interests on the basis of the frequency of visits.
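To make the computation above concrete, the following minimal Java sketch, our illustration rather than Framy's actual code, evaluates the total relevance Int(usr, a) and the sector intensity g(U_i):

```java
import java.util.List;

/** Illustrative-only sketch of the relevance computation defined by the
 *  formulas above; field and method names are our assumptions. */
class UserModelScoring {

    // Weights w1..w5 for the five relevance components (values are examples).
    double w1 = 1.0, w2 = 1.0, w3 = 1.0, w4 = 1.0, w5 = 1.0;

    /** Total relevance Int(usr, a): weighted mean of the long-term components
     *  (Cat, SubCat, Key), the short-term rank r(a), and proximity PV(usr, a). */
    double totalRelevance(double cat, double subCat, double key,
                          double shortTerm, double proximity) {
        return (w1 * cat + w2 * subCat + w3 * key
                + w4 * shortTerm + w5 * proximity)
               / (w1 + w2 + w3 + w4 + w5);
    }

    /** g(Ui): average relevance of the m POIs falling in the off-screen
     *  region Ui; this value drives the color/pitch intensity of sector i. */
    double sectorIntensity(List<Double> relevances) {
        if (relevances.isEmpty()) return 0.0;
        double sum = 0.0;
        for (double r : relevances) sum += r;
        return sum / relevances.size();
    }

    /** Proximity value PV(usr, a): inverse of the distance between the user
     *  and the closest location where the POI can be found. */
    double proximity(double distance) {
        return 1.0 / distance;
    }
}
```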
A Scenario Featuring the User Profile
The user is a 26-year-old lady. She is interested in clothes shopping, has a strong passion for tea shops, and prefers seafood restaurants. Currently, she is looking for entertainment, such as restaurants, pubs, and wine shops, located at most 3 km around her, that could match her profile. In order to support her task, the system should determine the off-screen areas the user might be most interested in visiting, by coloring each frame portion with a proper intensity. The initial subdivision of the screen is proportional to the running zoom level. Starting from this setting, an outside-screen buffer is applied (see Fig. 1). The frame displayed in Fig. 1 also shows the visual feedback the user gets. It indicates that the most interesting area according to the user profile, located within a 3-km range, can be identified by moving towards the Northeast. By touching the border of the screen along the frame, a sound is produced with a pitch intensity proportional to the corresponding color intensity.

Fig. 1 Framy visual and audio feedback for off-screen features based on user profile

Besides the initial setting, the system can then answer specific user requests. Figure 2a illustrates how the system sets the color portions of the frame when the user looks for a specific typology of entertainment, i.e., restaurants located within 3 km. Such feedback improves the user's awareness of the distribution of restaurants and helps her recognize the right direction to follow. Moreover, as the user moves in the given direction, the map focus changes accordingly, and she may refine her search. Figure 2b depicts the output of the same query, performed by taking into account also the user's profile: the system colors more intensely the portions corresponding to the off-screen areas where she can find the highest number of restaurants matching her gastronomic interests, i.e., seafood restaurants.

Fig. 2 (a) Framy visual feedback to a specific request. (b) Customized enhanced feedback

A further capability of Framy consists of guiding users along a given direction to reach a specific goal. Let us suppose that the user is interested in locating the restaurant closest to her current position. The produced result is unique, and only one portion is colored (see Fig. 3a). Of course, the finer the screen subdivision set by the user, the better the indication on how to reach the target. Moreover, the user may zoom the map, which both captures more details and increases the number of sectors accordingly. Finally, let us suppose the user is driving, and hence in an awkward situation for looking carefully at the screen. Her goal is to determine the sector with the highest number of petrol stations accepting the credit card she has specified in her profile. The audio output modality may help in this case. The application will analyze and filter the petrol station distribution within each sector Sec_i, and apply a visualization/sonification intensity proportional to it. Figure 3b shows the feedback the user gets for this query: the frame portion with the highest color/pitch intensity is located Southwest, due to the large number of petrol stations placed within the corresponding off-screen portion, which at this time contributes to the final count.

Fig. 3 (a) Identifying a specific off-screen location. (b) Visual and audio feedback for the filtered feature distribution
A Comparative Usability Study
In this section we describe the comparative usability study we carried out on Framy, evaluated with and without the personalization module.
The Tasks: The first task, from now on Task1, was based on the scenario where the frame provides an idea of the distribution of POIs all around the subject. It consisted of searching for all the hotels located at most one kilometer from the current position. After completing the task, each subject indicated the sector with the highest perceived intensity and provided an evaluation of each hotel in the corresponding map region. The second task, from now on Task2, consisted in locating the closest museum. Again, after completing the task, each subject provided an evaluation of the museum addressed by Framy.
Independent and Dependent Variables: The independent variable in the experiment was the application the subjects used to perform the tasks, namely Framy with the personalization module (P) or Framy without the personalization module (NP). As for Task1, the usage of P assigned a weight to each hotel; such values raise or lower the contribution that the hotel provides to the aggregative function and hence to the intensity of each sector. In the same way, as for Task2, the usage of P modifies the real distance of each museum on the basis of the preferences of the subject performing the search. As for the dependent variables, we decided to measure satisfaction in terms of the appreciation of the navigated POIs. Basically, we asked subjects to assign a score r_i to each POI after browsing the corresponding Web site. Scores could be real values in agreement with the following scale: Very Good = 5, Acceptable = 3, Bad = 0. The degree of satisfaction was then computed in the following way:

\[ \mathrm{Satisfaction} = \frac{\sum_{j=1}^{m} r_j}{m} \]

where m is the number of sites visited for the selected sector (the number of hotels contained in the sector for the first task). Efficiency was computed for Task1 by taking into account the time required to find the first five hotels evaluated better than 4:

\[ \mathrm{Efficiency} = \frac{\sum_{i=1}^{5} t_i}{5} \]

where t_i is the time to find the i-th object with a score higher than 4. Efficiency for Task2 was evaluated by fixing a threshold of 50 and asking subjects to assign a score to each artifact featured in the addressed museum; we calculated the efficiency by measuring how long it took to reach the threshold.
Subjects and Groups: We involved 40 subjects, randomly distributed over four groups of ten subjects named G1, C1, G2, and C2. G1 and C1 are the experimental and control groups which performed Task1 by using P and NP, respectively. Likewise, G2 and C2 are the experimental and control groups which performed Task2 by using P and NP, respectively.
Data and Discussion: By comparing the satisfaction average value of the experimental group G1 to that of the control group C1 on Task1, it is possible to notice that the average value of the first (4.1) is higher than that of the second (3.5). Basically, this means that when Framy is used to find out which zones are more interesting according to the user's interests, the personalization module yields an improvement of about 15% ((4.1 − 3.5)/4.1 ≈ 0.15) with respect to the non-personalized approach. This improvement is even more evident when Framy with the personalization module is used to look for specific points of interest: by comparing the satisfaction mean value of the experimental group G2 (3.40) to that of the control group C2 on Task2 (2.31), the increase is close to 32% ((3.40 − 2.31)/3.40 ≈ 0.32).
As for efficiency, the data show significant improvements for both tasks. In the first case, the experimental group was 36% quicker in completing the task than the control group: it took 40.1 min on average for the experimental group against 62.4 min for the control group. In the second case, the improvement was less evident but still significant: the experimental group G2 needed 20% less time than the control group C2. From a numerical point of view, subjects in G2 performed the task in 15.5 min on average, whereas it took 19.3 min for C2 subjects.
Tests and Hypotheses: In order to prove that the improvements were not due to chance, we applied a one-tail t-test with a p-value