Advances in Multiple Criteria Decision Making and Human Systems Management: Knowledge and Wisdom In honor of Professor Milan Zeleny
Edited by
Yong Shi School of Management, Chinese Academy of Sciences, China
David L. Olson Department of Management, University of Nebraska, USA
and
Antonie Stam Department of Management, University of Missouri, USA
Amsterdam • Berlin • Oxford • Tokyo • Washington, DC
© 2007 The authors. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without prior written permission from the publisher. ISBN 978-1-58603-748-2 Library of Congress Control Number: 2007927107 Publisher IOS Press Nieuwe Hemweg 6B 1013 BG Amsterdam Netherlands fax: +31 20 687 0019 e-mail:
[email protected] Distributor in the UK and Ireland Gazelle Books Services Ltd. White Cross Mills Hightown Lancaster LA1 4XS United Kingdom fax: +44 1524 63232 e-mail:
[email protected] Distributor in the USA and Canada IOS Press, Inc. 4502 Rachael Manor Drive Fairfax, VA 22032 USA fax: +1 703 323 3668 e-mail:
[email protected] LEGAL NOTICE The publisher is not responsible for the use which might be made of the following information. PRINTED IN THE NETHERLANDS
Advances in Multiple Criteria Decision Making and Human Systems Management Y. Shi et al. (Eds.) IOS Press, 2007 © 2007 The authors. All rights reserved.
Preface

A human being should be able to change a diaper, plan an invasion, butcher a hog, con a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects. (Robert A. Heinlein) Motto of Milan Zeleny
This volume, edited as a Festschrift in honor of Prof. Milan Zeleny, reflects and emulates his unmistakable legacy: the essential multidimensionality of human and social affairs. This multidimensionality appears at many levels in this volume: 1. Multidisciplinarity of the contributed papers, 2. Multinationality of their authors, extending even to the editors and the publisher, 3. Multicultural and multilevel exposition, ranging from empirical studies to philosophical foundations. Generally, these papers can be divided into three parts: Multiple Criteria Decision Making; Social and Human System Management; and Information, Knowledge and Wisdom Management. Just going through some of the keywords in the titles of the individual contributions to this volume is an adventure in multidimensionality: multi-value decision making, multicriteria communication, multi-objective EMI, multicriteria analysis of OECD, digest wisdom, enlightenment, collaborate for win-win, value focused management, highly intelligent nation, KM pragmatism, human ideals, outsourcing risks, mobile technology, intelligent knowledge, purposeful coordination of action, high technology R&D, de novo programming, continuous innovation, competence set analysis, knowledge sharing, wisdom shaped management, socio-technical enablers, informed intent. Such new words promise fresh insights, affirm that a new era has arrived, and invite the reader to the challenges of integration and synthesis, to knowledge and wisdom. It is the recognition of multidimensionality in decision making, economics, optimization, systems, cybernetics and the pursuit of knowledge that bears the stamp of Zeleny's specific contributions. His life-long dedication to multidimensionality has produced an ultimate multidimensional being, living in an academic "multiverse", functioning in a boundaryless world of all continents, cultures and countries. 
He lost all respect for impermeable boundaries and artificially imposed limits when he crossed the first such border in 1967: from his native Czechoslovakia (now non-existent) to his beloved United States of America. For this volume we have invited top researchers and scientists from an amazing variety of countries, ranging from the U.S.A., China, Korea, New Zealand, Singapore and Taiwan, to England, Greece, Finland, Israel, Italy, Poland, Portugal, Sweden and Slovakia. Even so, this is only a small sample of all the countries Milan has visited and
worked in. He has become a truly global professor, with ongoing appointments on four continents, expanding his activities into a growing circle of areas, cultures, countries and friends. Together with multidimensionality naturally come integration, cooperation and systems – the other side of the coin of recognized and appreciated multidimensionality. Finally, with any integrative and collaborative effort come knowledge and wisdom, the other natural pursuits of people who integrate and collaborate across all boundaries, real, self-imposed or virtual. Knowledge and wisdom are the pursuits he is trying to mold into respectable academic areas, moving beyond their metaphorical or habitual traditions of usage. It all adds up to human systems management as the transdisciplinary expression of the humane pursuit of human interests through systems. It is no coincidence that his brainchild, the journal Human Systems Management, is celebrating its first 25 years, while Milan celebrates 65 years of his still accelerating quest for the new, the unknown and the original in social systems. Reading through the titles of his publications, one can see that there is very little coincidence in Milan's work: it all unfolds from many different starting directions, evolves and begins to "come together" in the end – as if it had all been carefully designed and crafted, all planned. It probably was, although the assembly of his impressive body of knowledge and wisdom has been executed spontaneously and with apparent ease. Nobody ever saw Milan working. He is always enjoying life, enjoying good food and drink, going to interesting places and cherishing the ever-evolving company of even more interesting men and women from all around the globe. Just as he escaped the ever-tightening borders of the ever-diminishing Czechoslovakia, he has continued "escaping" ever since: always ahead of the curve, pushing the envelope, outside the box. 
Milan has contributed to so many fields and areas that most of us, being specialists, do not know the true extent of his work. He is certainly not a one-topic man: he has become known to many non-intersecting groups and societies, often himself being their sole intersection. Just consider: artificial life, autopoiesis and tradeoffs-free resource allocation. His contributions to all these fields are always original, fundamental and controversial, yet immediately recognizable for their emphasis on multidimensionality, contextual dependency, dynamics and pragmatic utility. Zeleny clearly abhors the "mainstream" of anything; he avoids it like a vacuum: mainstream thinking, mainstream research, mainstream values, mainstream life. He escaped the "mainstream" a long time ago and shows no intention of returning. He even escapes the fields he himself established or founded – as soon as they show the deadly signs of becoming "mainstream". Mainstream thinking, he says, invites mediocrity, routine, copying and self-approval: perhaps useful and necessary to some, but so unexciting, boring and unchallenging to boundary-crossing seekers. The very definition of "mainstream" implies: within the boundaries, accepted by the majority, mass behavior with no individuality, no surprises and certainly no inner rewards. So, in this Festschrift we also honor a challenge. What are we to think of a man who initiated, introduced or contributed to not only multiple criteria decision making, the multiple criteria simplex method, linear multiobjective programming, de novo programming, eight concepts of optimality, compromise programming, knowledge-based fuzzy sets, knowledge management, self-producing social systems, spontaneous social orders, high technology management, the theory of the displaced ideal, conflict dissolution, multidimensional radar diagrams, osmotic growths, inorganic precipitates, etc., but also historical studies on Trentowski's Cybernetyka, Bogdanov's Tectology, Leduc's Synthetic Biology and Smuts' Holism, as well as original contributions to management, strategy, systems sciences, cybernetics, autopoiesis, artificial life, game theory, APL simulations, social judgment theory, economics of interactions, tradeoffs-free economics, and so on? How do we honor such a student, teacher and man? It seems to us that we can do so only through a book like this one: a book that is as diverse and as multidimensional as the man and his work. Finally, we would like to express our sincere thanks to Yong Shi's doctoral students at the Chinese Academy of Sciences, Xingsen Li, Rong Liu and Zhongbin Ouyang, for their hard work on the preparation of this book. We also acknowledge grants from the National Natural Science Foundation of China (#70621001, #70531040, #70501030, #70472074), 973 Project #2004CB720103 of the Ministry of Science and Technology, China, and BHP Billiton Co., Australia, for their support in the preparation of the book.

Yong Shi, Beijing, China
David L. Olson, Lincoln, Nebraska, USA
Antonie Stam, Columbia, Missouri, USA
Biosketch of Milan Zeleny

Milan Zeleny, Professor of Management Systems at Fordham University, New York City, recently published Information Technology in Business (Thomson International) and co-edited New Frontiers of Decision Making for the Information Technology Era (World Scientific). His current book, Human Systems Management: Integrating Knowledge, Management and Systems, is to be followed by The BioCycle of Business: Managing Corporation as a Living Organism, Roads to Success: Bata Management System and other works-in-progress, including Knowledge of Enterprise and The Art of Asking Why: Foundations of Wisdom Systems. His previously published books include Multiple Criteria Decision Making (McGraw-Hill), Linear Multiobjective Programming (Springer-Verlag), Autopoiesis, Dissipative Structures and Spontaneous Social Orders (Westview Press), MCDM – Past Decades and Future Trends (JAI Press), Autopoiesis: A Theory of Living Organization (Elsevier North Holland), Uncertain Prospects Ranking and Portfolio Analysis (Verlag Anton Hain), Multiple Criteria Decision Making (University of South Carolina Press), Multiple Criteria Decision Making: Kyoto 1975 (Springer-Verlag), and others. Milan Zeleny was born on 22 January 1942 in the small village of Klucké Chvalovice in Bohemia. After studies at the Prague School of Economics (Ing., 1964), military service in Prague, and a few years at the Czech Academy of Sciences, he left communist Czechoslovakia in 1967 to pursue a Ph.D. in Operations Research and Business Economics at the University of Rochester, where he first earned an M.S. in Systems Management in 1970. Milan's roots are in the literary family of Vácslav and Vladivoj Zelený. His father, Josef Zelený, founded one of the first organizational consulting firms in Prague in the 1930s and 1940s ("ZET-organizace"). After the communist takeover in 1948, his father became a coal miner (in Kladno) and his uncle worked in the uranium mines of Jachymov. 
Milan's fate as an exile from his own country and a global professor in his later years was thus sealed. After his studies in the US, he held a string of appointments: 1971–1972, University of South Carolina, Columbia, Assistant Professor of Statistics and Management Science; 1972–1979, Columbia University, New York, Associate Professor of Business Administration; 1979–1980, Copenhagen School of Economics, Copenhagen, Denmark, Professor of Economics; 1980–1981, European Institute for Advanced Studies in Management (EIASM), Brussels, Belgium, Professor of Management Science. In 1982 he became Professor of Management Systems at Fordham University at Lincoln Center, New York, which became his permanent tenured appointment. Since 1998 he has held a parallel appointment at the Tomas Bata University in Zlin, and since 2004 also at Xidian University in Xi'an, China. In 2006 he worked at Fu Jen University in Taipei and in 2007 at the Indian Institute of Technology in Kanpur.
His early research involved Critical Path Analysis. He initiated Multiple Criteria Decision Making (MCDM) in 1972, and later developed new interests in Knowledge Management (KM), with a first paper in the field in 1987. Other research areas have included: games with multiple payoffs, Integrated Process Management (IPM), the knowledge-based theory of fuzzy sets, the Baťa System of Management, high-technology management, mass customization, portfolio selection, risk analysis, measurement of consumer attitudes, human intuition, creativity and judgment, simulation models of biological organization and autopoiesis, artificial life (AL), osmotic growths, spontaneous social organizations, and early computer modeling (via GPSS, APL, FORTRAN and BASIC). In recent years his main research has concentrated on the corporation as a living organism (The BioCycle of Business). He also became active in consulting, and later in CEO coaching, while pursuing practical projects on the entrepreneurial university, recycling and remanufacturing, and the integration of data, information, knowledge and wisdom into coherent management support. Among his major awards are:

• Erskine Fellowship, University of Canterbury, New Zealand
• The Georg Cantor Award, International Society of MCDM
• USIA Fulbright Professor in Prague, Czechoslovakia
• Bernstein Memorial Lecturer, Tel-Aviv, Israel
• Alexander von Humboldt Award, Bonn, Germany
• Rockefeller Foundation Resident Scholar, Bellagio Study Center
• Norbert Wiener Award of Kybernetes
His memberships have included AAAS, TIMS, ORSA, SGSR, HSM, SASE, Beta Gamma Sigma, Omega Rho and the Club of Rome. He is listed in all major Who's Who directories. Among his more interesting adjunct and visiting positions are: CSIRO, Pretoria, South Africa, Mathematics and Statistics, Visiting Scientist, 1986; State University of New York at Binghamton, School of Advanced Technology, Professor in General Systems, 1986–1992; MBA Program, Irish Management Institute (IMI), Dublin, Professor of Management Systems, 1986–1990; Centro Studi di Estimo e di Economia Territoriale, Florence, Italy, Professor, 1992; Università degli Studi di Napoli Federico II, Dipartimento di Conservazione dei Beni Architettonici ed Ambientali, Naples, Italy, Visiting Professor of Environmental Economics, 1993; and many others. He is the author of over 400 papers and articles, ranging from operations research, cybernetics and general systems to economics, history of science, total quality management, and the simulation of autopoiesis and artificial life (AL). His articles on Integrated Process Management (IPM), the Bata System and Mass Customization have been translated into Japanese, and others into Chinese, French, Italian, Hungarian, Slovak, Czech, Russian and Polish. (He has also written over 500 short stories, literary essays and political reviews in Czech, Slovak and English.) He has served as Editor-in-Chief of the global journal Human Systems Management over the last twenty-five years. He has also served on the editorial boards of Operations and Quantitative Management, International Strategic Management, Operations Research, Computers and Operations Research, Future Generations Computer Systems, Fuzzy Sets and Systems, General Systems Yearbook and Prestige Journal of Management and Research. He currently serves on the editorial boards of the International Journal of Information Technology and Decision Making, the International Journal of Mobile Learning and Organization, and the International Journal of Innovation and Learning, among others.
Contents

Preface
   Yong Shi, David L. Olson and Antonie Stam ..... v

Biosketch of Milan Zeleny ..... ix

Part 1. Multiple Criteria Decision Making

Multi-Objective Preferences and Conflicting Objectives: The Case of European Monetary Integration
   Maurizio Mistri ..... 3

Multicriteria Routing Models in Telecommunication Networks – Overview and a Case Study
   João C.N. Clímaco, José M.F. Craveirinha and Marta M.B. Pascoal ..... 17

Post-Merger High Technology R&D Human Resources Optimization Through the De Novo Perspective
   Chi-Yo Huang and Gwo-Hshiung Tzeng ..... 47

An Example of De Novo Programming
   David L. Olson and Antonie Stam ..... 65

Multi-Value Decision-Making and Games: The Perspective of Generalized Game Theory on Social and Psychological Complexity, Contradiction, and Equilibrium
   Tom R. Burns and Ewa Roszkowska ..... 75

Comparing Economic Development and Social Welfare in the OECD Countries: A Multicriteria Analysis Approach
   Evangelos Grigoroudis, Michael Neophytou and Constantin Zopounidis ..... 108

Part 2. Social and Human System Management

The Enlightenment, Popper and Einstein
   Nicholas Maxwell ..... 131

Value Focused Management (VFM): Capitalizing on the Potential of Managerial Value Drivers
   Boaz Ronen, Zvi Lieber and Nitza Geri ..... 149

Zeleny's Human Systems Management and the Advancement of Humane Ideals
   Alan E. Singer ..... 176

Outsourcing Risks Analysis: A Principal-Agent Theory-Based Perspective
   Hua Li and Haifeng Zhang ..... 195

Mobile Technology: Expanding the Limits of the Possible in Everyday Life Routines
   Christer Carlsson and Pirkko Walden ..... 202

Informed Intent as Purposeful Coordination of Action
   Malin Brännback ..... 218

Competence Set Analysis and Effective Problem Solving
   Po-Lung Yu and Yen-Chu Chen ..... 229

Part 3. Information, Knowledge and Wisdom Management

Information and Knowledge Strategies: Towards a Regional Education Hub and Highly Intelligent Nation
   Thow Yick Liang ..... 253

Needed: Pragmatism in KM
   Zhichang Zhu ..... 269

Knowledge Management Platforms and Intelligent Knowledge Beyond Data Mining
   Yong Shi and Xingsen Li ..... 272

Continuous Innovation Process and Knowledge Management
   Ján Košturiak and Róbert Debnár ..... 289

An Exploratory Study of the Effects of Socio-Technical Enablers on Knowledge Sharing
   Sue Young Choi, Young Sik Kang and Heeseok Lee ..... 303

Urban System and Strategic Planning: Towards a Wisdom Shaped Management
   Luigi Fusco Girard ..... 316

Digest® Wisdom: Collaborate for Win-Win Human Systems
   Nicholas C. Georgantzas ..... 341

Selected Publications of Milan Zeleny ..... 372

Biosketches of Contributing Authors ..... 390

Author Index ..... 405
Part 1. Multiple Criteria Decision Making
Multi-Objective Preferences and Conflicting Objectives: The Case of European Monetary Integration

Maurizio MISTRI
International Economics and Information Economics, Department of Economic Sciences, University of Padova, via del Santo, 33, 35124 Padova, Italy
Tel.: (University) 049 8274222; Fax: 049 8274211; E-mail:
[email protected] and
[email protected] Abstract. This paper analyses the birth of the Euro as a phenomenon in which the decision-makers, who are not necessarily the electors of the countries involved, reach the decision to create a new monetary regime – new inasmuch as it incorporates new rules of behavior between nations, inducing changes in the weight attributable to different goals of economic and monetary policy. The paper is based on the assumption that decision-making processes on the matter of monetary policy are influenced both by the hegemonic lines of thinking in a given historical period and by the political role played by the major powers. The process of European economic integration is thus retraced and reinterpreted in terms of the conflict between multi-objective preference functions, thereby enabling the emphasis to be placed on the nature of the fundamental steps in said process of integration.
Introduction

The aim of this paper is to analyze the process of European monetary integration that led to the birth of the Euro, seen as a decision-making process involving several collective subjects (the national governments involved) called upon to define a collective preference function that the single governments progressively established during the course of negotiations on the process of monetary integration. It goes without saying that the negotiations in question may sometimes have been influenced by extraordinary exogenous events (such as the oil crisis of 1973–74) and are naturally influenced, more generally, by the structure of the national collective preference functions. Given the nature of the problems they intend to deal with, and because the negotiations take place within the single states first, said preference functions are multi-objective preference functions that can be defined on the space of domestic and international economic policy strategies. Generally speaking, in a regime of parliamentary democracy, the single national governments are called upon to identify economic policy strategies, to which certain objectives are attributed, and to do so they must derive some form of collective preference function capable of satisfying the individual preferences of the single electors. This problem has given rise to an interesting line of research following the seminal work by Arrow (1951) on the determination of social choices consistent with individual preferences. From Arrow's work, we can see that a
collective utility function can be derived without problems of logical consistency provided the objectives are reduced to just one. If there is more than one objective, problems of logical consistency arise, as expressed in Arrow's "impossibility theorem". The passage from a system of national currencies to a single European currency entails increasing the number of possible objectives of economic policy and adopting a system of rules on the matter of monetary policy, and such a system may be radically innovative with respect to the system of rules previously adopted by the governments of the participating countries. This study consequently starts with an analysis of the concept of a rule, or institution, in the context of the system of international economic relations, identifying such a rule in a specific international monetary regime. An attempt is then made to answer the question of whether the search for and determination of a new monetary regime coincides with a rational design supported by the electors and the governments, or with the evolution of a self-supporting process, in which phenomena may emerge in the real economy that conflict with one or more of the goals that should have been contained in the multi-objective collective preference function. The answer to this question depends on how we choose to identify the relationship between strategies and the payoffs they award. One of the logical approaches used here is the one developed in Mistri (2003), where it is assumed that economic subjects move in a situation of bounded rationality and that institutions are determined as the result of an evolutionary process revealing a sort of path dependence: decisions for the future are influenced by the legacy of decisions made in the past, so that we can speak of the "production of norms by means of norms". 
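Arrow's difficulty can be made concrete with a short sketch. The voters, alternatives and rankings below are invented for illustration (they are not from the paper): with three voters ranking three policy alternatives, pairwise majority voting can produce a cycle, so no transitive collective preference ordering represents the individual preferences.

```python
from itertools import combinations

# Three hypothetical voters, each ranking three policy alternatives
# (best first). This profile is the classic Condorcet cycle.
rankings = [
    ["a", "b", "c"],
    ["b", "c", "a"],
    ["c", "a", "b"],
]

def majority_prefers(x, y):
    """True if a strict majority of voters ranks x above y."""
    votes = sum(1 for r in rankings if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

for x, y in combinations("abc", 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x} over {y}")
    elif majority_prefers(y, x):
        print(f"majority prefers {y} over {x}")

# The pairwise majorities cycle (a over b, b over c, c over a),
# so no transitive collective ranking is consistent with them.
```

With a single objective (one shared ranking criterion) the cycle cannot arise, which is the point made in the text above.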
The paper begins with an analysis of the processes that can give rise to social norms in a setting in which the decision-makers are representatives of national states; these norms are assumed to reflect the emergence of cooperative attitudes between the national governments involved in negotiating specific agreements. We then analyze the relationship between governments (seen as "agents") and their electorates (the "principals"), emphasizing the information asymmetries from which both parties suffer. Having established the general framework, we move on to consider the relationship between the collective preference function and strategic choices on the matter of monetary policy. Here the strategic choices are developed in terms of multi-objective preference functions, and we demonstrate how the passage from one monetary regime to another means, in short, a transformation of said preference functions, identifying the boundary conditions that make such a change inevitable.
1. Birth of Collective Behavioral Norms

Monetary integration, as foreseen by the Maastricht Treaty and taking concrete shape in the creation of the Euro, represents the final stage in a sequential process for determining rules or, if you will, monetary regimes, which are a specific form of international economic regime. An international economic regime can be considered a particular form of institution. In institutionalist economics, and in game theory too, institutions now constitute an important field of economic investigation (Aoki, 2001; Hodgson, 1988, 1999; Kasper and Streit, 1998; Nurmi, 1988; Schotter, 1981; Vanberg, 1994; Young, 1998), with a view to assessing how they are formed and the reasons behind their evolution. But before we deal with the processes that lead to the formation of economic rules for governing relationships between states, it might be a good idea to start
with a few considerations on how the rules governing relationships between people are formed. In theory, we can identify two classes of institution-forming processes: one for the formation of institutions at the "micro" level and the other for the formation of institutions at the "macro" level. There are clearly significant differences between the two classes, however, due to the way in which individual preferences and beliefs are formed on the one hand, and collective preferences and beliefs on the other. Certainly, we can apply the models of strategic interaction developed primarily to represent institution-forming processes at the micro level to the macro dimension too, provided we impose certain conditions on the way in which the collective preference functions are constructed. So if we consider the question of how institutions form in the micro dimension, economists can generally be said to deal with the problem of the birth and consolidation of institutions by taking two different approaches. A first approach stems from the "functionalism" typical of anthropology, while a second refers to the analysis of the evolution of dominant strategies in game theory (Aoki, 2001). Here, I wish to focus on a third possible approach, inspired by the results of research in the cognitive sciences, which sees institutions as the outcome of problem-solving processes under conditions of bounded rationality. Taking the functionalist point of view, we can assume that institutions are needed for a social purpose: they help to reduce the intrinsic uncertainty in interpersonal relationships within a given market. This is the stance taken by North (1998), for instance, who bases this type of analysis on the idea that the ability of an institution to function depends on its "rationality", i.e. on its capacity to reduce the relational uncertainty between the members of a given group. 
Streit, Mummert and Kiwit (1997) claim that institutions reduce uncertainty by imposing constraints on human actions; Dopfer (1997) emphasizes that institutions should be seen as "correlated behavior patterns", in that they consist of standardized rules of behavior. These rules become standardized as a result of the attempt to eliminate behavior patterns that are inconsistent, i.e. that do not comply with a principle of rationality. In many strategic game models, in fact, any logically inconsistent behavior contrasts with a principle of strategic rationality. The way in which subjects establish rules for the purpose of reaching objectives of a social order, as mentioned earlier, is a central issue in anthropology, but economic science seems able to provide a specific interpretation of how rules develop in the case of individuals setting themselves problems of procedural rationality. Among other things, these individuals may possess a thorough understanding, but are far more likely to have only a partial understanding, of the present and future state of affairs, and it is this incompleteness of information that underscores the role of institutions – such as those required to contain transaction costs. In analyzing the processes behind the formation of social rules in general, and of economic rules in particular, we can now draw amply on game theory and imagine that economic subjects have an adequate understanding, or at least strong convictions, concerning the relationship existing between strategic choices and payoffs. Generally speaking, the emergence of institutions designed to ensure a form of governance over social relations is assumed to indicate the consolidation of a cooperative type of behavior. The cooperative game concept first appears in a work by Nash (1950), but the seminal works by Axelrod (1981; 1984; 1997) have been of fundamental importance in the subsequent development of this line of research.
2. The Emergence of Cooperation in the Micro Dimension

Suitable schemes for formally representing the problem of how rules are chosen are offered by the evolutionary games approach. These games are strongly inspired by biological patterns based on the concept of an "evolutionarily stable strategy", i.e. a strategy, emerging from among the various available strategies, that prevails over all the others and that the subjects using it would consider disadvantageous to abandon. Behind the conceptual scheme à la Axelrod lies a biological conception that significantly influences its logic and structure. The point has been made, however (Samuelson, 1998, p. 37), that in order to construct an evolutionary game theory complying with a principle of economic significance, we must pay attention to the fact that, once it has taken place, an evolutionary process of a biological type tends to remain rather stable in time, whereas winning evolutionary strategies in the economic sphere may very soon cease to be stable. In game theory, the concept of cooperative behavior is traced back to a rational decision reached by subjects involved in strategy games. A little earlier, we mentioned Nash's cooperative game concept: according to Nash, cooperative games are strategy games in which the players can communicate with one another and can stipulate binding agreements. In the micro dimension, cooperative behavior does not necessarily satisfy the conditions imposed by Nash. It may stem from behavior implicitly determined as a result of processes of self-organized coordination, as Hayek (1973) assumed in his analysis of spontaneous order. Such "implicitly cooperative" behavior develops within sufficiently cohesive social and economic networks, so that the market can be seen as a system of rules developed between subjects who interact on the strength of blind standards that govern their behavior (Vanberg, 1994:77). 
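The idea of an evolutionarily stable strategy can be sketched numerically with replicator dynamics. This is an invented illustration (the payoff numbers are the prisoner's dilemma values used later in the paper; nothing here is part of the original text): in a large population repeatedly matched in the one-shot game, the share of cooperators is driven to zero, so defection is the evolutionarily stable strategy.

```python
# Replicator-dynamics sketch: x is the share of cooperators in a
# population repeatedly matched in the one-shot prisoner's dilemma.
payoff = {("C", "C"): -1, ("C", "D"): -5, ("D", "C"): 0, ("D", "D"): -3}

def step(x, dt=0.1):
    """One Euler step of the replicator equation dx/dt = x (f_C - avg)."""
    f_c = x * payoff[("C", "C")] + (1 - x) * payoff[("C", "D")]
    f_d = x * payoff[("D", "C")] + (1 - x) * payoff[("D", "D")]
    avg = x * f_c + (1 - x) * f_d
    return x + dt * x * (f_c - avg)

x = 0.99  # start with almost everyone cooperating
for _ in range(500):
    x = step(x)
print(round(x, 4))  # cooperation is driven out: x tends to 0
```

The biological analogy noted by Samuelson applies: once reached, this all-defect state is stable under the dynamics, whereas in economic settings the payoffs themselves may change and undo the stability.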
Mention has already been made of the important role of game theory in modeling situations in which dynamically evolving processes may converge towards cooperation. Given the variety of practical situations, it may be useful to go back to the analysis by Ullman-Margalit (1977), who identifies three types of situation, or problem, that subjects engaged in strategy games have to solve: 1) coordination problems; 2) "prisoner's dilemma" problems; and 3) differentiation problems. The "prisoner's dilemma" problems seem the most suitable for describing situations characterizing relations between individuals, while coordination problems appear more apt for describing situations that characterize relations between states. We must not forget, however, that coordination problems may also derive from situations in which the agents involved adopt an opportunistic behavior. The prisoner's dilemma game is well known and features a situation in which opportunistic strategies "prevail" over cooperative strategies whenever each agent involved does not know beforehand what behavior to expect from the other agent(s). Failing any clear signals or binding commitments, each agent ultimately chooses to defect, producing the jointly inferior payoff (Table 1). As usual, the pattern involves two players (A and B) and two alternative strategies, "cooperate" (C) or "defect" (D); the numerical values represent arbitrarily determined payoffs. We all know that the choice (D,D), with the payoffs (–3, –3), represents a Nash equilibrium, the outcome of a decision made by both players even though they understand the prospective advantage of playing (C,C). This is a game entailing only one move, but – as Axelrod (1984:11) emphasizes – cooperation may emerge when the agents move an indefinite number of times, i.e. when the game involves an indefinite number of interactions, so that the agents' time horizon can be considered infinite.
M. Mistri / Multi-Objective Preferences and Conflicting Objectives
Table 1. The prisoner's dilemma (payoffs listed as A's, B's)

                      A: Cooperate (C)    A: Defect (D)
  B: Cooperate (C)        –1, –1              0, –5
  B: Defect (D)           –5, 0              –3, –3
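As a check on the table's logic, the one-shot equilibrium can be verified by enumerating best responses. The following Python sketch is an illustration added here, not part of the original text; it confirms that mutual defection is the only Nash equilibrium of these payoffs, even though (C, C) is jointly better.

```python
from itertools import product

# payoffs[(a, b)] = (payoff to A, payoff to B), as in Table 1
payoffs = {
    ("C", "C"): (-1, -1), ("C", "D"): (-5, 0),
    ("D", "C"): (0, -5),  ("D", "D"): (-3, -3),
}

def is_nash(a, b):
    # (a, b) is a Nash equilibrium if neither player gains by deviating unilaterally
    best_a = all(payoffs[(a, b)][0] >= payoffs[(x, b)][0] for x in "CD")
    best_b = all(payoffs[(a, b)][1] >= payoffs[(a, y)][1] for y in "CD")
    return best_a and best_b

equilibria = [(a, b) for a, b in product("CD", "CD") if is_nash(a, b)]
print(equilibria)  # [('D', 'D')]
```

Deviating from (C, C) raises a player's payoff from –1 to 0, so mutual cooperation cannot be sustained in the one-shot game.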
For a game to lead to stable cooperative solutions, the agents need to assign sufficient importance to the payoffs derivable from future interactions. They know that their current moves will influence the other agents' future behavior, especially if the strategy adopted is tit-for-tat. But if the assumption is that the future is "less important" than the present, then the payoff from a future move will carry less weight than the payoff of a present move. Axelrod (op. cit.) denotes the weight of each future payoff as w, imposing the condition 0 < w < 1; w thus acts as a discount factor, correlated with the discount rate, δ, applied to the payoffs of a whole sequence of moves.
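This condition can be made concrete with the payoffs of Table 1. Writing T = 0 (temptation), R = –1 (mutual cooperation), P = –3 (mutual defection) and S = –5 (sucker's payoff), cooperating with tit-for-tat beats defecting forever against it once w ≥ (T – R)/(T – P) = 1/3. The sketch below is a hypothetical illustration (a long finite horizon standing in for an indefinite game), not from the original text:

```python
def discounted_payoff(stream, w):
    """Sum a per-round payoff stream weighted by w**t (w is Axelrod's discount factor)."""
    return sum(p * w**t for t, p in enumerate(stream))

ROUNDS = 2000  # long finite horizon approximating an indefinitely repeated game

tft_vs_tft = [-1] * ROUNDS                # mutual cooperation every round: R = -1
alld_vs_tft = [0] + [-3] * (ROUNDS - 1)   # exploit once (T = 0), then mutual defection (P = -3)

for w in (0.2, 0.5):
    coop = discounted_payoff(tft_vs_tft, w)
    defect = discounted_payoff(alld_vs_tft, w)
    print(f"w = {w}: cooperate = {coop:.3f}, defect = {defect:.3f}, "
          f"cooperation pays: {coop >= defect}")
# Below the threshold w = 1/3 defection pays; above it, sustained cooperation does.
```

Running it shows defection winning at w = 0.2 and cooperation winning at w = 0.5, which is the point of Axelrod's "shadow of the future": cooperation is sustainable only when agents weigh future payoffs heavily enough.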
3. Formation of Collective Preference Functions

The behavioral rules of nations naturally belong to the macro level and have their own peculiar traits with respect to the behavioral rules adopted at the micro level. Knowing these rules can help us understand what happened when the Euro was born, since it was the offspring of a long and difficult search for a stable arrangement for the governance of European monetary policies. Even at first glance, it is fairly plain that the logic behind the determination of behavioral norms at the macro level is not the same as for behavioral norms at the micro level – and the reason lies mainly in the differences between individual and collective preference functions, and in the way in which collective preference functions (referring to the nations) are created. Switching from individual preference functions to collective preference functions poses quite a few methodological problems. There are at least two issues that, in my opinion, need to be adequately taken into account. The first relates to the capacity of a collective preference function to represent the set of preferences of the n members of a state. The second concerns the derivation of the mechanisms adopted by the members of a state in making decisions concerning their economic and financial policy strategies. Individual utility functions can be defined mainly on the space of available goods, though they can also be defined on the space of values, which correlate with specific behavioral rules that coincide, as we know, with institutions. In the case examined here, the collective utility functions assumed to exist are consequently defined on the space of the available economic and financial policy strategies, Ω.
Since the seminal contribution of Arrow (1951), the scientific debate on how collective utility functions are determined has focused mainly on the aggregation of individual preferences and on how satisfactory solutions can be found to the problem of aggregating individual utility functions. Here, let us imagine that the process for determining the collective preference functions is handled by each national state taking part in a set of negotiations, and that these preference functions are defined on the space of their economic and financial policy strategies, including the strategies that have to do with governing the international currency markets, i.e. the system of rules that the citizens of the various states
involved, through their governments, in the negotiations on the European currency system believe they are able to construct for the European monetary system or, in other words, the "European monetary regime". An interesting analysis of the concept of international economic regime is provided by Gilpin (2001: ch. 4), who sees an international economic regime as a set of behavioral rules and of implicit and explicit procedures on which the expectations of the governments of a given area converge as regards their participation in international economic relations. It becomes self-evident that the international economic regime is an institution at the macro, i.e. intergovernmental, level that serves the same purpose that an institution in the micro dimension serves at the interpersonal level. The scholar who has contributed most to developing the international regimes approach is Keohane: in his book "After Hegemony" (1984), he followed up the institutionalist approach, acknowledging that international economic regimes play the primary role of improving the efficiency of institutions. There are at least two major differences, however, between a given institution at the micro level and one at the macro level. The first relates to the levels of knowledge that characterize individual preferences vis-à-vis collective preferences. For instance, Rowley (2001:651), drawing from Downs (1957), claims that, in economic analysis, individual consumers are supposedly well-informed, while no such assumption can be made for situations of a political type. The second difference consists in the fact that, while in the micro dimension each subject makes decisions that have not been mediated, in the macro dimension there is a principal/agent type of relationship between electors and their governments, the electors being the principals and their governments the agents, who are apparently appointed by the principals to take action with a view to optimizing the collective utility function.
4. Information Asymmetries Between Principals and Agents

One of the fundamental issues in principal/agent relations is the existence of information asymmetries between the two. The asymmetries to which I refer, drawing from Arrow (1985), concern both so-called moral hazard and so-called adverse selection. Moral hazard is manifest when an agent's actions cannot be verified directly by the principal, or when the agent acquires private information after the relationship has been established, while there is adverse selection when the agent possesses private information before the relationship has been established. In the case of economic policy strategies, it seems difficult to identify a strategy in advance that will be consistent with diverse and often even divergent individual interests. Moreover, the principals cannot possibly have a clear idea of the payoffs obtainable from one strategy rather than another because, at the start, they cannot rely on the necessary direct learning process, as they can in the case of the consumption of material goods. The search for an ideal monetary regime suffers from these cognitive difficulties of the principals, because they have to base their choice of the strategies to adopt on certain more or less well-founded convictions, or even on heuristics (ideologies), not on tried and tested results. For instance, the switch from a system of flexible exchange rates to a system of fixed exchange rates, or even to a sole currency, is an event that is unlikely to occur twice in an individual's experience, so the principals have no reference models with which to interpret this specific event. The principals basically choose between a system
of flexible exchange rates and a system of fixed exchange rates, or a sole currency, depending on the level of trust that they place in the claims made by people they credit with a better understanding of the issue. As a rule, these people are in fact the agents, i.e. the politicians appointed to govern, who obtain information from other people who act as their “economic advisers”. The economic advisers are seen as “experts”, though it remains to be seen how much their understanding is useful for the purpose of analyzing new, unrepeatable situations. By relying on these experts, the politicians thus place their trust in the economic theory that is hegemonic in a given historical period. Dominant economic theories alone are not enough, however, to drive a process for the creation of a new international economic regime. The existence of credible economic theories, at a certain point in the economic history of a set of states at least, may well be a necessary condition for triggering a change in the international economic regime, but the sufficient condition is that there be a force capable of transforming an economic theory into an economic policy strategy, and into a consequent change in the international economic regime. The hegemonic opinions that have developed in the arena of international economics have relied mainly on a historical knowledge of similar, though certainly not identical situations that have occurred at other times and in other places, though some economic theories have taken hold for no other reason than the logical rigor they have demonstrated. This is the case, for instance, of the theory of comparative advantage of Ricardian derivation, whose authoritativeness satisfies the necessary condition for a free trade strategy to come into being. The assumption that exchange rate variability interferes with the integration of the economies involved is based on economic theory and supported by common sense. 
The existence of a comparative assessment of the payoffs obtainable from a regime of flexible exchange rates versus a regime of fixed exchange rates or a sole currency is not a sufficient condition to enable a switch from a competitive strategy, based on the competitive devaluation of the exchange rates, to a cooperative strategy based on stable exchange rates or a sole currency. In addition to an important difference in the payoffs obtainable from the different monetary regimes, a shared opinion must develop concerning the benefits afforded by the new monetary regime (the sole currency in the case in point). According to an influential school of political science, the formation of such a convergence of opinion cannot fail to be influenced by a dominant power – and this takes us back to what Keohane (1980) defined as the "theory of hegemonic stability", as opposed to the theory of international regimes. In fact, the theorists of hegemonic stability see the economic order created after the Second World War as the expression of a political (and military) hegemony of the United States. The birth of the Euro is clearly due partly to the breakdown of the old international economic order, but above all to the economic and political role acquired by reunited Germany within the European Union, as we shall see. So the theory of hegemonic stability could be used to explain the birth of the Euro, alongside the hypothesis that hegemonic economic theories influence both the agents' decisions and the principals' instructions. As mentioned earlier, in the perspective that I consider here, the principals have a limited ability to analyze the significance (in terms of consequences) of the economic theories that the experts bring to the attention of their governments. At the same time, these governments also have a limited ability to grasp the meaning, i.e.
the consequences, of these theories, so they have to rely on experts belonging to the circuit of institutional organizations, such as the national central banks and the committees of technicians who guide their analyses.

Figure 1. Relations between international monetary policy agents.

The information flow and the intensity of the influences can be represented as shown in Fig. 1. The directions of the arrows in the figure indicate the dominant/dominated relationship between the agents involved, and block letters are used to indicate the particular strength of said relations. According to the layout illustrated in Fig. 1, the decision-making processes are actually oriented by experts uninvolved in the principal/agent relations; the only real relationships the experts have, if at all, are with the central banks on the one hand and with the agents on the other. As demonstrated very neatly by Chappel, McGregor and Vermilyea (2005), who examine the case of the United States, decisions on the matter of monetary policy are made by ad hoc "committees". In the United States, the committee that makes this type of decision consists of the seven members of the Board of Governors and the presidents of the 12 district banks. In the Euro area, decisions are made by the governors of the national central banks, within the mandate they have received on the basis of the Maastricht Treaty. The Maastricht Treaty appears to set extremely rigid guidelines, charting the course that the European Central Bank (ECB) is required to follow by taking minimal steering action from time to time to keep the European monetary market on course. The ECB implements the Maastricht Treaty directives, but the Maastricht Treaty was constructed by the experts from the national central banks, who based their decisions on the fundamental ideas of the economic mainstream that was dominant at the time. This is what Issing et al. say clearly when they write that "The ECB's monetary policy strategy has been influenced by academic work on macroeconomics and monetary policy" (2001:3). They go on to say that the monetary policy adopted by the ECB is entirely consistent with the dominant economic thinking in this particular historical period.
All this is legitimate in terms of political logic, but we are interested in focusing here on the fact that the mandate that the principals have given to their agents is in fact a mandate that said agents (i.e. the government) have assigned to themselves based on the experts’ recommendations. It goes without saying that the collective preference function incorporating the monetary strategies has been derived irrespective of whether or not the principals and agents were capable of assessing the payoffs obtainable by each significant social group.
5. Collective Preference Functions and Strategic Choices in Monetary Policy

As we saw a little earlier, collective preference functions on the matter of economic policies stem in practice from negotiations between national governments "guided" by outside experts and national central bankers, based on guidelines that reflect the hegemonic lines of thought in economic theory. We have also seen that the principals' specific preferences carry little weight, partly because they are unable to make deductions on the validity of one monetary policy by comparison with another. If anything, they only show certain attitudes to objectives that, in their eyes, bear no direct link to the adopted or adoptable monetary policies. These objectives may be full employment, for instance, or stable exchange rates. The information the principals possess about the links between general economic policy objectives and monetary policy tools, as we said earlier, is obtained from the experts and filtered by the agents. In theory, we can imagine that national collective preference functions are the expression of an "internal" negotiation process based on the preferences expressed by single individuals, or that certain economic policy strategies are defined on the set of all the possible economic policy strategies. In empirical terms, collective preference functions are the outcome of negotiations between influential social groups, conditioned by the information system accessible to them. In entirely abstract terms, we can denote the set of possible economic policy strategies as Ω, defined on the space R^l; from among these, we can identify a limited basket of strategies α = (α1, α2, …, αl) that is preferable to another basket of strategies β = (β1, β2, …, βl), so that the binary relation ≥ ⊆ Ω × Ω is a preference relation from which we can obtain a suitable utility function.
In short, we can assume that governments (or agents) seek to achieve two fundamental objectives, i.e. an increase in employment and stable exchange rates. For a long time, these two objectives were considered somehow inversely related to one another, in the sense that it was assumed that employment could increase thanks to an increase in income, Y, and that this expansion could be assured by an adequate expansion of the money supply, even at the cost of a rise in the rate of inflation. Every good manual on macroeconomics illustrates the relationship between unemployment levels and rates of inflation, represented by the famous Phillips curve. In the short term, we can assume that the employment level, N = φ(Yd), is a function of the aggregate demand, Yd, which can be expressed as follows:

Yd = Yd* + γ (p – p*)   (1)
The expression (p – p*) indicates the difference between the actual level of prices, p, and the equilibrium level of prices, p*, while γ is a specific coefficient, such that 0 < γ < 1, and Yd* is the equilibrium income. Likewise, again in the short term, we can express the level of the exchange rate as follows:

a = a* – λ (p – p*)   (2)
where λ is a specific coefficient, such that 0 < λ < 1. Formula (2) shows that the exchange rate deviates from its long-term level, a*, whenever the level of prices deviates from its long-term level. If p > p*, given the conditions imposed on γ and λ, then by (1) the level of income tends to rise, while the exchange rate tends to deteriorate (Fig. 2).

Figure 2. National income and exchange rate in relation to inflation.

In a strictly Keynesian logic, the goal of full employment seems to collide with that of an equilibrium of the trade balance, because the existence of differences in national inflation rates further disrupts the trade balance. In a Keynesian view, however, a system of flexible exchange rates is assumed to be effective – provided the Marshall-Lerner condition holds – for the purposes of growth in employment, while it is supposed to be ineffectual for the purposes of controlling inflation. Vice versa, again in a strictly Keynesian view, a system of fixed exchange rates is assumed to be ineffectual for the purposes of any growth in employment, but effective for the purposes of controlling inflation. On these issues, in his "General Theory of Employment, Interest and Money" (1936), Keynes expresses a preference for a moderate growth in money wages, preferably accompanied – in the case of an open economy – by a fluctuation in the exchange rates, with a view to maintaining equilibrium with the rest of the world (cit., ch. XIX). In the Europe of the years between 1960 and 1980, the basket of strategies thus consisted of two major strategies, i.e. full employment, α1, and equilibrium in the balance of payments, α2, such that α = (α1, α2). In entirely generic terms, the structure of this basket can be expressed as follows:

α = (k1 α1, k2 α2), with k1 + k2 = 1   (3)

where k1 and k2 are parameters representing the relative weights attributed by the agents to the two strategies.
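The mechanics of (1)–(3) can be illustrated numerically. The coefficients, equilibrium values and price gap in the sketch below are hypothetical, chosen only to show the direction of the effects: a price level above equilibrium raises income (and hence employment) but worsens the exchange rate, and the weights k1, k2 encode how a government trades the two objectives off.

```python
GAMMA, LAM = 0.6, 0.1         # illustrative coefficients, 0 < gamma, lambda < 1
YD_STAR, A_STAR = 100.0, 1.0  # illustrative equilibrium income and exchange rate

def aggregate_demand(p, p_star):
    """Equation (1): Yd = Yd* + gamma * (p - p*); employment N = phi(Yd) rises with Yd."""
    return YD_STAR + GAMMA * (p - p_star)

def exchange_rate(p, p_star):
    """Equation (2): a = a* - lambda * (p - p*)."""
    return A_STAR - LAM * (p - p_star)

def strategy_basket(alpha1, alpha2, k1, k2):
    """Equation (3): alpha = (k1 alpha1, k2 alpha2), with k1 + k2 = 1."""
    assert abs(k1 + k2 - 1.0) < 1e-9
    return (k1 * alpha1, k2 * alpha2)

p, p_star = 105.0, 100.0  # price level 5 units above equilibrium
print(aggregate_demand(p, p_star))           # income rises: 103.0
print(exchange_rate(p, p_star))              # exchange rate deteriorates: 0.5
print(strategy_basket(1.0, 1.0, 0.3, 0.7))   # weights favoring exchange rate stability
```

The single price gap (p – p*) moves the two objectives in opposite directions, which is exactly the conflict that the weights k1 and k2 are meant to resolve.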
6. Plurality of Objectives in the European Monetary Issue

Changes in the relative weights, k1 and k2, reflect shifts that can occur in the relative preferences for the baskets representing the different strategies. In more recent years, we have seen an increase in the weight of k2 at the expense of k1, i.e. there has been a growth in the preference of the various European governments for a strategy of maintaining stable exchange rates. There has been a consequent tendency to override the preferences that the European governments had expressed as regards policies oriented
towards giving priority to national goals of full employment. For a long time now, the conflict between the strategy aiming for full employment and the strategy aiming for the stability of the exchange rates has been at the center of the scientific and political debate on European economic integration, ever since the European Common Market came into being. Inter-European exchange rate stability and growth in employment seem to be objectives that are not easy to reconcile using the rules that governed the relationships between the European countries from the end of the Fifties up until the mid-Eighties. It has nonetheless been the development of inter-European exchange rates that has reinforced a certain collective preference for exchange rate stability, leading up to the hypothesis of some form of monetary union. As emphasized by Maes (2002:32), this need emerges in the Werner Report of 1970, according to which monetary union implies the total and irreversible convertibility of the currencies and consequently the elimination of all margins of fluctuation in the exchange rates, the irrevocable fixing of the parity rates and the consequent total liberalization of the movement of capital. The Werner Report considered two options, i.e. fixed exchange rates or a sole currency. The two options were initially considered interchangeable, but the Werner Commission very soon arrived at the conviction that a system of fixed exchange rates would be unable to withstand the pressures that would come to bear on national economic policies in the event of unfavorable contingencies. The option of the sole currency nonetheless continued, at the time, to fall on deaf ears, both amongst the national governments and in the academic world. There was a lurking fear that adopting a sole currency would entail excessive sacrifices on the employment side.
For instance, there was a time when Meade (1952) declared a preference for a system of flexible exchange rates because he considered the objective of full employment a priority, even with respect to free trade. Even 20 years later, Dehem (1972) denied that governments knew how to cope with the unemployment or inflation that would be generated as a result of monetary union, partly because of inescapable rigidities in the market for production factors. In fact, the trade-off between employment and exchange rate stability constitutes a focal point in the literature on optimal currency areas (Mundell, 1961; McKinnon, 1963; Kenen, 1969). This topic had been discredited towards the end of the Seventies (Tavlas, 1993), but was taken up again more recently under the impulse of studies on European monetary integration (Engel and Rogers, 1996; Frankel and Rose, 1998). One of the main issues in the old and new debate on optimal currency areas concerns the conflict between the goal of full employment and the goal of stability of the exchange rates. The choices made by the European Commission and the national governments were oriented towards a mix of Keynesian strategies and strategies inspired basically by the monetarist approach. The resulting tendencies provide yet another demonstration of the existence of an equilibrium between the two fundamental strategic options, also bearing witness to the lack of a clear lead by either scientific approach. In more recent years, the complex relationship between the two approaches has been changing progressively within the European Commission, with a greater weight being attributed to the theoretical positions of the supporters of supply-side economics at the expense of those supporting the monetarist approach à la Friedman (1968) and of the exponents of the rational expectations approach (Maes, 2001).
This change in strategic viewpoint seems to reflect the supremacy won by a school that places particular emphasis on exchange rate stability and on the capacity of monetary policy to have a
"neutral" role in the dynamics of the real economy in the long term. If we grant such an assumption, then the choice of economic and monetary policy becomes a forced one and, as a result, the decisions of national governments are conditioned by the primary objective of combating inflationary tendencies. These doctrinal positions found support in the stance taken by the Bundesbank in relation to the creation of a central European bank entrusted with the task of ensuring the stability of the new exchange rates. So it was Germany that provided the impetus for the construction of a European monetary system with a sole currency. The political events of the end of the Eighties seemed to offer Germany the chance to establish an economic hegemony in Europe. The Deutschmark was converted into the Euro, and Germany's hegemonic role took concrete shape in the ECB's commitment to follow the basic policy of the Bundesbank (Dyson and Featherstone, 1999). That is why the governor of the Bundesbank could claim that "Above all, agreement must exist that stability of the value of money is the indispensable prerequisite for the achievement of other goals" (Pohl, 1988, p. 132), thus becoming the champion of a European Central Bank capable of staying on the course of stability of the general level of prices. The adoption of the Bundesbank's policy as the guiding philosophy of the ECB happened at a time when there was a specific convergence of conditions that have more to do with politics than with any changes in the principals' preferences. A first condition was political, represented by the pressure of German reunification on the political and economic balance in Europe. Maes (2001) clearly shows that it was the political classes of the major European countries that were particularly keen on launching the Euro, albeit for different reasons.
A second condition, which seems complementary to the first, is represented by the maturing of a cultural climate that, in political economy, is expressed in the scientific hegemony of the monetarist paradigm. So we must look to the founding theories of the system, which can be summarized in the names of Lucas and Friedman, in order to understand the nature of the ECB's ideological manifesto. As a consequence of adopting the Bundesbank's approach, Keynesian-type policies were effectively abandoned. The two conditions listed above demonstrate sufficiently clearly that the birth of the Euro derives from the disruption of the previous equilibrium between two different scientific approaches in political economy, and from the consequent disruption of the equilibrium between two different conceptions of economic policy. Going back to formula (3), we can say that the conflict between different objectives is reduced by means of a simplification of the objectives in question, so that the resulting strategy is of the type:

α = k2 α2, with k2 = 1   (4)
7. Conclusions

In this paper, we have used the process of European monetary integration as an opportunity to demonstrate the possible interactions between some conceptual tools of decision theory and the analysis of phenomena of economic policy. Our aim was to emphasize that decision processes on the matter of international economic policy, where several national governments are operating, can be analyzed using the typical tools of economic theory, such as the principal-agent approach. It is the very use of such tools, however, together with those belonging to decision theory, that highlights the real decision-making centers. In short, I think that using multi-objective analysis may help to further clarify the relationships involved. The analysis shows how the process of European monetary integration marks a progressive change in the respective weights in the collective preference functions of the single national governments. The question we might ask is whether this change represents the outcome of an "objective" learning process or a more or less winding path of adjustment to changes in the balance of power between the economies of the nations involved. This is a question that we are unable to answer for the time being.
References
[1] Aoki, M. (2001), Toward a Comparative Institutional Analysis, Cambridge, MA: The MIT Press.
[2] Arrow, K.J. (1951), Social Choice and Individual Values, New York: Wiley.
[3] Arrow, K.J. (1985), "The Economics of Agency", in J.W. Pratt and R.J. Zeckhauser (eds.), Principals and Agents: The Structure of Business, Boston, MA: Harvard Business School Press, pp. 37–51.
[4] Axelrod, R. (1981), "The Emergence of Cooperation among Egoists", American Political Science Review, 75, pp. 305–18.
[5] Axelrod, R. (1984), The Evolution of Cooperation, New York: Basic Books.
[6] Axelrod, R. (1997), The Complexity of Cooperation, Princeton, NJ: Princeton University Press.
[7] Chappel, H.W., McGregor, R.R. and Vermilyea, T. (2005), Committee Decisions on Monetary Policy, Cambridge, MA: The MIT Press.
[8] Dehem, R. (1972), "Le mirage monétaire européen, son coût et ses aléas", Recherches Economiques de Louvain, pp. 201–12.
[9] Dopfer, K. (1997), "Come emergono le istituzioni economiche: gli agenti istituzionali ed i germi del comportamento", in E. Benedetti, M. Mistri and S. Solari (eds.), Teorie evolutive e trasformazioni economiche, Padova, IT: Cedam, pp. 183–211.
[10] Downs, A. (1957), An Economic Theory of Democracy, New York: Harper & Row.
[11] Dyson, K. and Featherstone, K. (1999), The Road to Maastricht: Negotiating Economic and Monetary Union, Cambridge, UK: Cambridge University Press.
[12] Engel, C. and Rogers, J.M. (1996), "How Wide is the Border?", American Economic Review, 86 (5), pp. 1112–25.
[13] Frankel, J.A. and Rose, A.K. (1998), "The Endogeneity of the Optimum Currency Area Criteria", Economic Journal, 108 (449), pp. 1009–25.
[14] Friedman, M. (1968), "The Role of Monetary Policy", American Economic Review, 58 (1), pp. 1–17.
[15] Gilpin, R. (2001), Global Political Economy: Understanding the International Economic Order, Princeton, NJ: Princeton University Press.
[16] Hayek, F. (1973), Rules and Order, London, UK: Routledge.
[17] Hodgson, G. (1988), Economics and Institutions: A Manifesto for a Modern Institutional Economics, Cambridge, UK: Basil Blackwell.
[18] Hodgson, G. (1999), Evolution and Institutions, Cheltenham, UK: Elgar.
[19] Issing, O., Gaspar, V., Angeloni, I. and Tristani, O. (2001), Monetary Policy in the Euro Area, Cambridge, UK: Cambridge University Press.
[20] Kasper, W. and Streit, M.E. (1998), Institutional Economics, Cheltenham, UK: Elgar.
[21] Kenen, P.B. (1969), "The Theory of Optimum Currency Areas: An Eclectic View", in R.A. Mundell and A.K. Swoboda (eds.), Monetary Problems of the International Economy, Chicago: University of Chicago Press, pp. 41–60.
[22] Keohane, R. (1980), "The Theory of Hegemonic Stability and Change in International Economic Regimes", in O. Holsti et al. (eds.), Change in the International System, Boulder, CO: Westview Press, pp. 131–62.
[23] Keohane, R. (1984), After Hegemony: Cooperation and Discord in the World Political Economy, Princeton, NJ: Princeton University Press.
[24] Keynes, J.M. (1936), The General Theory of Employment, Interest and Money, London, UK: Macmillan.
[25] Maes, I. (2001), "Macroeconomic Thought at the European Commission in the First Half of the 1980s", in R. Backhouse and A. Salanti (eds.), Macroeconomics and the Real World, vol. 2, Oxford, UK: Oxford University Press, pp. 251–68.
[26] Maes, I. (2002), Economic Thought and the Making of European Monetary Union, Cheltenham, UK: Elgar.
[27] McKinnon, R.I. (1963), "Optimum Currency Areas", American Economic Review, 53, pp. 717–25.
[28] Meade, J.E. (1957), "The Balance of Payments Problems in a Free Trade Area", The Economic Journal, 67, pp. 379–96.
[29] Mundell, R.A. (1961), "A Theory of Optimum Currency Areas", The American Economic Review, 51, pp. 657–65.
[30] Nash, J.F. (1950), "The Bargaining Problem", Econometrica, 18, pp. 155–62.
[31] Nurmi, H. (1988), Rational Behavior and the Design of Institutions, Cheltenham, UK: Elgar.
[32] North, D.C. (1998), "Where Have We Been and Where Are We Going?", in A. Ben-Ner and L. Putterman (eds.), Economics, Values and Organization, Cambridge, UK: Cambridge University Press, pp. 491–508.
[33] Pohl, K.O. (1988), "The Further Development of the European Monetary System", in Collection of Papers, Committee for the Study of Economic and Monetary Union, Luxembourg, pp. 129–56.
[34] Rowley, C.K. (2001), "The International Economy in Public Choice Perspective", in W.F. Shugart and L. Razzolini (eds.), The Elgar Companion to Public Choice, Cheltenham, UK: Elgar, pp. 645–72.
[35] Schotter, A. (1981), The Economic Theory of Social Institutions, Cambridge, UK: Cambridge University Press.
[36] Streit, M., Mummert, V. and Kiwit, D. (1997), "Views and Comments on Cognition, Rationality and Institutions", Journal of Institutional and Theoretical Economics, 153, pp. 688–92.
[37] Tavlas, G.S. (1993), "The 'New' Theory of Optimum Currency Areas", World Economy, 16 (6), pp. 663–85.
[38] Ullman-Margalit, E. (1977), The Emergence of Norms, Oxford, UK: Clarendon Press.
[39] Vanberg, V. (1994), Rules and Choice in Economics, London, UK: Routledge.
[40] Young, H.P. (1998), Individual Strategy and Social Structure, Princeton, NJ: Princeton University Press.
Multicriteria Routing Models in Telecommunication Networks – Overview and a Case Study

João C.N. CLÍMACO a, José M.F. CRAVEIRINHA b and Marta M.B. PASCOAL c

a Faculty of Economics of the University of Coimbra and INESC Coimbra, Portugal. E-mail: [email protected]
b Department of Electrical Engineering Science and Computers – Faculty of Science and Technology of the University of Coimbra and INESC Coimbra, Portugal. E-mail: [email protected]
c Department of Mathematics – Faculty of Science and Technology of the University of Coimbra and INESC Coimbra, Portugal. E-mail: [email protected]
Abstract. Telecommunication networks have been, and are, in a process of extremely rapid evolution, reflecting the interaction between a fast pace of technological progress and a complex socio-economic environment. This justifies the interest in using multicriteria modelling and analysis in decision processes associated with various phases of network planning and design, particularly concerning the development of routing models of a multidimensional nature. Based on an overview of evolutions in telecommunication network technologies and services, we begin by identifying the motivating factors for the increasing interest in multicriteria routing models. An overview of a significant number of contributions in this area is presented, followed by a description of a novel bi-level hierarchical multicriteria routing optimisation model for multiservice communication networks and its application to a video traffic routing problem. Finally we outline some conclusions and future trends.

Keywords. Telecommunication networks, routing, multicriteria analysis
Introduction and Motivation

Telecommunication networks and the services they provide have been in a process of extremely rapid evolution. This trend reflects the very rapid pace of innovation in telecommunication and information technologies and the rapid evolution in the pattern of increasingly advanced service offerings. This evolution is of major importance because of the very large and increasing investments associated with the telecommunication services market (representing on average 2.7% of GDP in OECD countries in the early 2000s) and its great impact on the economy and on society as a whole. These drastic developments in telecommunication networks generate a great variety of complex decision problems of a multidimensional nature, often including incommensurable/conflicting criteria and sometimes also involving several conflicting decision agents. One can say that the interaction between a complex socio-economic environment and these evolutions in telecommunication networks and services justifies the increasing interest in applying multicriteria decision analysis (MCDA) in decision processes related to multiple instances of network planning, management and design, as noted in [Granat and Wierzbicki, 2004] and [Clímaco and Craveirinha, 2005]. In a broader perspective, based on a reflection on knowledge and technology creation theories, [Wierzbicki, 2005] shows that some issues from the telecommunication and information sciences can be used to formulate a rational theory of intuition, which can be developed as a complement to multicriteria decision support.

Firstly, in this section we present a short overview of the major aspects of recent evolutions in telecommunication networks, for a better understanding of the more decisive factors which influence the interest in using MCDA in the areas of management, planning and design, and particularly in the area of routing models, the main focus of this text. For this purpose it is important to identify the major trends of a technological and socio-economic nature driving the extreme pace of telecommunication network evolution. Secondly, we will discuss the motivation for the use (and potential advantages) of multicriteria modelling in telecommunication network planning and design, and analyse in more detail some key issues related to the development of multicriteria routing models. In Section 2 an overview of a significant number of contributions on various types of multicriteria routing models is presented. The following section describes a novel bi-level hierarchical multicriteria routing optimisation model for multiservice networks and its application to a video traffic routing problem. Finally, in Section 4, we outline some conclusions and future trends in this research area.
Overview of Evolution in Telecommunication Networks

Concerning technological evolution it can be said, from a historical perspective, that major evolutions in telecommunication networks have been centred around two main principles of information transfer, namely 'circuit switching' (typical of classical telephone networks) and 'packet switching' (typical of the Internet). The very rapid expansion of the Internet was strongly accelerated in the 1990s by the release in 1993 of the basic web technologies by the European laboratory CERN. Telephone networks also evolved rapidly from the 1980s onwards to ISDNs (Integrated Services Digital Networks) and, especially in the 1990s, to B-ISDNs (Broadband ISDNs), mainly based on ATM (Asynchronous Transfer Mode) technology, enabling the convergence of different types of services on the same network. These trends have been accompanied and driven by the very rapid growth of the demand for data services and for new and more bandwidth-"greedy" services (such as video services). The necessity of introducing into the Internet new functionalities, especially concerning connection-oriented services with QoS (Quality of Service) guarantees such as voice or video services, led to the development of new technological platforms, namely the Integrated Services (IntServ) and Differentiated Services (DiffServ) mechanisms, and to the spread of MPLS (Multiprotocol Label Switching) technology. MPLS will support the implementation of an efficient and flexible multiservice network based on the Internet. These evolutions were made possible, among other factors, by developments in the transport infrastructure based on optical fibre technologies such as DWDM (Dense Wavelength Division Multiplexing). These optical technologies enable the transmission
of extremely high information rates (up to the order of Tbit/s, i.e. 10^12 bit/s) with a significant degree of flexibility, enabling very large economies of scale in information transport. Furthermore, rapid evolutions in digital radio communication technologies enabled a very rapid expansion of mobile networks, driven by an increasing demand for mobile data services including Internet access, so that the total number of mobile subscribers tended to overtake the number of fixed lines by 2004, at a world level [Hardy et al., 2003]. Beyond the very fast pace of technological innovation, three major factors have conditioned and determined the evolution of telecommunication networks: traffic growth (both quantitative and qualitative), demand for new services (especially broadband services) and the rapid liberalisation since the 1990s. As for traffic growth, a major trend is the sharp increase in Internet traffic and mobile network traffic in recent years, for example with annual rates attaining 60–80% for the Internet in 2000–2001 (cited in [El-Sayed and Jaffe, 2002]). In terms of economic value, a 9% annual increase for Internet services and a 14% increase for mobile services were estimated for the 2000–2004 period (cited in [Hardy et al., 2003]). Many analysts estimate that the average penetration of mobiles in Europe will level off around 80%. Concerning the demand for new services in large European companies, it was estimated (cf. [Hardy et al., 2003]) that in 2005 the main share of total expenditure was on data services using leased lines and ISDN (33%), while 25% was assigned to voice services, 19% to the Internet, 10% to company intranets and 5% to professional audio and video applications.
Finally, at the market level, there was in the 1990s a steady evolution from regulated monopolies to liberalisation, especially fostered, in Europe, by the "full competition" directive determining the liberalisation of infrastructures that do not carry telephone service, and total liberalisation in 1998. One should also note the strong interactions among these factors and their interactions with socio-economic factors. An example is the expansion of e-commerce, which is associated with the increase in traffic volumes and with the demand for new services at the lowest possible cost. In global terms it can be said that there is a strong correlation between the technical development and expansion of communication networks and the features of economic and social evolution. It should be stressed that all these trends, very briefly described here, are multifaceted and subject to various types of conflict. An example is the contradictions raised by the recent drive for big mergers and acquisitions and the antitrust policies of the regulatory bodies (the Federal Trade Commission and Federal Communications Commission in the US, and the EU Competition Directorate). It can be said that the described factors favour the development of networks capable of satisfying increasing traffic volumes and more advanced services of multiple types at the lowest cost per information unit carried, with some degree of QoS satisfaction. The following mega-trends in technological evolution can be pointed out: the convergence of the Internet wired transport infrastructure towards an 'intelligent' optical network; the evolution of 3G (third generation) wireless networks in the direction of an all-IP (Internet Protocol) converged network; and the increasing importance of multidimensional QoS issues in the new technological platforms.
These developments will enable a new, high performance multiservice Internet, implementing the concept of the QoS-based packet network proposed in [El-Sayed and Jaffe, 2002]. All these trends highlight the increasing relevance of the issues related to the definition and assessment of multidimensional QoS parameters and the associated network control mechanisms, which constitutes a major factor justifying the interest in using MCDA, namely in routing approaches, as analysed in the next sections. Furthermore, the
issues briefly analysed in the previous paragraph have a strong impact on the nature of many problems of management, negotiation, network planning and design, and routing. The necessity of, or interest in, including in the OR models associated with these problems multiple, possibly conflicting objectives and constraints of various technical, economic and social types justifies the potential advantage of introducing multicriteria analysis methods in these areas. In fact these evolutions lead to the multiplication of new problems, in many of which there is an advantage in explicitly considering multiple criteria. Also, the importance in this area of modelling negotiation processes involving various decision agents (for example network operators and service providers), and the imprecision associated with many objective functions and constraint parameters, strengthen the interest in considering multicriteria analysis approaches in this context. An overview of the application of MCDA tools, namely to telecommunication strategic planning and negotiation, can be seen in [Granat and Wierzbicki, 2004]. A state of the art review on applications of MCDA to telecommunication network planning and design problems was presented in [Clímaco and Craveirinha, 2005].

Motivation for Multicriteria Modelling

The outline of the generic factors that must be considered in the development of OR tools in these areas enables the understanding of the motivation for the use of multicriteria modelling in the context of telecommunication planning and design, and in routing problems in particular. First of all, there is the extremely fast and sometimes unpredictable rhythm of technological innovation, such that, in many situations, there are several alternative technological solutions with multiple technical, economic and social implications.
Secondly, there is the increasing demand for different types of services, which may require, in a modelling context, the consideration of associated objectives and/or constraints. Also, the transition to full liberalisation and the increasing competition among operators and service providers favour situations involving various decision agents with conflicting points of view and the occurrence of negotiation processes. Finally, a most relevant factor is that in the new network platforms (briefly analysed above) there is an increasing relevance of multidimensional issues and possibly conflicting criteria, in particular those which refer to QoS and economic factors. As a matter of fact there is, in many decision problems, potential conflict between several QoS criteria and between some of these criteria and the economic criteria. Note that the relevant criteria for many problems in this area are not only multifaceted but commonly of a heterogeneous nature, combining for example economic and technical criteria. These issues mean that in many situations the models for decision support in these areas become more realistic if different aspects are explicitly considered by building a consistent set of criteria, rather than aggregating them a priori in a single function, as was typically done in earlier OR models in this field. Multicriteria models enable the different concerns of various natures which are at stake in a given decision problem to be explicitly addressed, so that the decision maker may grasp the conflicting nature of the criteria and satisfactory compromise solutions may be identified. Note that even in cases where an a priori aggregation of the criteria is necessary, explicit multicriteria modelling has the advantage of enabling a deeper insight into the features of the problem to be achieved.
This is most relevant in routing models, the focus of the present work, the aim of which is the calculation and selection of a sequence of network resources from an origin to one or several destinations
satisfying certain QoS constraints and seeking the optimisation of network economic or performance objectives. For example, the optimisation of the total expected network revenue can conflict with the QoS objective of services associated with end-to-end flows of lower traffic intensity (i.e. with lower average demand in the overall network). We now emphasise some peculiarities of routing models that are most important to the development of the various types of multicriteria routing approaches. The first aspect has to do with a key feature of the routing system: whether it is static or dynamic. In static models, where network routing solutions remain unchanged over a large time period, it is possible to combine the state of the art in multicriteria decision aiding with exact or heuristic path optimisation approaches. This is in contrast with dynamic routing models, where routing solutions have to be changed within limited time periods (sometimes very short) in response to varying network working conditions, in order to obtain the best possible network performance under a given routing framework. In these cases it is necessary to put emphasis on the development of multicriteria models capable of supporting automatic decisions with stringent time requirements. Another issue has to do with the possibility of, or interest in, a hierarchical structuring of the criteria, by assigning different levels of priority to the criteria, for example criteria associated with global network performance and criteria associated with a particular node-to-node traffic flow. This is required (or advisable) in some routing models, as in the case presented in the third part of this paper.
Also, the scale of modelling poses different challenges, considering the term 'scale' both in structural terms (that is, whether the interdependencies between different networks/structures managed by different agents are explicitly taken into account) and in dimensional terms (the typical network dimensions that can be handled efficiently by a particular type of multicriteria model). Concerning the multi-structural scale situation, the decision aiding, besides the multidimensionality referred to above, has to consider conflicts and negotiation among different operators and service providers, taking into account customers' requirements. In this context the problems we are dealing with are normally very complex. The OR models are inevitably reductionist, but multicriteria approaches can partially mitigate this drawback in many situations.
1. Overview of Multicriteria Routing Models

1.1. Background Concepts

Routing is a key functionality in a telecommunication network, the aim of a routing model being the calculation and selection of a sequence of network resources (corresponding to a loopless path and usually designated as a route) from an origin to one or several destinations (in the case of multipath routing), typically satisfying certain constraints normally associated with QoS requirements, and seeking the optimisation of network economic and/or performance objectives. It is well known that routing has a very strong impact on network performance (as viewed according to certain metrics associated with the routing model) and cost, as can be seen in classical studies (see e.g. [Ash, 1998; Conte, 2003]). Routing in communication networks can be described and specified at different levels and from different perspectives. Next we will introduce some concepts which may
help in clarifying these aspects. While a routing principle will designate an essential general feature of the routing functionality (for example whether it is static or dynamic, or single-path/multipath), we will use the term routing method to designate a certain specification of a routing principle, a central element of which is the procedure (algorithm or set of rules) used to perform the path calculation and path selection (assuming that each route is associated with a path in the network representation) for any given end-to-end connection request at a given time. This procedure is normally designated as the routing algorithm and is executed on given input information associated with the current network representation (network topology, arc capacities, estimated offered demand and possibly network status information) and the prescribed connection requirements. At a lower level of specification, routing is normally described through a routing technique, a technical entity that actually enables the implementation of a routing method in a given real network with a given technology and architecture, typically in the form of routing protocols, critically dependent on the features of the concrete technological platform. Note that routing protocols usually specify capabilities which are relevant for both routing and signalling functions. The PNNI (Private Network Node Interface) protocol for ATM networks and the OSPF (Open Shortest Path First) protocol for the Internet are well known examples of routing protocols. The development of a routing method requires the specification of a routing model, which includes all the assumptions and logic-mathematical entities necessary for specifying a routing method. A key element of a routing model is the routing calculation problem, typically an optimisation problem (or routing optimisation problem) where the decision variables are the path(s) to be assigned to node-to-node connection requests or 'calls'.
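At its core, the path calculation step of most routing algorithms is a (possibly constrained) shortest path computation. As a purely illustrative sketch (ours, not taken from the chapter or any cited protocol), the following minimal Dijkstra routine computes a single-metric shortest path over an adjacency-map network representation:

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path from src to dst; graph maps node -> {neighbour: arc cost}."""
    dist = {src: 0}
    pred = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already relaxed via a cheaper path
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                pred[v] = u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None, float("inf")  # no route exists
    path, node = [dst], dst
    while node != src:
        node = pred[node]
        path.append(node)
    return path[::-1], dist[dst]

# toy network: arc values are a single additive metric (e.g. delay or cost)
net = {"a": {"b": 1, "c": 4}, "b": {"c": 1, "d": 5}, "c": {"d": 1}, "d": {}}
print(dijkstra(net, "a", "d"))  # (['a', 'b', 'c', 'd'], 3)
```

Real routing algorithms layer QoS constraints, multiple metrics and protocol-specific state on top of this basic computation, as discussed in the following subsections.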
The term call is here considered in its broadest sense, that is, an end-to-end service connection request with certain features that must be taken into account in the calculation and selection of paths by the routing method, for example a voice connection on an ISDN or a data transfer on the Internet. Routing problems are of different natures and admit a great multiplicity of formulations, depending primarily on the following factors: i) the routing principle(s) to be used; ii) the mode(s) of information transfer (typical examples are circuit switching, packet switching and cell switching); iii) the network architecture and dimension (for example core network, access network or multiple interconnected networks); iv) the level of representation of the network (typically two levels may be considered: the physical or transmission network, and the functional or logical network which is mapped onto the physical network); v) the decision agents involved, in routing problems with several agents (as in routing in international networks, or in the Internet when the operators/administrators of various routing domains are involved). Concerning the routing problem formulation, we must emphasise that the specification of the objective(s) and constraint(s) depends strongly on the factors described and on the rationale of the routing model in terms of a number of features. A first feature of the routing model is the routing optimisation framework, which has to do with the scope and nature of the routing problem formulation. In this respect we may distinguish network-wide optimisation models and flow-oriented optimisation models. In network-wide optimisation models the objective function(s) are formulated at the network level and depend explicitly on all traffic flows in the network, such as average total carried traffic, total expected revenue, average packet delay (averaged over all packet streams) or a function which seeks to optimise the utilisation of the arcs of the network in terms
of their level of occupancy. Two examples of this type are [Erbas and Erbas, 2003] and [Martins et al., 2006]. In flow-oriented optimisation models the objective function(s) are formulated at the level of each particular node-to-node connection or flow, such as the number of arcs of the path, the path cost (for a specific link usage path metric), the mean packet delay of the particular traffic stream or the end-to-end blocking probability. Examples of this type are the numerous QoS routing models which are based on single-objective constrained shortest path formulations (a review can be seen in [Kuipers et al., 2002b] and an overview in [Clímaco and Craveirinha, 2005]). A second feature characterising the rationale of the routing model is naturally the underlying routing principle(s). A third feature refers to the nature of the chosen objective function(s) and constraints, including instances such as whether the optimisation model is single- or multi-objective and the type of the functions and constraints (technical, economic, social or other). A fourth feature has to do with the representation of node-to-node demand requests or offered traffic. In this respect we can distinguish different types of models in terms of the granularity of the representation (for example representation at call level or at traffic flow level, i.e. in terms of a sequence of calls throughout time) or the nature of this representation, namely whether it is deterministic or stochastic. An example of a deterministic representation is to characterise the demand by a fixed average bandwidth demand from the originating to the terminating nodes, as commonly used in multicommodity network flow approaches.
A stochastic representation of the traffic flows uses a stochastic model, typically some form of point process (the simplest example being a homogeneous Poisson process), to represent the node-to-node traffic flows, and some approximation from teletraffic theory to estimate relevant parameters needed by the routing optimisation model (such as blocking probabilities or average packet delays).

1.2. Overview of Multiple Criteria Routing Models

The very rapid technological evolution in communication networks and the increased demand for new services led to the development of multiservice networks of various types (as briefly analysed in the previous section) in which various key functionalities, namely routing, have to deal with multiple, often heterogeneous, QoS instances. This increasing relevance of multidimensional QoS in the technological platforms led to the emergence of a routing principle designated as QoS routing. Typical QoS routing models involve the calculation of a sequence of network resources along a path satisfying several constraints on different metrics (e.g. delay, cost, number of arcs of a path and loss probability), depending on traffic attributes and service types (see e.g. [Lee et al., 1995]), while seeking to optimise some metric. The most common formulations of QoS routing problems in the literature are the multiple-constrained path (MCP), the multiple-constrained optimal path (MCOP) and the restricted shortest path (RSP) problems. In the MCP problem the aim is just to obtain path(s) which satisfy constraints on all metrics, while in the MCOP and RSP problems an objective function also has to be optimised; RSP is just the particular case of the MCOP problem with a single constraint. These routing problems are quite relevant in multiservice Internet technologies, namely MPLS, and in some ATM routing protocols.
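As a brief aside on the teletraffic approximations mentioned at the end of Section 1.1: for a Poisson traffic flow offered to a group of circuits, the blocking probability is classically estimated with the Erlang-B recursion. The sketch below is a standard textbook result, not a formula specific to this chapter:

```python
def erlang_b(offered_load, circuits):
    """Blocking probability of a Poisson flow of `offered_load` Erlangs
    offered to `circuits` servers, via the Erlang-B recursion:
    B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0  # with zero circuits, every call is blocked
    for n in range(1, circuits + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# 10 Erlang offered to 12 circuits gives roughly 12% blocking
print(round(erlang_b(10.0, 12), 4))  # 0.1197
```

Estimates of this kind are what a stochastic routing optimisation model would feed into its end-to-end blocking objective or constraints.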
It is well known in multicriteria model analysis that a possible approach consists of transforming the objective functions into constraints, except for one of them, which is then optimised. This guarantees, under adequate conditions, that the calculated solutions
will be non-dominated with respect to the original multiobjective model. Furthermore, if one varies the right-hand side of the constraints, different non-dominated solutions may be obtained [Steuer, 1986]. Having this in mind, we can say that the typical formulations of QoS routing models can be envisaged as a first step towards multicriteria analysis. Next we present an overview of significant references on QoS routing models, an area in which a vast literature has appeared since the 1990s. It should be noted that the common necessity (namely in dynamic routing) of determining the solution to these models in a very short time (a few seconds, or even less, in some routing methods) means that the most common resolution approach is to develop heuristics, typically built around classical shortest path algorithms. [Kuipers and Mieghem, 2002; Kuipers et al., 2002b] present a comprehensive review of QoS routing, following a previous review of this topic, up to 1998, in [Chen and Nahrstedt, 1998b], and a state of the art report up to 1999 [Zee, 1999]. The reference list includes a number of papers on variants of QoS routing problems and various resolution procedures for these problems, which are now summarised. In [Wang and Crowcroft, 1996] an analysis of various mathematical properties of the MCP problem with respect to the metrics most relevant to QoS routing is presented. [Neve and Mieghem, 2000] deal with the MCP problem through a heuristic with tunable accuracy based on a K-shortest paths algorithm; [Puri and Tripakis, 2002] present and compare several algorithms for this problem; [Mieghem et al., 2001] propose a procedure for dealing with the MCP and MCOP problems, also based on a K-shortest paths algorithm; and [Yuan, 2002] describes two heuristics for the MCP problem. [Yuan and Liu, 2001] and [Yuan, 2002] propose heuristics for dealing with multiconstrained problems.
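The constraint-transformation idea described above can be made concrete with a toy example (ours, not drawn from any cited paper): on a small network whose arcs carry (cost, delay) metrics, solving the RSP problem for different values of the delay bound yields different non-dominated paths. Brute-force path enumeration suffices at this scale:

```python
def all_paths(graph, src, dst, path=None):
    """Enumerate loopless src-dst paths; graph maps node -> {nbr: (cost, delay)}."""
    path = [src] if path is None else path
    if src == dst:
        yield list(path)
        return
    for nbr in graph.get(src, {}):
        if nbr not in path:  # keep paths loopless
            yield from all_paths(graph, nbr, dst, path + [nbr])

def metrics(graph, path):
    """Total (cost, delay) of a path."""
    cost = sum(graph[u][v][0] for u, v in zip(path, path[1:]))
    delay = sum(graph[u][v][1] for u, v in zip(path, path[1:]))
    return cost, delay

def restricted_shortest_path(graph, src, dst, max_delay):
    """RSP: minimise cost subject to a delay bound (brute force, tiny nets only)."""
    feasible = [p for p in all_paths(graph, src, dst)
                if metrics(graph, p)[1] <= max_delay]
    return min(feasible, key=lambda p: metrics(graph, p)[0], default=None)

# arcs carry (cost, delay): one cheap-but-slow route, one fast-but-dear route
net = {"s": {"a": (1, 5), "b": (4, 1)}, "a": {"t": (1, 5)},
       "b": {"t": (4, 1)}, "t": {}}
print(restricted_shortest_path(net, "s", "t", 10))  # ['s', 'a', 't']: cost 2, delay 10
print(restricted_shortest_path(net, "s", "t", 3))   # ['s', 'b', 't']: cost 8, delay 2
```

Varying the bound from 10 down to 3 traces out the two non-dominated (cost, delay) compromises, exactly the mechanism noted above for [Steuer, 1986].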
Earlier papers relevant to this area are: [Hassin, 1992; Guo and Matta, 1999] (focusing on the RSP problem); [Blokh and Gutin, 1996] and [Aneja and Nair, 1978], which deal with the constrained shortest path problem; [Handler and Zang, 1980], which proposes a dual algorithm for the constrained shortest path problem; and [Chen and Nahrstedt, 1998a], describing two heuristics for the MCP problem based on the Dijkstra and Bellman–Ford algorithms. [Korkmaz and Krunz, 2001] presents a heuristic for the MCOP problem based on modified versions of Dijkstra's algorithm, and [Liu and Ramakrishnan, 2001] develops an exact algorithm for finding K shortest paths satisfying multiple constraints. [Song et al., 2000] describe an algorithm for a multipath constrained problem and analyse its performance through simulation. [Shin, 2003] deals with two QoS routing problems, the MCP problem and the problem of finding a minimum number of paths satisfying multiple QoS constraints in a WDM network, and also describes the implementation of two routing schemes, for two types of routing protocols, intended to implement the analysed algorithms. An in-depth analysis of the complexity issues of the MCP problem is given in [Kuipers and Mieghem, 2005a], noting that the problem is NP-complete but not strongly NP-complete and arguing that in most practical instances of the problem exact solutions can be achieved. [Kuipers and Mieghem, 2005b] presents a study of the MCP problem focused on the implementation of dominance verification techniques in QoS routing models. [Kuipers et al., 2004] presents a study of the performance evaluation of MCP and RSP algorithms based on complexity analysis and simulation results. A recent comparative study of exact and approximation algorithms of a specific type for the constrained optimisation routing problem is presented in [Kuipers et al., 2006].
In [Avallone et al., 2005] a comparison of algorithms for the MCOP problem, based on simulations, is also presented. A number of papers focusing on particular applications of QoS routing models are also listed in the references. A QoS inter-domain routing model for a high speed wide area network was presented in [Kim et al., 1998], using a heuristic approach. Applications of QoS routing models to integrated services networks are given in [Ma and Steenkiste, 1997; Ma and Steenkiste, 1998; Guerin and Orda, 2000] and [Goel et al., 2001]. Models of this type for the Internet are quite numerous, particularly having in mind their interest for the DiffServ, IntServ and MPLS platforms, as shown in [Pornavalai et al., 1997; Pornavalai et al., 1998; Fortz and Thorup, 2000; Ergun et al., 2000] and [Rocha et al., 2006]. Applications to MPLS networks can be seen in [Banerjee et al., 2001], which presents an overview of this specific application area. An application of a QoS routing model to an ATM network, focused on a problem with multiple constraints, is given in [Prasithsangaree and Niehaus, 2002]. Several routing methods require the calculation of several routes simultaneously for a given originating node, leading to a class of routing problems designated as multipath routing problems. Examples arise in models with reliability requirements, in which an active path and a back-up path (to be used in the event of failure of the former) are to be computed simultaneously for each pair of origin-destination nodes, and in multicast routing, where a set of paths has to be calculated from an originating node to a set of multiple destination nodes (for example in teleconferencing services on the Internet). In [Kuipers et al., 2002a] a QoS routing procedure for a constrained multicast path problem is presented. A multicast routing algorithm involving the calculation of multiple trees is given in [Prieto et al., 2006].
[Al-Sharhan, 2005] proposes a multicast QoS routing model for wireless networks based on the calculation of trees satisfying multiple constraints, using a heuristic combining features of genetic algorithms and competitive learning. A multipath routing model in which the load of a traffic flow can be divided among a set of alternative routes, considering several criteria, for application to the Internet, is presented in [Fournie et al., 2006]. [Agrawal et al., 2005] presents a specific QoS routing model for robust routing design in MPLS networks, considering a two-path calculation problem and two network performance metrics obtained with and without failures in the links; a mixed-integer linear programming formulation is used. A QoS routing model with multiple constraints (an MCP-type problem) using a fuzzy-system-based routing technique is described in [Zhang and Zhu, 2005]. [Liu et al., 2005] describe a multipath QoS routing model for ad-hoc wireless networks considering four criteria and present a resolution procedure based on fuzzy set theory and evolutionary computing. Note that, in some models, the concerns which lead to certain QoS routing approaches are relevant to more explicit multicriteria analysis. One of those cases can be found in [Widyono, 1994], which proposes an exact restricted shortest path (RSP) algorithm that enables one to obtain, for example, successive shortest paths between pairs of nodes for different values of the right-hand side of the delay constraint, hence obtaining non-dominated solutions. In this case we note that the exact bicriterion shortest path approach in [Clímaco and Martins, 1982] could be used in this type of study in a much more efficient manner. We would also like to note that the principles underlying the bicriterion routing approach described in [Antunes et al., 1999] (based on a
J.C.N. Clímaco et al. / Multicriteria Routing Models in Telecommunication Networks
specific K-shortest path algorithm and on the introduction of preference thresholds in the objective function space) have clear relations with the principles underlying Jaffe's algorithm dedicated to the MCP problem [Jaffe, 2004] and other algorithms intended to improve some aspects of that algorithm. Other QoS routing approaches in which there is an a priori articulation of preferences in the path selection, taking as basis the chosen metrics, namely bandwidth, delay or hop-count, are listed, in particular the widest-shortest and shortest-widest path approaches: [Wang and Crowcroft, 1996] (presenting a procedure for calculating the shortest path in terms of delay with maximal value of the minimum of the bandwidths of the arcs, usually designated as bottleneck bandwidth), [Ma and Steenkiste, 1997, Ma and Steenkiste, 1998] and [Orda, 1999, Oueslti-Boulahia and Oubagha, 1999] (proposing a heuristic approach based on a utility function, as an alternative to the widest-shortest path model for certain types of traffic flows in the Internet) and [Mieghem et al., 2001]. Finally, we refer to [Sobrinho, 2001], which presents a unified treatment of several QoS routing related path computation problems, including shortest path, widest path, widest-shortest path, most-reliable and most-reliable shortest path problems, by using an algebra of weights, hence treating the aggregation of preferences in an articulated manner. Next we present an overview of contributions where the modelling is more explicitly multicriteria. As an introductory note, we think that there are potential advantages in modelling many routing problems in modern communication networks as multicriteria problems, as previously discussed; but we should stress, as an important practical limitation, that in the majority of situations the routing solution to be used by the routing method has to be obtained in a short time (this may range from a small fraction of a second to a few seconds).
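As a concrete illustration of the widest-shortest rule mentioned above, the following sketch (our own construction, not the algorithm of any cited paper) implements it as a Dijkstra-like search with lexicographic labels (total delay, −bottleneck bandwidth): the least-delay path is found first and, among equal-delay paths, the one with the largest bottleneck bandwidth is preferred. The graph representation and function name are our assumptions.

```python
import heapq

def widest_shortest_path(graph, src, dst):
    """Widest-shortest path: minimise total delay; break ties by
    maximising the bottleneck (minimum) bandwidth along the path.
    `graph` maps node -> list of (neighbour, delay, bandwidth) triples.
    Returns (delay, bottleneck_bandwidth, path) or None if unreachable.
    Illustrative sketch only."""
    INF = float("inf")
    best = {src: (0, -INF)}            # best (delay, -bottleneck) label per node
    heap = [(0, -INF, src, [src])]     # lexicographic priority queue
    while heap:
        delay, neg_bw, u, path = heapq.heappop(heap)
        if u == dst:
            return delay, -neg_bw, path
        if (delay, neg_bw) > best.get(u, (INF, 0)):
            continue                   # stale label, already improved
        for v, d_uv, b_uv in graph.get(u, []):
            # extending an arc adds its delay; bottleneck is a running minimum,
            # so its negation is a running maximum
            label = (delay + d_uv, max(neg_bw, -b_uv))
            if label < best.get(v, (INF, 0)):
                best[v] = label
                heapq.heappush(heap, (label[0], label[1], v, path + [v]))
    return None
```

Both label components are monotone along arcs (delay only grows, the negated bottleneck never decreases), which is what makes the lexicographic Dijkstra argument go through.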
Only in static routing problems, or in some forms of periodic dynamic routing models where the model input parameters are known in advance and remain unchanged for a significant period of time (for example, node-to-node traffic offered in different hours), could an interactive procedure be used to select the routes in time for them to be memorised in the routing tables associated with the nodes (which, in the network graph, typically represent switches, routers or servers). This explains the necessity of implementing automated route calculation procedures and the predominance of methods in which there is an a priori articulation of preferences. Nevertheless we think that there are still advantages in considering explicit multicriteria modelling, since this renders the mechanisms of preference aggregation transparent. By using multicriteria approaches, several aspects, namely cost and QoS parameters, can be addressed explicitly by the mathematical models, as objective functions or as constraints, thence enabling a more realistic representation of the underlying engineering problem. A very early paper related to multiobjective routing models in telecommunication networks is [Douligeris, 1991], which describes a multiobjective flow control problem in a multiclass traffic network where each class has a performance objective. The formulated non-linear multiobjective optimisation problem is transformed into a linear multicriteria program, solved by standard techniques. An early paper in this area is [Antunes et al., 1999], which proposes an explicit multiple criteria routing model for a static routing problem (to be used in flow-oriented routing optimisation models with two path metrics as objective functions) that is formulated as a bi-objective shortest path problem; an algorithm for calculating and possibly selecting non-dominated solutions, using preference thresholds in the objective function space,
based on a K-shortest path algorithm, MPS [Martins et al., 1999], is also described. The use of preference thresholds can be interpreted as the use of 'soft constraints', in contrast with the hard constraints of typical QoS routing models. This device makes the models more flexible, hence permitting a more effective multicriteria analysis. [Craveirinha et al., 2003] proposes and describes the essential features of a multiple objective dynamic routing model with alternative routing (i.e., for each node-to-node flow a first choice path and a second choice path, to be used when the first one is blocked, have to be calculated) of periodic type, based on a bi-objective shortest path model using implied costs and blocking probabilities as path metrics. This work is the basis for [Martins et al., 2003], which analyses an instability problem in a multiobjective network-wide optimisation dynamic routing model with alternative routing that uses two network performance objective functions (the expected revenue associated with the total traffic carried and the maximal node-to-node blocking probability) and a stochastic traffic model. [Martins et al., 2005] propose a heuristic based on a bi-objective shortest path model for solving the network-wide optimisation model in the previous paper and compare its performance with reference dynamic routing methods. In [Martins et al., 2006] this model was extended to multiservice networks (corresponding to multi-rate loss traffic networks) by considering a bi-level hierarchical multiobjective routing optimisation model that includes objective functions defined at the network and service levels (including fairness objectives); the performance of the associated heuristic is compared with reference dynamic routing methods for this type of network.
[Mitra and Ramakrishnan, 2001] propose a bi-objective network-wide optimisation routing model for MPLS networks with two traffic classes (QoS and Best Effort traffic), using a lexicographic optimisation formulation. A different type of multiobjective network-wide optimisation model, concerning the nature of the objective functions used, is found in [Knowles et al., 2000], which describes a multiobjective routing model for multiservice networks with three objective functions related to path cost, bandwidth utilisation in the arcs and a target arc utilisation, expressed in terms of bandwidths, solved by an evolutionary algorithm. [Resende and Ribeiro, 2003] propose a bi-objective routing model for private circuit routing in the Internet, the objective functions of which are the packet delay and a traffic load balancing function. [Erbas and Erbas, 2003] present a three-objective optimisation model for routing in MPLS networks, considering multipath routing (through bandwidth traffic splitting) and using a mixed integer formulation. Two of the objective functions are similar to the ones in the previous article and the third aims at minimising the number of LSPs (Label Switched Paths) used. The resolution approach uses an evolutionary algorithm. Other works focusing on the same type of routing model and also using evolutionary algorithms are [Erbas and Mathar, 2002] and [Erbas and Erbas, 2003]. [Osman and Abo-Sinna, 2005] propose a genetic algorithm approach for dealing with multiobjective routing problems of a generic type and present a number of application results. A network-wide multiobjective routing model for MPLS networks with multiple QoS traffic classes is formulated in [Tsai and Dai, 2001], which involves the optimisation of admission control and routing performance; a lexicographic approach is used and a queueing model enables the estimation of average packet delays in the model.
A discussion of methodological issues raised by multiobjective routing models in MPLS networks is presented in [Craveirinha et al., 2005]; a proposal of a hierarchical multiobjective network-wide routing optimisation framework for this problem area is also put forward. In [Cui et al., 2003] a multiobjective optimisation model for multicast routing is described; a genetic algorithm is proposed for calculating non-dominated routes and its performance is analysed. [Crichigno and Barán, 2004] present a procedure, based on the strength Pareto evolutionary algorithm (SPEA), for solving a multiobjective multicast routing problem, seeking to optimise simultaneously the cost of the tree, the maximal end-to-end delay, the average delay and the maximal link utilisation; an analysis of the algorithm's performance is also shown. A similar type of multicast routing problem, tackled by a genetic algorithm, is in [Roy and Das, 2004]. Another multicast multiobjective routing model with traffic splitting, for application to MPLS, is described in [Meisel et al., 2003, Donoso et al., 2004]; it considers as objective functions the maximal link utilisation, the hop-count, the total bandwidth consumption and the total end-to-end delay, and uses a non-linear aggregated function of these four metrics as the basis for the resolution approach. [Fabregat et al., 2005] present an overview of multiobjective multicast routing models, including a classification of several publications. An evolutionary algorithmic approach for calculating the non-dominated solution set, based on a SPEA algorithm, is also proposed; experimental results for up to 11 objectives are also presented. [Meisel, 2005] describes a multiobjective routing model for application to multicast routing in the Internet/MPLS, considering as objective functions the maximal link utilisation, the path hop-count, the total bandwidth consumption and the total end-to-end delay; a first resolution approach, based on a weighted sum of the metrics, is proposed.
A dynamic multicast routing model of the same type is also analysed in this work and a resolution procedure, based on an evolutionary algorithm, is put forward. A multiobjective multicast routing model for application to wireless networks, also using a genetic algorithmic approach, is described in [Roy et al., 2002, Roy and Das, 2002]. Another multiobjective multicast routing model with constraints is presented in [Cui et al., 2003] and a genetic algorithm is proposed for its resolution. [Schnitter and Haßlinger, 2002] and [Haßlinger and Schnitter, 2003] describe a bi-objective routing model for MPLS networks, using a lexicographic type formulation and taking as objective functions the arc utilisation and the number of arcs per path; the problem is solved by a two-step heuristic procedure based on a multicommodity flow approach. In [Pinto and Barán, 2005] an ant colony algorithmic approach for dealing with a multiobjective multicast routing problem with four objectives, in a packet network, is presented. A comparison of multipath routing algorithms for MPLS networks, using traffic splitting, that select candidate paths using multiple criteria, is presented in [Kyeongja et al., 2005]. Multicriteria routing problems are tackled in [Lukac, 2002, Lukac, 2003, Lukac et al., 2003] by using heuristic approaches based on the concept of learning automata in fuzzy environments; a routing scheme based on this approach is proposed and evaluated through experimental results in circuit-switched communication networks, considering two criteria (related to quality and price) simultaneously. [Kerbache and Smith, 2000] describe a multiple objective routing model for a stochastic network where the networks correspond to finite capacity queues, using a multiple objective multicommodity integer programming approach applicable to packet-switched networks. A heuristic solution for a flow-oriented multiobjective optimisation routing problem intended for application to multiservice networks and, in particular, for routing video traffic in ATM networks was presented by [Pornavalai et al., 1998], using cost and hop-count as objective functions. This type of routing problem was considered in [Clímaco et al., 2003] and solved by an exact algorithm which enables the efficient calculation of the whole set of non-dominated paths, based on the bi-objective shortest path algorithm by [Clímaco and Martins, 1982] and on the MPS algorithm in [Martins et al., 1999]. [Granat and Guerriero, 2003] present an interactive procedure for resolving a multiobjective routing problem based on a reference point approach. The multicriteria analysis problem of selecting and ordering the solutions obtained in the context of a multicriteria shortest path routing model, taking into account that this has to be performed in an automated manner, is addressed in [Clímaco et al., 2004]; a first approach to this problem, based on the use of weights and a K-shortest path algorithm, is put forward. [Clímaco et al., 2006] propose a new method for tackling this problem, based on a reference-point approach, and apply it to a bi-objective video traffic routing problem with multiple constraints. A multiobjective formulation for a specific type of routing problem in MPLS networks related to the so-called "book ahead guaranteed services" is reported in [Thirumalasetty and Medhi, 2001]. The aim of the model is the calculation, ahead of time and at the request of a user, of two arc-disjoint routes with certain QoS guarantees, in order to optimise four objective functions; a heuristic resolution approach is used.
A multiobjective shortest path model with constraints for computing packet stream routes in a given area of the Internet is proposed in [Beugnies and Gandibleux, 2003], using objective functions of min-sum and max-min type. [Beugnies and Gandibleux, 2006] describe a multiobjective routing model for the Internet using a multiobjective shortest path formulation that uses as path metrics the total average delay, the hop-count and the residual bandwidth; an exact algorithm for calculating the set of efficient solutions for connections from one node to all the other nodes, and a selection procedure based on a weighted Chebyshev distance to the ideal point, are developed. [Yuan, 2003] presents a bi-objective optimisation approach for multipath routing using traffic splitting, for application to the Internet, assuming all paths are calculated from a single-objective shortest path model (implemented through an OSPF routing protocol) whose weights have to be optimised in order to obtain compromise solutions to a bi-objective network optimisation model, a type of routing method known as 'robust OSPF routing'; the two objective functions of this model are traffic load balancing functions in full-operational and arc-failure scenarios, and the search for non-dominated solutions uses a heuristic based on a hash function and a diversification technique. A bi-objective routing model in a situation where the demand in the network is described by a set of offered traffic matrices is presented in [Zhang et al., 2005], which considers a weighted sum of the average and worst case network performance under the given matrices; the trade-offs between the two criteria in MPLS and OSPF based case-study networks are analysed. An analysis of the application of the max-min fairness principle to telecommunication network design, using a lexicographic optimisation formulation, is described in [Pioro et al., 2005]; an application to a routing problem for elastic traffic (that is, a traffic stream the intensity of which can adapt to the available network capacity) is also presented. A multicriteria routing approach for wireless networks that seeks routes which simultaneously minimise total energy consumption, latency and bit error rate is described in [Malakooti and Thomas, 2006]; the resolution procedure uses a normalised weighted additive utility function and extensive results are presented. [Marwaha et al., 2004] propose a multiobjective routing approach, also for certain wireless networks (namely mobile ad-hoc networks), that seeks to deal with the uncertainties of the routing model by using a fuzzy cost function of the different metrics and an evolutionary algorithmic approach for tackling the corresponding routing problem. Another application of evolutionary algorithmic approaches to a specific multiobjective routing problem in wireless networks, namely mobile agent routing in wireless sensor networks, can be seen in [Rajagopalen et al., 2004]. A multicriteria data routing model for wireless sensor networks, of a dynamic type, is described in [Li et al., 2005], which proposes a heuristic procedure for adaptive tree reconfiguration, considering three criteria associated with sensor-node properties. [Aboelela and Douligeris, 1999] present a multiple objective routing model for broadband networks (based on ATM), using a fuzzy optimisation approach. The aim of the model is to maximise the minimum membership function of all traffic class delays and the minimum membership function of the link utilisation factor over all network links.
A dynamic alternative routing problem in international circuit-switched networks involving multiple network operators is tackled in [Anandalingam and Nam, 1997] through a game theoretic approach, considering the cooperative and the non-cooperative case; the problem is modelled as a special (bi-level) type of integer linear programming problem and several approximate solutions, given by a branch-and-bound algorithm, are analysed. Another type of routing problem concerns routing in WDM optical networks. It is designated in its general form as the routing and wavelength assignment problem (RWA, in short) and is focused on the calculation of lightpaths, i.e., fixed bandwidth connections between two nodes via a succession of optical fibres, occupying a wavelength in each fibre. It is usually decomposed into two problems: the topological routing problem (which involves the determination of a path in the graph representing the optical network along which the connection should be established) and the wavelength assignment problem (which involves the assignment of a wavelength to every arc of the selected path). An overview of the technical motivation and basic concepts in this area of routing is in [Assi et al., 2001] and a review of resolution approaches for the RWA problem can be seen in [Zang et al., 2000]. [Kennington et al., 2003] present a multiobjective model for a routing and provisioning problem in WDM networks with a fixed budget, where the primary objective is to minimise a regret function related to the amount of over- and under-provisioning associated with the uncertainty in the demand forecast, and a secondary objective is to minimise the equipment cost. A two-phase heuristic resolution approach, using mixed integer linear programs, is proposed.
A study on the evolution of the routing models used in a sequence of releases of a network planning tool is presented in [Akkanen and Nurminen, 2001]; it includes a bi-objective optimisation problem involving a trade-off between route length and disjointness, when searching for a primary and a back-up path (to be used in the event of failures).
Next we address a new type of multicriteria routing optimisation model that exemplifies a challenge in multicriteria modelling.
2. A Path Hierarchical Multicriteria Optimisation Model for Multiservice Networks

In this section we describe a novel bi-level hierarchical multicriteria routing optimisation model for multiservice communication networks and its application to a video traffic routing problem. In this model the first level objective functions seek to minimise the negative impact of the use of a path on the remaining traffic flows in the network, while the second level objective functions seek to optimise transmission parameters of the flow associated with the chosen path. The first objective function considered in the first optimisation level assigns to each arc a cost which is the inverse of the available bandwidth in the arc. The aim of its use is to give preference to the occupation of less loaded links, with a view to obtaining a balanced distribution of traffic in the network, hence favouring the acceptance of new connection requests. The second function of the first level is simply the number of arcs in the path (frequently designated as 'hop' count), which is very common in routing models for the Internet. This objective function tends to minimise the number of network resources used and also favours path reliability (in the event of link or node failures). The first objective function considered in the second optimisation level is the minimal available bandwidth over all links of the path, usually known as the "bottleneck bandwidth"; it favours the use of less congested paths as well as of quicker paths (in terms of transmission times). The second objective function in the second level is the sum of the expected delays in the links of the path and favours the use of paths which offer the least expected delay.
The priority given to the functions defined for the first level results from the fact that they tend to optimise the overall network performance, in terms of the network's global traffic carrying capacity, while the second level objective functions seek primarily to optimise quality of service parameters of the end-to-end flow with which the chosen path is associated. This type of model can in principle be applied to different types of multiservice communication networks, such as ATM (Asynchronous Transfer Mode) networks or the Internet, where the information transport mechanism is based on the forwarding of fixed size cells or variable length packets, respectively. We consider the application of this model to a video traffic routing problem by introducing three types of constraints. The first one is a constraint on the path bottleneck bandwidth, which cannot be less than the bandwidth required by the considered type of traffic flow. The second constraint refers to the maximal allowed delay on the path, the bound of which has to do with the specific nature of the video traffic flow. Finally, a third constraint is associated with the maximal allowed delay 'jitter' along a path, which may be expressed, under certain assumptions concerning the queueing mechanisms in the nodes, in terms of a constraint on the maximal number of links per path. Note that this is a new type of multicriteria routing model, corresponding to a bi-level hierarchical multiobjective optimisation model which may be included in the above mentioned class of flow-oriented optimisation models. Our model can easily be adapted to routing problems associated with other types of traffic flows in multiservice communication networks (for example, data traffic or voice traffic) by considering the necessary adjustments in some of the objective functions and constraints.
We propose a resolution approach for this hierarchical multicriteria optimisation model which is based on the calculation of the set of non-dominated and ε-non-dominated solutions for the first level objective functions and on the ordering of the solutions in this set according to preference thresholds defined for these functions. Also, bounds of acceptance are defined for the values of the second level objective functions, thence used as "filters" for the non-dominated solutions of the first optimisation level. In the next sub-sections we describe the mathematical formulation of the addressed multicriteria problem, the resolution approach and the specification of the application model, and present results of computational tests performed with this model on randomly generated networks of significant dimension (up to 3000 nodes).

2.1. Mathematical Formulation

Let (N, A) be an undirected capacitated network where N is the node set and A denotes the set of arcs (or links), every arc in A corresponding to a pair (i, j) with i, j ∈ N. It is assumed that a transmission capacity Rij ∈ ℝ⁺ (usually expressed in bits/s) is assigned to each link. A path p from i to j is defined as a sequence of the form p = ⟨i = v1, v2, ..., vℓ(p) = j⟩, where (vk, vk+1) ∈ A for any k ∈ {1, ..., ℓ(p) − 1}. Here ℓ(p) is called the length of p, that is, its number of nodes, while i and j are called the initial and terminal nodes of path p, respectively. We will consider only loopless paths between any given pair of nodes, that is, paths with no repeated nodes. The set of loopless paths from i to j will be denoted by Pij and P will represent the set of loopless paths from the originating node s to the terminating node t. The formulation of the path calculation problem for a given node-to-node traffic flow from s to t will also assume that some attributes of this flow are known.
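For tiny illustrative networks, the set P of loopless s–t paths can be enumerated directly by depth-first search. This brute-force sketch is our own (practical models use K-shortest path ranking algorithms such as MPS instead, since the number of loopless paths grows exponentially):

```python
def loopless_paths(graph, s, t):
    """Enumerate all loopless paths from s to t in an undirected network
    given as node -> set of neighbours. Exponential in general; for
    illustration only."""
    paths, stack = [], [(s, [s])]
    while stack:
        u, path = stack.pop()
        if u == t:
            paths.append(path)
            continue
        for v in graph.get(u, ()):
            if v not in path:          # loopless: no repeated nodes
                stack.append((v, path + [v]))
    return paths
```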
In the present formulation these attributes include the minimal bandwidth required by the particular type of flow in every arc of a path (denoted Δbandwidth in constraint (1) below). It is also assumed that there is information on the available bandwidths bij (bij ≤ Rij) in the links and on the average delays dij associated with the links. The cost cij of using a link (i, j) is simply the inverse of bij. Thence we define the following path metrics, which correspond to the objective functions in our model:

• c(p) = Σ(i,j)∈p cij (path cost),
• h(p) = number of arcs of path p (commonly designated as hop-count),
• b(p) = min(i,j)∈p {bij} (bottleneck bandwidth of path p),
• d(p) = Σ(i,j)∈p dij (expected delay along path p).

The first constraint has to do with the minimal required bandwidth in the path:

b(p) ≥ Δbandwidth,  (1)
The two remaining constraints of the path calculation problem in the case of the video traffic routing model are related to the maximal allowed delay on a path, Δdelay, and the maximal allowed delay jitter, which can be transformed, for some queueing disciplines in the nodes, into a constraint on the maximal number of arcs per path, Δjitter:

d(p) ≤ Δdelay,  (2)
h(p) ≤ Δjitter.  (3)
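As a minimal illustration (with our own data-structure conventions and function names, not the paper's), the four path metrics and the feasibility test given by constraints (1)–(3) can be written as:

```python
def path_metrics(path, cost, delay, avail_bw):
    """Metrics of a path given as a node sequence; `cost`, `delay` and
    `avail_bw` map an arc (i, j) to c_ij, d_ij and b_ij respectively."""
    arcs = list(zip(path, path[1:]))
    c = sum(cost[a] for a in arcs)        # c(p): path cost
    h = len(arcs)                         # h(p): hop count
    b = min(avail_bw[a] for a in arcs)    # b(p): bottleneck bandwidth
    d = sum(delay[a] for a in arcs)       # d(p): expected delay
    return c, h, b, d

def is_feasible(path, cost, delay, avail_bw, bw_req, delay_max, hops_max):
    """Constraints (1)-(3): b(p) >= bw_req, d(p) <= delay_max, h(p) <= hops_max."""
    _, h, b, d = path_metrics(path, cost, delay, avail_bw)
    return b >= bw_req and d <= delay_max and h <= hops_max
```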
Therefore, taking into account the previous considerations concerning the optimisation hierarchy, we can formulate the following bi-level hierarchical multicriteria path optimisation problem for a given node-to-node traffic flow from s to t:

1st level (P1):
  min {c(p) : p ∈ P}  (path cost)
  min {h(p) : p ∈ P}  (number of arcs)

2nd level (P2):
  max {b(p) : p ∈ P}  (bottleneck bandwidth)
  min {d(p) : p ∈ P}  (path delay)
subject to the constraints (1)–(3). The resolution of this problem consists of obtaining "good" compromise solutions defined in the set of the (feasible) non-dominated solutions with respect to the first level objective functions, taking into account the existence of the lower priority objective functions of the second level.

2.2. Resolution Approach

The first step of the resolution approach involves the calculation of non-dominated solutions with respect to the first level objective functions. Given paths p and q, p dominates q if and only if c(p) ≤ c(q), h(p) ≤ h(q) and at least one of the inequalities is strict; this condition is denoted by p D q. Thus a path p from s to t is non-dominated if there is no other path in P which dominates p. The set of non-dominated paths from s to t will be designated by PN. The calculation of non-dominated paths, solving the first level bicriterion problem (P1) with constraints (1)–(3), is performed by an adaptation of the algorithm in [Clímaco and Martins, 1982], based on the loopless path ranking method of [Martins et al., 1999]. The idea is to rank the solutions by non-decreasing order of one objective function and incorporate a dominance test in the ranking procedure, which compares each solution found with those previously obtained and thus keeps only those which are non-dominated. The first and the last solutions to be found are the optimal solutions in terms of each objective function. Let Mc be the greatest value of c among the non-dominated solutions determined at a given point of the algorithm, and let mh be the least number of arcs of those solutions. The main steps of the algorithm are described next.
• Set PN = ∅. Compute p, the feasible path with the fewest arcs, and let ĉ = c(p).
• Compute q, the lowest cost feasible path, and let mh = h(q), Mc = c(q).
• Rank the feasible loopless paths pk by non-decreasing order of c, until c(pk) > ĉ.
• Apply a dominance test to each pk (comparing pk with the previous feasible loopless paths):
  ∗ If c(pk) = Mc, then:
    · If h(pk) < mh, then pk dominates the stored candidates and is a candidate to be non-dominated. Update mh.
    · If h(pk) = mh, then pk is a candidate to be non-dominated.
    · If h(pk) > mh, then pk ∉ PN.
∗ If c(pk ) > Mc , then If h(pk ) < mh , then the stored candidates belong to PN , pk is candidate to be non-dominated. Update Mc , mh and PN . If h(pk ) ≥ mh , then pk ∈ PN . A formal description and details on this algorithmic approach can be seen in [Clímaco et al., 2003]. Having in mind to widen the set of solutions which may be analysed, in some instances of this model, in the following phases of the resolution procedure the relaxation of the non-dominance definition was considered. So we also considered the calculation of -non-dominated solutions where = (c , h ) and c , h are small and positive. In this formulation path p -dominates path q iff c(p) ≤ c(q) + c , h(q) ≤ h(q) + h and at least one of the inequalities is strict. In the second stage of the resolution approach preference thresholds were defined in terms of required (aspiration level) and acceptable (reservation level) values for the first level objective functions. These preference thresholds enable the definition of regions with different priorities in the objective function space which are used for ordering the candidate solutions obtained by the former algorithm. Note that the use of preference thresholds/priority regions has in mind to enable an automatic decision in this multiobjective routing framework. Of course the threshold values can be updated, for instance taking into account the state of the network as required in dynamic routing models. The required and acceptable values for h(p) are defined in terms of the average value of the length of the shortest paths of all node pairs, mp : hreq = int(mp ) + 1, hacc = int(mp ) + arcs − 1, where arcs is an integer (arcs > 2), and int(x) denotes the smallest integer greater than or equal to x. Concerning the required and acceptable values for c(p) they are obtained in terms of the average minimal and maximal path costs for all node pairs, respectively: c¯min + cm , 2 c¯max + cm = , 2
creq = cacc where:
• c̄min and c̄max denote the average minimal and maximal path costs for all node pairs, and
• cm = (cmin + cmax)/2.

Hence a first priority region (region A, as illustrated in Fig. 1) is defined by the points for which the required values of both functions are satisfied, while in the second priority regions (B1 and B2) only one of the required values is satisfied, the acceptable value for the other function being met. A third priority region (C) is considered where only the reservation levels of both functions are satisfied.

J.C.N. Clímaco et al. / Multicriteria Routing Models in Telecommunication Networks

[Figure 1. Priority regions in the space of the first level objective functions: the axes are c (with thresholds creq and cacc) and h (with thresholds hreq and hacc), delimiting regions A, B1, B2 and C.]

The third stage of the resolution procedure consists of selecting the ordered non-dominated solutions obtained in the previous phase according to acceptance bounds defined for the second level objective functions, bounds which work as a "filtering" mechanism on those solutions. These bounds are bm, a lower bound on the bottleneck bandwidth, and dM, an upper bound on the path delay:

• bm = b(p∗), where p∗ = argmin{d(q) : q ∈ P¹N},
• dM = d(p∗∗), where p∗∗ = argmax{b(q) : q ∈ P¹N},

and P¹N denotes the set of non-dominated solutions of level 1. The higher priority solution(s) of the first level which satisfy these bounds will then be selected.

It should be remarked that the exclusive consideration of non-dominated solutions of the first level could be reductionist, having in mind that there is a second level of criteria evaluation. This can be mitigated in our approach by considering ε-non-dominated solutions in the first level, where the value of ε can be calibrated according to the application environment. So, in a second variant of the approach, the described procedures can be applied, in a perfectly similar manner, to the set of ε-non-dominated solutions with respect to the first level objective functions. This has the potential advantage of widening the set of possible compromise solutions to be filtered in the final stage of the resolution approach. Furthermore, the use of ε-non-dominated solutions in the first optimisation level increases the flexibility in the application of the model, since the widening of the set of solutions under analysis obtained by increasing ε can be accompanied by the tightening of the bounds defined for the second level, or vice-versa. The combined variation of ε in the first level and of the bounds in the second level enables the calibration of the relative importance of both levels in the final selection of solutions. This emphasizes the hierarchical and flexible nature of our multicriteria model.
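A compact sketch of stages two and three of this selection scheme, under our reading of the text (all helper names are ours; the assignment of the B1/B2 labels is inferred from the Table 3 example and is otherwise an assumption):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Path:
    c: float   # first level: cost
    h: int     # first level: number of arcs
    b: float   # second level: bottleneck bandwidth
    d: float   # second level: delay

def region(p: Path, c_req: float, c_acc: float, h_req: int, h_acc: int) -> Optional[str]:
    """Classify a first-level solution into the priority regions of Fig. 1."""
    c_ok_req, c_ok_acc = p.c <= c_req, p.c <= c_acc
    h_ok_req, h_ok_acc = p.h <= h_req, p.h <= h_acc
    if c_ok_req and h_ok_req:
        return "A"
    if c_ok_req and h_ok_acc:
        return "B1"   # required cost met, only acceptable hop count (cf. Table 3)
    if h_ok_req and c_ok_acc:
        return "B2"   # required hop count met, only acceptable cost
    if c_ok_acc and h_ok_acc:
        return "C"
    return None       # outside all priority regions

def select(paths: list, c_req, c_acc, h_req, h_acc, b_m, d_M) -> list:
    """Third stage: keep the highest-priority solutions satisfying the
    second-level acceptance bounds b_m (bandwidth) and d_M (delay)."""
    order = {"A": 0, "B1": 1, "B2": 1, "C": 2}
    feasible = [(order[r], p) for p in paths
                if (r := region(p, c_req, c_acc, h_req, h_acc)) is not None
                and p.b >= b_m and p.d <= d_M]
    if not feasible:
        return []
    best = min(rank for rank, _ in feasible)
    return [p for rank, p in feasible if rank == best]
```

With the thresholds of Table 3 (creq = 4.2350, cacc = 5.6100, hreq = 4, hacc = 6, cost values taken as c × 10²), the solution (c, h) = (4.117, 5) falls in region B1, matching the table.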
2.3. Application Model and Computational Tests

The model was applied to a video traffic routing problem in randomly generated undirected communication networks with n nodes, constructed on a grid with 400×240 points (with a mesh space unit of 10 km). The nodes correspond to points randomly chosen in the grid and each node has a degree between 2 and 10. In this specific application model we considered n ∈ {500, 1000, 1500, 2000, 2500, 3000}, n²/25000 origin-destination node pairs, a node degree of 4 and 10 different seeds of the random number generator. The parameters of the model associated with each arc
(i, j), obtained for ATM (Asynchronous Transfer Mode) type networks, were: available bandwidth bij randomly generated in {0.52, . . . , 150.52} (in Mb/s), cij = 1/bij and

dij = Smax/rk + Smax/Rij + Dij/(2c/3),

where Dij is the Euclidean distance between the nodes of coordinates (xi, yi) and (xj, yj), c is the speed of light, Rij = 155.52 × 10⁶ bit/s is the bandwidth capacity of the arc, rk = 1.5 × 10⁶ bit/s is the token generation rate of the leaky bucket, and Smax = 53 × 8 bit is the size of an ATM cell. The bandwidth constraint was set to 1.5 Mb/s and the delay constraint to 60 ms. It is assumed that each node may be modelled as a queueing system with a WFQ (Weighted Fair Queueing) service discipline, enabling the bound on jitter to be represented through a constraint on the number of arcs, ma(s, t) + arcs, where ma(s, t) is the minimal number of arcs of a feasible path from s to t and arcs typically varies from 2 to 4.

The set of values of bij was partitioned into intervals Ii with a predefined percentage of values: Ii = {0.52 + 2k : k = 15i, . . . , 15(i + 1) − 1}, i = 0, 1, 2, 3, and I4 = {0.52 + 2k : k = 60, . . . , 75}. In the computational tests presented to illustrate the results of the application model we considered arcs = 4 and the following percentages of values of bij in the intervals Ii:

I0: 50%, I1: 20%, I2: 15%, I3: 10%, I4: 5%.
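For concreteness, the arc-parameter generation just described could look as follows. This is a sketch under our reconstruction of the delay expression (the pairing of its numerators and denominators was inferred dimensionally), and all function and variable names are ours, not the authors':

```python
import math
import random

# Constants taken from the application model (ATM-type networks).
S_MAX = 53 * 8            # ATM cell size (bit)
R_IJ = 155.52e6           # arc bandwidth capacity (bit/s)
R_K = 1.5e6               # leaky-bucket token generation rate (bit/s)
C_LIGHT = 3.0e8           # speed of light (m/s)

# Intervals I0..I4 of k-values for b_ij = 0.52 + 2k (Mb/s) and their percentages.
INTERVALS = [range(0, 15), range(15, 30), range(30, 45), range(45, 60), range(60, 76)]
WEIGHTS = [0.50, 0.20, 0.15, 0.10, 0.05]

def sample_bandwidth(rng: random.Random) -> float:
    """Draw b_ij = 0.52 + 2k, with k taken from I_i with probability WEIGHTS[i]."""
    interval = rng.choices(INTERVALS, weights=WEIGHTS)[0]
    return 0.52 + 2 * rng.choice(list(interval))

def arc_parameters(xi: float, yi: float, xj: float, yj: float, rng: random.Random):
    """Return (b_ij, c_ij, d_ij) for the arc between nodes (xi, yi) and (xj, yj).
    Coordinates are in metres; d_ij is in seconds."""
    b_ij = sample_bandwidth(rng)
    c_ij = 1.0 / b_ij
    dist = math.hypot(xj - xi, yj - yi)
    # Delay: two queueing/transmission terms plus propagation at speed 2c/3.
    d_ij = S_MAX / R_K + S_MAX / R_IJ + dist / (2 * C_LIGHT / 3)
    return b_ij, c_ij, d_ij
```

For a 100 km arc this yields a delay well below the 60 ms constraint, which is dominated here by the leaky-bucket term Smax/rk and the propagation term.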
Table 1 shows results obtained for one pair of nodes in a network with 1000 nodes, considering only non-dominated solutions for the first optimisation level (Table 1(a)); the preference thresholds are shown on the right-hand side of the table. Table 1(b) shows that in this case the two non-dominated solutions satisfy the acceptance bounds (the values of these bounds are indicated on the right-hand side of the table) obtained from the second level objective functions, envisaged as a "filter" on those solutions. Since these two solutions are in the same priority region C, either of them can be used as a compromise solution to the problem.

Table 2 shows similar results for the same network, now considering ε-non-dominated solutions for the first optimisation level, defined by ε = (εc, εh) with εc = 10% and εh = 1. In this case only two of the five ε-non-dominated solutions in Table 2(a) satisfy the acceptance bounds defined for the second level objective functions, as shown in Table 2(b). As in the previous example, either of these two solutions may be considered as a compromise solution to the addressed problem.

Tables 3 and 4 show the same type of results for a network with 2000 nodes, considering non-dominated solutions for the first optimisation level in Table 3 and ε-non-dominated solutions with εc = 10%, εh = 1 in Table 4.

Table 1. Results for one pair of nodes in a network with n = 1000 and εc = εh = 0

(a) 1st level solutions:

Zone | c × 10² | h | b      | d
C    | 4.235   | 6 | 134.52 | 43.7075
C    | 4.943   | 5 | 64.52  | 31.5921

Thresholds:  h | c × 10²
Req.         4 | 4.0540
Acc.         6 | 5.5880

(b) Solutions filtered by the bounds of the 2nd level (both with the same rank):

c × 10² | h | b      | d
4.235   | 6 | 134.52 | 43.7075
4.943   | 5 | 64.52  | 31.5921

Acceptance bounds: dM = 43.7075, bm = 64.52

Table 2. Results for one pair of nodes in a network with n = 1000 and εc = 10%, εh = 1

(a) 1st level solutions:

Zone | c × 10² | h | b      | d
C    | 4.235   | 6 | 134.52 | 43.7075
C    | 4.527   | 6 | 122.52 | 60.9800
C    | 4.587   | 6 | 118.52 | 54.2588
C    | 4.676   | 6 | 100.52 | 47.7221
C    | 4.943   | 5 | 64.52  | 31.5921

Thresholds:  h | c × 10²
Req.         4 | 4.0540
Acc.         6 | 5.5880

(b) Solutions filtered by the bounds of the 2nd level (both with the same rank):

c × 10² | h | b      | d
4.235   | 6 | 134.52 | 43.7075
4.943   | 5 | 64.52  | 31.5921

Acceptance bounds: dM = 43.7075, bm = 64.52

Table 3. Results for one pair of nodes in a network with n = 2000 and εc = εh = 0

(a) 1st level solutions:

Zone | c × 10² | h | b      | d
B1   | 4.117   | 5 | 110.52 | 35.8444

Thresholds:  h | c × 10²
Req.         4 | 4.2350
Acc.         6 | 5.6100

(b) Solutions filtered by the bounds of the 2nd level:

c × 10² | h | b      | d
4.117   | 5 | 110.52 | 35.8444

Acceptance bounds: dM = 35.8444, bm = 110.52

The three ε-non-dominated solutions of the first level, in Table 4(a), satisfy the acceptance bounds as indicated in Table 4(b). Taking into account that the first of these solutions lies in the priority region B1 while the two others are in region C, the first one would be the compromise solution selected by the proposed resolution procedure.

In order to assess key features of the problem under analysis, the average numbers of non-dominated and ε-non-dominated solutions for the first optimisation level in the tested networks are shown in Fig. 2, for the different values of the number of nodes, considering n²/25000 s-t node pairs in each network. These results show that although the objective functions are not strongly conflicting, there is a quite significant number of problem instances with several non-dominated solutions. Also, as expected, the number
Table 4. Results for one pair of nodes in a network with n = 2000 and εc = 10%, εh = 1

(a) 1st level solutions:

Zone | c × 10² | h | b      | d
B1   | 4.117   | 5 | 110.52 | 35.8444
C    | 4.574   | 5 | 80.52  | 22.7663
C    | 4.578   | 6 | 122.52 | 36.0932

Thresholds:  h | c × 10²
Req.         4 | 4.2350
Acc.         6 | 5.6100

(b) Solutions filtered by the bounds of the 2nd level:

c × 10² | h | b      | d
4.117   | 5 | 110.52 | 35.8444
4.574   | 5 | 80.52  | 22.7663
4.578   | 6 | 122.52 | 36.0932

Acceptance bounds: dM = 36.0932, bm = 80.52

[Figure 2. Average, maximum and minimum numbers of obtained non-dominated solutions versus n ∈ {500, 1000, 1500, 2000, 2500, 3000}: (a) εc = εh = 0; (b) εc = 10%, εh = 1.]
of solutions of the first optimisation level to be analysed by the resolution procedure significantly increases when ε-non-dominated solutions are allowed, as shown in Fig. 2(b). All the solutions of the considered types can be calculated exactly and ranked according to the proposed resolution approach, in short processing times and with modest memory requirements. This is illustrated in Fig. 3, where average CPU times are shown for the proposed algorithm and the tested networks, obtained on an AMD Athlon computer at 1.3 GHz with 512 Mbytes of RAM, running Linux.

The discussed model provides a possible platform, of a new type, to explore extensively the multifaceted nature of routing problems in multiservice networks by using hierarchical multicriteria optimisation. The resolution approach developed for this model enables the calculation of exact solutions and has good computational performance for networks of significant size, up to certain limits with respect to the average node degree and the delay and jitter constraints, compatible with many practical applications.
[Figure 3. CPU times (average, maximum and minimum, in seconds) versus n ∈ {500, 1000, 1500, 2000, 2500, 3000}: (a) εc = εh = 0; (b) εc = 10%, εh = 1.]
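The first-level filtering underlying these counts can be reproduced in outline on the published data. A minimal sketch, assuming that εc acts relative to the cost of the path under test and that a path is discarded only when some other path improves on it by more than the tolerances in both objectives (our reading of the relaxed definition; the chapter's formal definition takes precedence):

```python
def nondominated(sols):
    """Keep (c, h) pairs not dominated by any other pair in the list."""
    return [q for q in sols
            if not any(p != q and p[0] <= q[0] and p[1] <= q[1] for p in sols)]

def eps_nondominated(sols, eps_c_rel, eps_h):
    """Keep pairs that no other pair beats by more than the (relative) epsilons."""
    return [q for q in sols
            if not any(p[0] <= q[0] * (1 - eps_c_rel) and p[1] <= q[1] - eps_h
                       for p in sols)]

# First-level solutions of Table 2(a) as (c x 10^2, h), network with n = 1000.
table2a = [(4.235, 6), (4.527, 6), (4.587, 6), (4.676, 6), (4.943, 5)]
```

Applied to the Table 2(a) data, `nondominated` recovers exactly the two solutions of Table 1(a), while `eps_nondominated` with εc = 10% and εh = 1 keeps all five rows, illustrating how the ε-relaxation widens the first-level solution set.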
3. Conclusions and Future Trends

The presented work shows that telecommunication network routing is clearly an area where the use of multicriteria modelling and analysis is justified and potentially advantageous, as noted in [Granat and Wierzbicki, 2004] and [Clímaco and Craveirinha, 2005]. Nevertheless, there is a quite significant number of challenges and issues that can possibly be tackled through multicriteria analysis or which require the development and improvement of existing multicriteria approaches. Another obvious conclusion refers to the extreme diversity of network routing models, the very great variety of formulations of the associated optimisation problems and, in many cases, the significant number of proposed resolution approaches.

Concerning QoS routing, the number of models and proposed algorithms has increased steadily since the mid-1990s, a seminal paper being [Lee et al., 1995]. The major reason for this is the increasing importance of the multidimensional QoS features that need to be incorporated in the new multiservice network functionalities, especially in relation to the IntServ, DiffServ and MPLS protocols for the Internet. The multitude of variants of constrained QoS routing problems, involving a diversity of metrics, the various resolution approaches and the study of their practical implications in terms of routing and signalling protocols have given rise to an increasing number of publications. A recent survey of the most important open issues and challenges, together with a presentation of ongoing and future research topics in this particular area, can be seen in [Masip-Bruin et al., 2006]. Another problematic area which has recently attracted increasing interest has to do with multipath models, where more than one path has to be calculated for each originating node.
In particular, the sub-area of multicast routing (associated with point-to-multipoint problems) has attracted increasing attention, both in terms of QoS routing and, more explicitly, of multicriteria models, as a result of the fast emergence of multimedia applications such as audio and video services and video-conferencing, especially in the Internet. A typical specification of multicast routing models in a QoS routing context involves the calculation of constrained minimal cost trees (Steiner trees). This is an NP-complete problem and, as noted in [Masip-Bruin et al., 2006], there is, in these models, a trade-off
between the efficient use of network resources and QoS features. In this type of problem, and in multiobjective multicast routing in general, there are great challenges and issues to be tackled. Another type of multipath routing model that deserves attention, concerning the use of explicit multicriteria approaches, is alternative routing; a seminal work in the context of a multiobjective network-wide optimisation model is [Craveirinha et al., 2003]. The possibility of using alternative routing (a common technique in classical ISDN) has regained importance having in mind the possibilities opened by new connection-oriented technologies such as MPLS. Another type of multipath routing problem where we think there are great challenges concerning multicriteria modelling and analysis is 'robust routing', a type of problem which typically involves the simultaneous calculation of a pair of paths, the active path and the back-up path. This is particularly important in MPLS networks with resilience requirements and in networks whose routes carry an extremely large amount of traffic, as in WDM optical networks. The route and wavelength assignment (RWA) models in optical networks (briefly characterised in Section 2.2) also deserve an effort concerning the use of multicriteria approaches, namely by explicitly considering different metrics expressed in terms of the number of optical fibres used in the network links and of the wavelengths used along the topological path. The consideration of resilience objective(s) or constraints in association with the standard RWA problem (namely involving the simultaneous consideration of an active and a back-up optical path) adds another dimension to the RWA problem formulations, which leads to a new and challenging type of robust routing design problem.
Another type of multipath routing where the introduction of multicriteria routing approaches is a challenging issue is routing with traffic splitting (a routing principle where the node-to-node bandwidth demand may be split among a number of paths), a situation that has already been considered in the literature in terms of multicriteria modelling, as in the evolutionary multiobjective approach in [Erbas and Erbas, 2003]. Another open sub-area of multipath routing where one can envisage the future use of multicriteria modelling is probabilistic load sharing, a routing principle where the node-to-node offered traffic attempts to use each of the paths in a path set according to certain probabilities. Also, the development of hierarchical multicriteria routing models, enabling the consideration of different objectives (for example network performance, service performance and/or fairness objectives) at different optimisation levels, deserves more attention in the future. Significant examples in this context are the lexicographic optimisation models in [Pioro et al., 2005] (focusing on fairness issues) and the hierarchical multiobjective models mentioned in Section 2.2. In fact, this particular area raises important challenges and issues in terms of modelling (in particular the specification of the objective functions in a given application context, the hierarchisation of these functions and the adequacy of the traffic modelling approach, as analysed in [Craveirinha et al., 2005] in the context of MPLS), in terms of the resolution approach, and concerning the representation of the system of preferences. At a methodological level, one can say that most routing problems are NP-complete, or NP-complete in the strong sense. This high degree of complexity has made heuristics of various kinds the most common resolution approach, often using exact algorithms as auxiliary procedures.
This means, in our opinion, that the development of exact approaches has to rely on devising adequate multicriteria shortest path models, capable of calculating the non-dominated solutions, as in [Clímaco et al., 2003], thereby opening a significant field for further research.
It should be stressed that all these challenges and issues are further complicated by the necessity of developing multicriteria models and resolution approaches capable of supporting automatic decisions within certain time limits, a critical practical requirement in on-line and dynamic routing models. Furthermore, one should note that the seminal ideas of Milan Zeleny [Zeleny, 1982], considering the creation of new alternatives as a means of eliminating the conflicts among the different criteria, must also be taken into account in future research. Finally, it is important to emphasize that for static and dynamic routing models of many kinds already tackled in the literature, much research effort should still be pursued, having in mind that many results are not yet satisfactory. From this perspective, and also having in mind the open issues and challenges outlined above, one can say that this research area is still in its infancy.
References

[Aboelela and Douligeris, 1999] Aboelela, E. and Douligeris, C. (1999). Fuzzy generalized network approach for solving an optimization model for routing in B-ISDN. Telecommunication Systems, 12:237–263.
[Agrawal et al., 2005] Agrawal, G., Huang, D., and Medhi, D. (2005). Network protection design for MPLS networks. In Proceedings of the 5th International Workshop on Design of Reliable Communication Networks, Naples, Italy.
[Akkanen and Nurminen, 2001] Akkanen, J. and Nurminen, J. K. (2001). Case study of the evolution of routing algorithms in a network planning tool. The Journal of Systems and Software, 58(3):181–198.
[Al-Sharhan, 2005] Al-Sharhan, S. (2005). A fast evolutionary algorithm for multicast routing in wireless networks. In Proceedings of the Internet and Multimedia Systems and Applications (IMSA 2005), pages 77–106, Hawaii, USA.
[Anandalingam and Nam, 1997] Anandalingam, G. and Nam, K. (1997). Conflict and cooperation in designing international telecommunication networks. Journal of the Operational Research Society, 48:600–611.
[Aneja and Nair, 1978] Aneja, Y. and Nair, K. (1978). The constrained shortest path problem. Naval Research Logistics Quarterly, 25:549–555.
[Antunes et al., 1999] Antunes, C. H., Clímaco, J., Craveirinha, J., and Barrico, C. (1999). Multiple objective routing in integrated communication networks. In Smith, D. and Key, P., editors, Teletraffic Engineering in a Competitive World. Proceedings of the 16th International Teletraffic Congress – ITC 16, pages 1291–1300. Elsevier.
[Ash, 1998] Ash, G. R. (1998). Dynamic Routing in Telecommunication Networks. McGraw-Hill.
[Assi et al., 2001] Assi, C., Shami, A., Ali, M. A., Kurtz, R., and Guo, D. (2001). Optical networking and real-time provisioning: an integrated vision for the next-generation internet. IEEE Network, 15(4):36–45.
[Avallone et al., 2005] Avallone, S., Kuipers, F., Ventre, G., and Mieghem, P. V. (2005). Dynamic routing in QoS-aware traffic engineered networks.
In Proceedings of EUNICE 2005: Networks and Applications Towards a Ubiquitously Connected World, IFIP WG 6.6, WG 6.4 and WG 6.9 Workshop, pages 222–228, Colmenarejo, Spain. Universidad Carlos III de Madrid. The paper has also been published by Springer (ISBN-10: 0-387-30815-6), edited by C. Delgado Kloos, A. Marin, and D. Larrabeiti, pp. 45–58, 2006.
[Banerjee et al., 2001] Banerjee, A., Drake, J., Lang, J., Turner, B., Kompella, K., and Rekhter, Y. (2001). Generalized multiprotocol label switching: an overview of routing and management enhancements. IEEE Communications Magazine, 39(1):144–149.
[Beugnies and Gandibleux, 2003] Beugnies, F. and Gandibleux, X. (2003). Multiobjective routing in IP networks. In 7th PM2O Workshop (Programmation Mathématique Multi-Objectifs), Valenciennes, France.
[Beugnies and Gandibleux, 2006] Beugnies, F. and Gandibleux, X. (2006). A multi-objective routing procedure for IP networks. In The 18th International Conference on Multiple Criteria Decision Making, Chania, Greece.
[Blokh and Gutin, 1996] Blokh, D. and Gutin, G. (1996). An approximate algorithm for combinatorial optimization problems with two parameters. Australasian Journal of Combinatorics, 14:157–164.
[Chen and Nahrstedt, 1998a] Chen, S. and Nahrstedt, K. (1998a). On finding multi-constrained paths. In Proceedings of International Communications Conference '98 IEEE, pages 874–879, New York.
[Chen and Nahrstedt, 1998b] Chen, S. and Nahrstedt, K. (1998b). An overview of quality of service routing for next-generation high-speed networks: problems and solutions. IEEE Network, 12(6).
[Clímaco and Craveirinha, 2005] Clímaco, J. and Craveirinha, J. (2005). Multicriteria analysis in telecommunication network planning and design – problems and issues. In Figueira, J., Greco, S., and Ehrgott, M., editors, Multiple Criteria Decision Analysis – State of the Art Surveys, Int. Series in Operations Research and Management Science, volume 78, pages 899–951. Springer.
[Clímaco et al., 2003] Clímaco, J., Craveirinha, J., and Pascoal, M. (2003). A bicriterion approach for routing problems in multimedia networks. Networks, 41(4):206–220.
[Clímaco et al., 2006] Clímaco, J., Craveirinha, J., and Pascoal, M. (2006). An automated reference point-like approach for multicriteria shortest path problems. Journal of Systems Science and Systems Engineering, 15(3):314–329.
[Clímaco and Martins, 1982] Clímaco, J. and Martins, E. (1982). A bicriterion shortest path algorithm. European Journal of Operational Research, 11:399–404.
[Clímaco et al., 2004] Clímaco, J., Craveirinha, J., and Pascoal, M. (2004). Routing calculation in multimedia: A procedure based on a bicriteria model. In Neittaanmäki, P., Rossi, T., Majava, K., and Pironneau, O., editors, Proceedings of the European Congress on Computational Methods in Applied Sciences and Engineering ECCOMAS'04, Jyväskylä, Finland.
[Conte, 2003] Conte, M. (2003). Dynamic Routing in Broadband Networks, volume 3. Springer.
[Craveirinha et al., 2005] Craveirinha, J., Girão-Silva, R., and Clímaco, J. (2005). A meta-model for multiobjective routing in MPLS. In Proceedings of the 5th International Conference on Decision Support for Telecommunications and Information Society, pages 11–34, Warsaw, Poland.
[Craveirinha et al., 2003] Craveirinha, J., Martins, L., Gomes, T., Antunes, C. H., and Clímaco, J. (2003). A new multiple objective dynamic routing method using implied cost. Journal of Telecommunications and Information Technologies, 3:51–59. Special issue, edited by the Institute of Telecommunications, Warsaw, 2003.
[Crichigno and Barán, 2004] Crichigno, J. and Barán, B. (2004). Multiobjective multicast routing algorithm for traffic engineering. In IEEE International Conference on Computers and Communication Networks (ICCCN'2004), Chicago, United States.
[Cui et al., 2003] Cui, X., Lin, C., and Wei, Y. (2003). A multiobjective model for QoS multicast routing based on genetic algorithm. In Proceedings of the International Conference on Computer Networks and Mobile Computing (ICCNM'03), page 49.
[Donoso et al., 2004] Donoso, Y., Fabregat, R., and Marzo, J.-L. (2004). A multi-objective optimization scheme for multicast routing: A multitree approach. Telecommunication Systems, 27(2–4):229–251.
[Douligeris, 1991] Douligeris, C. (1991). Multiobjective telecommunication networks flow control. In Proceedings of IEEE Southeastcon '91, volume 2, pages 647–651, Williamsburg, VA, USA.
[El-Sayed and Jaffe, 2002] El-Sayed, M. and Jaffe, J. (2002). A view of telecommunications network evolution. IEEE Communications Magazine, 40(12):74–81.
[Erbas and Erbas, 2003] Erbas, S. C. and Erbas, C. (2003). A multiobjective off-line routing model for MPLS networks. In Charzinski, J., Lehnert, R., and Tran-Gia, P., editors, Proceedings of the 18th International Teletraffic Congress. Elsevier.
[Erbas and Mathar, 2002] Erbas, S. C. and Mathar, R. (2002). An off-line traffic engineering model for MPLS networks. In Proceedings of the IEEE 27th Annual Conference on Local Computer Networks, pages 166–174, Tampa, Florida.
[Ergun et al., 2000] Ergun, F., Sinha, R., and Zhang, L. (2000). QoS routing with performance-dependent cost. In Proceedings of INFOCOM 2000, volume 1, pages 137–146.
[Fabregat et al., 2005] Fabregat, R., Donoso, Y., Baran, B., Solano, F., and Marzo, J. (2005). Multi-objective optimization scheme for multicast flows: a survey, a model and a MOEA solution. In LANC '05: Proceedings of the 3rd International IFIP/ACM Latin American Conference on Networking, pages 73–86, New York, NY, USA. ACM Press.
[Fortz and Thorup, 2000] Fortz, B. and Thorup, M. (2000). Internet traffic engineering by optimizing OSPF weights. In Proceedings of INFOCOM 2000, volume 2, pages 519–528.
[Fournie et al., 2006] Fournie, L., Hong, D., and Randriamasy, S. (2006). Distributed multi-path and multi-objective routing for network operation and dimensioning. In Proceedings of the 2nd Conference on Next Generation Internet Design and Engineering (NGI'06).
[Goel et al., 2001] Goel, A., Ramakrishnan, K. G., Kataria, D., and Logothetis, D. (2001). Efficient computation of delay-sensitive routes from one source to all destinations. In Proceedings of the INFOCOM 2001 Conference, volume 2, pages 854–858. IEEE.
[Granat and Guerriero, 2003] Granat, J. and Guerriero, F. (2003). The interactive analysis of the multicriteria shortest path problem by the reference point method. European Journal of Operational Research, 151(1):103–118.
[Granat and Wierzbicki, 2004] Granat, J. and Wierzbicki, A. P. (2004). Multicriteria analysis in telecommunications. In Proceedings of the 37th Annual Hawaii International Conference on System Sciences, Hawaii, USA.
[Guerin and Orda, 2000] Guerin, R. and Orda, A. (2000). Networks with advanced reservations: The routing perspective. In Proceedings of INFOCOM 2000, pages 26–30.
[Guo and Matta, 1999] Guo, L. and Matta, I. (1999). Search space reduction in QoS routing. In Proceedings of the 19th IEEE International Conference on Distributed Computing Systems, volume 3, pages 142–149.
[Handler and Zang, 1980] Handler, G. Y. and Zang, I. (1980). A dual algorithm for the constrained shortest path problem. Networks, 10:293–310.
[Hardy et al., 2003] Hardy, D., Malleus, G., and Mereur, J. (2003). Networks: Internet, Telephony, Multimedia. Springer.
[Hassin, 1992] Hassin, R. (1992). Approximation schemes for the restricted shortest path problem. Mathematics of Operations Research, 17(1):36–42.
[Haßlinger and Schnitter, 2003] Haßlinger, G. and Schnitter, S. (2003). Optimized Traffic Load Distribution in MPLS Networks, chapter 7, pages 125–141. Kluwer Academic Publishers.
[Jaffe, 2004] Jaffe, J. M. (2004). Algorithms for finding paths with multiple constraints. Networks, 14:95–116.
[Kennington et al., 2003] Kennington, J., Lewis, K., Olinick, E., Ortynski, A., and Spiride, G. (2003). Robust solutions for the DWDM routing and provisioning problem: Models and algorithms. Optical Networks Magazine, 4(2):74–84. Technical Report 01-EMIS-03.
[Kerbache and Smith, 2000] Kerbache, L. and Smith, J. (2000). Multi-objective routing within large scale facilities using open finite queueing networks. European Journal of Operational Research, 121:105–123.
[Kim et al., 1998] Kim, S., Lim, K., and Kim, C. (1998). A scalable QoS-based inter-domain routing scheme in a high speed wide area network. Computer Communications, 21:390–399.
[Knowles et al., 2000] Knowles, J., Oates, M., and Corne, D. (2000). Advanced multi-objective evolutionary algorithms applied to two problems in telecommunications. British Telecom Technology Journal, 18(4):51–65.
[Korkmaz and Krunz, 2001] Korkmaz, T. and Krunz, M. (2001). A randomized algorithm for finding a path subject to multiple QoS requirements. Computer Networks, 36:251–268.
[Kuipers et al., 2002a] Kuipers, F., Korkmaz, T., Krunz, M., and Mieghem, P. V. (2002a). A review of constraint-based routing algorithms. http://www.tvs.et.tveldft.nl/people/fernado/papers/TRreviewqosalg.pdf.
[Kuipers et al., 2004] Kuipers, F., Korkmaz, T., Krunz, M., and Mieghem, P. V. (2004). Performance evaluation of constraint-based path selection algorithms. IEEE Network, 18(5):16–23.
[Kuipers and Mieghem, 2002] Kuipers, F. and Mieghem, P. V. (2002). MAMCRA: A constraint-based multicast routing algorithm. Computer Communications, 25(8):801–810.
[Kuipers and Mieghem, 2005a] Kuipers, F. and Mieghem, P. V. (2005a). Conditions that impact the complexity of QoS routing. IEEE/ACM Transactions on Networking, 13(4):717–730.
[Kuipers and Mieghem, 2005b] Kuipers, F. and Mieghem, P. V. (2005b). Non-dominance in QoS routing: an implementational perspective. IEEE Communications Letters, 9(3):267–269.
[Kuipers et al., 2002b] Kuipers, F., Mieghem, P. V., Korkmaz, T., and Krunz, M. (2002b). An overview of constraint-based path selection algorithms for routing. IEEE Communications Magazine, 40(12):50–55.
[Kuipers et al., 2006] Kuipers, F., Orda, A., Raz, D., and Mieghem, P. V. (2006). A comparison of exact and ε-approximation algorithms for constrained routing. In Proceedings of Networking 2006 – Fifth IFIP Networking Conference, Coimbra, Portugal.
[Kyeongja et al., 2005] Kyeongja, L., Armand, T., Aurelien, N., Ahmed, R., and Cheeha, K. (2005). Comparison of multipath algorithms for load balancing in a MPLS network. In Proceedings of the International Conference on Information Networking, ICOIN 2005, Jeju Island, South Korea.
[Lee et al., 1995] Lee, W. C., Hluchyj, M. G., and Humblet, P. A. (1995). Routing subject to quality of service constraints in integrated communication networks. IEEE Network, 9(4):46–55.
[Li et al., 2005] Li, Q., Beaver, J., Amer, A., Chrysanthis, P., Labrinidis, A., and Santhankrishnan, G. (2005).
44
J.C.N. Clímaco et al. / Multicriteria Routing Models in Telecommunication Networks
Multi-criteria routing in wireless sensor-based pervasive environments. Jounal of Pervasive Computing and Communications, 1(4). [Liu and Ramakrishnan, 2001] Liu, G. and Ramakrishnan, K. (2001). A∗ prune: An algorithm for finding K shortest paths subject to multiple constraints. In Proceedings of INFOCOM 2001 IEEE Conference, pages 743–749, Anchorage. [Liu et al., 2005] Liu, H., Li, J., Zhang, Y.-Q., and Pan, Y. (2005). An adaptive genetic fuzzy multi-path routing protocol for wireless ad hoc networks. In Proceedings of the Sixth International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing and First ACIS International Workshop on Self-Assembling Wireless Networks (SNPD/SAWN’05), pages 468–475, Washington, DC, USA. IEEE Computer Society. [Lukac, 2002] Lukac, K. (2002). Multicriteria dynamic routing in communication networks based on F learning automata. In Proceedings of Communications and Computer Networks. [Lukac, 2003] Lukac, K. (2003). Multicriteria telecommunication traffic routing based on stochastic automata in fuzzy environments. PhD thesis, Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia. [Lukac et al., 2003] Lukac, K., Lukac, Z., and Tkalic, M. (2003). Behaviour of F learning automata as multicriteria routing agents in connection oriented networks. In Proceedings of FUZZ-IEEE2003, The IEEE International Conference on Fuzzy Systems, pages 296–301, St. Louis, MO. [Ma and Steenkiste, 1997] Ma, Q. and Steenkiste, P. (1997). On path selection for traffic with bandwidth guarantees. In Proceedings of IEEE International Conference on Network Protocols (ICNP ’97), pages 191–202, Atlanta, Georgia. [Ma and Steenkiste, 1998] Ma, Q. and Steenkiste, P. (1998). Routing traffic with quality-of-service guarantees in integrated services networks. In Proceedings of NOSSDAV’98. [Malakooti and Thomas, 2006] Malakooti, B. and Thomas, I. (2006). 
A distributed composite multiple criteria routing using distance vector. In Proceedings of IEEE International Conference on Networking, Sensing and Control, ICNSC’06, pages 42–47, Ft. Lauderdale, FL, USA. [Martins et al., 1999] Martins, E., Pascoal, M., and Santos, J. (1999). Deviation algorithms for ranking shortest paths. International Journal of Foundations of Computer Science, 10(3):247–263. [Martins et al., 2003] Martins, L., Craveirinha, J., and Clímaco, J. (2003). A new multiobjective dynamic routing method for multiservice networks – modelling and performance. In Proceedings of the International Networks Optimization Conference (INOC 2003), pages 404–409, Evry/Paris, France. Institut National des Télécommunications. [Martins et al., 2006] Martins, L., Craveirinha, J., and Clímaco, J. (2006). A new multiobjective dynamic routing method for multiservice networks: modelling and performance. Computational Management Science, 3:225–244. [Martins et al., 2005] Martins, L., Craveirinha, J., Clímaco, J., and Gomes, T. (2005). On a bi-dimensional dynamic alternative routing method. European Journal of Operational Research, 166:828–842. Special issue on Advances in Complex Systems Modeling, Ed. by M. Makowski, Y. Nakamori and H.-J. Sebastian. [Marwaha et al., 2004] Marwaha, S., Srinivasan, D., Tham, C. K., and Vasilakos, A. (2004). Evolutionary fuzzy multi-objective routing for wireless mobile ad hoc networks. In Proceedings of Congress on Evolutionary Computation (CEC’2004), volume 2, pages 1964–1971, Portland, Oregon, USA. IEEE Service Center. [Masip-Bruin et al., 2006] Masip-Bruin, X., Yannuzzi, M., Domingo-Pascual, J., Fonte, A., Curado, M., Monteiro, E., Kuipers, F., Mieghem, P. V., Avallone, S., Ventre, G., Aranda-Gutiérrez, P., Hollick, M., Steinmetz, R., and L. Iannone, K. S. (2006). Research challenges in qos routing. Computer Communications, NoE E-Next special issue, 29(5). http://www.nas.its.tudelft.nl/ people/Fernando/papers/CompCom_QoSRouting.pdf. 
[Meisel, 2005] Meisel, Y. (2005). Multi-Objective Optimization Scheme for Static and Dynamic Multicast Flows. PhD thesis, Girona, Spain. [Meisel et al., 2003] Meisel, Y., Fabregat, R., and Fàbrega, L. (2003). Multi-objective scheme over multitree routing in multicast MPLS networks. In Proceedings of the IFIP/ACM Latin America Networking Conference 2003 (LANC03), volume 6, La Paz, Bolivia. ACM Press. [Mieghem et al., 2001] Mieghem, P. V., Neve, H. D., and Kuipers, F. (2001). Hop-by-hop quality of service routing. Computer Networks, 37(3-4):407–423. [Mitra and Ramakrishnan, 2001] Mitra, D. and Ramakrishnan, K. (2001). Techniques for traffic engineering of multiservice, multipriority networks. Bell Labs Technical Journal, 6(1):139–151.
J.C.N. Clímaco et al. / Multicriteria Routing Models in Telecommunication Networks
45
[Neve and Mieghem, 2000] Neve, H. D. and Mieghem, P. V. (2000). Tamcra, a tunable accuracy multiple constraints routing algorithm. Computer Communications, 33:667–679. [Orda, 1999] Orda, A. (1999). Routing with end-to-end QoS guarantees in broadband networks. IEEE/ACM Transactions of Networking, 7(3):365–374. [Osman and M. Abo-Sinna, 2005] Osman, M. and M. Abo-Sinna, A. M. (2005). An effective genetic algorithm approach to multiobjective routing problems (morps). Applied mathematics and computation, 163(2):769–781. [Oueslti-Boulahia and Oubagha, 1999] Oueslti-Boulahia, S. and Oubagha, E. (1999). An approach to routing elastic flows. In Proceedings of the 16th International Telegraffc Congress, pages 1311–1320, Washington, USA. Elsevier Science B. V. [Pinto and Barán, 2005] Pinto, D. and Barán, B. (2005). Solving multiobjective multicast routing problem with a new ant colony optimization approach. In LANC ’05: Proceedings of the 3rd international IFIP/ACM Latin American conference on Networking, pages 11–19, New York, NY, USA. ACM Press. [Pioro et al., 2005] Pioro, M., Dzida, M., Kubilinskas, E., Nilsson, P., Ogryczak, W., Tomaszewski, A., and Zagozdzon, M. (2005). Applications of the max-min fairness principle in telecommunication network design. In Proceedings of the 1st Conference on Next Generation Internet Networks Engineering (NGI 2005), pages 219–225, Rome, Italy. [Pornavalai et al., 1997] Pornavalai, C., Chakraborty, G., and Shiratori, N. (1997). QoS based routing algorithm in integrated services packet networks. In Proceedings of IEEE International Conference on Network Protocols (ICNP’97), pages 167–174, Atlanta, Georgia. [Pornavalai et al., 1998] Pornavalai, C., Chakraborty, G., and Shiratori, N. (1998). Routing with multiple QoS requirements for supporting multimedia applications. Telecommunication Systems, 9:357–373. [Prasithsangaree and Niehaus, 2002] Prasithsangaree, P. and Niehaus, D. (2002). 
Multiple QoS routing in large pnni atm networks with heavy traffic. In Proceedings of the 2002 International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS’02), pages 438–442, San Diego, California. [Prieto et al., 2006] Prieto, J., Barán, B., and Crichigno, J. (2006). Multitree-multiobjective multicast routing for traffic engineering. In Proceedings of the 1st. International Conference on Artificial Intelligence in Theory and Practice, IFIP World Computer Congress 2006, Santiago, Chile. [Puri and Tripakis, 2002] Puri, A. and Tripakis, S. (2002). Algorithms for routing with multiple constraints. In Proceedings of SWAT 2002, 8th Scandinavian Workshop on Algorithm Theory, volume 2, pages 338– 347, Turku, Finland. [Rajagopalen et al., 2004] Rajagopalen, B., Pendarakis, D., Saha, D., Ramamoorthy, R., and Bala, K. (2004). IP over optical netwoks: architectural aspects. IEEE Communications Magazine, 38(9):94–102. [Resende and Ribeiro, 2003] Resende, M. and Ribeiro, C. (2003). A grasp with path-relinking for private virtual circuit routing. Networks, 41(3):104–114. [Rocha et al., 2006] Rocha, M., Sousa, P., Rio, M., and Cortez, P. (2006). QoS constrained internet routing with evolutionary algorithms. In Proceedings of 2006 IEEE Congress on Evolutionary Computation, pages 9270–9277, Vancouver, Canada. [Roy et al., 2002] Roy, A., Banerjee, N., and Das, S. K. (2002). An efficient multi-objective QoS routing algorithm for real-time wireless multicasting. In Jackson, P., editor, Proceedings of IEEE Semiannual Vehicular Technology Conference, volume 3, pages 1160–1164, Birmingham, Alabama. [Roy and Das, 2002] Roy, A. and Das, S. (2002). Optimizing QoS-based multicast routing in wireless networks: A multi-objective genetic algorithmic approach. In Proceedings of the Second IFIP-TC6 Networking Conference (Networking 2002), volume 2345 of Lecture Notes in Computer Science, pages 28–48, Pisa, Italy. [Roy and Das, 2004] Roy, A. and Das, S. 
(2004). Qbmmrp: A QoS-based mobile multicast routing protocol using multiobjective genetic algorithms. Wireless Networks, 10(3):271–286. [Schnitter and Haßlinger, 2002] Schnitter, S. and Haßlinger, G. (2002). Heuristic solutions to the lsp-design for MPLS traffic engineering. In Proceedings of Telecommunication Network Strategy and Olanning Symposium, Munich, Germany. [Shin, 2003] Shin, D. (2003). Multicriteria Routing for Guaranteed Performance Communications. PhD thesis, Purdue University. CERIAS Tech Report 2003-21. [Sobrinho, 2001] Sobrinho, J. (2001). Algebra and algorithms for QoS path computation and hop-by-hop routing in the internet. In Proceedings of IEEE INPCOM2001, Anchorage. [Song et al., 2000] Song, J., Pung, H., and Jacob, L. (2000). A multi-constrained distributed QoS routing
46
J.C.N. Clímaco et al. / Multicriteria Routing Models in Telecommunication Networks
algorithm. In Proceedings of the Eighth IEEE International Conference on Networks (ICON’00), pages 165–171. [Steuer, 1986] Steuer, R. (1986). Multiple Criteria Optimization: Theory Computation and Application. John Wiley & Sons. [Thirumalasetty and Medhi, 2001] Thirumalasetty, S. and Medhi, D. (2001). MPLS traffic engineering for survivable book-ahead guaranteed services. http://www.estp.umkc.edu/public/papers/ dmedhi/tm-bag-te.pdf. [Tsai and Dai, 2001] Tsai, W. and Dai, W. (2001). Joint routing and rate assignment in MPLS based networks. In Proceedings of the Ninth IEEE International Conference on Networks (ICON’01), page 196. [Wang and Crowcroft, 1996] Wang, Z. and Crowcroft, J. (1996). Quality-of-service routing for supporting multimedia applications. IEEE Journal on Select Areas in Communications, 14(7):1228–1234. [Widyono, 1994] Widyono, R. (1994). The design and evaluation of routing algorithms for real-time channels. Technical Report 94-024, University of California at Berkeley & International Computer Science Institute. [Wierzbicki, 2005] Wierzbicki, A. (2005). Telecommunications, multiple criteria analysis and knowledge theory. Journal of Telecommunications and Information Technology, 3. http://www.itl.waw.pl/ czasopisma/JTIT/2005/3/3.pdf. [Yuan, 2003] Yuan, D. (2003). A bicriterion optimization approach for robust OSPF routing. In Proceedings of the 3rd IEEE workshop on IP Operations and Management (IPOM 2003), pages 91–98. [Yuan, 2002] Yuan, X. (2002). Heuristic algorithms for multiconstrained qualiy-of-service routing. IEEE/ACM Transactions on Networking, 10(2). [Yuan and Liu, 2001] Yuan, X. and Liu, X. (2001). Heuristic algorithms for multi-constrained quality of service routing. IEEE INFOCOM. [Zang et al., 2000] Zang, H., Jue, J., and Mukherjee, B. (2000). A review of routing and wavelength assignment approaches for wavelength-routed optical WDM networks. Optical Networks Magazine, 14(1): 47–60. [Zee, 1999] Zee, M. (1999). 
Quality of service routing - state of the art report. http://searchpdf. adobe.com/proxies/0/9/25/62.html. Ericsson 1/0362-FCP NB 102 88, 1999. [Zeleny, 1982] Zeleny, M. (1982). Multiple Criteria Decision Making. McGraw-Hill, New York. [Zhang et al., 2005] Zhang, C., Ge, Z., Kurose, J., Liu, Y., and Towsley, D. (2005). Optimal routing with multiple traffic matrices – tradeoff between average and worst case performance. In Proceedings of the 13th IEEE International Conference on Network Protocols, Boston, Massachusetts, USA. [Zhang and Zhu, 2005] Zhang, R. and Zhu, X. (2005). Fuzzy routing in QoS networks. Fuzzy Systems and Knowledge Discovery, 3614:880–890.
Advances in Multiple Criteria Decision Making and Human Systems Management Y. Shi et al. (Eds.) IOS Press, 2007 © 2007 The authors. All rights reserved.
Post-Merger High Technology R&D Human Resources Optimization Through the De Novo Perspective
Chi-Yo HUANG a and Gwo-Hshiung TZENG b,c,∗
a Department of Industrial Education, National Taiwan Normal University, 162, He-ping East Road, Section 1, Taipei 106, Taiwan
b Institute of Management of Technology, National Chiao Tung University, 1001, Ta-Hsueh Road, Hsinchu 300, Taiwan
c Department of Business Administration, Kainan University, No. 1 Kainan Road, Luchu, Taoyuan 338, Taiwan
Abstract. In high technology firms, merger and acquisition (M&A) has already become a major strategy for enriching product portfolios, entering new markets, and thus enhancing core competences in research and development (R&D). R&D human resources (HRs) are the most critical factor in developing the competitive advantage and post-merger R&D performance of knowledge-based, labor-intensive high technology firms in general, and of IC design houses and IC design service companies in particular. However, little literature has discussed how post-merger R&D human resources should be optimized so as to achieve the best R&D performance and, thus, the final success of the merger. Meanwhile, the traditional literature on R&D resource optimization has focused mainly on optimizing existing resources, which is unrealistic in a world where external R&D human resources can be leveraged. Thus, this research develops an analytic framework for best utilizing post-merger R&D human resources, optimizing internal resources and leveraging external ones by means of the De Novo programming proposed by Professor Milan Zeleny. An empirical study of optimizing the post-combination R&D human resources in the merger of an integrated circuit (IC) design service company by a professional semiconductor foundry is given as an illustration of the analytic procedures. The results demonstrate that De Novo programming can best optimize post-merger R&D human resources and can be applied to other M&A cases. Keywords. Merger and Acquisition (M&A), human resource optimization, R&D resource optimization, high technology, R&D management, De Novo programming
∗ Corresponding author: G.H. Tzeng (Distinguished Chair Professor); E-mail: [email protected]; Tel: +886-936-516698; Fax: +886-3-5165185.

Introduction

Contemporary capitalism has witnessed strong merger and acquisition (M&A) activities that are being used increasingly by firms to strengthen and maintain their position in the marketplace (Scherer and Ross, 1990), and that have been recognized by many
as a relatively fast and efficient way to expand into new markets and incorporate new technologies (Schuler and Jackson, 2001). Successive M&A waves have contributed to deeply reconfiguring firms' organizational structures and their core competencies (Bertrand and Zuniga, 2006). M&A activity has increased substantially since 1990, with a significant portion occurring in technology-based industries (Ranft and Lord, 2000). Human capital, the continued investment in people's skills, knowledge, education, health and nutrition, abilities, motivation and effort (Zeleny, 2005), has long been argued to be a critical resource in most firms (Pfeffer, 1994). Drucker (1999) argued that the most valuable asset of a twenty-first century institution will be its knowledge workers and their productivity. Meanwhile, research and development (R&D) activity is considered one of the most important means of maintaining a lead, especially in high tech industries such as chemicals, drugs, electric and electronics, and machinery (Lee and Shim, 1995). Specifically, R&D activity seems to contribute substantially to high-tech industries' gaining competitive advantage as well as superior market performance (Tassey, 1983; Lee and Shim, 1995). Thus, in knowledge-intensive and innovation-driven high technology industries, highly skilled R&D human capital may be one of the most sought-after strategic resources (Ranft and Lord, 2000). In dynamic global markets, acquisitions have emerged as an important means for firms to gain technological capabilities (Coff, 1999; Ranft and Lord, 2000; Ranft and Lord, 2002; Graebner, 2004), capabilities that are likely to be embedded to a large degree in the tacit and socially complex knowledge of the acquired firms' individual and collective human capital (Ranft and Lord, 2000). Acquirers may seek to obtain the in-depth experience and skills of specific groups of technical and managerial personnel in the target firm.
In general, human resources (HRs) are especially important in industries characterized by rapid innovation, technological complexity, reliance on highly specialized skills and expertise, and a fast pace and large magnitude of technological change. Moreover, the breadth and depth of knowledge-based resources required to compete may not allow firms to internally develop all the technologies and capabilities they need to stay competitive (Ranft and Lord, 2002). Although R&D HRs are critical for post-M&A performance, little literature has discussed how post-merger high technology R&D HRs should be optimized so as to achieve the best R&D performance and, thus, the final success of the merger. Much of the existing research highlights questionable acquisition motives, problems regarding valuation and premia paid, and disappointing financial performance (e.g., Ravenscraft and Scherer (1987)) (Ranft and Lord, 2002). Cassiman et al. (2005) noted that the link between M&A and R&D is, despite its importance, even less well examined in the literature, at least directly. Schuler and Jackson (2001) stated that one possible reason is that no model or framework exists to serve as a tool for systematically understanding and managing the people issues in M&A. The above-mentioned general M&A issues have thus seldom been addressed, let alone post-merger high technology R&D HR optimization. To honor Professor Milan Zeleny's significant contributions to multiple criteria decision making, knowledge management and human systems, to celebrate his 65th birthday, and to optimize post-merger high technology R&D HRs, De Novo programming (Zeleny, 1981, 1990) is introduced here to resolve this issue. The authors propose a multi-objective optimization methodology based on De Novo programming that is used to redesign the post-merger HRs so as to maximize the revenue of the acquiring firm.
In order to demonstrate the usefulness and effectiveness of De Novo programming in optimizing high technology R&D HRs, a numerical example modified from a real merger in the IC industry is presented. Optimization of the same numerical example using a traditional multiple objective decision making (MODM) methodology is also presented as a comparison, to show the effectiveness and advantages of the De Novo approach. The numerical results show that the HRs redesigned with De Novo programming achieve much better performance (1.238 times that of the HRs optimized with traditional mathematical programming approaches). Thus, high technology HRs can best be optimized using the De Novo approach. The remainder of this paper is organized as follows. In Section 1, the concepts of high technology HR optimization in M&A are introduced. In Section 2, the concept of De Novo programming is introduced for optimizing the post-merger R&D HRs in M&A. In Section 3, the background of the M&A of IC design service companies by professional foundries is described. Section 4 then presents an empirical study of optimizing the post-merger R&D HRs in the M&A of an IC design service company by a professional semiconductor foundry. Discussions are presented in Section 5. Finally, Section 6 concludes the paper with observations, conclusions and recommendations for further study.
1. Concepts of M&A and High Technology R&D HR Optimization

By securing external resources, M&A has become recognized as one of the most important strategies for enhancing a high technology firm's competitiveness. In this section, the literature regarding the importance of M&A in today's highly competitive environment, resource-based views of human capital, and the role of R&D HRs as a key issue in M&A in knowledge-based high technology industries is reviewed.

1.1. M&A in High Technology Industries

Firms today need to be fast growing, efficient, profitable, flexible, adaptable, future-ready, and to have a dominant market position. Without these qualities, firms believe that it is virtually impossible to be competitive in today's global economy (Schuler and Jackson, 2001). To strengthen and maintain their position in the marketplace, M&A are increasingly being used by firms; they are seen by many as a relatively fast and efficient way to expand into new markets and incorporate new technologies (Schuler and Jackson, 2001). In high technology industries characterized by rapid innovation, technological complexity, and reliance on highly specialized skills and expertise, the pace and magnitude of technological change, as well as the breadth and depth of knowledge-based resources required to compete, may not allow firms to internally develop all the technologies and capabilities they need to stay competitive (Ranft and Lord, 2002). Thus, many of the acquisitions of the 1990s appeared to be motivated by high technology firms' need to obtain critical technologies or capabilities. In contrast to acquisitions aimed at achieving economies of scale, gains in market share, or geographical expansion, many acquisitions attempt to obtain the highly developed technical expertise and skills of employees, high-functioning teams for product development or other functions, or specific new technologies in fast-paced industries (Kozin and Young, 1994; Wysocki, 1997).
In the real world, acquisitions in computer hardware and software, electronics, telecommunications, biotechnology, and pharmaceuticals dominate most M&A activities (Ranft and Lord, 2000). These industries frequently place among the top 10
most active M&A industries in the Securities Data Corporation's annual M&A almanacs (Ranft and Lord, 2000). R&D activity is considered one of the most important means of maintaining a lead, especially in high tech industries such as chemicals, drugs, electric and electronics, and machinery (Lee and Shim, 1995). Specifically, R&D activity seems to contribute substantially to high-tech industries' gaining competitive advantage as well as superior market performance (Tassey, 1983; Lee and Shim, 1995). M&A can serve as a major channel for reorganizing R&D activities, since they enable firms to quickly expand their knowledge base by accessing new technological assets. Furthermore, M&A allow firms to transfer their own knowledge to new product markets and provide other uses for their R&D and production capabilities (Capron, 1999; Cassiman et al., 2005). More generally, M&A offer merging firms the opportunity to redefine their internal R&D processes, especially their research programs (Bertrand and Zuniga, 2006).

1.2. Application of the Resource-Based View to HRs

Resource-based theory, which holds that a firm's sustained competitive advantage relies on the very nature of the resources it is able to acquire and mobilize (Autier and Picq, 2005), posits that consolidating the activities of firms may spur the development and deployment of resources that enable a firm to develop a sustainable competitive advantage (Capron, 1999; Krishnan and Park, 2002; Cloodt et al., 2006). According to the resource-based view of the firm, performance differences across firms can be attributed to variance in the firms' resources and capabilities. Resources that are valuable, unique, and difficult to imitate can provide the basis for firms' competitive advantages (Barney, 1991; Amit and Schoemaker, 1993). In turn, these competitive advantages produce positive returns (Hitt et al., 2001; Peteraf, 1993).
Strategic human resource management researchers have identified the resource-based view of the firm (Wernerfelt, 1984; Barney, 1986; Amit and Schoemaker, 1993) as having great potential for analyzing a firm's HR strategies (Wright and McMahan, 1992; Autier and Picq, 2005). According to Wright and McMahan (1992), when applied to HRs, the resource-based view teaches us that there are four conditions for HRs to constitute a lasting competitive advantage: 1) HRs must produce value for the firm in terms of the variety of skills, their adaptation to the requirements and needs of the firm, and their contribution to the constitution of "core skills"; 2) they must be rare (on the labor market); 3) they must be difficult for competitors to imitate; and 4) they must be non-substitutable by other types of resources (Wright and McMahan, 1992; Autier and Picq, 2005). The resource-based view seems particularly relevant for understanding the trade-offs made by firms in industries with high levels of knowledge and human capital, such as high technology activities, consulting activities, and, more generally, all "knowledge-based" activities with high intellectual added value (Hitt et al., 2001). Competing through people is especially relevant in knowledge-based industries (Autier and Picq, 2005), because people in these contexts are the most distinctive resource. First, HR strategies evolve significantly to suit company development stages. Second, as companies grow, they tend to replace rare, specific and non-substitutable HRs with more commonplace and generic HRs; value creation moves progressively from individual creativity to collective productivity (Autier and Picq, 2005).
C.-Y. Huang and G.-H. Tzeng / Post-Merger High Technology R&D Human Resources Optimization
51
1.3. M&A of High Technology R&D HRs

High-technology companies usually face environments that are fast-paced, turbulent, unstable and uncertain (Jolly, 2005). In these environments, companies face strong competitive pressures: knowledge flows from one place to another, demands change quickly, and labor markets can be very tight (Shanklin and Ryans, 1987; Jolly, 2005). Technology-intensive companies have been found to share several characteristics that differentiate them from traditional companies, namely: 1) products that are highly advanced technologically; 2) a greater priority placed on R&D; 3) frequent innovations; 4) high geographic concentration; 5) a high mortality rate; and 6) an abnormally high turnover rate among technical personnel (Gomez-Mejia et al., 1990a; Gomez-Mejia et al., 1990b; Cardy and Dobbins, 1995; Tremblay and Chenevert, 2005). An important feature of high-technology industries is their large proportion of knowledge workers (Jolly, 2005). According to Autier and Picq (2005), competing through people is especially relevant in knowledge-based industries, because people in these contexts are the most distinctive resource. The management of R&D HRs is thus an important challenge in value creation for high technology companies (Delorme and Cloutier, 2005). According to Ranft and Lord (2000), many acquisitions of high-tech firms are motivated by the acquirers' desire to enhance their strategic technological capabilities. However, these capabilities are likely to be embedded to a large degree in the tacit and socially complex knowledge of the acquired firms' individual and collective human capital. This presents a dilemma for acquirers because, unlike tangible or financial assets, the acquired firms' valuable human assets cannot be purchased or owned outright, and the individuals can leave the firm at any time (Ranft and Lord, 2000).
Retention, therefore, is likely to be of central importance during acquisition implementation in knowledge-intensive firms. In knowledge-based high technology firms, employees who possess critical individual expertise and skills, or who in combination possess valuable team- or group-based capabilities, may determine the overall success of the acquisition (Ranft and Lord, 2000). Although M&As are regarded as a relatively fast and efficient way to expand into new markets and incorporate new technologies (Schuler and Jackson, 2001), and thus as a critical means by which technology firms obtain the resources needed to compete in global markets (Graebner, 2004), their success is by no means assured (Schuler and Jackson, 2001). Companies face many challenges in the wake of a takeover (Thomson and McNamara, 2001). The positive impact of technological M&As depends on a firm's ability to integrate the acquired knowledge and to alter existing routines in the organization of its research (Capron and Mitchell, 2000; Cloodt et al., 2006). Schuler and Jackson (2001) also noted that major reasons for M&A success include well-thought-out goals and objectives, a well-managed M&A team, successful learning from previous experience, and retention of key talent. In contrast, while some failures can be explained by financial and market factors, a substantial number can be traced to neglected HR issues and activities (Schuler and Jackson, 2001). Numerous studies confirm the need for firms to systematically address a variety of HR issues and activities in their M&A activities (Schuler and Jackson, 2001). Clearly, in knowledge-intensive and innovation-driven industries, highly skilled human capital may be one of the most sought-after strategic resources (Ranft and Lord, 2000). Acquirers may seek to obtain the in-depth experience and skills of specific groups of technical and managerial personnel in the target firm (Ranft and Lord, 2002).
52
C.-Y. Huang and G.-H. Tzeng / Post-Merger High Technology R&D Human Resources Optimization
1.4. High Technology R&D HR Optimization and Current Problems

HRs are considered the most important asset of an organization, but very few organizations are able to fully harness their potential (Ahmad and Schroeder, 2003). The challenge for companies is not just to acquire knowledge bases but also to integrate them in order to improve post-M&A innovative performance (Ahuja and Katila, 2001; Child et al., 2001; Haspeslagh and Jemison, 1991). Although numerous studies have examined the importance of M&A in high technology industries and the importance of high technology R&D HRs, few have addressed the topic of high technology R&D HR optimization. In the following section, the concepts of traditional linear programming and of De Novo programming for resource optimization are introduced as a foundation for high technology R&D HR optimization.
2. Traditional Mathematical Programming Versus De Novo Programming
Based on the above literature review, we know that R&D HR is the most valuable asset to be acquired in a high technology M&A. However, little literature has discussed this problem. In this section, we propose an R&D HR optimization methodology based on the concept of De Novo programming proposed by Professor Milan Zeleny (1990). To honor his significant contributions to multiple criteria decision making, knowledge management and human systems, as well as to celebrate his 65th birthday, this research modifies Professor Zeleny's definitions and produces the definitions indicated below.
2.1. Redefining Boundaries
Frequently used concepts like system, system design, and system optimization are unexpectedly (almost exclusively) limited to an a priori fixed or given system boundary and the implied input/output system characterization. This system configuration is typically expressed in terms of a set of constraints defining the feasible domain of the model. Even the latest texts, with titles like Globally Optimal Design or Principles of Optimal Design, do not deal with system design, configuration or reconfiguration, or with anything "optimal." Rather, a (somehow) given (bounded, well-constrained, pre-configured) system is accepted, and then a search for decision or design variables that maximize a single (also given) criterion is initiated. This puzzling and confusing inadequacy comes from superficial but persistent attempts to "marry" non-systemic mathematical algorithms with more advanced systems thinking, as well as from the unwillingness of analysts to face the facts, even if these are well known and understood. Any system is defined by its boundaries or constraints, which separate the feasible set of alternatives, options or designs (vectors of variables) from its environment.
Consequently, system design, redesign, configuration, and optimization must involve purposeful and well-directed "reshaping" of system boundaries or constraints. Simply selecting alternatives or options from an a priori given, preconfigured system is not enough. System design is a process of creation, not selection, of alternatives. Linear system geometry can be represented in two dimensions as the feasible space of solutions, i.e., the constraint surface of intersecting straight lines, and the contours of the objectives f, i.e., straight parallel lines. It is clear that unidimensional optimal solutions
Source: Zeleny (1990). Figure 1. The feasible options using (a) linear programming and (b) De Novo programming.
will be found by moving any and all of the component lines of f to their own furthest points of the feasible domain. For example, if the objective space is f = (f1 = Profit, f2 = Quality), then the representations in Figs 1(a) and 1(b) are sufficient to demonstrate our problem. In Fig. 1(a), the polyhedron of system-feasible options is well defined and given. Maximizing the functions f1 and f2 separately leads to two different optimal solutions and levels of performance (designated as max). If System I remains fixed, observe that the maximal separately attainable levels of both criteria lead to an infeasible "ideal" option. The tradeoffs between quality and profit are explicit and must be dealt with (selecting from the heavy boundary, i.e., the nondominated solutions, of System I). Observe also in Fig. 1(a) that the system is poorly designed, because there exists a set of good, currently unavailable options that would make the "ideal" point feasible and allow the maxima of f1 and f2 (Profit and Quality) to be attained at the same time. In other words, reshaping the feasible set (reconfiguring the constraints) in order to include the "missing" alternatives, if realizable at the same or comparable costs, would lead to a superior system design in which higher levels of criteria performance are possible. Such desirable "reshaping" of the feasible set is represented in Fig. 1(b), where System II of system-feasible options is displayed. Given System II, both objectives are maximized at the same time, and so, in a clear sense, System II is superior in design to System I. From all such possible "reshapings" of system configurations, given some cost or effort constraint, the best possible optimal design or configuration can be chosen.
Clearly, implementing and operating System I when System II is both feasible and available, other things being equal, is an inexcusable and damaging act that, especially in the realm of socio-economic and production systems, cannot be justified by merely claiming incompetence. In fact, any and all morally justifiable systems should be configured optimally, i.e., in the best possible way, and not through vaguely defined "previous synthesis" based on habit, intuition, or political manipulation. In Fig. 1(b), such a system with no quality-profit tradeoffs is represented. Observe that the maximal separately attainable levels of both criteria now form a feasible ideal option. Consequently, the tradeoffs between quality and profit do not exist (the heavy tradeoff boundary of System I is "inactive" in System II).
2.2. Single Criterion De Novo Formulation
The standard linear programming formulation of the single-objective product-mix HR optimization problem, based on Professor Zeleny (1990), is defined as follows:

max cx
s.t. Ax ≤ b, x ≥ 0.    (1)

That is, given the levels of the m HRs, A = [a_ij] is the m × n matrix in which a_ij stands for the amount of the i-th resource required per unit of the j-th product or service; b = [b1, …, bi, …, bm]^T, where bi stands for the available amount of the i-th resource; and x = [x1, …, xj, …, xn]^T, where the decision variable xj stands for the amount of the j-th product/service to be produced/provided. We would like to maximize the revenue, i.e., the value of the objective function f = cx = Σj cj xj. Because
all components of b are determined in advance, problem (1) deals with the optimization of given HRs. However, when the purpose is to design an optimal system instead of modifying a given system, formulation of the above optimization problem can be modified as
max f = cx
s.t. Ax ≤ b, pb ≤ bgt, x ≥ 0.    (2)

That is, given the average salaries for each category of the m HRs, p = [p1, …, pi, …, pm], and the total available HR budget bgt, allocate the budget so that the resulting (optimal) portfolio of HRs, b = [b1, …, bi, …, bm]^T, maximizes the revenue of the company, f = cx. In fact, the truly optimal portfolio of a free-market producer should more appropriately maximize the difference (cx − pb) rather than simply cx. It can be shown that the optimal design problem (2) is equivalent to the continuous "knapsack" problem (3) below:

max f = cx
s.t. pAx ≤ pb = bgt, x ≥ 0.    (3)
Suppose that the vector x of decision variables is feasible in (3). Let Ax = b; then Ax − b = 0, while pAx = pb ≤ bgt. Because the objective functions of (2) and (3) are identical, the equivalence of both feasible sets (in x) guarantees the equivalence of solutions. The "knapsack" solution is

x* = [0, …, bgt/(pA)j, …, 0]^T,    (4)

where the index j satisfies

cj/(pA)j = max_i (ci/(pA)i);    (5)

the optimal solution to (3) is given by (4) and

b* = Ax*.    (6)
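As a concrete illustration, the closed-form solution (4)-(6) amounts to picking the activity with the best revenue-to-unit-cost ratio and spending the entire budget on it. A minimal sketch in Python (the values of A, c, p and bgt below are hypothetical, chosen only for illustration):

```python
# Single-criterion De Novo design: the continuous "knapsack" solution (4)-(6).
# All numbers below are hypothetical, for illustration only.
A = [[2.0, 1.0],        # A[i][j]: amount of resource i per unit of product j
     [1.0, 3.0]]
c = [5.0, 4.0]          # unit revenues c_j
p = [10.0, 20.0]        # unit resource prices (salaries) p_i
bgt = 600.0             # total available budget

m, n = len(A), len(c)
# v = pA: total resource cost of producing one unit of each product
v = [sum(p[i] * A[i][j] for i in range(m)) for j in range(n)]

# (5): choose the index j maximizing c_j / (pA)_j
j = max(range(n), key=lambda k: c[k] / v[k])
# (4): spend the whole budget on product j
x_star = [bgt / v[k] if k == j else 0.0 for k in range(n)]
# (6): the optimal resource portfolio b* = A x*
b_star = [sum(A[i][k] * x_star[k] for k in range(n)) for i in range(m)]

print(j, x_star, b_star)   # here: product 0, x* = [15, 0], b* = [30, 15]
```

With these numbers, v = pA = (40, 70), so product 0 wins (5/40 > 4/70), the whole budget buys x* = 15 units, and the revenue f = cx* = 75 cannot be beaten by any other resource portfolio of the same cost.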
Observe that under the present assumptions this solution remains nondegenerate and unique, unless condition (5) fails to identify j uniquely.
2.3. Multiple Criteria De Novo Formulation
Optimal HR design becomes fully useful as a practical model when the system is designed with multiple objectives. The formulation of the multiple objective HR problem remains straightforward:
max Cx
s.t. Ax ≤ b, pb ≤ bgt, x ≥ 0,    (7)

where f_k = c^k x = Σj c_kj xj, j = 1, 2, …, n; k = 1, 2, …, q, are the q objective functions f_k to be maximized simultaneously. Obviously, pAx ≤ pb = bgt follows from (7); thus, defining the vector of unit costs v = [v1, …, vn] = pA, we can rewrite (7) as

max f = Cx
s.t. vx ≤ bgt, x ≥ 0,    (8)
where f is a q × 1 vector of objectives and C is a q × n matrix. Using the methodology of De Novo single-criterion optimal design introduced in Section 2.2, we can solve problem (8) for the vector x of decision variables and the vector b of resources with respect to each objective function f_k separately. Let the improved objective be f_k* = max f_k, k = 1, …, q, subject to the constraints of (8). The vector f* = (f1*, …, fq*) of improved objectives, achieving the aspired levels, denotes the multicriteria performance of the ideal design relative to the given vector b of resources. Obviously, f* must be achievable for a given budget vector b*, and the total improved budget can be expressed as pb* = bgt*.

Observe that to each of the q components of f* there corresponds an f_k*-optimal portfolio b* of improved resources, calculated as in the previous section. As the vector f* represents the metaoptimal performance, we can find its corresponding x* and b* by solving the following problem:

min vx
s.t. Cx ≥ f*, x ≥ 0.    (9)

Solving problem (9) identifies the minimum budget bgt* at which the metaoptimal performance f* can be realized through x* and b*. The solution to (9), b* and x*, can be designated the metaoptimum solution. We use the optimum-path ratio r between pb = bgt and pb* = bgt*,

r = bgt/bgt*,    (10)

and establish the final solution as x = rx*, b = rb*, and f = rf*. The optimum-path ratio r provides an efficient tool for virtually instantaneous optimal redesign of even large-scale linear systems. This simple and unprecedented ability to flexibly maintain complex multi-criteria production systems in a state of optimality, while expanding or contracting the investment or budget bgt along the optimal path, is a powerful competitive tool.
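The whole multi-criteria pipeline, per-objective knapsacks for the ideal point f*, the metaoptimum problem (9), and the optimum-path contraction (10), can be sketched in a few lines of Python. The data below (two products, two objectives, unit costs v = pA, budget bgt) are hypothetical, and the tiny vertex-enumeration routine stands in for a general LP solver; it works only for this two-variable illustration:

```python
from itertools import combinations

# Multi-criteria De Novo design: ideal point f*, metaoptimum (9),
# optimum-path ratio (10). All numbers are hypothetical illustrations.
C   = [[5.0, 4.0],    # objective-1 unit profits
       [3.0, 6.0]]    # objective-2 unit profits
v   = [40.0, 60.0]    # unit costs v = pA
bgt = 600.0           # available budget
q, n = len(C), len(v)

# Per-objective continuous knapsacks (Section 2.2) give the ideal point f*
f_star = [bgt * max(C[k][j] / v[j] for j in range(n)) for k in range(q)]

# Problem (9): min v.x  s.t.  Cx >= f*, x >= 0.  With two variables the
# optimum lies at a vertex, i.e., at an intersection of two boundary
# lines a1*x1 + a2*x2 = d (the objective-level lines and the two axes).
lines = [(C[k][0], C[k][1], f_star[k]) for k in range(q)]
lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

def feasible(x):
    return (min(x) >= -1e-9 and
            all(C[k][0]*x[0] + C[k][1]*x[1] >= f_star[k] - 1e-9
                for k in range(q)))

best_cost, x_meta = float("inf"), None
for (a1, a2, d1), (b1, b2, d2) in combinations(lines, 2):
    det = a1*b2 - b1*a2                 # Cramer's rule for the 2x2 system
    if abs(det) < 1e-12:
        continue
    x = [(d1*b2 - d2*a2) / det, (a1*d2 - b1*d1) / det]
    cost = v[0]*x[0] + v[1]*x[1]
    if feasible(x) and cost < best_cost:
        best_cost, x_meta = cost, x

bgt_meta = best_cost                  # metaoptimal budget bgt*
r = bgt / bgt_meta                    # optimum-path ratio (10)
x_final = [r * xi for xi in x_meta]   # final design x = r x*
f_final = [r * fk for fk in f_star]   # final performance f = r f*
```

Here the ideal point f* = (75, 60) is unattainable within bgt = 600; the metaoptimal budget is bgt* ≈ 716.67, so r ≈ 0.837 and the contracted design attains f ≈ (62.8, 50.2), the best balanced performance reachable by reshaping b within the original budget.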
3. The IC Design Service Industry Background
A revolution is at work in the high technology industry: the unstoppable growth of business-to-business high technology services (Viardot, 2005). IC design service firms are typical examples of knowledge-based, labor-intensive high technology firms. In this section, the IC design service industry is introduced as background for the empirical analysis in Section 4. As observed by Lu (2004), from the 1970s to the mid-1980s the IC industry was characterized either by vertical captive systems or by innovative technologies. Thus, vertically integrated companies, such as IBM, and start-ups, such as Intel, which first introduced innovative products such as microprocessors and DRAMs (Dynamic Random Access Memories), were able to flourish. From the mid-1980s into the 1990s, industry trends shifted to favor open systems driven by standard-product mass producers, such as Japanese IDMs (Integrated Device Manufacturers), with strong manufacturing power, especially in memories. From the 1990s to today, industry trends have given rise to fabless design houses (e.g. nVidia, Xilinx) and semiconductor foundry companies (e.g. TSMC, UMC, SMIC) with core competencies either in their products or in fast and flexible deliverables (Lu, 2004). The evolution of the global IC industrial infrastructure is demonstrated in Fig. 2.

Source: Chu et al. (2005). Figure 2. Evolution of global IC industrial infrastructure.

Entering the twenty-first century, the global IC industry has opened the third industrial revolution under the driving forces of 3C (computer, consumer electronics, and communication) applications and the System-On-Chip (SOC) (Lai et al., 2003), a single integrated circuit chip that uses computing engines (MPU/DSP), memories, analog blocks (e.g. RF for wireless communication), and some custom logic to "glue" a system together (Tseng, 1999). The coming SOC era will be characterized by more stringent competition based on advanced technologies, with larger investments and shorter product cycles, driven by application needs and multiple-heterogeneous-function integration. A successful company must have a correct business structure and positioning in addition to its essential technical competence (Lu, 2004). Several emerging business models are being developed by contenders in this new SOC Olympics (Lu, 2004): SIP providers, design foundries, design service providers, and system design integrators (Lai et al., 2003). Building resilient and strong supply chain partnerships is the best way to approach this SOC challenge. By partnering with companies that specialize in specific steps of the product delivery value chain, system and IC designers can benefit from best-in-class product delivery capability and economies of scale, ensuring first-to-market and first-to-volume design delivery (Gloski et al., 2003). All parties become focused and aligned by the primary objective of delivering system/IC products. Partnering within such a supply chain enables companies to focus on their core competency and competitive differentiation, continuously driving innovation and product excellence (Gloski et al., 2003).
According to James (2005b), IC design services have suddenly become prominent as the result of a convergence of the following trends:
1) Chip designs are growing too complex for most companies to handle on their own. As the number of components on a chip increases exponentially, it becomes difficult to find the engineering resources to design all the circuitry, even with automated tools. Because SOC forces more and more functionality onto a single chip, design projects now frequently require specialized experts in fields such as analog design and power management. The number of designs may be smaller, but the chips are larger and companies' resources are limited.
2) Even purchasing chip IP, a strategy intended to reduce design complexity, can have unintended consequences. Prebuilt circuits save engineers from having to design everything from scratch. However, IP is far from plug-and-play, and design teams usually need specialized IP expertise (or even the knowledge and advice of the original designer) to make the IP work correctly.
3) Another broad trend, offshoring, is making design services available to a wider array of companies. Many IC design service providers are located in India and China, where trained circuit designers can be employed at a fraction of the cost of comparably trained United States-based engineers. Some of the largest IC design firms, such as Wipro Technologies and HCL Technologies, are located in India, and China is not far behind.
4) The trend toward design for manufacturing (DFM) also creates more demand for design services. DFM requires designers to be intimate with the peculiarities of the foundry's manufacturing processes. Because few circuit designers have that kind of specialized knowledge, many design teams draw heavily on the design expertise of their foundry's manufacturing engineers.
SOC design service capability means the ability to provide customers who have finished SOC specification design or SOC circuit design with the remaining procedures required for SOC commercialization. The detailed SOC design service procedures include front-end design, backend (place and layout), and turnkey (tapeout of the layout to a semiconductor wafer fab for SOC fabrication) services. Design service capabilities are definitely helpful for SOC design success, since strong design service capability implies strong SIP integration capability, which is a key factor for SOC success. According to James (2005b), design service is a labor-intensive business that needs to be closely managed to be profitable. To be credible, a design services provider needs a staff of highly skilled experts, who must be paid regardless of whether they're actually working on a project. Conversely, when all the experts are slaving away, the provider loses the crucial ability to react to customer emergencies, such as an unexpected slip in a guaranteed deadline (James, 2005b). Based on customers' requirements, semiconductor foundry companies also provide the virtual re-integration of IP, library, EDA, and design-center alliances. By linking design resources with state-of-the-art facilities through Internet-based communication, foundries can play a critical role in delivering design services to the IC industry (Chiang, 2001). As the SOC ASIC market and the foundry market merge, semiconductor foundry companies can drive a substantial share of design and manufacturing work to the companies that are best suited to perform it (James, 2005a). Thus, major semiconductor foundry companies have acquired design service companies to provide the best IC/SOC design services and thereby expand their foundry market share. One typical example is TSMC's acquisition of Global Unichip Corp. in 2003.
Although foundries have pursued competitive advantages through M&As of R&D-HR-oriented, knowledge-based design service companies, how best to optimize the post-merger R&D HRs, and thus retain R&D HRs and develop competitive advantages,
has become a major concern for managers of the foundries. Section 4 illustrates the optimization of post-merger R&D HRs using both De Novo programming and traditional multiple objective programming, and compares the two.
4. Empirical Study of Post-Merger R&D HR Optimization in the IC Industry
In this section, an empirical study of post-merger R&D HR optimization, modified from a real M&A case involving a semiconductor foundry and a fabless design service company, is used to demonstrate profit maximization and system optimization based on the De Novo approach. Both the design service division of foundry X and design service company Y provide silicon intellectual properties (SIPs) and IC design services to customers who have finished IC specification design or IC circuit design, covering the remaining procedures required for IC commercialization. These detailed procedures include front-end design, backend (place and layout), and turnkey (tapeout of the layout to a semiconductor wafer fab for IC fabrication) services. The average profit from the wafer sales generated by providing SIPs to a customer is $3 million per project, while the average profit from the wafer sales generated by providing IC design services is $1.2 million per project. Meanwhile, the average profit from SIP sales to a customer is $1.2 million per project, while the average profit from IC design service fees is $300,000 per project. Sixty (60) design engineers, fifty (50) CAD engineers, thirty (30) layout engineers, and ten (10) project managers are the available R&D resources of either the design service division of foundry X or design service company Y. Fifteen (15) IC design engineers, three (3) CAD (computer-aided design) engineers, four (4) IC layout engineers, and one (1) project manager are needed for an SIP project. On the other hand, three (3) IC design engineers, two (2) CAD engineers, three (3) IC layout engineers, and one (1) project manager are needed for a design service project.
The HR costs for hiring an IC design engineer, a CAD engineer, a layout engineer, and a project manager are $40,000 (p1), $30,000 (p2), $15,000 (p3), and $60,000 (p4), respectively. The total budget is the same, $4,950,000, for both the design service division of foundry X and design service company Y. Suppose the business scales of the two are also the same. Then either the foundry or the design service company can determine its optimal resource allocation over the number of SIP projects (x1) and the number of design service projects (x2) by the following mathematical program, assuming that it provides both wafer manufacturing and IC design services, with profits from wafer sales (f1) and from SIP sales and design service fees (f2):
max f1 = 3000000x1 + 1200000x2,
max f2 = 1200000x1 + 300000x2,
s.t. 15x1 + 3x2 ≤ 60,
3x1 + 2x2 ≤ 50,
4x1 + 3x2 ≤ 30,
x1 + x2 ≤ 10,
x1, x2 ≥ 0.

Using traditional mathematical programming, setting the weights of the two objectives as equal and restricting x1, x2 to integers, we can easily solve for the optimal resource portfolio at x1 = 3 and x2 = 5. Either firm X or firm Y thus achieves maximum profits of f1 = $15,000,000 from wafer sales and f2 = $5,100,000 from SIP sales and design services. The total profit for either the design service division of foundry X or design service company Y is $20,100,000. The total profit generated by the two companies together is therefore $40,200,000, and the total numbers of projects that can be handled by both companies concurrently are 6 (x1) and 10 (x2), respectively. Thus, the combined optimal solution to the above problems is f* = (f1*, f2*) = (30000000, 10200000).

The De Novo Approach
For max f1, we solve
max f1 = 3000000x1 + 1200000x2,
s.t. 810000x1 + 285000x2 ≤ 9900000, x1, x2 ≥ 0,

and the solution, obtained by integer programming with LINGO, is x1* = 0, x2* = 34, f1* = 40800000, with bgt1 ≈ 9900000. For max f2, we solve
max f 2 = 1200000 x1 + 300000 x2 , s.t.
810000 x1 + 285000 x2 ≤ 9900000, x1 , x2 ≥ 0
and the answer can be solved by integer programming with LINGO as
x1* = 12, x2* = 0, f 2* = 14400000, and bgt2 ≈ 9900000. Then according to (9), we may get
min 810000 x1 + 285000 x2 , s.t.
3000000 x1 + 1200000 x2 ≥ 40800000, 1200000 x1 + 300000 x2 ≥ 14400000, x1 , x2 ≥ 0.
After solving the above problem, we find the ideal point f* = (40800000, 14400000), the synthetic solution x* = (10, 9), and the synthetic budget bgt* = 10665000. The ratio r must be calculated to contract the synthetic solution to the optimally designed solution x:

r = bgt/bgt* = 9900000/10665000 = 0.92827,
x = rx* = (0.92827 × 10, 0.92827 × 9) ≈ (9, 8).

The post-merger profit under f1 is then (3000000 × 9) + (1200000 × 8) = 36600000, while the post-merger profit under f2 is (1200000 × 9) + (300000 × 8) = 13200000. The total post-merger profit is 36600000 + 13200000 = 49800000. Accordingly, the engineering HRs should be optimized as follows: the number of design engineers should be increased to 159; the number of CAD engineers should be decreased to 43; the number of layout engineers should remain at 60; and the number of project managers should be decreased to 17.
Thus, the total headcount increase for design engineers should be 39 (159 versus the 120 available in the two firms combined). On the other hand, 57 CAD engineers and 3 project managers are surplus relative to the combined headcount, while the total number of layout engineers remains the same. Since the total profit generated after the vertical integration, $49,800,000, is greater than the total profit of $40,200,000 generated by foundry X and design service company Y separately, this optimization is meaningful.
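The figures above can be cross-checked with a short brute-force integer search in Python (a sketch standing in for the LINGO integer programs used in the study; the variable names are ours):

```python
# Brute-force integer check of the Section 4 De Novo computation.
P = (3_000_000, 1_200_000)   # f1: wafer-sales profit per SIP / design project
Q = (1_200_000, 300_000)     # f2: SIP-sales and service-fee profit per project
V = (810_000, 285_000)       # unit HR costs v = pA per project type
BGT = 9_900_000              # combined post-merger HR budget

grid = [(x1, x2) for x1 in range(41) for x2 in range(41)]

# Per-objective integer knapsacks: max f_k subject to v.x <= bgt
in_budget = [x for x in grid if V[0]*x[0] + V[1]*x[1] <= BGT]
f1_star = max(P[0]*x[0] + P[1]*x[1] for x in in_budget)   # 40,800,000
f2_star = max(Q[0]*x[0] + Q[1]*x[1] for x in in_budget)   # 14,400,000

# Problem (9): cheapest integer x reaching the ideal point (f1*, f2*)
meets_ideal = [x for x in grid
               if P[0]*x[0] + P[1]*x[1] >= f1_star
               and Q[0]*x[0] + Q[1]*x[1] >= f2_star]
x_syn = min(meets_ideal, key=lambda x: V[0]*x[0] + V[1]*x[1])
bgt_syn = V[0]*x_syn[0] + V[1]*x_syn[1]    # synthetic budget: 10,665,000

# Optimum-path ratio (10) contracts the synthetic solution to the budget
r = BGT / bgt_syn                           # ~0.92827
x_fin = tuple(int(r * xi) for xi in x_syn)  # (9, 8)
total = (P[0]*x_fin[0] + P[1]*x_fin[1] +
         Q[0]*x_fin[0] + Q[1]*x_fin[1])     # 49,800,000
```

The search reproduces the synthetic solution x* = (10, 9), the synthetic budget of $10,665,000, the contracted design (9, 8), and the total post-merger profit of $49,800,000.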
5. Discussion
In this paper, we demonstrated that De Novo programming can solve the post-merger high technology HR optimization problem. The new optimization method can serve as a foundation for strategic decisions on M&A, strategic alliance, and vertical integration in knowledge-based, labor-intensive high technology firms. In the real world, external resources are always among the alternatives a firm can acquire to enhance its competitiveness. Thus, the proposed optimization method, which leverages external HRs, is more realistic in contemporary capitalism. In contrast to traditional mathematical programming approaches, which are based on existing resources only, the De Novo programming method can achieve better results and better satisfy enterprises' needs. From the comparisons in the empirical study in Section 4, we found that the post-merger profit based on the HRs designed by De Novo programming is $49,800,000, whereas the post-merger profit based on the HRs optimized by traditional mathematical programming is only $40,200,000. That is, the profit of the designed HR portfolio, which leverages outside HRs through the merger, is 1.238 times that of the optimized system based on existing resources obtained with the traditional mathematical-programming-based approach. Meanwhile, the results calculated from traditional mathematical programming in the first part of Section 4 show that the synergy of the merger of the IC design service company by the semiconductor foundry company is not maximized, since the same post-merger profits could also be achieved by an independent IC design service company and an independent semiconductor foundry company. This implies that the synergy of a merger will be maximized only when the HRs are redesigned.
For the extra HRs (e.g., design engineers) needed to realize the optimal HR design, possible strategies include new M&As of other IC/SOC design service companies, establishing offshore design service centers, or leveraging other design service companies for lower-priority or lower-profit projects. Meanwhile, for the surplus HRs after the merger (e.g., the CAD engineers in the empirical study), possible HR strategies include re-training the surplus engineers as IC design engineers, or forming alliances with design service firms that are short of CAD engineers and providing services to them. Laying off surplus HRs should be the last choice, since it could easily affect the other employees of the acquired firm and cause high turnover rates after the merger, undermining the original purpose of the M&A: acquiring external HRs to build the core competences of the firm. Finally, the proposed methodology can also be applied to optimizing firms' other R&D resources (e.g., equipment or materials) or resources belonging to other functional departments.
6. Conclusions
In this paper, the authors proposed an optimization methodology for the post-M&A high technology HR problem, a problem that is significant but seldom discussed. In contrast to traditional mathematical programming methodologies based on existing resources, De Novo programming redesigns the HRs needed and solves this optimization problem more effectively. By comparing it with traditional multiple objective programming on an example modified from a real M&A case in the IC design industry, the De Novo programming approach was shown to be effective. In future studies, the proposed methodology can be applied to HR optimization problems as a reference for M&A decisions. Moreover, De Novo programming can be applied, with satisfactory results, to any resource optimization problem in which extra resources are available.
References [1] Ahmad, S. and Schroeder, R. G., 2003. The Impact of Human Resource Management Practices on Operational Performance: Recognizing Country and Industry Differences. Journal of Operations Management. 21, 19–43. [2] Amit, R. and Schoemaker, P., 1993. Strategic Assets and Organizational Rent. Strategic Management Journal. 14, 33–46. [3] Autier, F. and Picq, T., 2005. Is the Resource-Based "View" a Useful Perspective for SHRM Research? The Case of the Video Game Industry. International Journal of Technology Management. 31, 197–203. [4] Barney, J. B., 1986. Strategic Factor Markets: Expectations, Luck, and Business Strategy. Management Science. 32, 1231–1241. [5] Barney, J. B., 1991. Firm Resources and Sustained Competitive Advantage. Journal of Management. 17, 99–129. [6] Bertrand, O. and Zuniga, P., 2006. R&D and M&A: Are Cross-Border M&A Different? An Investigation on OECD Countries. International Journal of Industrial Organization. 24, 401–423. [7] Capron, L., 1999. The Long Term Performance of Horizontal Acquisitions. Strategic Management Journal. 20, 987–1018. [8] Capron, L. and Mitchell, W., 2000. Internal versus external knowledge sourcing: evidence from telecom operators in Europe. Working Paper, INSEAD, France. [9] Cardy, R. L. and Dobbins, G. H., 1995. Human Resources, High Technology, and Quality Organizational Environment: Research Agendas. The Journal of High Technology Management Research. 6, 261–279. [10] Cassiman, B., Colombo, M. G., Garrone, P., and Veugelers, R., 2005. The Impact of M&A on the R&D Process: An Empirical Analysis of the Role of Technological- and Market-Relatedness. Research Policy. 34, 195–220. [11] Chiang, S.-Y., 2001. Foundries and the Dawn of an Open IP Era. Computer. 34, 43–46. [12] Chu, P.-Y., Teng, M.-J., Huang, C.-H., and Lin, H.-S., 2005. Virtual Integration and Profitability: Some Evidence From Taiwan's IC Industry. International Journal of Technology Management. 29, 152–172.
[13] Cloodt, M., Hagedoorn, J., and Kranenburg, H. V., 2006. Mergers and Acquisitions: Their Effect on the Innovative Performance of Companies in High-Tech Industries. Research Policy. 35, 642–654. [14] Coff, R., 1999. How Buyers Cope With Uncertainty When Acquiring Firms in Knowledge-Intensive Industries: Caveat Emptor. Organization Science. 10, 144–161. [15] Delorme, M. and Cloutier, L. M., 2005. The Growth of Quebec’s Biotechnology Firms and the Implications of Underinvestment in Strategic Competencies. International Journal of Technology Management. 31, 240–255. [16] Drucker, P. F., 1999. Knowledge Worker Productivity. California Management Review. 41, 79–94. [17] Gloski, G., Khan, A., Patel, K., Ruddy, P., Sherwani, N., and Vasishta, R., Panel Session: COT – customer owned trouble. In. Proceedings of the Design Automation Conference (DAC) 2003, 91–92. [18] Gomez-Mejia, L., Balkin, D., and Welbourne, T., 1990a. The Influence of Venture Capitalists on Management Practices in High Technology Industry. Journal of High Technology Management. 1, 107–118. [19] Gomez-Mejia, L., Balkin, D. B., and Milkovich, G. T., 1990b. Rethinking Rewards for Technical Employees. Organizational Dynamics. 18, 62–75.
64
C.-Y. Huang and G.-H. Tzeng / Post-Merger High Technology R&D Human Resources Optimization
[20] Graebner, M. E., 2004. Momentum and Serendipity: How Acquired Leaders Create Value in the Integration of Technology Firms. Strategic Management Journal. 25, 751–777. [21] Hitt, M. A., Bierman, L., Shimizu, K., and Kochhar, R., 2001. Direct and Moderating Effects of Human Capital on Strategy and Performance in Professional Service Firms: A Resource-Based Perspective. Academy of Management Journal. 44, 13–28. [22] James, G., 2005a. Virtual versus Vertical. Electronic Business. [23] James, G., 2005b. Success by Design. Electronic Business. [24] Jolly, D. R., 2005. Editorial: Human Resource Management in High-Tech Companies. International Journal of Technology Management. 31, 197–203. [25] Krishnan, H. A. and Park, D., 2002. The Impact of Work Force Reduction on Subsequent Performance in Major Mergers and Acquisitions: an Exploratory Study. Journal of Business Research. 55, 285–292. [26] Lai, H.-C., Shyu, J. Z., and Tzeng, G.-H., 2003. Fuzzy integral MCDM approach for evaluating the effects of innovation policies: an empirical study of IC design industry in Taiwan, in Proceedings of the Portland International Conference on Management of Engineering and Technology (PICMET). [27] Lee, J. and Shim, E., 1995. Moderating Effects of R&D on Corporate Growth in U.S. and Japanese Hi-Tech Industries: An Empirical Study. Journal of High Technology Management Research. 6, 179–191. [28] Lu, N. C., 2004. Emerging technology and business solutions for system chips, in Digest of Technical Papers, IEEE, 25–31. [29] Peteraf, M. A., 1993. The Cornerstones of Competitive Advantage: A Resource-Based View. Strategic Management Journal. 14, 179–191. [30] Pfeffer, J., 1994. Competitive Advantage Through People. Harvard Business School Press, Boston, MA. [31] Puranam, P., Singh, H., and Zollo, M., 2002. The inter-temporal tradeoff in technology grafting acquisitions. Working paper, London Business School. [32] Ranft, A. L. and Lord, M. D., 2000.
Acquiring New Knowledge: the Role of Retaining Human Capital in Acquisitions of High-Tech Firms. Journal of High Technology Management Research. 11, 295–319. [33] Ranft, A. L. and Lord, M. D., 2002. Acquiring New Technologies and Capabilities: a Grounded Model of Acquisition Implementation. Organization Science. 13, 420–441. [34] Ravenscraft, D. and Scherer, F. M., 1987, Mergers, Sell-Offs and Economic Efficiency, The Brookings Institution, Washington. [35] Scherer, F. M. and Ross, D., 1990, Industrial Market Structure and Economic Performance, Houghton Mifflin, Boston, M.A. [36] Schuler, R. and Jackson, S., 2001. HR Issues and Activities in Mergers and Acquisitions. European Management Journal. 19, 239–253. [37] Shanklin, W.-L. and Ryans, J.-K., 1987, Essentials of Marketing High-Tech, Lexington Books, Lexington, MA. [38] Tassey, G., 1983. Competitive Strategies and Performance in Technology-Based Industries. Journal of Economic Business. 35, 21–40. [39] Thomson, N. and McNamara, P., 2001. Achieving Post-Acquisition Success: The Role of Corporate Entrepreneurship. Long Range Planning. 34, 669-697. [40] Tremblay, M. and Chenevert, D., 2005. The Effectiveness of Compensation Strategies in International Technology Intensive Firms. International Journal of Technology Management. 31, 222–239. [41] Tseng, F. C., 1999. Semiconductor Industry Evolution for 2lst Century, in Proceedings of the Symposium on VLSI Circuits Digest of Technical Papers, IEEE, 1–4. [42] Viardot, E., 2005. Human Resources Management in Large Information-Based Services Companies: Towards a Common Framework? International Journal of Technology Management. 31, 317–333. [43] Wernerfelt, B., 1984. A Resource-Based View of the Firm. Strategic Management Journal. 5, 171–180. [44] Wright, P. and McMahan, G., 1992. Theoretical Perspective for Strategic Human Resource Management. Journal of Management. 18, 295–320. [45] Zeleny, M., 1981. On the Squandering of Resources and Profits Via Linear Programming. 
Interfaces. 11, 101–107. [46] Zeleny, M., 1990. Optimal Given System Vs. Designing Optimal System: The De Novo Programming Approach. International Journal of General System. 17, 295–307. [47] Zeleny, M., 2005, Human Systems Management: Integrating Knowledge, Management and Systems, World Scientific Publishing, Singapore.
Advances in Multiple Criteria Decision Making and Human Systems Management Y. Shi et al. (Eds.) IOS Press, 2007 © 2007 The authors. All rights reserved.
An Example of De Novo Programming
David L. OLSON a and Antonie STAM b
a Department of Management, University of Nebraska, Lincoln, NE 68588-0491, [email protected]
b Department of Management, University of Missouri, Columbia, MO 65211, [email protected]
Abstract. One of the great contributions to science by Dr. Zeleny has been his insight that in many real life decision situations it is important to design an optimal system, rather than optimizing a given system. In his honor, we offer a small example of the use of de novo programming.
De Novo Programming

One of Professor Zeleny’s innovative ideas with respect to managerial decision making was de novo programming (Zeleny 1981; 1982, pp. 338–344; 1986). The essence of de novo programming is recognition that decision making in a complex environment requires a flexible approach in which the manager seeks to design an optimal system, rather than optimizing a given system. The distinction between goal and constraint is not always clear, and the decision maker may for instance want to see the impact of varying the right hand side of some of the resource constraints on current profit levels. Moreover, over time external pressures in the system environment will influence the characteristics of the system, and will therefore affect the appropriate course of action by the manager. In practice, this implies that, in order to achieve the overall organizational goals and objectives, the manager needs to have not only a long term planning horizon, but also a short-range outlook, with requirements and constraints that are liable to change with time.
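As a toy illustration of this distinction (our own sketch, not taken from Zeleny's papers; all numbers are invented), consider a two-product plant. In the "given system" the resource stocks are fixed; in the de novo redesign, the money the original stocks would have cost can be respent freely on resources as needed, collapsing the separate resource constraints into a single budget constraint.

```python
# Toy comparison of a fixed-resource LP vs. its de novo redesign.
# Hypothetical data: 2 products, 2 resources (integer units, brute force).
profit = [3, 5]              # unit profit of products x and y
use = [[1, 2],               # resource 1 used per unit of x, y
       [3, 1]]               # resource 2 used per unit of x, y
stock = [10, 15]             # fixed resource availability (given system)
price = [2, 1]               # market price per unit of each resource

# Given system: maximize profit subject to fixed stocks.
best_given = max(
    profit[0] * x + profit[1] * y
    for x in range(stock[1] // use[1][0] + 1)
    for y in range(stock[0] // use[0][1] + 1)
    if use[0][0] * x + use[0][1] * y <= stock[0]
    and use[1][0] * x + use[1][1] * y <= stock[1])

# De novo: the budget is what the old stocks would cost; resources are
# bought only as needed, so all that remains is one budget constraint.
budget = sum(p * s for p, s in zip(price, stock))          # 2*10 + 1*15 = 35
cost = [sum(price[i] * use[i][j] for i in range(2)) for j in range(2)]
best_denovo = max(
    profit[0] * x + profit[1] * y
    for x in range(budget // cost[0] + 1)
    for y in range(budget // cost[1] + 1)
    if cost[0] * x + cost[1] * y <= budget)

print(best_given, best_denovo)   # 27 35
```

With these made-up numbers, redesigning the resource portfolio raises the optimum from 27 to 35: money trapped in an unproductive mix of stocks is redirected to the resources the best product actually consumes.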
Demonstration Model

A small Czech automobile manufacturer opened a facility in Walla Walla, WA to compete in the northwest U.S. region. The company produces eight styles of vehicle, covering the most popular models of cars – the Mini (MINI), Coupe (CPE), SUV, Midsized Sedan (MID), Station Wagon (SW), Minivan (VAN), Luxury Sedan (LUX) and Super Luxury Sedan (PIG) (see Table 1). At the Walla Walla facility, vehicle models manufactured in the Czech Republic are modified (“spruced up”) for the US market. Specifically, the vehicles are outfitted with non-standard, more powerful engines (4, 6 or 8 cylinder), durable and attractive chrome alloy plating and selected interior plastic parts. The expected maximum number of autos that can be sold by style as estimated by the
Marketing Department and the minimum requirements representing legal production commitments through signed contracts with retailers are given in Table 1. The mileage (MPG) and unit profit for each model are provided in Table 1 as well.

Table 1. Automobile Model Variables

Variable  Style          Class   Minimum Cars  Maximum Cars  Unit Profit  Mileage (MPG)
MINI      Small car      Small   300           1500          500          30
CPE       Coupe          Small   200           3000          550          28
SUV       OverRoller     Medium  500           2000          550          18
MID       Midsized       Medium  300           5000          600          22
SW        Station wagon  Medium  100           1000          800          19
VAN       Van            Large   100           2000          900          17
LUX       Luxury         Large   50            1200          1000         15
PIG       Super luxury   Large   10            500           2000         8
Resource limitations in the production process at Walla Walla include the number of engines by size, and the amount of chrome, plastic, and labor available. Other costs include such things as painting, advertising, sales expense, and overhead. The relevant resource and cost figures are provided in Table 2. The auto company also has a policy that at least 40% of its production be small cars (defined here as variables MINI, CPE, and SUV). The government has set industry targets for an average mileage of 25 MPG, but this is not a strict requirement as of yet.

Table 2. Resource and Cost Information

                      MINI   CPE    SUV    MID    SW     VAN    LUX    PIG
4 cylinder engines    1      1
6 cylinder engines                  1      1      1
8 cylinder engines                                       1      1      1
Chrome alloy (tons)   0.1    0.2    0.2    0.3    0.4    0.4    0.7    1.0
Plastic (cubic feet)  4      4      5      6      8      7      10     12
Labor (man hours)     30     32     31     35     40     42     45     50
Production cost/car   6660   5084   12272  9820   10480  13604  25140  26600
Other costs           2440   4566   2578   3080   2720   4496   5860   11400
Sales price/car       9600   10200  15400  13500  14000  19000  32000  40000
Profit/car            500    550    550    600    800    900    1000   2000

Resource            Units       Cost/Unit  Units Available
4 cylinder engines  Each        $500       1000
6 cylinder engines  Each        $600       2000
8 cylinder engines  Each        $800       500
Chrome alloy        Tons        $4000      10000
Plastic             Cubic feet  $100       25000
Labor               Man hours   $12        120000
The boss, Václav, considered a world leader in Marketing and educated at VŠE in Prague, based his initial production plan in Table 3 on demand. He made sure that the plan stayed well within available demand limits and satisfied contractual agreements. The average gas mileage of this plan was 22.2, only a few MPG below the government target.
Table 3. Initial Plan

Car         4cylEng  6cylEng  8cylEng  Chrome   Plastic  Labor    Unit Profit  Plan
MINI        1                          0.1      4        30       500          1000
CPE         1                          0.2      4        32       550          2000
SUV                  1                 0.2      5        31       550          1000
MID                  1                 0.3      6        35       600          3000
SW                   1                 0.4      8        40       800          500
VAN                           1        0.4      7        42       900          1000
LUX                           1        0.7      10       45       1000         600
PIG                           1        1.0      12       50       2000         250
Plan usage  3000     4500     1850     2870     55000    331500   Maximize
Available   ≤1000    ≤2000    ≤500     ≤10000   ≤25000   ≤120000
Total profit of the plan: 6,350,000
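The totals of the initial plan in Table 3 can be checked with a few lines of arithmetic (a sketch using the Table 1–3 data; the variable names are ours):

```python
# Recompute the totals of Václav's initial plan from Tables 1-3.
plan   = {"MINI": 1000, "CPE": 2000, "SUV": 1000, "MID": 3000,
          "SW": 500, "VAN": 1000, "LUX": 600, "PIG": 250}
profit = {"MINI": 500, "CPE": 550, "SUV": 550, "MID": 600,
          "SW": 800, "VAN": 900, "LUX": 1000, "PIG": 2000}
chrome = {"MINI": 0.1, "CPE": 0.2, "SUV": 0.2, "MID": 0.3,
          "SW": 0.4, "VAN": 0.4, "LUX": 0.7, "PIG": 1.0}
labor  = {"MINI": 30, "CPE": 32, "SUV": 31, "MID": 35,
          "SW": 40, "VAN": 42, "LUX": 45, "PIG": 50}
mpg    = {"MINI": 30, "CPE": 28, "SUV": 18, "MID": 22,
          "SW": 19, "VAN": 17, "LUX": 15, "PIG": 8}

total_profit = sum(plan[m] * profit[m] for m in plan)   # 6,350,000
chrome_used  = sum(plan[m] * chrome[m] for m in plan)   # 2,870 tons (within 10,000)
labor_used   = sum(plan[m] * labor[m] for m in plan)    # 331,500 hours (far over 120,000!)
avg_mpg      = sum(plan[m] * mpg[m] for m in plan) / sum(plan.values())
print(total_profit, chrome_used, labor_used, round(avg_mpg, 1))
```

The arithmetic confirms the table: only the chrome constraint is respected, which is exactly Karel's complaint below.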
Václav was quite pleased with the expected profit of $6.35 million for the first month’s operations. However, the chief accountant Karel pointed out that the plan was impractical, infeasible, and unworkable. The firm did have sufficient chrome for the plan, but none of the other resources required were within available limits. Moreover, the percentage of small cars was substantially below the 40% level. Therefore, Karel suggested linear programming. This led to the model and solution in Tables 4 and 5.

Table 4. Initial Model A

Car        4cylEng  6cylEng  8cylEng  Chrome   Plastic  Labor    Unit Profit  min   max
MINI       1                          0.1      4        30       500          300   1500
CPE        1                          0.2      4        32       550          200   3000
SUV                 1                 0.2      5        31       550          500   2000
MID                 1                 0.3      6        35       600          300   5000
SW                  1                 0.4      8        40       800          100   1000
VAN                          1        0.4      7        42       900          100   2000
LUX                          1        0.7      10       45       1000         50    1200
PIG                          1        1.0      12       50       2000         10    500
Available  ≤1000    ≤2000    ≤500     ≤10000   ≤25000   ≤120000  Maximize
In addition to the constraints and coefficient matrix implied by Table 4, Karel included in the linear program the constraint that at least 40% of the cars sold are small cars: MINI + CPE + SUV ≥ 0.4(MINI + CPE + SUV + MID + SW + VAN + LUX + PIG). Karel triumphantly noted that the percentage of small cars in the optimal solution to Model A in Table 5 was 44.9%, easily satisfying the company requirement of 40%.
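Using the unit counts reported for the Model A solution in Table 5, the small-car share can be checked directly (a sketch; values copied from Table 5):

```python
# Check the 40% small-car policy against the Model A solution (Table 5).
units = {"MINI": 300, "CPE": 464.1, "SUV": 700, "MID": 300,
         "SW": 1000, "VAN": 100, "LUX": 50, "PIG": 350}
small = units["MINI"] + units["CPE"] + units["SUV"]
share = small / sum(units.values())
print(round(100 * share, 1))   # 44.9 -- comfortably above the 40% floor
```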
Table 5. Model A Solution

Car        4cylEng  6cylEng  8cylEng  Chrome   Plastic  Labor   Unit Profit  UNITS
MINI       1                          0.1      4        30      500          300
CPE        1                          0.2      4        32      550          464.1
SUV                 1                 0.2      5        31      550          700
MID                 1                 0.3      6        35      600          300
SW                  1                 0.4      8        40      800          1000
VAN                          1        0.4      7        42      900          100
LUX                          1        0.7      10       45      1000         50
PIG                          1        1.0      12       50      2000         350
Usage      764.1    2000     500      1177.8   21756.3  120000
Available  1000     2000     500      10000    25000    120000
Total profit: 2,610,234
Reduced costs are negative for MINI, MID, VAN, and LUX (all of which are at their minimum constraints), and about $95 for SW, which is at its maximum constraint level of 1000 (the reduced cost is 0 for all other variables, which are at neither their constrained minima nor maxima). Average miles per gallon for this plan is almost 20.0, a little lower than for the prior plan. Constraint shadow prices for resources are about $17 for 6 cylinder engines, $1,141 for 8 cylinder engines, and $17 for labor hours (0 for all other resources). Karel received recognition for generating a feasible plan, but there was some friction because the solution yielded only about $2.61 million in profit, when expectations had been set at a much higher level. He took the next job offer he received and went to work for Trabant, a longtime automotive competitor in the former East Germany.
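The fleet-average MPG quoted above follows directly from the Table 5 units and the Table 1 mileage figures (a sketch):

```python
# Fleet-average MPG of the Model A solution (Table 5 units, Table 1 mileages).
units = {"MINI": 300, "CPE": 464.1, "SUV": 700, "MID": 300,
         "SW": 1000, "VAN": 100, "LUX": 50, "PIG": 350}
mpg = {"MINI": 30, "CPE": 28, "SUV": 18, "MID": 22,
       "SW": 19, "VAN": 17, "LUX": 15, "PIG": 8}
avg = sum(units[m] * mpg[m] for m in units) / sum(units.values())
print(round(avg, 2))   # 20.05, i.e. "almost 20.0"
```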
Month 2: Management was driven by Adam Smith’s dictum to gain more profit (the American way). Analysis by Martina, the firm’s chief engineer of Czech descent who received an MBA from Fordham University, indicated that the demand limit for station wagons was preventing the firm from gaining additional profit. In fact, the company could afford to spend up to $95 per vehicle to stimulate demand for station wagons. Management brainstorming led to two ideas: one to advertise to convince the public that they needed station wagons to better provide for family driving needs, the other to lobby government officials to see the need for providing the public with what they wanted, rather than imposing additional restrictions on gas mileage targets. Careful marketing analysis indicated that, for a decreased profit rate of $10 per station wagon, the advertising campaign could lead to a monthly demand of 1600 station wagons. The next planning model, Model B, reflected these changes, leading to the second month’s solution in Table 6. This solution had exactly 40% of its mix consisting of small cars, and an average MPG of 20. Profit went up slightly (by $1,565) over the prior month, with the product mix increasing variable SW while reducing variables CPE and SUV. As profit increased, Martina was written up in an international business periodical, quit the firm, moved to Bohemia and went into private consulting.
Table 6. Model B Solution

Car        4cylEng  6cylEng  8cylEng  Chrome   Plastic  Labor   Profit  UNITS
MINI       1                          0.1      4        30      500     300
CPE        1                          0.2      4        32      550     425.9
SUV                 1                 0.2      5        31      550     564.4
MID                 1                 0.3      6        35      600     300
SW                  1                 0.4      8        40      790     1135.6
VAN                          1        0.4      7        42      900     100
LUX                          1        0.7      10       45      1000    50
PIG                          1        1.0      12       50      2000    350
Usage      725.9    2000     500      1197.3   22010.4  120000
Available  1000     2000     500      10000    25000    120000
Total profit: 2,611,799
Month 3: The US government announced that, since automobile manufacturers were evidently not taking their MPG standards seriously, it was forcing firms to produce an average of at least 22 MPG starting now. The corporate lawyer Nadezda wanted analysis to identify the costs associated with meeting government requirements of 22 MPG, and ultimately 24 MPG, to enable assessment of how much the firm should be willing to pay lobbyists to delay government implementation of their plan.

Table 7. Model C-1 Solution (MPG = 22)

Car        4cylEng  6cylEng  8cylEng  Chrome   Plastic  Labor   Profit  UNITS
MINI       1                          0.1      4        30      500     800
CPE        1                          0.2      4        32      550     200
SUV                 1                 0.2      5        31      550     500
MID                 1                 0.3      6        35      600     1364.5
SW                  1                 0.4      8        40      790     100
VAN                          1        0.4      7        42      900     100
LUX                          1        0.7      10       45      1000    50
PIG                          1        1.0      12       50      2000    317.9
Usage      1000     1964.5   467.9    1062.2   20501.2  120000
Available  1000     2000     500      10000    25000    120000
Total profit: 2,458,408 (3432.4 cars)
Nadezda judged that they couldn’t hire an effective lobbyist for so small an amount, so she recommended that the firm change its production plan in the short run. As shown in Table 7, the 22 MPG plan reduced profit by $153,391 (from $2,611,799 to $2,458,408). Table 8 shows that the plan that was optimal at 24 MPG involved an even greater dent in monthly profit:
Table 8. Model C-2 Solution (MPG = 24)

Car        4cylEng  6cylEng  8cylEng  Chrome   Plastic  Labor   Profit  UNITS
MINI       1                          0.1      4        30      500     800
CPE        1                          0.2      4        32      550     200
SUV                 1                 0.2      5        31      550     500
MID                 1                 0.3      6        35      600     395.0
SW                  1                 0.4      8        40      790     100
VAN                          1        0.4      7        42      900     100
LUX                          1        0.7      10       45      1000    50
PIG                          1        1.0      12       50      2000    10
Usage      1000     995.0    160      463.5    10990.0  70675.0
Available  1000     2000     500      10000    25000    120000
Total profit: 1,261,001 (2155.0 cars)
Attaining an average MPG of 24 with current resources would lead to a loss of profit of over $1.35 million per month. This caught the board’s attention, and Nadezda was given approval to hire a lobbyist. However, the political climate turned out to be anti-lobbyist that month, so Nadezda quit the firm to go to work for the European Union in Brussels.
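The cost of the tightening MPG floor is simply the difference between the reported optimal profits of Models B, C-1 and C-2 (a sketch):

```python
# Monthly cost of the MPG floor, from the reported optimal profits.
profit = {"B (no floor)": 2611799, "C-1 (22 MPG)": 2458408, "C-2 (24 MPG)": 1261001}
cost_22 = profit["B (no floor)"] - profit["C-1 (22 MPG)"]   # 153,391
cost_24 = profit["B (no floor)"] - profit["C-2 (24 MPG)"]   # 1,350,798
print(cost_22, cost_24)
```

These differences are the most a rational lobbying budget could be: $153,391 per month of delay at 22 MPG, and over $1.35 million per month at 24 MPG.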
Month 4: Under the leadership of Václav, those remaining with the firm implemented the plan meeting the short-range target of 22 MPG average, and hired extra engineers to work on improving mileage on their vehicles. For $100,000 per month, this staff was able to increase mileage on MINIs to 33 MPG, on CPEs to 30 MPG, on MIDs to 23 MPG, and on LUXs to 16 MPG. The chief of operations, Tomas, felt that increasing the labor force would improve profits, as the firm had always been running out of labor. He suggested increasing labor pay to $18 per hour, which resulted in 130,000 labor hours per month available. This reduced profit per car slightly, as did the extra engineering staff. The new solution reflecting these changes is given in Table 9.

Table 9. Model D Solution (with Extra Labor, Increased Mileage)

Car        4cylEng  6cylEng  8cylEng  Chrome   Plastic  Labor   Profit  UNITS  MPG
MINI       1                          0.1      4        30      470     800    33
CPE        1                          0.2      4        32      518     200    30
SUV                 1                 0.2      5        31      519     500    18
MID                 1                 0.3      6        35      565     450    23
SW                  1                 0.4      8        40      790     1050   19
VAN                          1        0.4      7        42      858     100    17
LUX                          1        0.7      10       45      955     50     16
PIG                          1        1.0      12       50      1950    350    8
Usage      1000     2000     500      1200     23000    127600
Available  1000     2000     500      10000    25000    130000
Total profit: 2,638,900 (3500 cars, average 22 MPG)
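Model D's totals confirm that the plan just meets the 22 MPG floor and that the engineering spend pays for itself (a sketch with the Table 9 data):

```python
# Model D (Table 9): fleet MPG and the economics of the engineering staff.
units = {"MINI": 800, "CPE": 200, "SUV": 500, "MID": 450,
         "SW": 1050, "VAN": 100, "LUX": 50, "PIG": 350}
mpg = {"MINI": 33, "CPE": 30, "SUV": 18, "MID": 23,
       "SW": 19, "VAN": 17, "LUX": 16, "PIG": 8}
profit = {"MINI": 470, "CPE": 518, "SUV": 519, "MID": 565,
          "SW": 790, "VAN": 858, "LUX": 955, "PIG": 1950}
fleet_mpg = sum(units[m] * mpg[m] for m in units) / sum(units.values())
total = sum(units[m] * profit[m] for m in units)
gain = total - 2458408          # vs. the Model C-1 profit
print(fleet_mpg, total, gain)   # 22.0, 2638900, 180492
```

The $180,492 gain over Model C-1 more than covers the $100,000 monthly cost of the engineering staff.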
The solution to Model D increased profit by over $180,000, which more than paid for the extra cost of engineers (about $100,000 per month). Tomas was given a promotion and raise, along with guaranteed job security for another month. Month 5: The next month found a shift in market demand, in great part caused by skyrocketing gasoline prices. Marketing Research cautioned that new maximum demands were anticipated, with higher demand for MINI and CPE, and drops in demand for the four largest models. The revised market demand figures and the corresponding optimal solution to Model E are given in Tables 10 and 11, respectively:

Table 10. Revised Market Demand

Variable  Style          Class   Minimum  Maximum  Profit  Mileage
MINI      Small car      Small   300      1800     470     33
CPE       Coupe          Small   200      4000     518     30
SUV       OverRoller     Medium  500      2000     519     18
MID       Midsized       Medium  300      5000     565     23
SW        Station wagon  Medium  100      1500     790     19
VAN       Van            Large   100      1000     858     17
LUX       Luxury         Large   50       600      955     16
PIG       Super luxury   Large   10       200      1950    8
Table 11. Model E Solution (Plan Under Changed Market Demand)

Car        4cylEng  6cylEng  8cylEng  Chrome   Plastic  Labor   Profit  UNITS  MPG
MINI       1                          0.1      4        30      470     600    33
CPE        1                          0.2      4        32      518     400    30
SUV                 1                 0.2      5        31      519     500    18
MID                 1                 0.3      6        35      565     300    23
SW                  1                 0.4      8        40      790     1200   19
VAN                          1        0.4      7        42      858     100    17
LUX                          1        0.7      10       45      955     200    16
PIG                          1        1.0      12       50      1950    200    8
Usage      1000     2000     500      1190     23000    128000
Available  1000     2000     500      10000    25000    130000
Total profit: 2,533,000 (3500 cars, average 22 MPG)
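The Model E totals behind the "$105,000" figure in the following paragraph can be verified from Tables 10 and 11 (a sketch):

```python
# Model E (Table 11): profit and the month-over-month drop.
units  = {"MINI": 600, "CPE": 400, "SUV": 500, "MID": 300,
          "SW": 1200, "VAN": 100, "LUX": 200, "PIG": 200}
profit = {"MINI": 470, "CPE": 518, "SUV": 519, "MID": 565,
          "SW": 790, "VAN": 858, "LUX": 955, "PIG": 1950}
total = sum(units[m] * profit[m] for m in units)
drop = 2638900 - total          # vs. Model D's profit
print(total, drop)              # 2533000 105900
```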
This solution showed a reduction in monthly profit of over $105,000, which sent investors into a depression, reducing the price of the stock and leading to the firing of Tomas. However, Václav retained his position, promising better times to come when a more favorable business regulatory climate was obtained and the government took more effective action to reduce the outrageously high price of gasoline. Month 6: At the time, Václav’s aunt was dating Markku, a Finnish employee of Volkswagen familiar with multiple criteria decision making. Specifically, Markku recalled seeing a
demonstration years ago of an interactive multiple criteria software product called VIG, a system in which constraint levels could be treated in a flexible manner, rather than as given. In other words, by defining those constraints for which this was appropriate as goals rather than hard constraints, the user could view tradeoffs between profit, various resource levels and policy levels for MPG graphically during an interactive process (Korhonen and Laakso 1986; Korhonen and Wallenius 1988). The user interface was graphically based and did not require the decision maker to specify any numeric weights. Instead, the user was presented with various nondominated solutions, and could change the direction of the search for the “most preferred solution.” A solution is said to be nondominated if none of the goal levels in the current solution can be improved without sacrificing the level of at least one of the other goals. The user interaction in VIG was very intuitive, resembling driving a car on the surface of nondominated solutions (including shifting gears, backing up and changing direction). The advantage of using VIG (or a similar interactive multicriteria software product) is that the resource inventory strategy can be fine-tuned according to the demand for these resources under the current conditions (as opposed to the original rule, in which the resources available were fixed, based on an entirely different demand situation in the past). For instance, reflecting on the current situation, Václav noted that the company had far too much chrome alloy in inventory, tying up a substantial amount of money. Clearly, the company could make do with reduced inventory levels of chrome. Rethinking the decision rules that the company had had in place for an extended period, Václav reconsidered corporate strategy and (with some strong-arming) renegotiated with retailers to eliminate the contractual minimum requirements.
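The notion of nondominance that VIG searches over can be made concrete with a tiny filter (our own sketch, not VIG code; the candidate (profit, MPG) pairs are invented, with both criteria to be maximized):

```python
# Keep only nondominated (profit, mpg) pairs, both criteria maximized.
def nondominated(points):
    return [p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                       for q in points)]

candidates = [(2533000, 22.0), (2610000, 21.5), (2778563, 22.0),
              (2700000, 23.5), (2650000, 21.0)]
front = nondominated(candidates)
print(sorted(front))   # [(2700000, 23.5), (2778563, 22.0)]
```

Only the two pairs that cannot be improved in one criterion without losing in the other survive; an interactive method such as VIG lets the decision maker steer across exactly this frontier.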
This was not as difficult as it might seem, because the argument to let the market decide which models are viable, without maintaining an artificial inventory of models that customers do not want to buy, appealed to retailers. Václav also discovered that nobody really remembered the rationale for the rule of at least 40% small vehicles (probably it was introduced at a time when this measure was deemed to create a positive image with an environmentally conscious target market), but over time fuel efficiency had improved to the point that the 40% restriction was no longer needed; hence this constraint was dropped. The changes in the model are reflected in Model F in Table 12. Implementing these flexibilities, and exploring various tradeoffs between profit levels, average MPG and resource levels, Markku and Václav came up with the “most preferred” scenario in Table 13.

Table 12. Model F Variables

Variable  Style          Class   Minimum  Maximum  Profit  Mileage
MINI      Small car      Small   0        1800     470     33
CPE       Coupe          Small   0        4000     518     30
SUV       OverRoller     Medium  0        2000     519     18
MID       Midsized       Medium  0        5000     565     23
SW        Station wagon  Medium  0        1500     790     19
VAN       Van            Large   0        1000     858     17
LUX       Luxury         Large   0        600      955     16
PIG       Super luxury   Large   0        200      1950    8
Table 13. Model F Solution

Car        4cylEng  6cylEng  8cylEng  Chrome   Plastic  Labor   Profit  UNITS  MPG
MINI       1                          0.1      4        30      470     0      33
CPE        1                          0.2      4        32      518     1300   30
SUV                 1                 0.2      5        31      519     377    18
MID                 1                 0.3      6        35      565     112    23
SW                  1                 0.4      8        40      790     1500   19
VAN                          1        0.4      7        42      858     0      17
LUX                          1        0.7      10       45      955     284    16
PIG                          1        1.0      12       50      1950    200    8
Usage      1300     1989     484      1367.8   24977    139977
Total profit: 2,778,563 (3773 cars, average 22 MPG)
In the “most preferred” solution of Table 13, monthly profit was improved substantially (from $2,533,000 to $2,778,563) by abandoning the MINI and VAN models and focusing instead on the remaining models. This solution had previously been infeasible in the presence of the minimum production requirements. In the most preferred solution, the number of CPE was greatly increased, as was the level of SW. LUX was increased by almost 50% as well. In the decision process, the amounts of resources (plastic, chrome and labor) were allowed to vary within reason around their original levels. While the optimal production scheme did require some additional labor, there was no need (actually, there never was) to maintain large quantities of chrome. In the most preferred solution, only 1,368 tons of chrome were needed, while the original resource level was 10,000 tons. The need for plastic was about the same as in earlier solutions. Always having an affinity for the Marketing angle, Václav noted that in the solution to Model F, SW and PIG were at their maximum demand levels, suggesting that an advertising campaign stimulating demand for these models might be most fruitful. The Board of Directors surprised Václav and Markku with a nice end-of-the-year bonus in the form of company stock options, after which Václav realized his long-held dream and retired comfortably to Karlovy Vary in order to enjoy the spas there and devote time to his passion for writing poetry and prose. Markku received a nice promotion within Volkswagen.
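The Model F figures quoted above check out against Table 13 (a sketch):

```python
# Model F (Table 13): profit and chrome requirement under the flexible design.
units  = {"MINI": 0, "CPE": 1300, "SUV": 377, "MID": 112,
          "SW": 1500, "VAN": 0, "LUX": 284, "PIG": 200}
profit = {"MINI": 470, "CPE": 518, "SUV": 519, "MID": 565,
          "SW": 790, "VAN": 858, "LUX": 955, "PIG": 1950}
chrome = {"MINI": 0.1, "CPE": 0.2, "SUV": 0.2, "MID": 0.3,
          "SW": 0.4, "VAN": 0.4, "LUX": 0.7, "PIG": 1.0}
total = sum(units[m] * profit[m] for m in units)
chrome_needed = sum(units[m] * chrome[m] for m in units)
print(total, round(chrome_needed, 1))   # 2778563 1367.8
```

The chrome requirement of roughly 1,368 tons against the original 10,000-ton stock is the clearest de novo lesson of the example: the "given" resource levels were never the right ones to hold.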
Conclusions

The small example presented in this paper shows that managers (decision makers) need to continually evaluate and re-evaluate the system as well as its environment, and must always be open to opportunities to plan more efficiently and effectively. It is crucial never to take the current system for granted: over time the system will most likely require adjustment, not only due to changes in the legal and competitive environments, but also because the goals and objectives of the organization evolve over time (Zeleny 2005a). Organizational forces will resist these changes, so the manager must be forceful and persistent. The key concept in our example is to design an optimal system based on all available knowledge; to achieve this one requires an effective knowledge management system and flexible, creative employees who understand the general long term management goals
and objectives. This is consistent with Zeleny’s de novo programming (Zeleny 1981; 1982; 1986; Shi 1995), his eight basic concepts of optimality (1998; 2005a; 2005b) and his linear programming examples (Zeleny 1981; 2005a). Specifically, our example illustrates Zeleny’s (1998) concept of optimal system design with a single objective (a changing environment, necessitating modified constraint levels and constraint matrix coefficients), and the concept of optimal pattern matching with a single objective (modified constraint levels, modified constraint matrix coefficients, modified objective function coefficients). Our example could easily be expanded by, e.g., considering the MPG restriction as a criterion (to be maximized), considering explicit tradeoffs between the criteria, and re-analyzing the system with modified coefficients for the criteria. A justification for converting the MPG constraint to a criterion could be a change in government law that allows companies to self-monitor fuel efficiency, rather than complying with strict limitations. Rarely will an optimization problem be of a fixed nature: real life problems call for flexibility and evolve over time, almost by definition. Zeleny’s contributions to the understanding and implementation of flexible systems are of great value to the profession. The authors have used his 1981 article on de novo programming numerous times in their teaching, and consider the underlying concepts and ideas crucial building blocks in the development of each student’s critical thinking skills and abilities.
References

[1] Korhonen, P. and J. Wallenius, “Pareto Race,” Naval Research Logistics, 35:6, 1988, 615–623.
[2] Korhonen, P. and J. Laakso, “A Visual Interactive Method for Solving the Multiple Criteria Problem,” European Journal of Operational Research, 24, 1986, 277–287.
[3] Shi, Y., “Studies on Optimum-Path Ratios in Multicriteria De Novo Programming Problems,” Computers & Mathematics with Applications, 29:5, 1995, 43–50.
[4] Zeleny, M., “On the Squandering of Resources and Profits via Linear Programming,” Interfaces, 11:5, 1981, 101–107.
[5] Zeleny, M., Multiple Criteria Decision Making. New York: McGraw-Hill Book Company, 1982.
[6] Zeleny, M., “Optimal System Design with Multiple Criteria: De Novo Programming Approach,” Engineering Costs and Production Economics, 10, 1986, 89–94.
[7] Zeleny, M., “Multiple Criteria Decision Making: Eight Concepts of Optimality,” Human Systems Management, 17, 1998, 97–107.
[8] Zeleny, M., “The Evolution of Optimality: De Novo Programming,” in Evolutionary Multi-Criterion Optimization, Lecture Notes in Computer Science, 2005a, 1–13.
[9] Zeleny, M., Human Systems Management: Integrating Knowledge, Management and Systems, World Scientific Publishing Company, 2005b.
Multi-Value Decision-Making and Games: The Perspective of Generalized Game Theory on Social and Psychological Complexity, Contradiction, and Equilibrium
Tom R. BURNS a and Ewa ROSZKOWSKA b
a Center for Environmental Science and Policy, Stanford University, and Uppsala Theory Circle, Department of Sociology, University of Uppsala, Box 821, 75108 Uppsala, Sweden, e-mail: [email protected]
b University of Bialystok, Faculty of Economics, 15-062 Bialystok, Warszawska 63, and Bialystok School of Economics, 15-732 Bialystok, Choroszczańska 31, Poland, e-mail: [email protected]
Abstract. Game theory in its several variants can be viewed as a major contribution to multi-agent modeling, with widespread applications in economics and the other social sciences. One development of classical game theory, Generalized Game Theory (GGT), entails its extension and generalization through the formulation of the mathematical theory of rules and rule complexes and a systematic grounding in the contemporary social sciences. Social theory concepts such as norm, value, belief, role, social relationship, and institution, as well as game, can be defined in a uniform way in terms of rules and rule complexes. Such a conceptual toolbox enables us to model social interaction taking into account economic, socio-psychological, and cultural aspects as well as incomplete, imprecise, or even false information. The article presents the foundations and applications of GGT, among others: (1) GGT provides a cultural/institutional basis for the conceptualization and analysis of games in their social context, showing precisely the ways in which social norms, values, institutions, and social relationships come into play in shaping and regulating game processes. (2) It formulates the concept of judgment as the basis of action determination. (3) GGT distinguishes between open and closed games. The structure of a closed game is fixed; in open games, actors have the capacity to transform game components such as the role components or the general “rules of the game”. Rule formation and re-formation is, therefore, a function of interaction processes. (4) GGT reconceptualizes the notions of “game solution” and equilibrium. Some “solutions” envisioned or proposed by actors with different frameworks and interests are likely to be contradictory or incompatible. Under some conditions, however, players may arrive at “common solutions” which are the basis of game equilibria.
(5) The theory distinguishes different types of game equilibria, such as instrumental, normative, social and so forth. (6) While GGT readily and systematically incorporates the principle that human actors have bounded factual knowledge and computational capability, it emphasizes their extraordinary social knowledge ability and competence: in particular, their knowledge of diverse cultural forms and institutions such as family, market, government, business or work organization, and hospitals, among others, which they bring to bear in their social relationships and game interactions. In concluding, the paper provides a scheme comparing and contrasting GGT and classical game theory on a number of central theoretical dimensions.
PART I. OVERVIEW [1]

Socially Embedded Game Theory: Social Rules, Roles, and Judgment Modalities

Game theory in its several variants can be viewed as a major contribution to multi-agent modeling. In their classic work, Von Neumann and Morgenstern (1944:49) defined a game as simply the totality of the rules which describe it. They did not, however, elaborate a theory of rules. Other limitations derive from the relatively unrealistic cognitive and social psychological assumptions of the theory, and from the weak empirical relevance and applicability of the theory to the analysis of concrete social phenomena. The cumulative critique has been massive, and its summary would require a book. Our purpose here is more constructive. One relevant development of classical game theory, Generalized Game Theory (GGT), entails an extension and generalization, overcoming in a systematic way several of the most serious limitations of classical theory. Above all, GGT has involved extending the social and cognitive-normative dimensions as well as the mathematical aspects of game theory. [2] (1) In GGT, games are conceptualized in a uniform and general way as rule complexes in which the rules may be imprecise, possibly inconsistent, and open to a greater or lesser extent to modification and transformation by the participants (Burns and Gomolińska, 1998, 2000a, 2001; Burns, Gomolińska, and Meeker, 2001; Gomolińska, 1999, 2002, 2004, 2005). Rules and rule configurations are mathematical objects (the mathematics is based on contemporary developments at the interface of mathematics, logic, and computer science). [3] GGT has developed the theory of combining, revising, replacing, and transforming rules and rule complexes. [4] The notion of rule complex was introduced as a generalization of a set of rules. Informally speaking, a rule complex is a set consisting of rules and/or other rule complexes.
The organization of rules in rule complexes provides us with a powerful tool to investigate and describe various sorts of rules with respect to their functions such as 1 2
3
4
This presentation extends and elaborates earlier papers, in particular Burns and Roszkowska (2005, 2006). One of the extensions concerns multi-criteria evaluations and decisions-making as well as multiple modalities of judgment and action determination, which relates directly to one of the many innovative initiatives which Milan Zeleny launched. He engaged, among others, myself and one of my mathematical collaborators in his very early exploration and support of alternative approaches to questions of evaluation and decision-making, in particular the modelling of multiple criteria decision-making. More than thirty-five years ago, he was involved in organizing an international conference out of which emerged the book that he and J.L. Cochrane (1973) edited and in which one of my early papers in this area appeared (Burns and Meeker, 1973). Zeleny's initiative was at that time seen by many economists and rational choice people as peripheral (if not irrelevant); only later did it become increasingly part of mainstream efforts to break out of the straightjacket of rational-choice and related utilitarian theories. The mathematical formalization of rule is the following: Let L be a language, where the object and meta levels may be not separated, and FOR the set all formulae obtained according to some formation rule. Rule r is a triary relation r∈℘((FOR)2 ×FOR such that for any triple (X,Y,γ)∈r, card(X)=card(Y)< 0. For (X,Y,γ)∈r we say that X is a set of premises, Y is a set of justifications, γ is a conclusion of r. Formally (X,Y,γ)∈r means: If all elements of X hold and all elements of Y may hold, then we conclude γ. In fact, a rule r is a default rule (Reiter R, 1980) in the language L. A rule complex is obtained according to the following formation rules: (1) Any finite set of rules is a rule complex; (2) If C1, C2 are rule complexes, then C1 ∪C2 and ℘(C1) are rule complexes; (3) If C1 ⊆ C2 and C2 is a rule complex, then C1 is a rule complex. 
In words, the class of rule complexes contains all finite sets of rules, is closed under set-theoretical union and the power-set operation, and is closed under inclusion (any subset of a rule complex is a rule complex). For any rule complexes C1 and C2, C1 ∩ C2 and C1 − C2 are also rule complexes. A complex B is a subcomplex of the complex A if B = A, or B may be obtained from A by deleting some rules from A and/or deleting redundant parentheses.
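The formation rules above translate directly into code. The following is a minimal sketch (all names are ours, not the authors'): a rule is modeled as a triple of a finite premise set, a finite justification set, and a conclusion, and the three formation operations — finite sets of rules, closure under union and power set, and closure under inclusion — are exercised directly.

```python
from itertools import combinations

def rule(premises, justifications, conclusion):
    """A rule as a triple (X, Y, γ): read as the default rule
    'if all of X hold and all of Y may hold, conclude γ'."""
    return (frozenset(premises), frozenset(justifications), conclusion)

def powerset(complex_):
    """℘(C): the rule complex whose elements are all subcomplexes of C
    (formation rule 2)."""
    items = list(complex_)
    return frozenset(
        frozenset(c) for r in range(len(items) + 1)
        for c in combinations(items, r)
    )

r1 = rule({"bird(x)"}, {"flies(x)"}, "flies(x)")   # Reiter-style default
r2 = rule({"penguin(x)"}, set(), "¬flies(x)")

C1 = frozenset({r1})        # (1) any finite set of rules is a rule complex
C2 = frozenset({r2})
C3 = C1 | C2                # (2) closed under union
C4 = powerset(C3)           # (2) closed under the power-set operation
C5 = C3 - C2                # (3) any subset of a complex is a complex
assert C5 == C1
```

Using frozensets lets complexes nest inside other complexes, mirroring the theory's point that complexes may contain rules and other complexes alike.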
T.R. Burns and E. Roszkowska / Multi-Value Decision-Making and Games
77
values, norms, judgment rules, prescriptive rules, and meta-rules as well as more complex objects composed of rules such as roles, routines, algorithms, models of reality as well as social relationships and institutions. (2) Classical game theory assumes a particular social and cultural structure – the absence of genuine social relationships and normative orders – where the actors are completely “autonomous” or independent from one another and devoid of moral character. Each player judges the situation in terms only of her own individual desires or values. There is no concern with others as such or with powerful norms and social relationships which characterize most human affairs (since we tend to be highly elaborated social animals). This barren world is illustrated by the classical rational agent who assigns values or preferences to outcomes and the patterns of interactions in terms of their implications for herself – and only herself – and tries to maximize her own narrow gains or utility. Such an extremely vacuous conception of social structure is not up to the mark for systematic social science. Actors are not only interdependent in action terms but also in social relational, institutional, and cultural-moral terms. GGT enables us to model multi-agent social systems in which the agents have a complex web of social relationships and are engaged in different roles and role relationships (see Fig. 2). GGT entails then a cultural-institutional approach to game conceptualization and analysis (Burns, 1990; Burns, 1994; Burns et al., 1985; also see Ostrom, 1990; Scharpf, 1997).[5] The general game structure can be represented by a rule complex G.
Such a rule complex may be imprecise, possibly inconsistent, and open to a greater or lesser extent to modification and transformation by the participants.[6] Given a concrete interaction situation St in context t (time, space, social and physical environment), some rules and subcomplexes of the general game structure G are activated and implemented. A well-specified game G(t) in the situation St in context t is an interaction order where the participating actors typically have defined roles and role relationships and are subject to normative regulation (see Fig. 2).[7] The G(t) complex includes then as subcomplexes of rules the players’ social roles vis-à-vis one another along with other relevant norms and rules in the situation S (and context t). A social role is a particular type of rule complex, operating as the basis of the player’s values, perceptions, judgments and actions in relation to other actors in their particular roles in the defined game. In sum, GGT treats games as socially embedded in their cultural and institutional contexts (Granovetter, 1985) (see Fig. 2). The participants – in defining and perceiving an interaction situation, assessing it and developments in it, and judging actions and consequences of actions – do so largely from the perspectives of their particular roles and social relationships in the given cultural-institutional context. The role relationships within given institutional arrangements entail contextualized rule complexes including values and norms, the modes for classifying and judging actions and for providing “internal” interpretations and meanings (Burns, 1990, 1994; Burns and Flam, 1987).

[5] Rules and rule systems are key concepts in the new institutionalism (Burns and Flam, 1987; March and Olsen, 1984; North, 1990; Ostrom, 1990; Powell and DiMaggio, 1991; Scott, 1995, among others) and evolutionary sociology (Burns and Dietz, 1992, 2001; Hodgson, 2002; Schmid and Wuketits, 1987) and are closely related to important work in philosophy on “language games” (Wittgenstein, 1958).
[6] Not all games are necessarily well-defined with, for instance, clearly specified and consistent roles and role relationships. Many such situations can be described and analyzed in “open game” terms (Burns, Gomolińska, and Meeker, 2001).
[7] Most modern social systems of interest can be characterized as multi-agent social systems in which the agents have different roles and role relationships and operate on the basis of particular judgment functions (see later). They interact (or conduct games) generating interaction patterns, outcomes, and developments. A two-role model (see Fig. 2) is considered below.

(3) Classical game theory makes heroic and largely unrealistic assumptions about the cognitive and computational capabilities of players. Among other things, it assumes complete, shared, and valid knowledge of the game. Also, unrealistic assumptions are made about the abilities of players to compute (for example, payoffs and, in some variants, the maximization of payoffs) and about the consistency of their preferences or utilities. The player is an egoist who at the same time tries to be a strategist, taking into account how other(s) might respond to her and whether or not her own choice or action is the “best response” to others’ expected actions (see below). She “takes into account” the other only in order to make a best choice for self. Each actor searches through her action space (as in the 2-person game) and finds that action which is the best response to “the best of other(s)”. GGT treats the information available – or the knowledge of the participants – as variables. In most interaction situations, information is far from complete, is usually imprecise (or fuzzy), and may even be contradictory (Burns and Roszkowska, 2002, 2004; Roszkowska and Burns, 2000). Moreover, information is typically distributed unequally among players or utilized by them in diverse ways, including even ineffective ways. The level and quality of knowledge of a player i is representable in GGT as a knowledge complex. This complex may be modified during the course of the game. Some information, which classical game theory would consider essential, may be nonessential in particular GGT games. For instance, payoffs might not be precisely specified or might be altogether unknown to one or more of the participants. The implications of these conditions differ depending on the established social relationships among the players.
Those in solidarity relationships would be inclined to trust in one another’s good will in dealing cooperatively with many types of problems confronting them. That is, they tend to rely on cooperative potentials inherent in their relationships. Information about individual payoffs would not be essential in many games where the players have strong underlying solidarity relationships, which would predispose them to “correct” ex post unfair results or developments. The actors are predisposed to focus especially on the characteristics of the action (“cooperativeness”) and interactions (“reciprocity”). Moreover, in the face of a veil of ignorance (ex ante) or unanticipated consequences (ex post), they would expect that they could together solve emergent problems (of course, there may be cases where solutions fail to materialize and even “betrayals” occur). In games where agents are alienated from one another, they experience high uncertainty and would want substantially more information not only about outcomes but also about the “character” of other players and their established ways to interpret and enact rules. In cases where such information is unavailable, players tend to rely on standard operating procedures and habitual modalities, which require much less information, or information of another type than that required for the instrumental modality. Finally, in open games, there is never full information. Actors generate information as they develop strategies in the game and as the game unfolds, transforming selected rules and rule complexes. In sum, in GGT, players’ knowledge may be only partial, possibly even invalid to varying degrees. Cognitive and computational capabilities are strictly bounded and, at the same time, may vary substantially among players. Judgment and action determinations are also likely to vary, for instance due to the different roles actors play and possibly their different interests in the interaction situation. Their interactions and outcomes depend in part on their beliefs as well as estimates of one another’s beliefs, values, and judgment qualities. They operate with models of the situation. These constructions may contain incomplete and imperfect information (and possibly even false information) (Burns and Gomolińska, 2001; Burns and Roszkowska, 2004). Also, communication processes among players may entail persuasion and deception which influence beliefs, evaluations, and judgments in the course of game processes. GGT thus starts to approach the complexity and peculiarities of actual social games.

(4) GGT is based on the principle of multiple modalities, which takes the place of rational choice theory’s single principle of maximizing utility: there are several distinct modalities of action determination, each with its own “logic” (Burns and Gomolińska, 2000; Burns, Gomolińska, and Meeker, 2001). At the same time, the theory encompasses instrumental rationality as a special type of modality corresponding in some respects to the rational choice approach of game theory, but allows for much more variability in the information and calculation conditions than does classical theory. It also encompasses additional modes of decision-making and action that are fully intelligible and empirically grounded, but are not reducible to the principle of rational choice. Modalities differ in the dimensions of action and interaction on which they focus and operate. A modality’s focus may be, for instance, on: (i) the outcomes of the action (“consequentialism” or “instrumental rationality”); (ii) compliance with a norm or law prescribing particular action(s) (“duty theory”); (iii) the emotional qualities of the action (“feel good theory”); (iv) the expressive qualities of the action (action oriented to communication and the reaction of others as in “dramaturgy”); or (v) combinations of these.
Each modality entails a logic of generating or determining action with a particular judgment calculus, requiring as inputs specific types of data or information and generating particular evaluative, decisional and action outputs. Each modality is a particular way of paying attention and of organizing and selecting situational data in the interaction situation St; it activates particular rule complexes and applies salient values, norms, and routines in making judgments and determining action. A narrow focus on outcomes as in the modality of instrumental rationality – ignoring the qualities, including ethical qualities, of action and interaction – implies that actors behave as if “the ends justify the means.” This of course over-simplifies judgmental computations. But the same one-sidedness and imbalance characterize those who focus only on the intrinsic qualities of actions, ignoring outcomes as in normative or procedural rationality. A narrow focus on the intrinsic properties of action treats certain action(s) as “right” regardless of outcomes, even catastrophic ones. However, once actors are motivated by and take into account multiple values – for instance, considering ethical qualities of actions as well as their instrumental outcomes – they are likely to be faced with dilemmas and tendencies to blocked or erratic behavior (Burns, Gomolińska, and Meeker, 2001) (see later). Role incumbents focus on specific qualia in particular contexts because, among other reasons, (1) such behavior is prescribed by their roles, (2) such behavior is institutionalized in the form of routines, or (3) the actors lack time, sufficient information, or computational capability to deal with other dimensions (qualia).
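As an illustration of how distinct modalities attend to different dimensions of the same action, consider this hypothetical sketch (function and field names are ours, not the authors'): the same candidate action receives different judgments under the instrumental and normative modalities.

```python
# Judgment under multiple modalities: each branch inspects a different
# dimension of a candidate action, per the four foci (i)-(iv) in the text.
def judge(action, modality, context):
    if modality == "instrumental":      # (i) focus on outcomes/payoffs
        return context["payoff"][action] >= context["aspiration"]
    if modality == "normative":         # (ii) compliance with a prescribing norm
        return action in context["prescribed_actions"]
    if modality == "emotional":         # (iii) felt quality of the action
        return context["feels_good"].get(action, False)
    if modality == "dramaturgical":     # (iv) expressive/communicative quality
        return context["audience_reaction"].get(action, 0) > 0
    raise ValueError(f"unknown modality: {modality}")

ctx = {
    "payoff": {"cooperate": 2, "defect": 3},
    "aspiration": 3,
    "prescribed_actions": {"cooperate"},
    "feels_good": {"cooperate": True},
    "audience_reaction": {"cooperate": 1, "defect": -1},
}

# The same action is judged differently under different modalities:
assert judge("defect", "instrumental", ctx) is True    # best payoff
assert judge("defect", "normative", ctx) is False      # violates the norm
```

The dilemmas noted above arise precisely when an actor applies several modalities at once and they disagree, as they do for "defect" here.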
Thus, games may be played out in different ways, as actors, operating within opportunity structures and constraints, determine their choices and actions and, in general, exercise their agency (Burns and Roszkowska, 2002, 2004, 2005b; Burns et al., 2005a).

(5) The characterization of games as open or closed (or varying in their type and degree of openness) is one of the major features of GGT. Classical games are special
cases of closed games with specified players, action alternatives and outcomes and particular, anomic-type relationships among players (Burns, Gomolińska, and Meeker, 2001). Such closed game situations with specified alternatives and outcomes are distinguishable from open game situations. In the case of open games, actors may construct and elaborate strategies and outcomes in the course of interaction, for instance in the case of a bargaining game in market exchange (Burns et al., 2001). In such bargaining processes, established social relationships among the actors involved guide the construction of options and the patterns of interaction and outcomes. In bargaining games there is a socially constructed “bargaining space” (settlement possibilities) varying as a function of the particular social relationship in the context of which the bargaining interactions take place. The relationship – the particular social rules and expectations associated with the relationship – makes for greater or lesser deception and communicative distortion, greater or lesser transaction costs, and likelihood of successful bargaining. The difficulties – and transaction costs – of reaching a settlement are greatest for agents who are enemies or pure rivals. They would be more likely to risk missing a settlement than pragmatic “egoists”. This is because rivals tend to suppress the potential cooperative features of the game situation in favor of pursuing their rivalry. Pure “egoists” are more likely to effectively resolve some of the collective action dilemmas in the bargaining setting. Friends may exclude bargaining altogether as a precaution against undermining their friendship relationship. Or, if they do choose to conduct business together, they would tend to make sacrifices and, otherwise, accommodate one another.
However, their mutual tendencies to self-sacrifice may also make for negotiation difficulties and increased transaction costs in reaching a settlement (Burns, Gomolińska, and Meeker, 2001). (6) Game transformation in GGT follows from the notion of “open games”. It is conceptualized in terms of the rewriting (updating and revising) as well as restructuring of rules and rule complexes: agents may modify rules, may throw some out, introduce new rules, or activate (or deactivate) them; a transformation may also consist of a combination of several such operations. Transformative operations are likely to be undertaken when one, several, or all players in a game find no game consequences acceptable, for instance, the non-optimal outcome of “rationally” based non-cooperation in the PD game. The game rules that have led to this outcome may be rejected by some of the players; they would try instead to introduce, for instance, coordination rules – that is, they would take initiatives to establish an institutional arrangement – which increase the likelihood of obtaining the optimal cooperative outcome in the PD game. Another reason for transforming games is to make them more consistent with core values and norms, or with the particular social relationship(s) among the players. For instance, players with differences in status and authority are predisposed to transform a symmetric game into an asymmetric game more appropriate for their relationship. Or similarly, actors in egalitarian or democratic relationships would be inclined to try to transform an asymmetric game (with differences in action opportunities and payoffs) into a symmetric game more compatible with their given social relationship. Such game transformations reflect, of course, not only the players’ underlying normative orientations but also their transformative capabilities. (7) “Game solution” is reconceptualized in GGT.
In classical theory, the theorist or social planner specifies an equilibrium (there may be several) which is taken as the “solution” to the game. Above we pointed out that in the case of the PD game, one or more players may reject some game rules because they prove to be ineffective or to lead to suboptimal
(even disastrous) outcomes. They respond to the dilemma by introducing, for example, particular coordination rules which increase the likelihood of obtaining the optimal (cooperative) outcome. These coordination rules are a “solution” to the “PD problem”. The transformed game structure results in one or more “common acceptable solution(s)” to the PD game. In the GGT perspective, social agents define and understand “solutions” on the basis of the institutional context, their social relationships, value complexes, and cognitive-judgment frames. They have “standpoints” from which they identify problems and propose solutions.[8] The solutions proposed may or may not converge on one or more outcome(s). A common or general game solution is a multi-agent strategy or interaction pattern that satisfies or realizes the relevant norm(s) or value(s) of the players, resulting in a state that is judged acceptable – or even satisfactory – by the game players. An “acceptable solution” is the best result attainable under the circumstances; in a certain sense this makes for an “equilibrium” state, although not necessarily a normative equilibrium (see below). Solution proposals of the actors may diverge. There might be no common solution, at least initially; in other words, no multi-agent strategy or outcome is acceptable to all participants. For instance, in a negotiation situation, the positions of the players might be too far apart, and no agreement or settlement can be reached. An “equilibrium” in such a game is then the state of not bargaining or playing the game (Roszkowska and Burns, 2002). What is judged a solution for one agent (or several agents) from a particular perspective or perspectives may be judged as a problem from the particular perspective(s) of other players. In other words, any game may entail particular “problems” for one or more players, while others may not experience a “problem” in the situation.
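The rule-rewriting operations behind such a transformation — introducing coordination rules and dropping the rules that produced the suboptimal outcome — can be sketched as set operations on a (here, heavily simplified) rule complex. All names and rule labels below are illustrative, not from the source.

```python
# Game transformation as rewriting of a rule complex: rules may be
# introduced, thrown out, or deactivated (kept but not in force).
def transform(complex_, add=(), drop=(), deactivate=()):
    active = (set(complex_["active"]) | set(add)) - set(drop) - set(deactivate)
    inactive = (set(complex_["inactive"]) | set(deactivate)) - set(add)
    return {"active": active, "inactive": inactive}

pd_game = {
    "active": {"choose independently", "no communication"},
    "inactive": {"coordinate on cooperation"},
}

# Players reject the rules producing mutual defection and introduce
# a coordination rule instead:
reformed = transform(
    pd_game,
    add={"coordinate on cooperation"},
    drop={"no communication"},
)
assert "coordinate on cooperation" in reformed["active"]
assert "no communication" not in reformed["active"]
```

The reformed complex is the "institutional arrangement" of the text: the coordination rule is now in force, raising the likelihood of the cooperative outcome.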
Realizing a norm or value or achieving a goal is a “solution” to the problem of unrealized goals, values, or norms. The players may have different views on satisfactory or even acceptable “solutions”. Or the differences may occur between individual and collective agents. Thus, we distinguish situations where proposed solutions are convergent (multiple actors find them acceptable or even highly satisfactory) from situations where the solutions proposed by different agents contradict one another – they are divergent proposals. Clearly, not every game has a common solution, that is, the basis of a game equilibrium (Roszkowska and Burns, 2002). (8) GGT reconceptualizes game equilibrium. GGT distinguishes different types of game equilibria (Burns and Roszkowska, 2004, 2005a, 2006; Burns et al., 2005a). One such is the Nash equilibrium. It is a game state from which no actor in the game can improve his or her individual situation by choosing an action or outcome differing from this equilibrium. Elsewhere (Burns and Roszkowska, 2004) we have generalized the Nash equilibrium in terms of our conceptualization of players’ judgment complexes and their evaluative judgments.[9] As indicated earlier, an interaction or game equilibrium is a type of common solution where the participants find a particular interaction pattern or outcome acceptable or even satisfactory. The key to this conception lies in the judgment processes

[8] The theorist (as well as arbitrators) also has “standpoints” and can propose “solutions”. Whether the players accept such solutions is another matter.
[9] The Nash equilibrium entails m individual solutions which aggregate to a type of common solution, which is an equilibrium under some limited conditions (Burns and Roszkowska, 2004).
whereby “problems” are “solved” or partially solved. When there is convergence in the solutions, then an equilibrium state is possible. If there is divergence, however, then no equilibrium obtains (unless “solutions” are imposed, for instance, by a dictator). The players endorse or pursue their different, incompatible “solutions.” In many game situations, players are normatively or culturally interdependent in that they belong to and participate in an established social group or organization, or interact in the context of established normative controls. The agents acting collectively or in an organized way (for example, through an authority or voting procedure) judge game patterns and outcomes with particular consequences from the perspective of a common norm complex applied to the game players. The consequences may refer to the interaction itself (as in performing a ritual properly) or the outcomes (the distribution of goods or bads), or both. The production of normatively satisfying patterns of interaction or outcomes is the basis of a major GGT concept, namely normative equilibrium (Burns and Roszkowska, 2004, 2005a, 2006). Normative equilibria are a function of (1) the particular relationship(s) among the actors and the value or norm vI appropriate or activated in the situation S at a given time t and (2) the concrete situation S in which rule complexes are applied: the action possibilities found or constructed in the situation and the consequences attributed or associated with the action(s). The participants know (or believe) that others accept or are committed to these equilibria — or to the rules that produce them. This makes for a “social reality” which is more or less predictable; it provides a space for planning and developing complex, individual and collective strategies.
Normatively based game equilibria are patterns or sets of consequences generated through actors realizing – or anticipating the realization of – situationally relevant values and norms (or, the collective patterns and consequences are judged in themselves to realize or satisfy shared values). Such interaction patterns and outcomes have normative force and contribute to institutional order(s). An activity, program, outcome, condition or state of the world is in a normative equilibrium if it is judged to realize or satisfy appropriate norm(s) or value(s) vI in the situation S for each and every participant. The normative equilibria associated with performances of roles, norms, and institutional arrangements make for social facts and “focal points”[10] to which participants orient (Schelling, 1963; Burns and Roszkowska, 2004, 2005a, 2006). While the concept of normative equilibria may be applied to role performances and to individuals who follow a norm, we have especially utilized the concept in terms of game normative equilibria in a given institutional and situational context. This means that the game participants judge an m-tuple aI = (a1, a2, ..., ai, ..., am) on the basis of whether it realizes or satisfies vI, where vI represents a collective norm, normative procedure, or institutional arrangement. Examples of procedures to produce normative equilibria are democratic processes, adjudication, and negotiation as well as the exercise of legitimate authority; they are particularly relevant as devices to resolve conflict under conditions of contentiousness (Burns and Roszkowska, 2007) (see later).
[10] Schelling (1963: 57–58) refers also to “clues,” “coordinators” that have “some kind of prominence or conspicuousness.” From a conceptual point of view, his characterization is vague. For instance, “But it is a prominence that depends on time and place and who people are. Ordinary folk lost on a plane circular area may naturally go to the center to meet each other… But in the final analysis we are dealing with imagination as much as with logic… Poets may do better than logicians at this game.”
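The normative-equilibrium test described above — an action m-tuple aI is in normative equilibrium if every participant judges it to realize the salient norm or value vI — can be sketched directly. This is an illustrative reading with our own names; the judges here apply a shared reciprocity norm as a stand-in for vI.

```python
# An m-tuple of actions is in normative equilibrium iff each and every
# participant judges it to realize/satisfy v_I in the situation.
def normative_equilibrium(action_tuple, players, judges):
    """judges[i](action_tuple) encodes player i's judgment that the
    pattern realizes the appropriate norm(s)/value(s) v_I."""
    return all(judges[i](action_tuple) for i in players)

players = [0, 1]
# Both players apply a shared reciprocity norm: the pattern satisfies
# v_I only if the two actions match (e.g. mutual cooperation).
judges = {i: (lambda a: a[0] == a[1]) for i in players}

assert normative_equilibrium(("cooperate", "cooperate"), players, judges)
assert not normative_equilibrium(("cooperate", "defect"), players, judges)
```

Note the "each and every participant" quantifier: one dissenting judgment (a divergent "solution") is enough to destroy the equilibrium, matching the convergence/divergence discussion above.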
There may also be stable game patterns which are not normative equilibria in that they lack moral force or necessary legitimacy. Game players might, nevertheless, accept them because they perceive them to be the best possible options under the circumstances (as in the Nash equilibrium). For instance, in closed games, there are interaction patterns which do not permit the full realization or satisfaction of important values to which participants are oriented. They may accept the patterns pragmatically or conditionally — as long as they are constrained to play the given game. But such equilibria – lacking players’ commitments, and confidence or trust in them – cannot be enduring (as in the case of Nash equilibria (Burns and Roszkowska, 2004, 2005a, 2006)). This also applies to equilibria that are imposed, that is, collective solutions imposed by dictators and dominant groups. Inherently, such solutions are equilibria only to the extent that the dictator or adjudicator can force the participants to comply with the imposition; only then are they stable. In the following section of the article, we elaborate on the ideas outlined here, before going on to a few applications which illustrate GGT analysis.
PART II. ELABORATIONS

1. Role Based Game Theory

Suppose that a group or population I = {1, …, m} of actors is involved in a situationally defined game G(t). ROLE(i,t,G) denotes actor i’s role complex in G(t) at moment t ∈ T (we drop the “G” indexing of the role):[11]

ROLE(i,t) ⊆g G(t), where t ∈ T.   (1)
The game structure G(t), at moment t ∈ T, consists then of a configuration of two or more roles together with R, that is, some general rules (and rule complexes) of the game:

G(t) = [ROLE(1,t), ROLE(2,t), …, ROLE(k,t); R].   (2)
R contains rules and rule complexes which describe and regulate the game, such as the general “rules of the game”, general norms, practical rules (for instance, initiation and stop rules in a procedure or algorithm) and meta-rules indicating, for instance, how seriously or strictly the roles and rules of the game are to be implemented, and also possibly rules specifying ways to adapt or to adjust the rule complexes to particular situations. The role complexes include, among other things: particular beliefs or rules that define the reality of relevant interaction situations; norms and values relating, respectively, to what to do and what not to do and to what is good or bad; repertoires of strategies, programs, and routines; and a judgment complex to organize the determination of decisions and actions in the game. As indicated earlier, GGT has identified and analyzed several types of judgment modalities, for instance: routine or habitual, normative, and instrumental modalities.

[11] A ⊆g B represents that A is a subcomplex of B.

Figure 1. Relationships between components of complex G(t) for two actors.

The rule complexes of a game in a particular social context guide and regulate the participants in their actions and interactions at the same time that, in “open games”, the players may pursue their purposes and goals, restructuring and transforming the game and, thereby, the conditions of their actions and interactions. Role relationships provide then contextualizing frames of appropriate rules including values and norms, particular ways in which actions are classified and judged, and “internal” interpretations and meanings (Burns and Flam, 1987). For instance, “noncooperation” in a prisoners’ dilemma (PD for short) situation will not be merely “defection” in the case that the actors are friends or relatives in a solidary relationship. Rather, in such a social relation, it is a form of “disloyalty” or “betrayal” and subject to harsh judgment and sanction. In the case of enemies, “defection” in the PD game would be fully expected and considered “natural” — neither shameful nor contemptible, but a right and proper rejection of or damage to the other, and, hence, not a matter of “defection” at all. Such a perspective on games enables us to systematically identify and analyze the symbolic and moral aspects associated with established social relationships. An actor’s role is specified in GGT in terms of a few basic cognitive and normative components, which are rule subcomplexes: (1) the complex of beliefs, MODEL(i,t), frames and defines the situational reality, key interaction conditions, causal mechanisms, and possible scenarios of the interaction situation; (2) there is a complex of values, VALUE(i,t), including values and norms relating, respectively, to what is good or bad and what should and should not be done in the situation; (3) there are defined repertoires of possible strategies, programs, and routines in the situation, ACT(i,t); (4) a judgment complex or function, J(i,t), is utilized by actor i to organize the determination of decisions and actions in relation to other agents in situation St.
The judgment complex consists of rules which enable the agent i to come to conclusions about truth, validity, value, or choice of strategic action(s) in a given situation. In general, MODEL(i,t), VALUE(i,t), ACT(i,t), and J(i,t) are activated in a game situation S at moment t ∈ T and are the key components in decision-making, action, and interaction (see Fig. 1). Some relationships between components of game complex G(t) are represented in Fig. 1, where “A is a subcomplex of B” is denoted by →.
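An illustrative encoding of the four role subcomplexes and the game structure of Eqs. (1)–(2) follows. All names and example content are ours; this is a sketch of the data layout, not a definitive implementation of GGT.

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    model: set = field(default_factory=set)       # MODEL(i,t): beliefs defining the situation
    value: set = field(default_factory=set)       # VALUE(i,t): values and norms
    act: set = field(default_factory=set)         # ACT(i,t): repertoire of strategies/routines
    judgment: dict = field(default_factory=dict)  # J(i,t): rules organizing decisions/actions

@dataclass
class Game:
    roles: dict        # actor id -> Role, i.e. ROLE(1,t), ..., ROLE(k,t)
    general_rules: set # R: general rules, norms, practical rules, meta-rules

seller = Role(model={"market is competitive"},
              value={"deal fairly"},
              act={"offer", "counteroffer", "accept", "exit"},
              judgment={"modality": "instrumental"})
buyer = Role(act={"bid", "accept", "exit"},
             judgment={"modality": "normative"})

# G(t) = [ROLE(1,t), ROLE(2,t); R], per Eq. (2):
g_t = Game(roles={1: seller, 2: buyer},
           general_rules={"alternate offers", "no agreement without consent"})
assert "offer" in g_t.roles[1].act
```

Nesting the role complexes inside the game complex mirrors relation (1): each ROLE(i,t) is a subcomplex of G(t).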
GGT investigates and models multi-agent social systems in which the agents have different roles and role relationships. Game players in G consist of a population of agents or a single collective agent (for instance, a group of people who are organized to make collective decisions such as a “public policy decision”). The G structure is translated into a process through the actors defined by G activating and implementing or realizing rules and subcomplexes of G in interaction situation St in context t (time, space, social and physical environment), G(t): G(t) ⊆g G, where t ∈ T. The G(t) complex includes then as sub-complexes of rules the players’ particular social roles vis-à-vis one another along with other relevant norms and rules in the situation S (and time t). Interactions or games taking place under well-defined conditions entail then the application and implementation of relevant rules and rule complexes of game complex G(t). This is usually not a mechanical process. Actors conduct situational analyses; they find that rules have to be interpreted, filled in, and adapted to the specific circumstances.[12] Some interaction processes may be interrupted or blocked because of application problems: contradictions among rules, situational constraints, social pressures from actors within G(t) and also pressures originating from agents outside the game situation, that is, in the larger social context. In general, not only do human agents apply relevant values and norms specified in their roles vis-à-vis one another in situation S, but they bring to their roles values and norms from other social relationships. For example, their roles as parents may come into play and affect performance in work roles (or vice versa). They also develop personal “interests” in the course of playing their roles, and these may violate the spirit if not the letter of norms and values defining appropriate role behavior.
More extremely, they may reject compliance and willfully deviate, for reasons of particular interests or even ideals. Finally, agents may misinterpret, mis-analyze, and, in general, make mistakes in applying and performing rules. In general, role-based behavior is not fully predictable or always reliable. Figure 2 represents the role relationship {ROLE(1), ROLE(2), R} of players 1 and 2, respectively, in their positions in an institutionalized relationship in which they play a game G(i,t) in the context t. Such role relationships typically consist of shared as well as interlocked rule complexes (see later). Different types of game structures, interaction processes, and outcomes are presented in Part III based on this type of model. In the following sections, we indicate some of the theoretical implications of GGT: namely, the symmetry/asymmetry of different role-based games, the degree of openness of games, the diverse judgment modalities of game players, and the principle of action determination, which replaces maximization of utility.
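A toy sketch of what "interlocked" rule complexes amount to (the rights/obligations representation anticipates the superordinate-subordinate example discussed in Part III and is an assumption of this illustration, not part of GGT's formalism):

```python
# Hypothetical sketch of interlocked complementary rule complexes: each
# right of actor 1 toward actor 2 (a rule in ROLE(1)) should be matched by
# a complementary obligation of actor 2 toward actor 1 (a rule in ROLE(2)).

role_1 = {"rights_toward_2": {"ask_questions", "evaluate",
                              "direct_actions", "sanction"}}
role_2 = {"obligations_toward_1": {"ask_questions", "evaluate",
                                   "direct_actions", "sanction"}}

def coherent(superordinate, subordinate):
    # Coherence of the role relationship: every right has a complement
    # (set inclusion stands in for the interlocking of rule complexes).
    return superordinate["rights_toward_2"] <= subordinate["obligations_toward_1"]

coherent(role_1, role_2)   # -> True: the complexes are fully interlocked
```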
2. Symmetric and Asymmetric Games

GGT provides a systematic theoretical basis on which to represent and analyze symmetric as well as asymmetric games (and the social structures in which they are embedded).[13] Actors are distinguished by their positions and roles in society, by the asymmetries in their relationships (superordinate/subordinate, high status/low status, master/slave), and by their different endowments and access to resources (including special information, networks, etc.) or "social capital". Such variation implies different action capabilities and repertoires. Expected patterns of interaction and equilibria will vary accordingly. Also, the actors' different information and belief components in their MODELS, their diverse values, standards, and goals in their VALUE complexes, the available repertoires of strategies (ACT), and their possibly different judgment complexes (J) for action determination are distinguishable and analyzable in GGT (see Figs 1 and 2). If such variation is specified, taken into account, and analyzed in game investigations, then empirically diverse interaction patterns and outcomes become more readily describable, understandable, and predictable.

Figure 2. Two Role Model of Interaction Embedded in Cultural-Institutional and Natural Context.

[12] More generally, GGT stresses the complex judgment process of following or applying a rule (Burns and Gomolińska, 2000). This may not be a trivial matter, as Wittgenstein (1958) and Winch (1958) pointed out. We limit ourselves to the following observations. Some of the actors in I may allege a violation of the norm. This may not entail a dispute over the norm itself, but over its application, an issue of fact. Related problems may arise: some of the actors may have conflicting interpretations of the meaning of the norm or of its particular application in the situation S. Or the participants, while adhering to the common norm, may introduce different (and possibly incompatible) rules of other sorts, potentially affecting the scope of the norm and the equilibrium in the situation.

[13] The structure of game theory limits it to describing and analyzing more or less symmetrical games.
3. Open and Closed Games

GGT distinguishes between closed and open games. Classical games (as well as parlour games) are typically closed games, with specified, fixed players, fixed value (or preference) structures and judgment complexes (for instance, maximin or some other optimization procedure), as well as fixed action alternatives and outcomes, whereas most real human games and interaction processes are more or less open. Under pure closed game conditions, these components are specified and invariant for each actor i∈I, situation St, and game G(t).[14] In open games, the actors participating in G(t) can transform one or more role components, possibly even the general "rules of the game" R. For instance, one or more players may re-construct or elaborate ACT(I,t) in the course of their interactions. Or, they may change value complexes (including changes in their preferences or utility functions), or modify their models and judgment complexes in such open games. Thus, in a bargaining process, the actors often introduce new options or strategies during the course of the negotiations – or undergo shifts in their values and judgment complexes. In such processes, the particular social relationships among the actors involved – whether relations of solidarity, anomie, or rivalry – guide the construction of options and the restructuring of interaction and outcomes. Thus, each actor i in I tends to reconstruct her role, e.g., her repertoire of actions ACT(i,t) or other role components, in the course of her interactions, in accordance with the norms and values relevant to her role or the social relationship appropriate in the situation St at time t. In the case of open interaction situations, where the players construct their actions and interactions in an ongoing process, the operational differences in cognitive and informational terms between normative and instrumental modalities, as well as other modalities, are particularly noteworthy.
With normative modality, the players construct an action (or actions) which entails or corresponds to prescribed intrinsic properties or qualities of the action (or actions). In the case of instrumental modality, the actors are supposed to produce an outcome or state of the world with prescribed features, that is, they must find or construct an action (or actions) that they believe produces or leads to the prescribed consequences – the properties of the action itself are not prescribed. Note that the instrumental modality requires a model of causality linking actions to outcomes, or enabling the specification of such linkages. Analytically, GGT distinguishes degrees of game closure (or openness). The degree of closure on the individual level may be distinguishable precisely in terms of the degree of fixedness of players’ role complexes: value, model, action, and judgment complexes at time t in game G(t). On the collective level, it would concern the degree of fixedness of the role relationships and the rules of the game.
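The contrast between closure and openness of the action complex can be sketched in code (a toy illustration; the class names and the bargaining example are assumptions, not part of GGT's formalism):

```python
# Toy sketch: in a closed game, ACT(i,t) is fixed and invariant; in an open
# game, players may reconstruct their repertoires during interaction, as in
# bargaining, where new options are introduced mid-negotiation.

class ClosedGame:
    def __init__(self, repertoires):
        # Tuples are immutable, mirroring the invariance of ACT(i,t).
        self.repertoires = {i: tuple(acts) for i, acts in repertoires.items()}

class OpenGame:
    def __init__(self, repertoires):
        self.repertoires = {i: list(acts) for i, acts in repertoires.items()}

    def introduce_option(self, player, action):
        # Players "fill in" or elaborate ACT(i,t) in the course of play.
        if action not in self.repertoires[player]:
            self.repertoires[player].append(action)

bargaining = OpenGame({1: ["offer_high"], 2: ["accept", "reject"]})
bargaining.introduce_option(1, "offer_compromise")  # a new strategy emerges
```

Degrees of closure could then be read off as how much of each repertoire (and, more generally, of each role component) is held fixed.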
[14] Open and closed games can be distinguished more precisely in terms of the properties of the action complex ACT(I,t,G) for the group of players I at time t in the game G(t) (see Burns and Gomolińska, 2000). In closed game conditions, ACT(i,t,G) is specified and invariant for each actor i in I, situation S, and game G(t). Such closure is characteristic of classical games (as well as parlour games), whereas most real human games are open. In open games, the actors participating in G(t) construct or "fill in" ACT(I,t,G), as, e.g., in a bargaining process where the actors alter their strategies or introduce new strategies during the course of their negotiations. The application of GGT to open and closed games illustrates the concrete effects of social embeddedness on game structures and processes, in particular the impact of social relationships on interaction patterns and outcomes.
4. Multi-Criteria Judgments and Judgment Modalities

In general, judgment is a universal cognitive-normative process connected with classification, cognition, evaluation, and action (Burns and Gomolińska, 2000, 2001; Burns et al., 2005a; Burns and Roszkowska, 2005a, 2005b, 2006). Judgment operates on objects such as values, norms, beliefs, data, and strategies, with conclusions such as beliefs, data, programs, and procedures (see Fig. 3). A special type of judgment of interest here is decision making. The judgment complex J(i,t) consists of rules which enable the agent i to come to conclusions about truth, validity, value, or choice of strategic action(s) in a given situation. Judgment is a process of operating on objects and entails comparing and determining the degree of similarity or goodness of fit of two or more objects of consideration in situation St at time t. Judgments can take such forms as "similar", "dissimilar", "not decided", and "almost similar". The capacity of actors to judge similarity or likeness (that is, up to some threshold, which is specified by a meta-rule or norm of stringency) plays a major part in the construction, selection, and judgment of action.[15] The focus is on the similarity of the properties of an object with the properties specified by a rule such as a value or norm. But there may also be comparison-judgment processes concerning the similarity (or difference) of an actual pattern or figure with a standard or prototypical representation (Sun, 1995). The types of objects on which judgments can operate are values, norms, beliefs, data, and strategies, as well as other rules and rule complexes. There are also different kinds of outputs or conclusions of judgment operations, such as evaluations, beliefs, data, programs, procedures, and other rules and rule complexes (see Fig. 3).

Types of Judgment. Several types of judgments can be distinguished: for instance, value, truth, action, and factual judgments.
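The qualitative judgment forms just mentioned ("similar", "almost similar", "dissimilar", "not decided") can be sketched as a threshold classification, where the thresholds stand in for a meta-rule or norm of stringency; the numeric cut-offs and the overlap measure are assumptions made purely for illustration:

```python
# Toy sketch of a similarity judgment over two sets of qualia. The overlap
# measure (Jaccard index) and the stringency thresholds are assumptions.

def overlap(q_a, q_b):
    # Jaccard similarity of two qualia sets, in [0, 1].
    if not q_a and not q_b:
        return None            # nothing to compare -> judgment undecidable
    return len(q_a & q_b) / len(q_a | q_b)

def judge_similarity(q_a, q_b, stringency=(0.8, 0.5)):
    high, low = stringency     # the meta-rule: norms of stringency
    s = overlap(q_a, q_b)
    if s is None:
        return "not decided"
    if s >= high:
        return "similar"
    if s >= low:
        return "almost similar"
    return "dissimilar"

judge_similarity({"fair", "open", "timely"}, {"fair", "open", "timely"})
# -> "similar"
judge_similarity({"fair", "open"}, {"rude"})   # -> "dissimilar"
```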
Figure 3. General Model of Judgement.

For our purposes here, we concentrate on action judgment specified by a value or norm. The action judgment process can involve one option, two options, or a set of options. In the case of a single option, the actor i estimates the "goodness of fit" of this option in relation to her values in VALUE(i,t). In the case of two options, the actor judges which of them is better (and possibly how much better). In the case of a set of three or more options, the actor chooses one (or a few) from the set as "better than the others" with respect to salient values. For example, let B be a set of possible action alternatives (options). In making their judgments and decisions about an action b from B, the players activate relevant or appropriate values and norms from their value complexes. In determining or deciding on a particular action b, a player compares and judges the similarity between the option b from the set B and the appropriate, primary value or goal v which is to be satisfied or realized in decisions and performances in G(t), as specified, for instance, in her role complex. More precisely, the actor judges whether a finite set of expected or predicted qualia or attributes of option b, Q(b), is sufficiently similar to the set of those qualia Q(v) which the primary norm or value v (or a vector of values) prescribes (Burns et al., 2005a; Burns and Roszkowska, 2004, 2005a). This type of judgment underlies the principle of action determination which follows.

The principle of action determination states: Given the interaction situation St and game G(t), an actor i in ROLE(i,t) oriented to the value v (or a vector of values) specifying dimensions and standards Q(v), player i tries to construct, or to find, an[15] option b (b∈B), where b is characterized by dimensions and levels Q(b), which satisfies the following equation:[16]

J(i,t)(Q(b), Q(v)) = sufficiently similar    (3)

[15] This is also the foundation for rule-following or rule-application activity (Burns and Gomolińska, 2000).

[16] This simple equation can be extended to apply to special cases of maximization. For ACT(i,t) = (a1, a2, ..., ap), let the result of the judgment of similarity be some expression dj describing the degree of fit (that is, how well a particular action performed or to be performed matches the norm or value specifications of vi): J(i,t)(Q(ak), Q(vi)) = dj, where ak∈ACT(i,t). We simplify this to J(i,t)(Q(ak), Q(vi)) = J(i,t)(ak) = dj, where it is understood that the judgment of the action ak is based on a comparison and assessment with respect to the given value or norm vi. That is, the desirable qualia Q(vi) of action are specified by vi and are compared to the expected qualia Q(ak) of any action ak under consideration (Burns and Roszkowska, 2005a). An action ak may be cognitively formulated in a complex manner, where the qualia associated with ak, Q(ak), include such "consequences" as the responses of other agents. The players in making their judgments may consider and weigh combinations of actions, such as cooperation or non-cooperation, as well as others. Given two (or more) alternatives ak, as with fit values dj, dr, the relation dj > dr (or dj ≥ dr) means that the actor judges that the action ak better realizes (or at least is not worse in realizing) vi than does as. Given that J(i,t)(ak) = dj and J(i,t)(as) = dr, she would then prefer ak to as if and only if J(i,t)(ak) > J(i,t)(as), and she would choose to enact ak rather than as (of course, there is no basis for her to make a choice in the case J(i,t)(ak) = J(i,t)(as)). More generally, given a repertoire of actions, players are able to rank order them (or at least a subset of them) with respect to the capacity of the actions to realize the value or norm vi: J(i,t)(ak1) > ... > J(i,t)(aki) > ... > J(i,t)(akp), where aki ∈ ACT(i,t). Given an action repertoire ACT(i,t), the action determination judgment then entails finding the action which best fits ("goodness of fit") or is most consonant with vi. The actor chooses among the given options in her fixed repertoire the action a* which maximizes dj. The "goodness of fit" assessment is based on the comparison of the anticipated consequences of actions with the consequences prescribed or indicated by the norm. Formally, actor i selects the action a* (a*∈ACT(i,t)) for which J(i,t)(a*) = Max[J(i,t)(ak)] for all ak∈ACT(i,t).
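A minimal sketch of equation (3) and its maximizing extension in footnote 16 (the fit measure, the threshold value, and the example qualia are assumptions made for illustration; GGT itself leaves the similarity judgment to the actor's judgment complex):

```python
# Toy sketch: J(i,t)(Q(b), Q(v)) as a degree of fit d_j, the "sufficiently
# similar" test of equation (3), and the Max rule of footnote 16 selecting
# a* from a fixed repertoire ACT(i,t).

def fit(q_b, q_v):
    # Share of the prescribed qualia Q(v) realized by option b (assumed).
    return len(q_b & q_v) / len(q_v) if q_v else 1.0

def satisfies(q_b, q_v, stringency=0.75):
    # Equation (3): b is a "satisfier" of v when fit reaches the threshold
    # fixed by the meta-rule (norm of stringency).
    return fit(q_b, q_v) >= stringency

def best_action(repertoire, qualia, q_v):
    # Footnote 16: a* = argmax over ACT(i,t) of J(i,t)(a_k).
    return max(repertoire, key=lambda a: fit(qualia[a], q_v))

q_v = {"cooperative", "fair", "timely", "transparent"}
qualia = {
    "negotiate": {"cooperative", "fair", "timely"},
    "stall":     {"timely"},
}
satisfies(qualia["negotiate"], q_v)               # 0.75 -> a satisfier of v
best_action(["negotiate", "stall"], qualia, q_v)  # -> "negotiate"
```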
Such an action b is a "satisfier" or "realizer" of v.[17] The equation implies that the actor i should "enact b" (in other words, the conclusion of the judgment process is to "do b"), since Q(b) is judged to sufficiently satisfy Q(v). Or, in the case that there are several options, Q(b) is judged more similar to Q(v) than the other options in B, and the agent would be disposed to execute b. The players in the game G(i,t) more or less perform their respective roles and role relationship(s). Within this already established institutional arrangement or social structure, each and all act in accordance with the action determination principle applied in their respective roles, generating interaction patterns, outcomes, and developments.

Multiple Modalities of Judgment and Action. GGT's principle of action determination – which corresponds to the principle of maximizing utility in rational choice theory – subsumes several distinct modalities of action determination, each with its own "logic" (Burns and Gomolińska, 2000; Burns, Gomolińska, and Meeker, 2001). The theory specifies, in addition to instrumental rationality (corresponding to rational choice), other modes of judgment and action that are fully intelligible but not reducible to the single principle of rational choice.[18] Such modalities are distinguished as, for instance, normatively oriented action, instrumental rationality, dramaturgical-communicative action, and playfulness, as well as combinations of these. The modalities of action determination are distinguished by the prescribed dimensions or "consequences", Con(r), associated with the action – dimensions specified by the norm or value r in the player's value complex. The player is oriented to, attends to, and tries to regulate the corresponding consequences, Con(a), in the actions they construct or consider for choice.
In an instrumental modality, for instance, the value of an action derives from evaluative judgments of specific outcomes or outputs of the action, whereas the value of action in the case of the normative modality derives from judgments of the qualities of the action itself (including possibly the status or intentionality of the actor) (see Table 1). The judgment modalities for determining action are substantially different, cognitively and normatively. The information, cognitive, and evaluative requirements differ among the different modalities because of their different focuses of attention and judgment bases. In consequentialist judgment, the actor is value-oriented to action outcomes and their qualities. We have contrasted this with a normative orientation, where the value focus is on the intrinsic qualities of the action itself. Of course, both types of value judgment may apply at the same time and may even result in, for instance, the classic contradiction between ends and means (see later). In general, actors are often oriented to multiple values in their interaction situations. This may result in dilemmas, or contradictory conclusions about what to do, except in cases where there is convergence in the judgments (see later discussion). In the case of dilemmas, the action judgment process will involve the use of procedures such as weighting schemes, lexicographic ordering, and other methods to resolve them. Or, resolution may be achieved through higher-order or meta-rules giving priority to one or another of the contradictory values, or even through finding ways of transcendence through higher-order integrating values (Burns, Gomolińska, and Meeker, 2001; Burns and Meeker, 1973; Cochrane and Zeleny, 1973).

[17] This equation may be understood as maximizing goodness of fit (Burns and Gomolińska, 2000). In earlier work (Burns and Roszkowska, 2002, 2004), we elaborated this model using a fuzzy set conceptualization. The general formulation of equation (3) relates to the notion of "satisficing" introduced by Simon (1969).

[18] Jon Elster – in "discovering" normative action as distinct from rational action – has come to recognize at least two modalities, although his treatment is discursive rather than systematic.

Table 1. Consequences associated with modalities of action[19]

Modality                                 Consequences or Qualities Attended To
Instrumental action                      ConR(a) = outcomes
Normative action                         ConN(a) = intrinsic normative qualities of the act as a realization or expression of a norm
Dramaturgical-communicative action[20]   ConDC(a) = dramatic qualities of action and outcomes
Emotional action                         ConE(a) = emotional qualities or experience of action and/or outcome
Aesthetic action                         ConAE(a) = aesthetic qualities of an action and its outcomes
Play                                     ConPlay(a) = aspects of action that correspond to "serious" aspects but are not serious; the consequences may be outcomes, intrinsic normative qualities, or dramaturgical or communicative qualities
Composite[21]                            Combinations or mixtures of the consequences indicated above

Typically, each modality has an overall purpose or aim as well as a particular form for determining and enacting activity. Above we have identified a few simple modalities, such as instrumental rationality,[22] normative rationality, and dramaturgical-communicative and habitual modalities. Each particular modality entails a basic logic of generating action. In any given modality, an actor focuses on particular pieces of information or data in situation St and activates particular rule complexes and orientations, which are applied in the course of organizing perceptions, making judgments, and determining action. There are, of course, combinations and a variety of variants. For our purposes, the modalities discussed here, including instrumental rationality, provide a wide but as yet incomplete spectrum of action models in social life.

[19] These types do not make up a typology, but represent several empirically relevant ideal types.

[20] The communicative or expressive modality is a modality to express or communicate a particular message to others, e.g., adherence to or involvement with a particular norm or value, or a feeling (typically, there is a grammar to such expressions, as with those of anger, affection, intimacy, etc.). Actors in their roles often enact particular rule complexes or norms in order to express certain established conceptions of self. This is an identity-enacting – and communicating – action.

[21] Most action is composite, where actors concern themselves with several different types of consequences. A pure modality typically results in exaggerations and absurdities. For the purely instrumentally oriented agent, the ends justify the means. On the other hand, the behavior of the pure bureaucratic personality, following rules in an absolute or dogmatic sense, may result in great harm to the organization or to others.

[22] Instrumental rationality may be oriented to different "selves": the individual herself, or particular groups or organizations to which she belongs and with which she identifies. That is, she may be oriented to a particular personal interest or someone else's, whether individual or collective. The latter relates to Adam Smith's idea that human beings are social beings and take an interest in the fortune and happiness of others, whether individuals or collectives. In general, he was aware of individual relational and interrelational ethics of emotions and virtues (Muldrew, 1993:104).
Modality Determination. What determines the particular modality or modalities players use? From what has been stated earlier, the short answer is apparent: an actor's role, or particular rules in the role, indicates which consequences one should attend to, thus indicating the appropriate modality or basis for determining action in a given situation. There may be practical constraints on such a determination, however. Situational constraints may be such that the actor i is unable to determine the action on the basis of outcomes or qualities of the action. Information is lacking, or there are constraints on acquiring the information, so the actor is unable to operate with, for instance, the instrumental modality (paying attention to outcomes), but may resort to acting "as if", utilizing a dramaturgical-communicative modality. Or, she makes use of rules of thumb, standard operating procedures, or habits that have in the past led to appropriate outcomes. Habitual or routine-type modalities entail executing a program, script, or procedure without deliberation, reflection, or weighing of alternatives. Such modalities are analytically distinguishable from consequentialist and normative modalities, where the actor makes evaluative judgments as well as calculations in the course of her activities. People often resort to utilizing the habitual modality for reasons of efficiency – it requires much less situational data and time; information and operational costs are low in comparison to full-fledged instrumental or normative modalities. In the following section, we apply the modality concept in describing and analyzing interaction processes and equilibria, focusing on habitual, instrumental, and normative types of modality.

PART III. APPLICATIONS: SOCIAL RELATIONSHIPS, JUDGMENT CALCULI, INTERACTION PATTERNS, AND EQUILIBRIA

In the following, we briefly illustrate a few applications of GGT in game analysis.

1.
Game Processes: Interaction Patterns and Outcomes

To illustrate how games are played, let us consider the role relationship {ROLE(1), ROLE(2), R} of players 1 and 2, respectively, in their positions within an institutionalized relationship in which they play a game G(i,t) in context t. Such role relationships typically consist of shared as well as interlocked rule complexes. The concept of interlocked complementary rule complexes means that for a rule in one actor's role complex concerning his or her behavior toward the other, there is a complementary rule in the other actor's complex.[23] Thus, games may be played out in different ways, as actors, operating within opportunity structures and constraints, determine their choices and actions and, in general,
[23] The concept of interlocked complementary rule complexes means that for a rule in one actor's role complex concerning his or her behavior toward the other, there is a corresponding rule in the other actor's complex. For instance, in the case of a superordinate-subordinate role relationship (Burns and Flam, 1987), a rule k in ROLE(1) specifies that actor 1 has the right to ask actor 2 certain questions, to make particular evaluations, and to direct actions and sanction 2. In the ROLE(2) complex there is an interlocked rule or rule complex m, obligating 2 to recognize and respond appropriately to actor 1 asking questions, making particular evaluations, directing certain actions, and sanctioning actor 2. This notion of complementary interlocked rule complexes is basic to coherent role relationships.
exercise their agency (Burns and Roszkowska, 2004; Roszkowska and Burns, 2002; Burns et al., 2005a, 2006):

• consequentialist-oriented interactions. Actors pay attention to the outcomes of their actions and apply values in determining their choices and behavior on the basis of outcomes realizing values.
• normativist-oriented interactions. Actors pay attention to, and judge on the basis of norms, the qualities or attributes of action and interaction, applying general as well as role-specific norms in determining what are right and proper actions.
• emotional interactions.
• symbolic communication and rituals.
• routine interactions, that is, the actors utilize habitual modalities (bureaucratic routines, standard operating procedures (s.o.p.'s), etc.) in their interaction.
Or, there may be some combination of these, including mixtures such as agents oriented to outcomes interacting with others oriented to qualities of the action. Or, someone following a routine interacts with another agent who operates according to a "feel good" principle. For our purposes here, we focus on the first two patterns, where action determination takes place through the application of values in value judgments – this according to the principle of action determination (equation (3)). The role relationship is characterized by rule complexes including algorithms and quasi-algorithms (that is, complexes that are not complete and require "filling in" by the players). The value-directed judgment processes construct and/or select among available alternatives, programs, and action complexes in general. In other words, the actors check to see if, and to what extent, their values are realized in the actions they undertake vis-à-vis one another. In their "strategic" or instrumental actions and interactions, the focus of the actors is on the outcomes of actions and interactions; that is, the relevant value v specifies Q(v), the outcomes or consequences Q(con(A)) which an action A is to satisfy. In the case of normative (that is, "non-consequentialist") action, the players focus on the properties of actions and interactions themselves; that is, v specifies Q(v), which the intrinsic properties or qualities of the action itself, Q(pro(A)), are to realize. Cognitively and evaluatively, these are substantially different modalities of action determination.

(1) Consequentialist-oriented interactions. Given the rule complex {ROLE(1), ROLE(2), R} of actors 1 and 2, respectively, the actors orient to trying to realize role-specified values in the outcomes or payoffs of the action(s) under consideration.
The actors focus on the dimensions and qualia of action outcomes, "states of the world", or "payoffs" associated either with a constructed action or with a set of action alternatives under consideration in a choice situation; that is, v specifies Q(v), which the outcomes Q(con(A)) of an action A are to satisfy. One form of such a mode of action determination is found in classical game theory, entailing the players of the game attempting to maximize or optimize a result or outcome for self. In this case, the actors are assumed to be self-interested, autonomous agents. Related forms of interaction have been investigated by Burns (1990), Burns, Gomolińska, and Meeker (2001), and Burns and Roszkowska (2002, 2004). These entail, among others: (1) variation in the goals of the actors – actors may be oriented strategically to care about one another, for instance, in trying to help one another ("other-orientation"), or they may be oriented to joint or collectively beneficial outcomes; (2) the actors orient in their game situation to multiple values, which may result in dilemmas, or divergent or contradictory conclusions about what to do (Burns, Gomolińska, and Meeker, 2001). Typically, their action judgment process in such cases will involve the use of procedures such as weighting schemes, lexicographic ordering, and other methods to resolve dilemmas. The value orientations, the relationship between the actors' values, and their modalities of action judgment will depend on the type of social role relationship between the actors.[24]

Consequentialist-oriented equilibria. In classical game theory, autonomous agents concern themselves with their particular self-interest. One major result is the Nash equilibrium, at which no actor in the game can improve his or her individual situation by choosing an action or outcome differing from the equilibrium pattern. GGT considers another variant, more empirically grounded than either preference ordering or maximization of utility. Each actor engaged in a game has, in the context of their particular role relationship, a value complex with an ideal or maximum goal they aim for and also a minimally acceptable level. So, an outcome that satisfies the minimum for each actor would make for a type of equilibrium. However, those who had hoped for a better result are likely to be disappointed and are inclined to search for other possibilities. Thus, the equilibrium is an unstable one. The more the agents realize their more ideal expectations, the more stable the equilibrium. For instance, this is true of negotiated contracts and prices on a market. Related work (Burns, Gomolińska, and Meeker, 2001; Burns and Roszkowska, 2002, 2004, 2005a, 2006) shows that there are multiple equilibria, and that the social relationships among the actors (for example, relations of solidarity, competition, or enmity) determine the particular equilibrium (or equilibria), as well as the lack of equilibria, that obtain in a particular game, for instance, a prisoners' dilemma game. Actors in a competitive relationship where each tries to outdo the other will, in the strict case, lack an equilibrium.
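GGT's minimum/ideal account of equilibrium can be sketched numerically (the payoff scale and the particular stability measure – mean closeness to the ideal above the minimum – are assumptions introduced for this example, not part of GGT):

```python
# Toy sketch: an outcome is a (type of) equilibrium when every actor's
# minimally acceptable level is met; its stability grows as realizations
# approach the actors' ideal levels, below which they keep searching.

def equilibrium_status(payoffs, minima, ideals):
    players = list(payoffs)
    if any(payoffs[i] < minima[i] for i in players):
        return "no equilibrium", 0.0
    # Stability: mean normalized closeness to the ideal (assumed measure).
    closeness = [(payoffs[i] - minima[i]) / (ideals[i] - minima[i])
                 for i in players]
    return "equilibrium", sum(closeness) / len(closeness)

# Both actors clear their minimum (5) but sit well below their ideal (10):
status, stability = equilibrium_status(
    payoffs={1: 6, 2: 8}, minima={1: 5, 2: 5}, ideals={1: 10, 2: 10})
# status == "equilibrium", but the low stability value reflects actors
# disappointed relative to their ideals and inclined to keep searching.
```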
In consequentialist oriented interaction, the actors cannot determine an equilibrium if they fail to obtain (or to be able to use) information about outcomes and about the connection between actions and outcomes. Also, disequilibrium results if actors’ expectations or predictions about outcomes satisfying values fail to materialize. (2) Normativist-oriented interactions. In a context t, the rule complex {ROLE(1,t), ROLE(2,t),R} specifies how actors in their roles should act vis a vis one another. The actors pay attention to such qualities of the interaction as, for instance, “cooperativeness”, “taking one another into account”, or “fair play”. Such determinations entail a comparison-judgment of an action or actions focusing on its (their) qualities that satisfy or realize one or more norms applying to the intrinsic properties of the actions. That is, v specifies Q(v), which the properties of the action A, Q(pro(A)), should satisfy. Again, actors in solidary relationships focus on producing actions and interactions that are defined as “cooperative”, as “solidary”, as “fair play”, etc. Rivals
24
For instance, (1) Solidary actors expect one another to determine action(s) which realize a collective value or mutual satisfaction, “ a just division”, etc. These are the appropriate values to which the action determination process is oriented and which the actors try to realize in constructing and/or selecting actions in the course of their interactions. (2) Competitive or rivalry relationship. Each actor is motivated (operating with value orientations) trying to construct or find strategies that give results better for self than for other (“relative gain”). The methods of classical game theory as well as other methods may be used for constructing or selecting actions that would give expected results. (3) Relationship of rational, autonomous actors (with feelings of indifference, strictly speaking, this is not a role relationship). The actors only concern themselves about the best result for self, ignoring the other agent. Depending on situational conditions, the agents may be motivated to cooperate (for instance, convergent payoffs in the situation) or to compete (divergent payoffs in the game).
T.R. Burns and E. Roszkowska / Multi-Value Decision-Making and Games
would focus on producing "competitive activities" and accomplishing "better-than-other" results.

Normativist equilibria. Such equilibria obtain when the actors are able to sufficiently satisfy the norm(s) which apply to their actions and interaction. Of course, if the satisfaction level is minimal, the equilibrium would be an unstable one, because one or more actors would be inclined to try to improve performance of a given scheme or to construct another action scheme. Stability would, of course, obtain if they judge other schemes to be unavailable. In general, however, under such conditions of dissatisfaction, the equilibrium would be an unstable one (Burns and Roszkowska, 2004, 2006), with the agents waiting for an opportunity to transform the conditions. Obviously, disequilibrium obtains if conditions in the situation make it impossible to perform right or appropriate actions. In addition, mistakes may be made, so that the actual performances fail to satisfy the appropriate norm(s). The situation would also be problematic if the actors cannot obtain sufficient information either to enact the norm or to judge the qualities of their actions. Those who are in competitive social relationships generate actions that have the qualities of "competitiveness", "one-upmanship", etc. "Equilibrium" patterns would entail the actors generating activities satisfying the norms of competitive action. Note that there is a type of interaction equilibrium among rivals when they focus on the qualities of the action rather than on the outcomes (where equilibrium is not possible in a strict sense, since there is no outcome or payoff acceptable to all players).

Several remarks are in order about the different judgment modalities for determining action: (1) Interaction processes may be characterized by some combination of routine, outcome-oriented and normatively oriented determinations, for instance, combinations of instrumental calculation with normative considerations.
But, as indicated earlier, this requires particular judgment procedures to resolve value dilemmas or conflicts when they arise in the course of action determination. (2) Actors who interact according to a common rule regime are aware that they are playing a particular game together – that is, with given rules, to which players orient (but which they may adhere to or comply with to varying degrees). G(t) is not just any collection of rules whatever. Of course, the actors may disagree (or try to deceive one another) about what the actual rules are. (3) In the GGT conception, a game structure may be a means or a procedure to find solutions, for instance, to resolve problems of coordination or conflict, but this is not necessarily the case. The structure may impede or block altogether effective problem-solving or conflict resolution (Burns and Roszkowska, 2007; Burns et al., 2005a). (4) In the GGT framework, rules are distinguished from the performance of rules, namely the process of applying and implementing rules in concrete activities (in time and space). Among these activities is not only the performance of particular action rules such as norms and prescribed procedures but also adaptation to interaction situations and conditions. (5) In the GGT perspective, the results of a performed game are: (a) patterns of interaction which are largely predictable within rough limits on the basis of the actors' role relationship(s) in the game and the other characteristics of the game complex G(t); (b) outcomes, some of which will be equilibria, others not. When one or another outcome is a normative equilibrium (that is, satisfying or realizing common norms or values), it is likely to be a stable, enduring result. Otherwise, one or more of the players (or even external players) may find it in their interest to challenge the result. Nevertheless, role performances may or may not result in equilibria. Equilibria are empirically meaningful states in a game process.
Certain game contexts – and configurations of rule complexes – have greater likelihood of ending in stable results or outcomes than others.
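The comparison-judgment underlying normative equilibria – matching an action's qualities Q(a) against the qualities Q(v) specified by a norm – can be sketched in code. This is our own illustrative rendering, not the authors' formalism: the quality labels, the overlap measure, and the satisfaction threshold are all assumptions.

```python
# Illustrative sketch (not the GGT formalism): a norm v specifies qualities
# Q(v); an action a exhibits qualities Q(a).  An action "sufficiently
# realizes" the norm when enough of the required qualities are present.

def satisfies(action_qualities: set, norm_qualities: set, threshold: float = 1.0) -> bool:
    """Judge whether Q(a) sufficiently matches Q(v).
    threshold = 1.0 demands every required quality; a lower threshold
    models minimal (and hence unstable) satisfaction."""
    if not norm_qualities:
        return True
    overlap = len(action_qualities & norm_qualities) / len(norm_qualities)
    return overlap >= threshold

# A solidary norm of cooperation and a candidate joint action (labels assumed).
norm_cooperation = {"cooperative", "fair-play", "takes-other-into-account"}
action = {"cooperative", "fair-play", "takes-other-into-account", "efficient"}

print(satisfies(action, norm_cooperation))           # full satisfaction: True
print(satisfies({"cooperative"}, norm_cooperation))  # fails at threshold 1.0: False
print(satisfies({"cooperative"}, norm_cooperation, threshold=0.3))  # minimal: True
```

On this reading, a minimally satisfied norm (low threshold) corresponds to the unstable equilibrium described above: the judgment passes, but the actors remain inclined to improve or replace the action scheme.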
2. Classical Closed Games: The PD and Other Games

Here we will consider games where the players' social relationships vary but some major features of the game cannot be changed (see Table 2). Several of the most common social relationships are used for illustrative purposes: status or authority relations (hierarchy or domination), solidary relations, rivalry, and antagonistic relationships. The values which the players apply, and their action determinations in any given interaction situation – for instance, the prisoners' dilemma game – will differ as a function of their established social relationship. In a symmetrical, solidary relationship, there is a normative order orienting the players to cooperating with one another and assigning high value to mutually satisfying interactions and outcomes. In, for instance, the prisoners' dilemma (PD) game, this action would be one of mutual cooperation. Consider the standard PD game: an action ak may be formulated in a cognitively complex manner where the qualia associated with ak, Q(ak), include such "consequences" as the responses of one or more other agents. Thus, the players in making their judgments may consider and weigh combinations of actions such as cooperation (CC) or non-cooperation (-C-C), as well as other patterns in the PD game. In the PD game, actors with a solidary egalitarian relationship would not experience a dilemma, other things being equal. Cooperation (CC) is the "right and proper interaction" for the players. Player 1 selects C expecting player 2 to select C; player 2 selects C expecting player 1 to select C. The outcome CC would best satisfy their mutual value orientations in the situation. The other possible interactions, for instance, the asymmetric outcomes (-CC, C-C), fail to satisfy the norms of cooperation and reciprocity which typically apply in such relationships.
Of course, there may be limits to their solidarity (that is, solidarity may not be an absolute or sacred principle). But this judgment is a function of the particular outcomes, the value complex and meta-rules which actors bring to the situation (Burns, Gomolińska, and Meeker, 2001; Burns and Meeker, 1974; Burns and Roszkowska, 2006). Actors with other types of relationships would reason and judge differently. In the case of players with a status or authority relation, the asymmetric interaction (and outcome) -CC is right and proper. The players in this relationship operate with a primary norm specifying asymmetrical interaction and payoffs. The person of superior status or authority dominates, and her subordinate(s) show deference and a readiness to accept leadership or initiatives from the superior person.25 The principle of distributive justice in the case of such a hierarchical relationship implies asymmetry. The PD is not then a dilemma for such participants, other things being equal. Once again, of course, the question of limits of this interaction pattern is relevant. How far does a subordinate go in acting consistently with the norms and values of the role relationship, at the expense of missed opportunities for realization of other values – including causing harm to a superior agent who may be resented? Other stable patterns of interaction and outcome would be associated with other types of relationships. For instance, players whose relationship is one of enmity or rivalry are likely to produce in the PD game the interaction (-C-C) and its outcome. But this outcome poses no dilemma for such players. In an antagonistic relationship, the actors value actions or outcomes that hurt the other most (possibly at considerable cost

25. On a personal level, the lower status person might want something else but, within some limits of acceptance, behaves in a way consonant with the relationship.
Table 2. Outcome Matrix for 2-Actor PD Game26

                                    ACTOR 2
                          Cooperate (C)   Not Cooperate (-C)
ACTOR 1  Cooperate (C)        5, 5            –10, 10
     Not Cooperate (-C)     10, –10            –5, –5
to self; maximizing difference is not the point unless this may be interpreted or defined as maximally causing harm). In any case, -C leads to the best outcome (in terms of each player's value orientation toward the other) regardless of what the other does.27 In the case of an established relationship of rivalry, the players would aim for outcomes that maximize the difference between outcomes for self and other, that is, asymmetrical outcomes favoring self. Aiming (hoping) for the asymmetric outcome, each would choose to enact -C in the game. The most likely outcome is the non-cooperative one: -C-C. It is neither a normative equilibrium nor minimally satisfactory. Hence, it is unlikely to be stable, as numerous observations verify. In these games, which tend toward mutual harm, there may be limits to how far the players are prepared to go with such mutually harmful interactions. Rational egoists, having a relationship with no mutual concern for one another, experience a genuine dilemma in the PD game. Each is predisposed to choose -C, trying to maximize gain for self, but the resulting pattern is sub-optimal. This is an obvious problem for instrumentally oriented agents, who are likely to realize that they could do better by cooperating – and have no reasons of antagonism or rivalry not to do so. They would, if at all possible, try to do something about the situation, for instance, transforming the game if possible. Hence, the PD game with the -C-C equilibrium, which is a Nash equilibrium but Pareto non-optimal,28 is unstable for rational egoists, provided that they have the capacity to transform the game – for instance, through repeated play, pre-game negotiations, or other game restructuring possibilities. The predicted patterns of interaction and equilibrium in a PD game context as a function of the players' social relationships are presented in Table 3.
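The relationship-dependent action determinations just described can be checked against the payoffs of the PD matrix in Table 2. The sketch below is illustrative only: the scoring rules paraphrase the text's value orientations (joint gain, relative gain, harm to the other, own gain), and the helper names are our own.

```python
# Illustrative sketch: different social relationships induce different
# preferred interactions in the PD game of Table 2.

PAYOFF = {  # (action of player 1, action of player 2) -> (payoff 1, payoff 2)
    ("C", "C"): (5, 5),     ("C", "-C"): (-10, 10),
    ("-C", "C"): (10, -10), ("-C", "-C"): (-5, -5),
}

def preferred(score):
    """Interaction(s) maximizing a given value orientation's scoring rule."""
    best = max(score(p) for p in PAYOFF.values())
    return sorted(a for a, p in PAYOFF.items() if score(p) == best)

solidary = preferred(lambda p: p[0] + p[1])    # joint gain -> (C, C)
rival_1 = preferred(lambda p: p[0] - p[1])     # relative gain, player 1's view
adversary_1 = preferred(lambda p: -p[1])       # harm to the other, player 1's view

# Rational egoists: -C strictly dominates C for player 1 ...
dominance = all(
    PAYOFF[("-C", a2)][0] > PAYOFF[("C", a2)][0] for a2 in ("C", "-C")
)
# ... yet the resulting (-C, -C) is Pareto non-optimal: both prefer (C, C).
pareto_dominated = all(
    PAYOFF[("C", "C")][i] > PAYOFF[("-C", "-C")][i] for i in (0, 1)
)

print(solidary)                     # [('C', 'C')]
print(rival_1)                      # [('-C', 'C')]
print(dominance, pareto_dominated)  # True True
```

The computed patterns match Table 3: solidary players converge on (CC), rivals each aim for the asymmetric outcome favoring self, and rational egoists land on the dominant but Pareto non-optimal (-C-C).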
The expected results in other standard games (closed, of course) are derivable in a straightforward manner (Burns, 1990). Thus, solidary players in a “zero-sum game” 29 would pursue interactions minimizing their joint losses. In any “positive sum” or
26. The payoff numbers in the matrix are for illustration only. Action judgments in GGT are typically constructed on the basis of orderings (partial orderings).
27. Other norms may come into play which modify such behavior. For instance, there may be powerful norms of civility limiting extreme actions in game situations such as this one. Restraints are imposed on the relationship and its instantiations.
28. An outcome that is not Pareto optimal is one where the actors, if they cooperate in restructuring their pattern – or underlying rules – can improve the payoffs for some (or all) of them without reducing the payoffs for others, namely through movement to the cooperative interaction. Pareto optimal points are stable against universal coalitions, because it is not possible to deviate from such states without hurting some players. Thus, this acts as a constraint on collective shifts (Scharpf, 1997; Tsebelis, 1990). Nevertheless, such shifts are relatively common in social life (Burns and Roszkowska, 2007).
29. Games of "total conflict" are those in which what one player gains, the other loses. In a certain sense, this type of game is a distributional game rather than one of mutual destruction, which characterizes the confrontation game (or "game of chicken").
Table 3. Expected Patterns of Interaction and Equilibria in a PD Game Situation as a Function of Selected Common Social Relationships

SOLIDARY
Characteristic value complex and rules: The actors are governed by the value of solidarity (joint gains or sharing of gains, that is, symmetric distribution) and norms of cooperation and self-sacrifice.
Application to the PD game; types of equilibria: The norms of the relationship are satisfied by (CC); the symmetric outcome of (CC) is also right and proper. The actors try to decide jointly on (CC) unless segregated from one another, in which case they try to take one another into account and "coordinate" cognitively. The (CC) pattern provides an optimal outcome, also satisfying the relationship's principle of distributive justice. (CC) is therefore a normative equilibrium.

RIVALRY
Characteristic value complex and rules: Contradictory values. Each is oriented to surpassing the other (maximizing the difference in gains between self and other). The only acceptable outcome for each would be an asymmetric one where self gains more (or loses less) than other. But these expectations are mutually contradictory.
Application to the PD game; types of equilibria: The actors choose separately: (-CC) for actor 1 and (C-C) for actor 2 would be judged right and proper, respectively. The likely (and situational) outcome in the game, (-C-C), fails to satisfy the distributional rules which motivate them. Neither normative nor situational equilibrium obtains. The result is unstable, because each would try to transform the game.

ADVERSARY
Characteristic value complex and rules: The value orientation of each is to cause harm to the other.
Application to the PD game; types of equilibria: The actors choose separately. The action -C would be judged as right and proper, consistent with the orientation of each. Outcomes where the other suffers – (-C-C), or (-CC) for player 1, or (C-C) for player 2 – would satisfy the normative orientations of both players. Since the non-optimal outcome (-C-C) satisfies each of their values or goals vis-à-vis the other, namely to harm the other, this would be a type of equilibrium based on parallel value orientations.

HIERARCHY/DOMINATION
Characteristic value complex and rules: Norm specifying appropriate interaction: player 1 has the right to take initiatives and decide, and player 2 has the obligation to show deference. Right and proper outcomes are also asymmetric, with 1 receiving more than 2 (which satisfies the relation's principle of asymmetric distributive justice).
Application to the PD game; types of equilibria: Player 1 decides, player 2 accedes, according to the normative rules of their relationship. The asymmetric interaction (-CC) satisfies the norm of differentiation, and the unequal payoff satisfies the principle of distributive justice. (-CC) is therefore a normative equilibrium.

RATIONAL EGOISTS (INDIFFERENCE) (COMPETITIVE)
Characteristic value complex and rules: Each follows the principle of instrumental rationality (strategies derive value from their accomplishments for self). No interaction pattern or outcome has collective normative force.
Application to the PD game; types of equilibria: Rational calculation leads to the (-C-C) pattern of interaction, which is sub-optimal. This would be a situational equilibrium, but unsatisfactory and therefore unstable. Rational actors would be predisposed to work out coordinating mechanisms in order to achieve the optimum outcome, that is, a "common solution".
coordination game, they would try to select interaction(s) maximizing their joint gains. On the other hand, rivals in a zero-sum game would each pursue options to produce maximum differences between self and other results (favoring, of course, self). Enemies would look to cause maximal harm to the other (but possibly within some cost limits).30 Solidary players in a game of "chicken" would choose to avoid confrontation altogether. Rational actors in the "game of chicken" would avoid the extreme and risky action to the extent that they are risk-averse. Enemies would (and do) risk catastrophic play in a game of "chicken" (at least up to the threshold of unacceptable losses to self). Rivals might also risk such catastrophic play. In general, one can identify types of closed games that are problematic for particular social relationships. Players with solidary relationships would find highly asymmetrical zero-sum games problematic and would try to transform them (or avoid playing them).31 Rivals, on the other hand, would appreciate such games, and would find games with symmetric outcomes highly problematic; in general, they would want to find or construct games with real differences in outcomes, favoring of course self. Those with adversary relationships would not relish game situations lacking opportunities to cause harm to the other.
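The same relationship-dependent scoring extends to the game of "chicken" just discussed. The payoff numbers and the loss threshold below are our own illustration (the text gives none), chosen only to reproduce the qualitative claims: solidary players avoid confrontation altogether, while hostile players press the risky action up to a limit of unacceptable loss to self.

```python
# Illustrative "chicken" game; payoff numbers are assumptions, not from the text.
CHICKEN = {  # (action of player 1, action of player 2) -> (payoff 1, payoff 2)
    ("swerve", "swerve"): (0, 0),
    ("swerve", "straight"): (-3, 2),
    ("straight", "swerve"): (2, -3),
    ("straight", "straight"): (-10, -10),  # catastrophic confrontation
}

def preferred(game, score):
    """Interaction(s) maximizing a value orientation's scoring rule."""
    best = max(score(p) for p in game.values())
    return sorted(a for a, p in game.items() if score(p) == best)

# Solidary players maximize the joint outcome: both swerve.
solidary = preferred(CHICKEN, lambda p: p[0] + p[1])

# Enemies (player 1's view) value the other's harm, but only within a
# threshold of acceptable loss to self (the "limit" discussed in the text).
LIMIT = -5  # illustrative "unacceptable loss" threshold
bearable = {a: p for a, p in CHICKEN.items() if p[0] >= LIMIT}
hostile_1 = preferred(bearable, lambda p: -p[1])

print(solidary)   # [('swerve', 'swerve')]
print(hostile_1)  # [('straight', 'swerve')]
```

Removing the threshold filter lets the hostile player court the catastrophic (straight, straight) cell – the breakdown of deterrence described in footnote 30.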
3. Open Socially Embedded Games

In general, actors in institutionalized relationships are more or less predictable and understandable to one another through shared characterization and knowledge of their relationship(s). This proposition applies even to many open game situations. Participants can, by virtue of their established social relationship with its moral grounds, take into account in their judgments and calculations the scope of what they may "reasonably" request or expect from one another (Burns, Gomolinska, and Meeker, 2001) (miscalculations and mis-judgments nevertheless occur, of course). Moreover, the knowledge of the principles or meta-rules defining limits and the scope of commitment to a particular value complex means that the players can to a greater or lesser extent predict some of the likely consequences of adaptations and elaborations (among other things, the unfolding) of their relationship.

30. In the case of actors who are hostile to one another (but this applies to rivals as well), there are likely to still be limits to their commitment to "hurt or undo the other". Under extreme conditions, they may experience the dilemma between acting in a manner consistent with their relationship (e.g. causing maximum harm to the other in an adversarial relationship) or restraining self and avoiding the risk of substantial loss to self. The strength of the desire to survive or to avoid "excessive" loss or suffering would be decisive here, but these are assessments exogenous to the logic of their relationship. Such considerations would lead to mutual deterrence. The deterrence may, of course, break down under some conditions – where, for instance, one or the other player goes over the limit, either through accident, miscalculation, or brinkmanship, and the other responds in kind, unleashing a process which is difficult, once underway, to curb, because of powerful pressures toward reciprocation. Thus, such a conflict tends to escalate.
31. Each player has, however, certain rough limits with respect to the "sacrifices" that she is prepared to make. For instance, actor i has a maximum value above which she is not willing to go for the sake of the relationship (or, if she does, she is intentionally or unintentionally redefining the relationship as more solidary and entailing a higher level of commitment). The other actor j may accept this limitation, acknowledging such a norm by not pressing i beyond such a threshold. Thus, the maximum value sets a limit for equilibrium interactions between actors i and j. Of course, the greater the value of a social relationship to the participants, the higher the limit or maximum, and the higher the level of cooperation, self-sacrifice, and commitment. In general, agents in institutionalized solidary relationships are predisposed to make sacrifices up to the value of the relationship (Burns, 1990; Burns et al., 2001). Failure to live up to these implicit mutual obligations or commitments would tend to undermine the relationship.
In an open game, where the players construct their actions and interactions, utilizing appropriate value(s) and norm(s) to guide each of them, processes such as the following would be likely to occur. The players in, for instance, a solidary relationship each find the actions a1 and a2, respectively, which sufficiently realize (or comply with) vi; these would entail cooperative or sacrificial type actions.32 Equation (3) would be satisfied in that the players judge a1a2 as right and proper "cooperative interaction." Recall that the judgment is based on a comparison and judgment of similarity between the expected properties or attributes of an action a, Q(a), and the attributes specified by the value or norm vi, Q(vi), for each actor i. They would try to construct and select right and proper interaction patterns and avoid or reject incorrect or normatively deviant interactions. On the other hand, the open process may entail constructing interactions under conditions of competitive or antagonistic social relationships. Such players would also generate new interactions and outcomes, possibly developing or adopting new technologies and strategies, as they strive to outdo or harm one another. While the game complex undergoes transformation, the competitive or antagonistic character (or identity) of the relationship – and the interaction patterns – are invariant (or are reproduced). This is a type of dynamic equilibrium (obviously, in this case there is no normative equilibrium which the players can agree to accept or find collectively compelling). A mediator may assist in such situations; she helps the players establish a new basis for playing the game(s) (for instance, moving from total mistrust and mutual aggressivity to partial trust and cooperativity). Ultimately, a new social relationship is established through such a process.

4. Multiple Values and Contradictions and Their Resolution in GGT33

In classical theory, there are no apparent contradictions or inconsistencies. These are assumed away in the name of rationality or veiled behind the notion of consistent preference orderings. In GGT, because actors hold and apply values from different perspectives, they hold potentially contradictory value judgments and propose incompatible normative equilibria. In a world of contradiction, incompatibility, and imbalance, how is social order achieved? This is a fundamental social science question. Force works, but it is inherently unstable. Many procedures in modern democratic society (such as a democratic vote, adjudication, or multi-lateral negotiation) result in outcomes that are widely accepted as normative equilibria (at least within some range) and accomplish social order.

Complex Value Fields: Contradictory Roles, Norms, and Values

Most interaction situations are characterized by the activation of multiple norms and values; often there is a clash among multiple, contradictory or incompatible norms and values appropriate or applicable in the situation St. Equilibrium with respect to one norm or law may entail disequilibrium with respect to another norm. Actors (individually as well as collectively) are faced with having to choose between contradictory norms or values, which is also to select among potential normative patterns, that is, normative equilibria.
32. For our purposes here, it is sufficient to consider a general norm such as "the principle of reciprocity" or "cooperativity" applying to both actors. Their roles are likely to prescribe differing and role-specific norms for each.
33. The normative outcomes generated by such procedures are characteristically non-Nash.
Of course, as long as contradictory norms and values are not activated and brought into concrete or practical relation with one another – either on an ad hoc or situational basis or on an institutionalized basis – problems of conflict, uncertainty and disequilibria need not arise. But the potentialities are ever-present. Normative contradictions may arise in that two norms applicable in the situation at the same time imply contradictory behavior:
(1) role conflict: ROLE(i,t,G1) associated with one relationship such as the workplace indicates action ai, while ROLE(i,t,G2) associated with family relationships prescribes another action bi contradictory to the first;
(2) there may be two situationally appropriate, but contradictory, norms r1 and r2 which participants are to follow in the given situation St;
(3) the realization of two rules or rule complexes may imply incompatible activities to perform at the same time in the situation St; the activities are incompatible in that they require the use of the same scarce resources such as time, energy, money, political capital;
(4) i's role does not fit with j's role in the i/j relationship; that is, there is a contradiction inherent in their role relationship; for instance, i has been socialized in her role to dominate j, but j has been taught to act more or less independently;
(5) global norms may be incompatible with norms on the interpersonal or role level. For instance, global norms supporting equality violate norms characteristic of local status relationships, such as superordinate-subordinate relations in private enterprises and government agencies. Or norms of loyalty and integration within an interpersonal or family relationship contradict solidarity norms with respect to a larger integrating collectivity such as the state or a religious or political movement;
(6) an institutionalized procedure is considered right and fair, but the outcomes are judged to be unfair or, possibly, inefficient. For instance, a market may be judged to operate fairly, but its distributive outcomes are judged unfair and unacceptable (Sen, 1998; Burns and Gomolińska, 2001; Burns and Roszkowska, 2006, 2007);
(7) ends and means of action do not fit one another (Merton, 1968).
One important consequence of this perspective is not only that multiple equilibria are to be expected, but that they may clash with one another, because of conflicts in values, between values and norms, and between roles that imply different expectations and normative behaviour. Contradictory and ambiguous situations are common in social life.34 Actors find themselves in such predicaments, and develop strategies to deal with the contradictions and predicaments (see below). In general, normative contradictions arise when the attempt to realize one norm through appropriate action blocks the accomplishment of another norm or value. Resolution may be available in that established meta-values give priority to certain norms and values over others. This implies that the accomplishment of particular normative equilibria takes precedence over others. Or, rather than prioritizing (or rank-ordering through meta-values in their value complexes), actors may partition their activities into institutional domains or subdomains. A more or less consistent complex C of norms and values is defined as appropriate and applicable in situation S1 and another complex C* is defined as appropriate and applicable in another, distinct situation S2, where C≠C* and S1≠S2. This becomes the basis of an ecology or composite of more or less incompatible normative equilibria; such a composite is often one of the aims of institutional design.

34. For instance, to be a good businessman may require breaking religious norms or family role obligations. Besides those rules which indicate what is the morally right thing to do, there are other rules which indicate what is practical, advantageous, or necessary (possibly even with arguments about the way immoral behavior will contribute to morality that is otherwise compromised at the moment).

Thus, even in the absence of a fully "harmonic" world (which can only be an ideal),35 one may build into an institutional arrangement strategies, procedures, roles, etc. to deal with many inconsistencies and contradictions. Failure to do so results in uncertainty, tensions, conflict and major performance failings. Even given complete unanimity on some procedures or outcomes, normative equilibria – from a multi-dimensional perspective – may be problematic. One may follow or adhere to a right and proper procedure or institutional arrangement, but in doing so produce outcomes judged to be normatively unacceptable. Or right and proper outcomes may be obtainable, but the means of accomplishing them (that is, the particular procedures and institutional arrangements) are judged as improper or illegitimate. Actors try to construct institutions and patterns of action which realize composite normative equilibria – reflecting an "ethics of responsibility" (Weber, 1977). For instance, one tries on the individual level to balance the "demands" (values and expectations) of, for instance, work, family, and church. On the institutional level, one tries to combine, for instance, fairness with efficiency. Resolutions are sought through composite institutional design, ecological partitioning of domains, or through prioritizing the norms. In the absence of such resolutions, actors will experience uncertainty and lack confidence in their capabilities to act properly and effectively. There will be unpredictability and instability. The great pluralism of human values – and their contradictory relations – is expressed in public life in the variety of forms of disagreement and conflict. In sum, the multiplicity of normative equilibria follows directly from the notion of a value complex with multiple norms and values. It also reflects the fact that multiple perspectives or agencies are possible in any given situation.
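The two resolution strategies described above – meta-values that rank-order contradictory norms, and "ecological" partitioning of norm complexes across distinct situations – can be sketched as follows. The priority ordering and domain labels are our own illustrative assumptions, not part of the GGT formalism.

```python
# Sketch of two resolution strategies for normative contradictions
# (illustrative labels; not the authors' formalism).

# (1) Meta-value prioritization: when applicable norms prescribe
# incompatible actions, the norm ranked highest by the meta-value prevails.
META_PRIORITY = ["family-obligation", "workplace-role", "personal-gain"]

def resolve(applicable_norms):
    """Return the applicable norm with the highest meta-value priority."""
    return min(applicable_norms, key=META_PRIORITY.index)

conflict = ["workplace-role", "family-obligation"]  # role conflict, case (1)
print(resolve(conflict))  # family-obligation

# (2) Ecological partitioning: complex C applies in situation S1, a distinct
# complex C* in S2 (C != C*, S1 != S2), so the norms never clash in practice.
DOMAIN_NORMS = {
    "workplace": {"efficiency", "hierarchy"},  # C, applicable in S1
    "family": {"loyalty", "care"},             # C*, applicable in S2
}

def active_norms(situation):
    """Only the complex assigned to the current situation is activated."""
    return DOMAIN_NORMS[situation]

print(active_norms("workplace") != active_norms("family"))  # True: C != C*
```

The partitioning sketch makes the text's point concrete: contradictions need not surface as long as the incompatible complexes are never activated in the same situation.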
Judgment perspectives differ on individual, subgroup, organizational, and systemic levels.

Procedures to Resolve Conflicts and Construct Normative Equilibria

Modern societies are characterized by substantial differences in values and lifestyles, endowments, powers, wealth, etc. And there are substantial differences among individuals and groups in their commitments to diverse values and norms, for instance, about what they believe to be "good" for themselves as well as for society. No single norm or value complex applies. Even in cases where participating actors apply the same "appropriate value complex," they may come to divergent interpretations and judgments. In a word, disagreement. How may social consensus – and normative equilibria – be achieved, if at all, under conditions of conflicting perspectives and attitudes? One option is simply to impose an order by force. This establishes an equilibrium, which may be a normative equilibrium for the dominant agent (Burns and Gomolińska, 2000). But those coerced into this social order are unlikely to accept the equilibrium as normatively right and proper. It would lack moral force and the readiness of actors to adhere to the equilibrium pattern when not observed or forced (nor would they be particularly inclined to pressure others to conform). Such a coercive equilibrium fails to resolve the conflicts and to achieve consensus – and it obviates the use and development of institutionalized procedures to achieve "consensus" on important public issues or collective choices, as essential to, and observable in, democratic societies. Collective deliberative processes are likely to lead to the conclusion of a "stable social contract" (Rawls, 1993) – that is, one form of
35. One form of the ideal is that one can find a greater value that encompasses contradictory norms, transcending them in a certain sense; or one finds a symbolic/expressive way of acting that appears to satisfy or realize both norms.
normative equilibrium. This relates, as we shall see, to the legitimating force of democratic and related processes. Adjudication, democratic procedure, and negotiation are three of the major ways of dealing with conflict and disequilibria in modern society and achieving normative equilibria – that is, of legitimizing social choices.36 Contentious issues or conflicts are submitted to institutionalized procedures for adjudication or democratic choice (Burns and Roszkowska, 2007). Procedural and outcome equilibria both refer to states of the system that actors judge to be fair or just according to shared or community norms and values. The application of a legitimating procedure is tantamount to satisfying a norm or principle. The procedure is judged by the community as a whole as the right and proper way to do things, whatever the outcome of the procedure (or, at least, within certain limits). This might be the case, for example, in utilizing a democratic procedure to make a collective choice (rather than utilizing bureaucratic or authoritarian procedures). Voting conducted according to a democratic procedure or algorithm is intended to legitimize the outcomes of voting as just or right per se, i.e. whatever they might be.37 The correct following of such normatively prescribed procedures is expected to result in normative equilibrium outcomes. They are normative equilibria because they have been arrived at or constructed in ways which are socially recognized as "just," "right and proper", or legitimate. In such ways, many institutional arrangements, laws, and a constitution may be established as normative equilibria in their own right.38 Procedures of adjudication, democratic vote, and right and proper negotiation are social technologies to resolve conflicts and establish normative equilibria.
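A legitimating procedure in this sense can be sketched as code: the outcome acquires normative standing from the correct application of the agreed procedure, whatever the outcome happens to be. The eligibility and quorum rules below are our own illustrative assumptions, not part of GGT.

```python
from collections import Counter

def majority_vote(ballots, eligible, quorum=0.5):
    """Simple-majority procedure (illustrative).  Returns (outcome, legitimate):
    the outcome carries normative force only if the procedure was correctly
    applied (eligible voters only, quorum reached) -- regardless of which
    option happens to win."""
    if not set(ballots) <= eligible:
        return None, False                       # procedural violation
    if len(ballots) < quorum * len(eligible):
        return None, False                       # quorum not reached
    outcome, _ = Counter(ballots.values()).most_common(1)[0]
    return outcome, True

eligible = {"a", "b", "c", "d", "e"}
ballots = {"a": "option-X", "b": "option-X", "c": "option-Y", "d": "option-X"}

print(majority_vote(ballots, eligible))            # ('option-X', True)
print(majority_vote({"z": "option-X"}, eligible))  # (None, False): ineligible voter
```

The point of the sketch is the separation of the two return values: legitimacy attaches to the procedure's correct execution, not to the content of the winning option – the "institutional alchemy" discussed below.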
Properties of such procedures are two-fold: (1) they embody or realize a particular norm or normative complex, having a moral weight or legitimacy in itself, for instance in terms of entailing standards of transparency and fairness; (2) the procedure leads by definition – or is believed to lead – to fair or just outcomes, whatever these may be. The weight of legitimacy conferred by rigorously following the procedure takes precedence over the actual outcomes. In general, normatively grounded outcome equilibria may be obtained through the utilization of a collectively agreed-on procedure to choose among conflicting values or to resolve value conflicts; the value of the resulting outcomes may derive from the procedure itself – a type of institutional alchemy. A voting or negotiation procedure with the participation of opposing agents legitimizes the resulting outcomes and gives them normative force. An outcome is collectively defined or understood as right and proper by virtue of having resulted from application of the correct procedure.39 Participants (and other agents) are likely to sanction negatively those who refuse to accept the procedure and its outcomes, for the refusal is tantamount to criticizing or denigrating the procedure. As we show elsewhere, there are limits, however, to the legitimation of outcomes (Burns and Gomolińska, 2000; Burns, Gomolińska, and Meeker, 2001). This is obvious in the case of issues that are viewed by some or many of the participants as sacred in character.

36. Competitive markets and market negotiations are other institutional arrangements which enable a society to deal systematically with many conflicts, and where the participants produce “contracts” that are normative equilibria.
37. But, on one level of judgment, outcomes of voting where there is no consensus are, in a certain sense, disequilibria.
38. Typically, there are restrictions on what outcomes would be acceptable, for example because of constitutional restrictions, or the restrictions of powerful community norms which make certain patterns not even publicly conceivable.
39. Of course, the informational conditions and the context itself may contribute to an effective procedure. Rawls’s “veil of ignorance” is a social condition which facilitates arriving at normative agreements. Also, there may be roles such as judges and mediators which carry normative weight in conducting procedures, so that they are likely to lead to normative equilibria. This raises questions about the design of such procedures, a matter which cannot be addressed here.
5. Concluding Remarks Regarding GGT and Classical Game Theory

Our generalization of classical game theory implies that there are multiple game theories or models reflecting or referring to different social relationships and their corresponding rationalities or interaction logics. Classical game theory is, therefore, a quite general but nevertheless limited model in its scope. It is applicable to a particular type of social relationship: namely that between unrelated or anomic agents acting and interacting in accordance with particular “rationality” rules and modalities. The actors lack sentiments, either for or against, toward one another, and they are purely egoistic in their relationship. Moreover, their games are closed ones: they may not change the rules, such as the number and qualities of participants, the specific action opportunity structures and outcomes, the shared modality of action, their value complexes, or their models of the interaction situation. The creative aspect of all human action, as exhibited in open games, has been acknowledged by Tsebelis (1990), but he recognizes that such problems cannot be addressed systematically within the classical game theory framework. Table 4 identifies several key dimensions which distinguish game theory (and rational choice theory), on the one hand, and GGT, on the other. While sharing a number of common elements, GGT and game theory exhibit substantial differences in conceptualizing and modeling human action and interaction. GGT has been applied to a wide variety of social phenomena, among others:
– formalization of social relationships, roles, and judgment and action modalities as rule complexes (Burns and Gomolińska, 2000; Burns, Gomolińska, and Meeker, 2001; Gomolińska, 1999, 2002, 2004, among others);
– reconceptualization of the prisoner’s dilemma game and other classical games as socially embedded games (Burns, Gomolińska, and Meeker, 2001; Burns and Roszkowska, 2004, 2006; Burns et al., 2005a);
– models of societal conflict resolution and regulation (Burns et al., 2005a; Burns and Roszkowska, 2007);
– rethinking the Nash equilibrium (Burns and Roszkowska, 2004, 2006; Burns et al., 2005b);
– fuzzy games and equilibria (Burns and Roszkowska, 2002, 2004);
– socio-cognitive analysis and belief revision (Burns and Gomolińska, 2001; Roszkowska and Burns, 2002; Burns and Roszkowska, 2005b, 2006);
– simulation studies in which GGT is applied, for instance in the formulation of multi-agent simulation models of regulatory processes (Burns et al., 2005a, 2005b).
Table 4. Comparison of Generalized Game Theory (GGT) and Classical Game Theory

Rules and constraints
  GGT: The game rule complex G(t), together with physical and ecological constraints, structures and regulates action and interaction.
  Classical: Game constraints (“rules”), which include physical constraints.

Players
  GGT: Diverse types of actors in varying roles; actors as creative, interpreting, and transforming beings.
  Classical: Universal, super-rational agents lacking creativity and transformative capabilities.

Symmetry
  GGT: Games may be symmetrical or asymmetrical: actors have different roles, positions of status and power, and endowments; also diversity in role components (value, model, act, judgment/modality, etc.). They operate in different social and psychological contexts.
  Classical: Mainly symmetry.

Game transformation
  GGT: Game transformation based on the innovative or creative capabilities of players; exogenous agents may also engage in shaping and reshaping games.
  Classical: Game structures are fixed.

Openness
  GGT: Open and closed games (this follows from the preceding).
  Classical: Closed games.

VALUE(i,G(t)) complex
  GGT: A player’s value and evaluative structures derive from the social context of the game (institutional setup, social relationships, and particular roles).
  Classical: Utility function or preference ordering is given and exogenous to the game.

MODEL(i,G(t)) complex
  GGT: A player’s model of the game situation, which may be based on highly incomplete, fuzzy, or even false information; imprecise (or fuzzy/rough) data as well as imprecise rules and norms, strategies, and judgment processes; reasoning processes may or may not follow standard logic.
  Classical: Perfect or minimally imperfect information about the game, its players, their options, payoffs, and preference structures or utilities; crisp information, strategies, and decisions.

ACT(i,G) complex and communication
  GGT: The repertoire of acts, strategies, routines, programs, and actions available to player i in her particular role and role relationships in the game situation. Communication conditions and forms are specified by the rules defining action opportunities in a given game; the diverse forms of communication and their uses or functions affect game processes and outcomes, for instance to provide information or to influence the beliefs and judgments of the other. Communication may even entail deception and fabrication. Moreover, actors may or may not use available opportunities in the interaction situation to communicate with one another or to follow the same rules (degree of asymmetry).
  Classical: Set of possible strategies and communication conditions. Communication rules are axioms at the start of the game and apply to all players: non-cooperative games do not allow for communication; cooperative games allow for communication (and the making of binding agreements).

JUDGMENT/MODALITY: J(i,G(t)) complex
  GGT: Multiple modalities of action determination, including instrumental, normative, habitual, play, and emotional modes, among others, which depend on context and definitions of appropriateness. The universal motivational factor is the human drive to realize or achieve particular value(s) or norm(s).
  Classical: Singular modality: instrumental rationality or “rational choice”; maximization of expected utility as a universal choice principle.

Capabilities
  GGT: Bounded capabilities of cognition, judgment, and choice. Contradiction, incoherence, and dilemmas arise because of multiple values and norms which do not always fit together in a given situation; consistency and coherence are socially constructed and vulnerable.
  Classical: Super-capabilities of deliberation and choice according to fixed axioms of rationality; the Hamlet syndrome is not possible.

Solution concept
  GGT: “Solutions” are defined from the particular standpoint or model of each player; disagreement among actors about appropriate or satisfactory solutions is expected. A common or general game solution satisfies or realizes the values or goals of the multiple players in the game.
  Classical: An “equilibrium” is the solution to the game.

Equilibria
  GGT: Different types of equilibria: generalized Nash equilibrium, normative and other social equilibria, including equilibria imposed by an authority or dictator.
  Classical: Mainly the Nash equilibrium (which conflates different types of socially distinct and meaningful equilibria).

Morality
  GGT: Sacred norms and values which command commitment and sacrifice; some values belong to a sacred core, grounded in identity, status, role(s), and institutions, to which agents may be strongly committed. “Not everything is negotiable.”
  Classical: Nothing is sacred.
References

[1] Burns, T. R. (1990) “Models of Social and Market Exchange: Toward a Sociological Theory of Games and Social Behavior.” In: C. Calhoun, M. W. Meyer, and W. R. Scott (eds.), Structures of Power and Constraints: Papers in Honor of Peter Blau. Cambridge: Cambridge University Press.
[2] Burns, T. R. (1994) “Two Conceptions of Human Agency: Rational Choice Theory and the Social Theory of Action.” In: P. Sztompka (ed.), Human Agency and the Reorientation of Social Theory. Amsterdam: Gordon and Breach.
[3] Burns, T. R., Baumgartner, T. and DeVillé, P. (1985) Man, Decisions, Society. London/New York: Gordon and Breach.
[4] Burns, T. R., Caldas, J. C. and Roszkowska, E. (2005a) “Generalized Game Theory’s Contribution to Multi-agent Modelling: Addressing Problems of Social Regulation, Social Order, and Effective Security.” In: B. Dunin-Keplicz, A. Jankowski, A. Skowron, and M. Szczuka (eds.), Monitoring, Security and Rescue Techniques in Multiagent Systems. Berlin/London: Springer-Verlag.
[5] Burns, T. R., Caldas, J. C., Roszkowska, E. and Wang, J. (2005b) “Multi-Agent Modelling of Institutional Mechanisms: The Approach of Generalized Game Theory.” Paper presented at the World Congress of the International Institute of Sociology, Stockholm, Sweden, July 2005.
[6] Burns, T. R. and Dietz, T. (2001) “Revolution: The Perspective of the Theory of Socio-cultural Evolution.” International Sociology, in preparation.
[7] Burns, T. R. and Dietz, T. (1992) “Cultural Evolution: Social Rule Systems, Selection, and Human Agency.” International Sociology, Vol. 7:259–283.
[8] Burns, T. R. and Flam, H. (1987) The Shaping of Social Organization: Social Rule System Theory with Applications. London: Sage Publications.
[9] Burns, T. R. and Gomolińska, A. (1998) “Modeling Social Game Systems by Rule Complexes.” In: L. Polkowski and A. Skowron (eds.), Rough Sets and Current Trends in Computing. Berlin/Heidelberg: Springer-Verlag.
[10] Burns, T. R. and Gomolińska, A. (2000) “The Theory of Socially Embedded Games: The Mathematics of Social Relationships, Rule Complexes, and Action Modalities.” Quality and Quantity: International Journal of Methodology, Vol. 34(4):379–406.
[11] Burns, T. R. and Gomolińska, A. (2001) “Socio-cognitive Mechanisms of Belief Change: Application of Generalized Game Theory to Belief Revision, Social Fabrication, and Self-fulfilling Prophesy.” Cognitive Systems, Vol. 2(1):39–54.
[12] Burns, T. R., Gomolińska, A. and Meeker, L. D. (2001) “The Theory of Socially Embedded Games: Applications and Extensions to Open and Closed Games.” Quality and Quantity: International Journal of Methodology, Vol. 35(1):1–32.
[13] Burns, T. R. and Meeker, D. (1973) “A Mathematical Model of Multi-Dimensional Evaluation, Decision-making, and Social Interaction.” In: Cochrane and Zeleny (1973).
[14] Burns, T. R. and Meeker, D. (1974) “Structural Properties and Resolutions of the Prisoners’ Dilemma Game.” In: A. Rapoport (ed.), Game Theory as a Theory of Conflict Resolution. Dordrecht: Reidel.
[15] Burns, T. R. and Roszkowska, E. (2002) “Fuzzy Judgment in Bargaining Games: Diverse Patterns of Price Determination and Transaction in Buyer–Seller Exchange.” Paper presented at the 13th World Congress of Economics, Lisbon, Portugal, September 9–13. Also appears as Working Paper No. 338, Institute of Mathematical Economics, University of Bielefeld, 2002 (http://www.wiwi.uni-bielefeld.de/~imw/papers/338).
[16] Burns, T. R. and Roszkowska, E. (2004) “Fuzzy Games and Equilibria: The Perspective of the General Theory of Games on Nash and Normative Equilibria.” In: S. K. Pal, L. Polkowski, and A. Skowron (eds.), Rough-Neural Computing: Techniques for Computing with Words. Springer-Verlag, pp. 435–470.
[17] Burns, T. R. and Roszkowska, E. (2005a) “Generalized Game Theory: Assumptions, Principles, and Elaborations Grounded in Social Theory.” In Search of Social Order, Studies in Logic, Grammar, and Rhetoric, Vol. 8(21):7–40.
[18] Burns, T. R. and Roszkowska, E. (2005b) “Social Judgment in Multi-Agent Systems: The Perspective of Generalized Game Theory.” In: R. Sun (ed.), Cognition and Multi-agent Interaction. Cambridge: Cambridge University Press.
[19] Burns, T. R. and Roszkowska, E. (2006) “Economic and Social Equilibria: The Perspective of GGT.” Optimum – Studia Ekonomiczne, Nr 3(31):16–45.
[20] Burns, T. R. and Roszkowska, E. (2007) “Conflict and Conflict Resolution: A Societal-Institutional Perspective.” In: M. Raith (ed.), Procedural Approaches to Conflict Resolution. Berlin/London: Springer. In press.
[21] Cochrane, J. and Zeleny, M. (eds.) (1973) Multiple Criteria Decision Making. Columbia, S.C.: University of South Carolina Press.
[22] Gomolińska, A. (1999) “Rule Complexes for Representing Social Actors and Interactions.” Studies in Logic, Grammar, and Rhetoric, Vol. 3(16):95–108.
[23] Gomolińska, A. (2002) “Derivability of Rules from Rule Complexes.” Logic and Logical Philosophy, Vol. 10:21–44.
[24] Gomolińska, A. (2004) “Fundamental Mathematical Notions of the Theory of Socially Embedded Games: A Granular Computing Perspective.” In: S. K. Pal, L. Polkowski, and A. Skowron (eds.), Rough-Neural Computing: Techniques for Computing with Words. Berlin/London: Springer-Verlag, pp. 411–434.
[25] Gomolińska, A. (2005) “Toward Rough Applicability of Rules.” In: B. Dunin-Keplicz, A. Jankowski, A. Skowron, and M. Szczuka (eds.), Monitoring, Security, and Rescue Techniques in Multiagent Systems. Berlin/London: Springer-Verlag, pp. 203–214.
[26] Granovetter, M. (1985) “Economic Action and Social Structure: The Problem of Embeddedness.” American Journal of Sociology, Vol. 91:481–510.
[27] Hodgson, G. M. (2002) “The Evolution of Institutions: An Agenda for Future Theoretical Research.” Constitutional Political Economy, pp. 111–127.
[28] March, J. G. and Olsen, J. P. (1989) Rediscovering Institutions: The Organizational Basis of Politics. New York: Free Press.
[29] Merton, R. K. (1968) Social Theory and Social Structure. Glencoe, Ill.: Free Press.
[30] Muldrew, J. C. (1993) “Interpreting the Market: The Ethics of Credit and Community Relations in Early Modern England.” Social History.
[31] North, D. C. (1990) Institutions, Institutional Change, and Economic Performance. Cambridge: Cambridge University Press.
[32] Ostrom, E. (1990) Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge: Cambridge University Press.
[33] Powell, W. W. and DiMaggio, P. J. (eds.) (1991) The New Institutionalism in Organizational Analysis. Chicago: University of Chicago Press.
[34] Rawls, J. (1993) Political Liberalism. New York: Columbia University Press.
[35] Reiter, R. (1980) “A Logic for Default Reasoning.” Artificial Intelligence, Vol. 13:81–132.
[36] Roszkowska, E. and Burns, T. R. (2000) “Market Transaction Games and Price Determination: The Perspective of the General Theory of Games.” Paper presented at Games 2000, the First World Congress of Game Theory, Bilbao, Spain, July 24–28.
[37] Scharpf, F. W. (1997) Games Real Actors Play: Actor-Centered Institutionalism in Policy Research. Boulder, Colorado: Westview Press.
[38] Schmid, M. and Wuketits, F. M. (eds.) (1987) Evolutionary Theory in the Social Sciences. Dordrecht: Reidel.
[39] Schelling, T. C. (1963) The Strategy of Conflict. Cambridge: Harvard University Press.
[40] Scott, W. R. (1995) Institutions and Organizations. London: Sage Publications.
[41] Sen, A. (1998) “Social Choice and Freedom.” Nobel Prize Lecture, University of Uppsala, Uppsala, Sweden, December 13, 1998.
[42] Simon, H. (1969) The Sciences of the Artificial. Cambridge: MIT Press.
[43] Sun, R. (1995) “Robust Reasoning: Integrating Rule-based and Similarity-based Reasoning.” Artificial Intelligence, Vol. 75(2):241–295.
[44] Tsebelis, G. (1990) Nested Games: Rational Choice in Comparative Politics. Berkeley: University of California Press.
[45] von Neumann, J. and Morgenstern, O. (1944) Theory of Games and Economic Behavior. Princeton: Princeton University Press.
[46] Weber, M. (1977) From Max Weber: Essays in Sociology. Edited with an introduction by H. H. Gerth and C. W. Mills. Oxford: Routledge and Kegan Paul.
[47] Winch, P. (1958) The Idea of a Social Science and Its Relation to Philosophy. London: Routledge & Kegan Paul.
[48] Wittgenstein, L. (1958) Remarks on the Foundations of Mathematics. Oxford: Blackwell.
Advances in Multiple Criteria Decision Making and Human Systems Management Y. Shi et al. (Eds.) IOS Press, 2007 © 2007 The authors. All rights reserved.
Comparing Economic Development and Social Welfare in the OECD Countries: A Multicriteria Analysis Approach

Evangelos GRIGOROUDIS*, Michael NEOPHYTOU and Constantin ZOPOUNIDIS
Technical University of Crete, Department of Production Engineering and Management, University Campus, 73100 Chania, Greece

Abstract. Although the evaluation problem of the socio-economic development of countries has been extensively studied, there is still considerable debate among economists about the assessment of this concept. Accordingly, a large number of quantitative macroeconomic indicators and alternative methodological approaches have been proposed in order to analyze and compare economic and human development. The accordance between citizens’ prosperity and the development of a country’s economy is an interesting issue that may be discussed in the framework of multicriteria analysis. The main objective of the presented study is to compare economic development and social welfare and to explore how countries’ performance on these dimensions is related. For this reason, a large number of macroeconomic indicators have been assessed as evaluation criteria for either economic development or social welfare. The evaluation methodology is based on the multicriteria method PROMETHEE II, in which a large number of scenarios with different distributions of criteria weights are examined. The presented pilot application refers to the thirty (30) member countries of the Organization for Economic Co-operation and Development (OECD), for which macroeconomic data for the period 1990–2002 have been collected. The most important results are presented through a series of relative comparison diagrams. These diagrams analyze the performance of a country in comparison to the performance of other countries. Additionally, they present the evolution of social welfare and economic development performance during the examined time period.
The results provided seem to justify the perception that economic development and social welfare are strongly related, although they are not always in complete accordance.

Keywords. PROMETHEE method, Economic development, Social welfare
1. Introduction

Two of the main characteristics of a society – and by extension of a country – are: a) its economy and the degree to which it is developed, and b) the prosperity of its members. Analyzing these characteristics has been considered a major socio-economic problem over the last decades, one strongly affected by changes in the general socio-economic environment (e.g. globalization, trade liberalization, technological progress, etc.). For this reason, many governments and policy makers have made significant efforts focusing on increasing the productivity and advancing the competitiveness of country economies, which may be considered basic conditions for achieving higher development rates.*
* Corresponding author: Tel. +30-28210-37346, Fax +30-28210-69410, Email: [email protected].
On the other hand, governments have to adopt policies to tackle the negative effects arising from globalization and trade liberalization, so as to ensure that trade, natural resources, human capabilities, research and educational institutions, government organizations, financial systems and cultural and social values are mutually supportive in favor of sustainable development. Evidently, all these efforts oriented toward ensuring higher development rates may also affect the prosperity of a society. Thus, it is important to explore and analyze how economic development is related to social welfare. According to the literature, the desirability of economic growth is an emerging question in contemporary development studies. As noted by Clarke (2003), a dominant view both within the literature and public policy is that economic growth is desirable, as it is the best means to increase social welfare, and enhancing social welfare is a rational objective of society and governments. Apart from the aforementioned problems, there is also considerable debate on the concept of development itself. Nevertheless, many economists note that development entails a modern infrastructure (both physical and institutional) and a move away from low value-added sectors such as agriculture and natural resource extraction. Developed countries, in comparison, usually have economic systems based on continuous, self-sustaining economic growth in the tertiary and quaternary sectors and high standards of living. In this context, international organizations, governments and policy makers are interested in analyzing economic progress and social welfare in order to ensure sustainable development. This has motivated leading international organizations to analyze these issues and publish relevant reports. However, most of these reports simply include the performance of each country in several macroeconomic development indicators or present a naïve overall performance evaluation.
The main aim of the presented study is not to evaluate and rank countries based on sustainable development performance, but rather to comparatively analyze economic development and social welfare. The evaluation criteria are based on a large number of macroeconomic indicators, while the comparison concerns the member countries of the Organization for Economic Co-operation and Development (OECD). The applied methodological framework aims at evaluating separately the overall performance of economic development and of social welfare. This evaluation methodology is based on the PROMETHEE II multicriteria method, in which a large number of scenarios with different distributions of criteria weights are examined. Furthermore, a series of conceptual diagrams is proposed in order either to support the comparative analysis of different countries, or to give a clearer view of the evolution of economic and human development. This paper consists of 5 sections. Section 2 discusses the relation between human development and economic progress and presents the most important research efforts for developing a sustainable development indicator. The next section is devoted to the applied methodological approach and the data used in this study, while the empirical results of the multicriteria analysis are given in Section 4. Finally, Section 5 presents some concluding remarks, the limitations of the study, as well as future research in the context of the proposed approach.
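The ranking engine behind this methodology can be sketched in a few lines. The following is a minimal illustration of PROMETHEE II using the simple "usual" preference function (P(d) = 1 if d > 0, else 0); the function name and example data are ours, and the actual study examines many weight scenarios and does not specify its preference functions in this excerpt:

```python
def promethee_ii(scores, weights, maximize):
    """Net outranking flows of PROMETHEE II (sketch, 'usual' preference function).

    scores   : list of rows, one per alternative (e.g. country), one column per criterion
    weights  : criterion weights summing to 1
    maximize : per-criterion flags; False for decreasing criteria
               such as the inflation rate or total debt
    """
    n, m = len(scores), len(weights)
    # Flip decreasing criteria so "larger is better" holds on every column.
    signed = [[row[j] if maximize[j] else -row[j] for j in range(m)] for row in scores]
    # Aggregated preference index pi[a][b]: weighted share of criteria on which a beats b.
    pi = [[0.0] * n for _ in range(n)]
    for a in range(n):
        for b in range(n):
            if a != b:
                for j in range(m):
                    if signed[a][j] > signed[b][j]:
                        pi[a][b] += weights[j]
    phi = []
    for a in range(n):
        phi_plus = sum(pi[a]) / (n - 1)                        # how much a outranks the rest
        phi_minus = sum(pi[b][a] for b in range(n)) / (n - 1)  # how much the rest outrank a
        phi.append(phi_plus - phi_minus)                       # net flow
    return phi

# Toy example: three alternatives, two equally weighted increasing criteria.
flows = promethee_ii([[3, 2], [2, 5], [1, 1]], [0.5, 0.5], [True, True])
```

Sorting alternatives by descending net flow yields the complete PROMETHEE II preorder; the net flows always sum to zero, so a negative flow marks an alternative that is outranked more than it outranks.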
2. Human Development and Economic Progress

The linkage between human development and economic growth has been extensively studied during the last two decades. Human development has recently been advanced
110
E. Grigoroudis et al. / Comparing Economic Development and Social Welfare
as the ultimate objective of human activity in place of economic growth (Ramirez et al., 1997). As the first UNDP Human Development Report states, the basic objective of development is to create an enabling environment for people to enjoy long, healthy and creative lives (UNDP, 1990, p. 9), while human development is defined as a process of enlarging people’s choices (UNDP, 1990, p. 10). Human Development Reports have been published by the United Nations (UN) since 1990 on a yearly basis. These reports focus on four important capabilities: to lead a long and healthy life, to be knowledgeable, to have access to the resources needed for a decent standard of living, and to participate in the life of the community. In order to measure human development, the UN considers the parameters bearing on these capabilities. Although economic growth is an important factor for human development, several researchers question the applied measurement methodology. Moreover, human outcomes do not depend on economic growth and levels of national income alone. They also depend on how these resources are used (e.g. developing weapons, producing food, building palaces or providing clean water). Furthermore, human outcomes such as democratic participation in decision-making or equal rights for men and women do not depend on the income of a national economy. For these reasons, the Human Development Reports present an extensive set of indicators (33 tables and 200 indicators) on important human outcomes achieved in countries around the world, such as life expectancy at birth or under-five mortality rates (reflecting the capability to survive), or literacy rates (reflecting the capability to learn). They also include indicators on important means of achieving these capabilities, such as access to clean water, and on equity in achievement, such as the gaps between men and women in schooling or political participation.
While this rich array of indicators provides measures for evaluating progress in human development in its many dimensions, policy makers also need a summary measure to evaluate progress, particularly one that focuses more sharply on human well-being than on income. For this purpose, the Human Development Index (HDI) has been included in every Human Development Report, together with indicators for gender (the gender-related development index and the gender empowerment measure) and poverty (the human poverty index). These indices give an overview of some basic dimensions of human development, but they should be considered taking into account other indicators, as well as the assumptions about the underlying data. The HDI is a summary measure of human development that is used to rank the UN countries. It measures the average achievements in a country in three basic dimensions of human development (UNDP, 2004):
– A long and healthy life, as measured by life expectancy at birth.
– Knowledge, as measured by the adult literacy rate (weighted by 2/3) and the combined primary, secondary and tertiary gross enrolment ratio (weighted by 1/3).
– A decent standard of living, as measured by Gross Domestic Product (GDP) per capita.
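The three dimensions above combine into the HDI by normalizing each achievement between fixed goalposts and averaging. The sketch below follows the pre-2010 UNDP methodology as we understand it (the goalposts shown are the published min/max values of that period); treat it as an illustrative reconstruction, not official UNDP code:

```python
import math

# Goalposts of the pre-2010 UNDP methodology (assumed here):
LIFE_MIN, LIFE_MAX = 25.0, 85.0      # life expectancy at birth (years)
GDP_MIN, GDP_MAX = 100.0, 40000.0    # GDP per capita (PPP US$), taken on a log scale

def dim_index(value, lo, hi):
    """Normalize an achievement onto [0, 1] between fixed goalposts."""
    return (value - lo) / (hi - lo)

def hdi(life_expectancy, adult_literacy_pct, gross_enrolment_pct, gdp_per_capita):
    life = dim_index(life_expectancy, LIFE_MIN, LIFE_MAX)
    # Education combines literacy (weight 2/3) and enrolment (weight 1/3),
    # both already on 0-100 scales.
    education = (2 / 3) * adult_literacy_pct / 100 + (1 / 3) * gross_enrolment_pct / 100
    # Income is logged to reflect the diminishing welfare contribution of income.
    income = dim_index(math.log(gdp_per_capita), math.log(GDP_MIN), math.log(GDP_MAX))
    return (life + education + income) / 3
```

For instance, `hdi(78, 99, 93, 30000)` returns roughly 0.935, in the range typical of high-income OECD countries (the input figures are hypothetical).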
Although the HDI is a useful starting point, it is important to mention that the concept of human development is much broader and more complex than any summary measure can capture, even when supplemented by other indicators. The HDI is not a comprehensive measure, since it does not include important aspects of human development, notably the ability to participate in the decisions that affect one’s life and to enjoy the respect of others in the community. A person can be rich, healthy and well educated, but without this ability human development is impeded.

In the same spirit as the UN, the OECD publishes on a yearly basis a large number of reports on economic development and social welfare. The annual report of the OECD measures the economic and social development of each member country using a large number of indicators. It presents the work of the organization during the past year and also discusses future issues concerning the global economy and social welfare. For example, in the 2004 annual report, the OECD highlights the need for new investment in the economy, particularly in new technologies. It also encourages member countries’ governments to reduce unemployment by creating new jobs for people over 55 years old (OECD, 2004a). During 2001 the OECD decided to include a list of additional social indicators that had been produced in the past but were “out of fashion” during the 1980s and the 1990s. This new list attempts to satisfy the growing demand for quantitative evidence on whether our societies are getting more or less unequal, healthy, dependent and cohesive. These social indicators place emphasis on child well-being and disabled people (OECD, 2003).

Other researchers have also developed several similar indices to measure and compare the benefits and costs of growth. The Index of Sustainable Economic Welfare (ISEW), proposed by Daly and Cobb (1989), is one of the very first and most important efforts. The ISEW tries to describe the change of sustainable economic welfare by measuring the portion of economic activity that delivers genuine increases in our quality of life (i.e. “quality” economic activity, in one sense).
For example, it subtracts the cost of air pollution caused by economic activity, and adds the value of unpaid household labor, such as cleaning or child-minding. It also covers areas such as income inequality, other environmental damage, and the depletion of environmental assets. Since 1989, many studies have been published on the calculation of the ISEW, particularly for European countries (see for example Diefenbacher, 1994; Stockhammer et al., 1997; Jackson et al., 1997). The Genuine Progress Indicator (GPI) is another important concept in green and welfare economics that has been proposed as a replacement for Gross Domestic Product (GDP) and an alternative measure of economic growth (Lawn, 2003). The GPI measures whether or not a country’s growth, increased production of goods, and expanding services have actually resulted in an improvement of the welfare (or well-being) of the people in the country. Some researchers emphasize that the relation of the GDP to the GPI is analogous to the difference between the gross profit and the net profit of a company (see for example Fig. 1). Just as the net profit determines the long-term health of a company, the GPI will be zero if the increased costs of crime and pollution equal the increases in the production of goods and services (all other factors being constant). Consequently, the ISEW and the GPI are designed as more reliable approximations of the sustainable development of a nation’s citizens. The sustainable economic welfare implied here is the welfare a nation enjoys at a particular point in time, given the impact of past and present activities. In all of the aforementioned studies, despite the expressed criticism, it is accepted that a clear connection between economic growth and human development exists. On the one hand, economic growth provides the resources to permit sustained improvements in human development, while on the other hand, improvements in the quality of life may boost important economic indicators (e.g. consumption, employment, etc.).

E. Grigoroudis et al. / Comparing Economic Development and Social Welfare

Figure 1. GDP per capita vs. GPI per capita in the USA, 1950–2002, in thousands USD (Venetoulis and Cobb, 2004).
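The gross-versus-net analogy above can be made concrete with a toy calculation. The components and figures below are hypothetical, chosen only to illustrate the add-and-subtract structure of GPI-style accounting, not any published methodology:

```python
# Illustrative only: a toy GPI-style adjustment of consumption-based output.
# All figures are hypothetical; real GPI methodologies use many more
# components and country-specific valuations.

def toy_gpi(personal_consumption, unpaid_household_labor,
            income_inequality_penalty, pollution_cost, resource_depletion):
    """Add welfare-enhancing activity, subtract welfare-reducing costs."""
    additions = personal_consumption + unpaid_household_labor
    subtractions = income_inequality_penalty + pollution_cost + resource_depletion
    return additions - subtractions

# Hypothetical economy (billions of USD):
gpi = toy_gpi(personal_consumption=700.0,
              unpaid_household_labor=150.0,
              income_inequality_penalty=80.0,
              pollution_cost=120.0,
              resource_depletion=50.0)
print(gpi)   # 600.0 -- the "net" welfare figure falls short of "gross" output
```

The point of the sketch is the structure: welfare-enhancing but unpriced activity is added, while priced activity that does not improve welfare is deducted.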
3. Methodology

3.1. Economic Development and Social Welfare Indicators

The main objective of the presented study is the comparative analysis of economic development and social welfare, as mentioned in the previous sections. This comparison is based on several national, mostly macroeconomic indicators, which are usually published in order to give a clear view of the evolution of economic and human development. Data provided by the World Bank (2004) and the OECD (2003, 2004a, 2004b) have been used for the assessment of the economic development and social welfare indicators. The final selection of these indices is based on the availability of data, as well as on the desired properties that the criteria hierarchy should satisfy. In particular, the set of assessed criteria is assumed to have the properties of monotonicity, exhaustiveness, and non-redundancy (Roy, 1996). In the presented study, twelve (12) evaluation criteria of economic development have been used, as presented in Fig. 2. These criteria provide a complete view of the economy of each country and cover the following main dimensions: economic efficiency, expenditures and investments, cost of life, savings and debts, and trade. All of these indices are assumed to have a positive impact on economic development, with the exception of the inflation rate, the consumer price index, and total debt, which are treated as decreasing criteria. Similarly, a set of thirty-three (33) indicators has been used as evaluation criteria of social welfare. As shown in Fig. 3, these indicators cover the following main evaluation dimensions: population and income distribution, employment, health, education,
Figure 2. Evaluation criteria for economic development:
– Economic efficiency: gross domestic product per capita; gross value added at factor cost per capita; gross capital formation per capita.
– Expenditures: gross national expenditure per capita; household final consumption expenditure per capita; credit to private sector per capita.
– Cost of life: inflation, consumer prices; consumer price index.
– Savings-Debt: total debt service per capita; gross domestic savings per capita.
– Trade: trade per capita; trade balance.
Figure 3. Evaluation criteria for social welfare:
– Population-Income distribution: population growth; rate of population aged 0–14 and 65+ to population aged 15–64; population, female; rate of urban population to population living in the province; urban population growth to total population growth; GINI index.
– Employment: labor force, female; unemployment, total; rate of male unemployment to female unemployment; unemployment, youth total.
– Health: life expectancy at birth; birth rate, crude (per 1,000 people); hospital beds (per 1,000 people); mortality rate, infant (per 1,000 live births); health expenditure per capita; health expenditure, total; public to private expenses for health.
– Education: public spending on education; school enrollment, primary; school enrollment, secondary; school enrollment, tertiary; personal computers per 100 residents; computers installed in education to total of students.
– Environment-Energy: electric power consumption (kWh per capita); CO2 emissions (metric tons per capita); adjusted savings: carbon dioxide damage; adjusted savings: mineral depletion; adjusted savings: energy depletion.
– Transportation: air transport, passengers carried to total population; passenger cars (per 1,000 people); vehicles (per 1,000 people); passenger cars to vehicles; total road network to the total extent of country.
environment and energy, and transportation. The indicators that are assumed to have a negative impact on social welfare include: urban population (rate of urban population to population living in the province, rate of urban population growth to total population growth), unemployment (total unemployment rate, female unemployment, youth unemployment), infant mortality rate, and environment-related indices (CO2 emissions, adjusted savings from carbon dioxide damage, mineral depletion, and energy depletion). All the other indicators are considered as increasing criteria in this particular case. It should be noted that, in contrast to the evaluation of economic development, the assessment of social welfare indicators is rather difficult, given the diversity of societies and the ways they determine their quality of life. This is the main reason for the large number of indicators used in this case, which aim at covering very different aspects of social prosperity. Since comparative analysis is the main aim of the study, an effort has been made to normalize the collected data. For example, purely economic and monetary-type indicators have been assessed using PPP (Purchasing Power Parity), or measured as a percentage of GDP (Gross Domestic Product) or GNI (Gross National Income). Other indices have been measured in a percentage or proportion format, while several indicators have been normalized according to the total population of the country (i.e. per capita). The complete list of all evaluation criteria for economic development and social welfare, as well as their measurement forms, are given in Tables 1–2.

Table 1. Economic development data (evaluation criterion: measure):
– Gross domestic product per capita (+): PPP (current international $)
– Gross value added at factor cost per capita (+): PPP (current international $)
– Gross national expenditure per capita (+): PPP (current international $)
– Household final consumption expenditure per capita (+): PPP (current international $)
– Gross capital formation per capita (+): PPP (current international $)
– Inflation, consumer prices (–): annual %
– Consumer price index (–): 1990 = 100
– Total debt service per capita (–): PPP (current international $)
– Gross domestic savings per capita (+): PPP (current international $)
– Credit to private sector per capita (+): PPP (current international $)
– Trade per capita (+): PPP (current international $)
– Trade balance (+): export to import proportion

3.2. Data and Preliminary Analysis

The analysis in the presented study refers to the thirty (30) member countries of the OECD (Table 3) and covers the period from 1990 to 2002. It should be emphasized that this set includes the world’s leading economies (such as the USA and Japan), and the provided results should be discussed accordingly. However, less developed economies (such as Turkey, Mexico, and Slovakia) are also analyzed, as members of the OECD. It should be noted that this set of countries is assumed to be homogeneous, since it consists mainly of developed economies. Although it is difficult to find a universally accepted definition of developed and developing countries, many researchers emphasize that a developing country has a relatively low standard of living, an undeveloped industrial base, a low per capita income, widespread poverty, and low capital formation. Eventually, the final database includes values for the total set of 45 indicators (economic development and social welfare) for each of the 30 OECD countries and for each year in the period 1990–2002. Several preliminary statistical analyses have been conducted in order to explore this large dataset:

1. Descriptive statistics (average, range, standard deviation) have been calculated in order to test the reliability of the data. The results provided for each indicator and country show that the observed variation is not particularly high.
Table 2. Social welfare data (evaluation criterion: measure):
– Population growth (+): annual %
– Population distribution of age groups (–): rate of population aged 0–14 and 65+ to population aged 15–64 (%)
– Population, female (+): % of total
– Urban population (–): rate of urban population to population living in the province
– Urban population growth (–): urban population growth to total population growth
– GINI index (–): 0–1
– Labor force, female (+): % of total labor force
– Unemployment, total (–): % of total labor force
– Unemployment, female (–): rate of male unemployment to female unemployment
– Unemployment, youth total (–): % of total labor force ages 15–24
– Life expectancy at birth, total (+): years
– Birth rate, crude (+): per 1,000 people
– Hospital beds (+): per 1,000 people
– Mortality rate, infant (–): per 1,000 live births
– Health expenditure per capita (+): current US$
– Health expenditure, total (+): % of GDP
– Public to private expenses for health (+): % proportion
– Public spending on education, total (+): % of GDP
– School enrollment, primary (+): % gross
– School enrollment, secondary (+): % gross
– School enrollment, tertiary (+): % gross
– Personal computers (+): per 100 residents
– Computers installed in education (+): computers to total of students
– Electric power consumption (+): kWh per capita
– CO2 emissions (–): metric tons per capita
– Adjusted savings: carbon dioxide damage (–): % of GNI
– Adjusted savings: mineral depletion (–): % of GNI
– Adjusted savings: energy depletion (–): % of GNI
– Air transport (+): passengers carried to total population
– Passenger cars (+): per 1,000 people
– Vehicles (+): per 1,000 people
– Passenger cars to vehicles (+): proportion (both per 1,000 people)
– Road network (+): total road network to the total extent of country
Table 3. OECD countries: Australia (AUS), Austria (AUT), Belgium (BEL), Canada (CAN), Czech Republic (CZE), Denmark (DEN), Finland (FIN), France (FRA), Germany (DEU), Greece (GRE), Hungary (HUN), Iceland (ISL), Ireland (IRL), Italy (ITA), Japan (JPN), Korea (KOR), Luxembourg (LUX), Mexico (MEX), Netherlands (NLD), New Zealand (NZL), Norway (NOR), Poland (POL), Portugal (PRT), Slovakia (SVK), Spain (ESP), Sweden (SWE), Switzerland (CHE), Turkey (TUR), United Kingdom (UK), USA (USA).
The exceptions can be justified by the evident worldwide improvement of a particular indicator (e.g. the use of personal computers), or by the known significant progress of a particular country (e.g. the increase of GDP in Ireland, or the decrease of the inflation rate in Poland).

2. In order to test the assumed positive or negative impact of these indicators, a correlation analysis approach has been used. These analyses have been conducted separately for the economic development and the social welfare indicators. The results justify the assumed type of impact, since increasing criteria are positively correlated with each other, while the correlation between an increasing and a decreasing criterion is negative.
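The sign check described in item 2 can be sketched as follows. The data below are synthetic and the variable names are illustrative; the actual study used the OECD/World Bank panel:

```python
# Sketch of the correlation-based direction check: increasing criteria should
# correlate positively with each other and negatively with decreasing criteria.
import numpy as np

rng = np.random.default_rng(0)
gdp_pc = rng.normal(25, 5, 100)                       # increasing criterion
savings_pc = 0.2 * gdp_pc + rng.normal(0, 0.5, 100)   # increasing criterion
inflation = -0.3 * gdp_pc + rng.normal(0, 1.0, 100)   # decreasing criterion

r = np.corrcoef(np.vstack([gdp_pc, savings_pc, inflation]))
assert r[0, 1] > 0    # two increasing criteria: positive correlation
assert r[0, 2] < 0    # increasing vs. decreasing criterion: negative correlation
```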
3.3. The PROMETHEE II Method

Outranking relations constitute one of the main approaches of multicriteria decision analysis, developed by Professor Bernard Roy almost four decades ago. Roy (1968, 1996) defines the outranking relation as a binary relation S between alternatives a and b in a given set of alternatives A, such that aSb (a outranks b) if there are enough arguments to decide that a is at least as good as b, while there is no essential reason to refute this statement. The PROMETHEE II multicriteria method (Preference Ranking Organization METHod for Enrichment Evaluations), proposed by Brans and Vincke (1985), is known to be one of the most efficient and simplest outranking relation methodologies. Given that PROMETHEE is suitable for performance evaluation problems involving multiple criteria, the method is used in the presented study to evaluate a global economic development index and a global social welfare index for each country and year, based on the aforementioned marginal indicators.

The construction of the outranking relation through the PROMETHEE II method involves the consideration of the performance of the alternatives (countries) on a set of n evaluation criteria (indicators). To each criterion j a weight p_j ≥ 0 is assigned depending on its importance (the criteria weights sum up to 1, i.e. ∑_{j=1}^{n} p_j = 1). The higher the weight of a criterion, the more important it is for the evaluation of the overall performance of the alternatives. The criteria weights constitute the basis for the assessment of the degree of preference for alternative a over alternative b. This degree is represented in the preference index π(a, b), defined as follows:

π(a, b) = ∑_{j=1}^{n} p_j H_j(d_ab)    (1)

The preference index for each pair of alternatives (a, b) ranges between 0 and 1; the closer it is to 1, the stronger the preference for a over b. According to (1), the preference index is calculated as the weighted average of the partial preferences of a over b on each criterion j. To measure the partial preference of a over b on a criterion j, the function H_j(d_ab) ∈ [0, 1] is used, which is an increasing function of the difference d_ab = g_j(a) − g_j(b), where g_j(a) and g_j(b) are the performances of alternatives a and b, respectively, on criterion j. As Vincke (1992) notes, H_j is a kind of preference intensity function, and thus the following situations may occur:

c. If alternatives a and b have similar performance on criterion j, so that the preference of a over b is expected to be low, then H_j(d_ab) ≈ 0.
d. On the other hand, if d_ab = g_j(a) − g_j(b) is considerably greater than 0, then the performance of alternative a on criterion j is considerably higher than that of alternative b; consequently a is strongly preferred to b, and thus H_j(d_ab) ≈ 1.

The function H_j can take different forms, depending upon the judgment policy of the decision maker. Brans and Vincke (1985) propose six general forms, which cover a wide range of practical situations. In the presented study, the Gaussian form of H_j was used for all criteria (Fig. 4). The use of the Gaussian form requires the specification of only one parameter (σ), while the fact that it has no discontinuities contributes to the stability and robustness of the obtained results (Brans et al., 1986).

The results of the comparisons made for all pairs of alternatives (a, b) are organized in a directed graph (value outranking graph), such as the one shown in Fig. 5. The nodes of the graph represent the alternatives under consideration, whereas the arcs connecting pairs of nodes a and b represent the preference of alternative a over b (if the direction of the arc is a→b) or the opposite (if the direction of the arc is b→a). Each arc is associated with a flow representing the preference index π(a, b) as defined in (1). The sum of all flows leaving a node a is called the leaving flow of the node, denoted by φ⁺(a). The leaving flow provides a measure of the outranking character of alternative a over all the other alternatives. Similarly, the sum of all flows entering a node a is called the entering flow of the node, denoted by φ⁻(a). The entering flow measures the outranked character of alternative a compared to all other alternatives. The difference between the leaving and the entering flow, φ(a) = φ⁺(a) − φ⁻(a), gives the net flow of the node (alternative), which constitutes the overall evaluation measure of the performance of alternative a.
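The computation described in this section (Gaussian partial preferences, the weighted preference index of formula (1), and the leaving, entering, and net flows) can be sketched as follows. The performance matrix, weights, and sigma values are invented for illustration:

```python
# A minimal PROMETHEE II sketch: Gaussian preference function, preference
# indices pi(a, b), and net flows. Toy data only.
import numpy as np

def promethee2(perf, weights, sigma):
    """perf: (m alternatives x n criteria), higher is better on every criterion;
    weights: n non-negative weights summing to 1; sigma: n Gaussian parameters.
    Returns the net flow phi(a) for each alternative."""
    m, _ = perf.shape
    pi = np.zeros((m, m))                  # preference indices pi(a, b)
    for a in range(m):
        for b in range(m):
            if a == b:
                continue
            d = perf[a] - perf[b]          # criterion-wise differences d_ab
            # Gaussian partial preference: 0 for d <= 0, approaches 1 for large d
            h = np.where(d > 0, 1.0 - np.exp(-d**2 / (2.0 * sigma**2)), 0.0)
            pi[a, b] = weights @ h         # formula (1)
    leaving = pi.sum(axis=1)               # phi+(a): outranking character
    entering = pi.sum(axis=0)              # phi-(a): outranked character
    return leaving - entering              # net flow phi(a)

# Three toy "countries" scored on two criteria; the middle one is dominated:
perf = np.array([[3.0, 2.0],
                 [1.0, 1.0],
                 [2.0, 3.0]])
net = promethee2(perf, np.array([0.5, 0.5]), np.array([1.0, 1.0]))
# Net flows sum to zero, and the dominated alternative receives the lowest flow.
```

Ranking the alternatives by decreasing net flow then yields the complete order used later in the study.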
Figure 4. The Gaussian preference function H(d), approaching 1 as d grows relative to σ.
Flows:

φ(a) = φ⁺(a) − φ⁻(a) = [π(a, b) + π(a, c)] − [π(b, a) + π(c, a)]
φ(b) = φ⁺(b) − φ⁻(b) = [π(b, a) + π(b, c)] − [π(a, b) + π(c, b)]
φ(c) = φ⁺(c) − φ⁻(c) = [π(c, a) + π(c, b)] − [π(a, c) + π(b, c)]

Figure 5. Example of a value outranking graph with three alternatives a, b, and c.
Assuming that m alternatives are considered, the net flow may range in [−(m − 1), m − 1], since each preference index lies in [0, 1] and each alternative is compared with m − 1 others:

a. The case φ(a) ≈ −(m − 1) designates that alternative a is strongly outranked by the other alternatives.
b. The case φ(a) ≈ m − 1 designates that alternative a strongly outranks the other alternatives.

On the basis of their net flows, the alternatives are ranked from the best (alternatives with high positive net flows) to the worst (alternatives with low net flows). As Brans and Mareschal (2005) emphasize, the net flow φ(.) may be compared with a utility function, since it provides a complete ranking. Furthermore, it should be noted that the net flow requires relatively simple preference information (weights and preference functions), while it is based on comparative rather than absolute judgments.
Figure 6. Relative comparison diagram.
3.4. Comparison Analysis Diagrams

The implementation of the PROMETHEE method makes it possible to estimate global indicators for both economic development and social welfare. The presented comparison analysis diagrams combine these global indices in order to discriminate among the compared countries. Each of these maps is divided into quadrants according to the estimated performance on economic development and social welfare (Fig. 6):

a. Strong quadrant: countries located in this quadrant show relatively high performance in both economic development and social welfare.
b. Economic quadrant: this quadrant refers to countries whose economic development performance is comparatively stronger than their social welfare performance.
c. Weak quadrant: the performance of countries located in this quadrant is relatively low in both economic development and social welfare.
d. Social quadrant: this last quadrant refers to countries that show a relatively high global social welfare indicator, but whose performance in economic development is rather low.
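The quadrant labeling reduces to two threshold comparisons per country. A minimal sketch, with invented country scores and the centroid of the points as the cut-off level (as the study proposes for its normalized diagrams):

```python
# Sketch of the quadrant classification: compare each country's (economic,
# social) score against the centroid cut-offs. Scores are invented.

def quadrant(econ, soc, cut_econ, cut_soc):
    if econ >= cut_econ and soc >= cut_soc:
        return "strong"
    if econ >= cut_econ:
        return "economic"
    if soc >= cut_soc:
        return "social"
    return "weak"

scores = {"A": (0.4, 0.3), "B": (0.2, -0.3), "C": (-0.3, 0.2), "D": (-0.3, -0.2)}
cut_e = sum(e for e, _ in scores.values()) / len(scores)   # centroid coordinates
cut_s = sum(s for _, s in scores.values()) / len(scores)
labels = {c: quadrant(e, s, cut_e, cut_s) for c, (e, s) in scores.items()}
print(labels)  # {'A': 'strong', 'B': 'economic', 'C': 'social', 'D': 'weak'}
```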
It should be emphasized that these diagrams are relative comparison maps, and thus a normalization of the PROMETHEE results is required, according to the following formula:
X′_i = (X_i − X̄) / √( ∑_{i=1}^{n} (X_i − X̄)² ),  for i = 1, 2, …, n    (2)

where X_i is the PROMETHEE result (global economic development or social welfare index), X′_i is the normalized value of X_i, X̄ is the average of the X_i, and n is the sample size.
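Formula (2) is a mean-centering followed by division by the Euclidean norm of the centered values; a short sketch with made-up scores confirms the properties stated in (3):

```python
# Sketch of normalization formula (2): center by the mean, divide by the
# Euclidean norm of the centered values. Input scores are invented.
import numpy as np

def normalize(x):
    centered = x - x.mean()
    return centered / np.sqrt((centered**2).sum())

x = np.array([4.0, 1.0, -2.0, -3.0])     # e.g. net flows from PROMETHEE II
xp = normalize(x)
assert abs(xp.sum()) < 1e-12             # normalized values sum to 0
assert abs((xp**2).sum() - 1.0) < 1e-12  # squared normalized values sum to 1
assert np.all(np.abs(xp) <= 1.0)         # each value lies in [-1, 1]
```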
It should be noticed that the following important properties hold for the normalized data:

∑_{i=1}^{n} X′_i = 0  and  ∑_{i=1}^{n} X′²_i = 1    (3)

with −1 ≤ X′_i ≤ 1 for i = 1, 2, …, n, while the cut-off level for the axes is recalculated as the centroid of all points in the diagram. Thus, the development of relative comparison diagrams overcomes potential sensitivity problems of the PROMETHEE analysis, as well as the problem of defining appropriate cut-off levels for the economic development and social welfare axes.

4. Empirical Results

4.1. Multicriteria Analysis Approach and Scenario Development

An assessment procedure based on the PROMETHEE II method has been applied in this study in order to evaluate the performance of the OECD countries on economic development and social welfare. The procedure has been applied separately using the 12 economic development and the 33 social welfare criteria, as presented in the previous sections. The selection of an appropriate preference function H_j for each assessed criterion is one of the most important phases of the PROMETHEE methods. As previously mentioned, the Gaussian form (Gaussian criterion) has been selected for all criteria in the presented study, which is given by the following formula:

H_j(d) = 1 − exp(−d² / (2σ_j²))    (4)

where d is the difference between the performance levels of countries a and b, i.e. d = g_j(a) − g_j(b), and σ_j is the standard deviation of the values of criterion j. It should be noted that the Gaussian criterion may be considered a continuous approximation of the other types of proposed preference functions (e.g. the linear preference and indifference area functions).

For the parameter σ_j, 10 different scenarios were considered, ranging between 0.25s_j and 2.5s_j, where s_j is the standard deviation of all differences d_ab over all countries a and b on criterion j. When low values of σ_j are considered, the preference for a country a over a country b can be high even when the performance of the two countries on criterion j is similar. On the other hand, when higher values of σ_j are employed, the preference for country a over country b will be high only if the performance of country a on criterion j is considerably higher than that of country b.
Another important step in implementing the PROMETHEE II method is the assessment of the criteria weights. These weights are non-negative numbers, independent of the measurement units of the criteria, which represent the relative importance of each criterion. As emphasized by Brans and Mareschal (2005), assigning weights to the criteria is not straightforward, since it involves the priorities and perceptions of the decision-maker. However, since there is no real decision-maker in the presented study, this decision-making process is not applicable and information about the criteria weights is unavailable. Furthermore, it should be noted that several sensitivity analysis tools have been developed in the context of the PROMETHEE methodologies, in order to examine the impact of different weights on the final decision and help decision-makers fix them (see for example the Decision Lab and Promcalc software packages). For all these reasons, a simulation approach has been employed for the assessment of the criteria weights used to calculate the preference index π in formula (1). In particular, 50 random weighting scenarios were generated. In each scenario the criteria weights were considered as random numbers uniformly distributed in the interval [1, 100]; the randomly generated weights were then rescaled so that they sum up to 1. The combination of the 50 weighting scenarios with the 10 scenarios for the parameter σ resulted in 500 scenarios for each examined year. In each scenario a different evaluation score (i.e. net flow) was obtained according to the corresponding parameters of the PROMETHEE II method. The final evaluation score for each country in the examined year was calculated as the average of the results provided by the aforementioned 500 scenarios.

4.2. Comparison Analysis per Year

Based on the aforementioned multicriteria analysis approach, an overall evaluation score (i.e. the net flow of the PROMETHEE II method) for both economic development and social welfare has been calculated for each year. These scores indicate the overall performance of a country in these particular socio-economic dimensions and may provide a complete ranking of the countries. However, the aim of the presented study is to explore the relationship between economic development and social welfare and to comparatively analyze economies with different structures and orientations. Using the approach presented in Section 3.4, a series of comparison diagrams may be developed. It should be emphasized that these are purely comparative diagrams, and thus the location of any country should be perceived in comparison with the other OECD countries. Figure 7 presents these relative comparison diagrams for the 1990 and 2002 results; the most important findings may be summarized as follows:
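The 500-scenario procedure just described can be sketched as follows. The data are synthetic (fewer countries than the study's 30, for brevity), and the net-flow routine is a simplified stand-in for the full PROMETHEE II implementation:

```python
# Sketch of the scenario simulation: 10 sigma scenarios x 50 random weighting
# scenarios, final score = average net flow over all 500 runs. Synthetic data.
import numpy as np

def net_flows(perf, weights, sigma):
    """Simplified PROMETHEE II net flows with Gaussian preference functions."""
    m = perf.shape[0]
    pi = np.zeros((m, m))
    for a in range(m):
        for b in range(m):
            if a != b:
                d = perf[a] - perf[b]
                h = np.where(d > 0, 1.0 - np.exp(-d**2 / (2.0 * sigma**2)), 0.0)
                pi[a, b] = weights @ h
    return pi.sum(axis=1) - pi.sum(axis=0)

rng = np.random.default_rng(42)
perf = rng.normal(size=(10, 12))        # 10 synthetic countries, 12 criteria
# s_j: standard deviation of all pairwise differences d_ab on criterion j
s = np.array([(perf[:, None, j] - perf[None, :, j]).std()
              for j in range(perf.shape[1])])

scores = []
for c in np.linspace(0.25, 2.5, 10):    # 10 sigma scenarios: 0.25*s_j .. 2.5*s_j
    for _ in range(50):                 # 50 random weighting scenarios each
        w = rng.uniform(1, 100, size=perf.shape[1])
        w /= w.sum()                    # rescale so the weights sum to 1
        scores.append(net_flows(perf, w, c * s))
final = np.mean(scores, axis=0)         # final score: average over 500 scenarios
```

Averaging over scenarios makes the final ranking far less sensitive to any single choice of weights or preference-function parameter.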
– The majority of the countries are located in the “strong” quadrant, indicating high performance on both economic development and social welfare. This is not unexpected, since the OECD countries include the most important strong economies worldwide. However, the exact location of the countries in these comparison diagrams varies between the different time periods.
– Examining the “strong” quadrant, it can easily be observed that Luxembourg is a distinct case, since it appears to have the highest performance on economic development in both 1990 and 2002. This result may be justified by the relatively high performance of Luxembourg on several economic development indicators, like GDP per capita (the highest among the OECD countries), consumption, investments, etc.
– The less developed OECD countries (like Turkey, Mexico, Poland, and Greece) are located in the “weak” quadrant. It is important to mention that the majority of these countries did not improve their performance during the examined time period. The exception to this finding is Ireland, which managed to move to the “strong” quadrant in 2002, mainly due to significant improvements on several development indicators. This is referred to by several economists as the “Irish miracle” and is consistent with numerous economic studies (see for example Fitzpatrick and Huggins, 2004).
– New Zealand is the only country located in the “social” quadrant according to both the 1990 and 2002 results. In general, the northern European and Scandinavian countries (e.g. Iceland, Sweden) appear to have the highest performance on social welfare. These countries, however, also have a relatively high performance on economic development, and thus they are located in the “strong” quadrant. Finally, it should be noted that several countries are very close to the intersection of the axes, and thus their categorization appears rather uncertain.

Figure 7. Relative comparison diagrams (1990 vs. 2002 results).
In order to obtain an overall view of the relation between economic development and social welfare, the average net flows provided by the PROMETHEE II method are calculated. Thereby, an overall comparison diagram may be developed, taking into account the average performance of each country in the period 1990–2002. This diagram is shown in Fig. 8, where it should be noted that most of the previous findings remain valid.

Figure 8. Relative comparison diagram (average of 1990–2002 results).

The most important remark arising from Fig. 8 is that economic development and social welfare seem to be mutually dependent. This means that the citizens’ prosperity is, on average, in accordance with the performance of their country’s economy. This finding may be justified by the fact that the majority of the countries have either a high performance on both economic development and social welfare (the “strong” quadrant), or a low performance on both of these evaluation dimensions (the “weak” quadrant). The few exceptions to the previous remark include one country located in the “social” quadrant (New Zealand) and two other countries (Italy, Ireland) located in the “economic” quadrant, although the categorization of these countries may be questionable (i.e. they are located close to other quadrants as well). However, it should be emphasized that the aforementioned linkage between economic development and social welfare is limited by the content and the assumptions of the proposed approach. Thereby, it should be noted that Fig. 8 presents an “average situation” of the period 1990–2002. Also, Figs 7–8 are relative comparison diagrams, and thus the location of a country also depends on the performance of the other countries. In addition, it is important to mention that the comparison standards are the other OECD countries, i.e. the strongest economies worldwide. For these reasons, extreme cases do not appear in these comparison diagrams, i.e. countries with very high economic development and very low social welfare performance, or the opposite. Therefore, the fact that there are no countries that may be characterized as purely “economic” or purely “social” may justify the previous remark.

4.3. Comparison Analysis per Country

The results presented in this section focus on the comparative analysis of a particular country over the examined time period.
Such comparison diagrams are developed by applying the normalization formula (2) to the yearly evaluation scores of a country. The main objective of these diagrams is to present the evolution of social welfare and economic development performance during the period 1990–2002. For this reason, in this particular case, the different quadrants of the comparison diagrams concern the examined years (the labeling of the quadrants is similar to the one proposed in Section 3.4). However, it should be emphasized that in these diagrams, the performance of a country in a particular year is compared to its performance in the other years of the examined period. Thus, in contrast to the analysis presented in the previous section, these diagrams cannot be used to compare different countries. The most characteristic cases of these comparison diagrams are presented in Fig. 9, and reveal the following results:

Figure 9. Characteristic relative comparison diagrams for selected countries (USA, Japan, Greece, Ireland, Luxembourg, Switzerland).
– The USA appears to follow a clockwise movement from the “social” quadrant in 1990 to the “weak” quadrant in 2002. It is important to mention that two distinct time periods appear in this case: from 1990 to 1998 the USA improved its economic development performance (at the expense of its social welfare performance), while from 1998 to 2002 its economic development performance deteriorated significantly (social welfare performance appears unvarying during this period).
– An opposite (counterclockwise) movement appears in the cases of Japan and Greece. Japan moved from the “economic” quadrant in 1990 to the “weak” quadrant in 2002, although a temporary improvement of its social welfare performance is noticeable during 1994–2001. On the other hand, although Greece is located in the “strong” quadrant in 1990, its performance in both evaluation dimensions declined during 1991–1996 (particularly its economic development performance). However, after 1996, the performance of Greece increased significantly, and as a result, its economic development performance reached its highest value in 2002.
– As already mentioned, Ireland managed to significantly improve its economic development and social welfare performance during the examined time period. This can easily be observed in the relative comparison diagram (it moved from the “weak” quadrant in 1990 to the “strong” quadrant in 2002).
– A different situation may be noticed in the case of Luxembourg, where a movement from the “social” quadrant in 1990 to the “economic” quadrant in 2002 can be observed. This kind of movement implies an improvement of the economic development performance and a deterioration of the social welfare performance. Exactly the opposite appears in the case of Switzerland (it moved from the “economic” quadrant in 1990 to the “social” quadrant in 2002).
5. Concluding Remarks

The main objective of this paper is the empirical comparative analysis of economic development and social welfare, using a multicriteria analysis approach. The most important results are based on a series of relative comparison diagrams developed for the purpose of the analyses. The main advantage of these diagrams is that they include a normalization process that makes it possible to compare the performance of different countries in a specific year (or the performance of a particular country over a time period). Thus, the presented study demonstrates how multicriteria analysis may be applied to a complex real-world problem, such as human development evaluation.

The results provided seem to justify the perception that economic development and social welfare are strongly related. In this framework, citizens' prosperity cannot be improved without strengthening a country's economy. Although there is a huge debate on this issue, the presentation of different economic schools of thought is beyond the purpose of this paper. Of course, there are exceptions to the previous rule, showing that economic development and social welfare are not always in complete accordance. This is particularly evident in the relative comparison analysis per country.

The limitations of the presented study concern mainly the assessment of the evaluation criteria. As already mentioned, the assessed criteria hierarchy (see Figs 2–3) for both economic development and social welfare is limited, in many cases, by the availability of data. Also, criteria independence, in these value hierarchies, should be further discussed and analyzed. Moreover, it should be emphasized that in the presented study, as in all other research efforts, economic development and social welfare are not directly measured. Instead, the assessed criteria evaluate particular aspects or outcomes of these concepts.
This is the main reason why several researchers argue that it is not possible to have an objective evaluation without considering the qualitative dimension of socio-economic development. The applied simulation approach for determining appropriate criteria weights may be considered another important limitation. However, as emphasized in the previous
sections, there is no real decision-maker in the presented study, and thus criteria weights cannot be objectively assessed. Furthermore, similar approaches have also been applied in other studies, based on scenario design (Baourakis et al., 2002; 2005; Kosmidou et al., 2004) or desired properties of the criteria weights set (Despotis, 2004; 2005).

It is important to mention that, in order to examine the reliability of the provided results, additional non-parametric statistical tests have been applied. In particular, the correlation between the HDI ranking of countries and the ranking provided by the PROMETHEE results (economic development or social welfare scores) is calculated using Kendall's tau. The correlation coefficient is relatively high, varying between 0.6 and 0.7 for the examined period 1990–2002.

This paper should be considered an empirical pilot study, exploring the relation between economic development and social welfare using multicriteria decision analysis. Future research efforts may expand the comparison analysis to include other countries, since the provided results are limited by the fact that the OECD countries include the world's most developed economies. Moreover, non-monotone preferences for the evaluation criteria may also be considered. Finally, the development of a causal model may explore possible interactions among evaluation criteria. In general, the dynamic behavior of economic development and social welfare should be studied further (e.g. by exploring possible dependencies among countries' performance in different years).
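The Kendall's tau statistic used above to compare the HDI and PROMETHEE rankings can be computed directly from its definition: the difference between the numbers of concordant and discordant pairs, divided by the total number of pairs. The rankings below are made up for illustration only; they are not the paper's data.

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall's tau-a between two rankings of the same items
    (no correction for ties)."""
    n = len(rank_a)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        # Pair (i, j) is concordant if both rankings order it the same way
        s = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical HDI ranks vs. PROMETHEE ranks for six countries
hdi_ranks = [1, 2, 3, 4, 5, 6]
promethee_ranks = [2, 1, 3, 5, 4, 6]
print(round(kendall_tau(hdi_ranks, promethee_ranks), 3))  # prints 0.733
```

A value near 1 means the two rankings largely agree, so a coefficient of 0.6–0.7, as reported above, indicates substantial but not complete agreement between the HDI and PROMETHEE orderings.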
References
[1] Baourakis, G., Doumpos, M., Kalogeras, N., and Zopounidis, C. (2002). Multicriteria analysis and assessment of financial viability of agri-businesses: The case of marketing co-operatives and juice producing companies, Agribusiness, 18 (4), 543–558.
[2] Baourakis, G., Kalogeras, N., Zopounidis, C., and Van Dijk, G. (2005). Evaluating the financial performance of agri-food firms: A multicriteria decision-aid approach, Journal of Food Engineering, 70 (3), 365–371.
[3] Brans, J.P. and Mareschal, B. (2005). PROMETHEE methods, in: Figueira, J., Greco, S., and Ehrgott, M. (eds.), Multiple criteria decision analysis: State of the art surveys, Springer, New York, pp. 163–196.
[4] Brans, J.P. and Vincke, Ph. (1985). A preference ranking organization method: The PROMETHEE method for multiple criteria decision-making, Management Science, 31 (6), 647–656.
[5] Brans, J.P., Vincke, Ph., and Mareschal, B. (1986). How to rank and how to select projects: The PROMETHEE method, European Journal of Operational Research, 24 (2), 228–238.
[6] Clarke, M. (2003). Is economic growth desirable? A welfare economic analysis of the Thai experience, PhD Thesis, Victoria University, Melbourne.
[7] Daly, H.E. and Cobb, J. (1989). For the common good: Redirecting the economy towards community, the environment, and a sustainable future, Beacon Press, Boston.
[8] Despotis, D.K. (2004). A reassessment of the human development index via data envelopment analysis, Journal of the Operational Research Society, 56 (8), 969–980.
[9] Despotis, D.K. (2005). Measuring human development via data envelopment analysis: The case of Asia and the Pacific, Omega, 33 (5), 385–390.
[10] Diefenbacher, H. (1994). The Index of Sustainable Economic Welfare: A case study of the Federal Republic of Germany, in: Cobb, C. and Cobb, J.J. (eds.), The Green National Product: A Proposed Index of Sustainable Economic Welfare, University Press of America, Lanham, pp. 215–245.
[11] Fitzpatrick, R.C. and Huggins, L.P. (2004). The Irish economic resurgence and small nation development, Employee Responsibilities and Rights Journal, 13 (3), 135–145.
[12] Jackson, T., Marks, N., Ralls, J., and Stymne, S. (1997). Sustainable economic welfare in the UK, 1950–1996, New Economics Foundation, London.
[13] Kosmidou, K., Doumpos, M., Voulgaris, F., and Zopounidis, C. (2004). Economic and technological aspects of the European competitiveness: A multicriteria approach, Journal of Economic Integration, 19 (4), 690–703.
[14] Lawn, P.A. (2003). A theoretical foundation to support the Index of Sustainable Economic Welfare (ISEW), Genuine Progress Indicator (GPI), and other related indexes, Ecological Economics, 44 (1), 105–118.
[15] OECD (2003). Society at a glance: OECD social indicators, OECD Publication, available at http://www.oecd.org.
[16] OECD (2004a). OECD Annual report for 2004, OECD Publication, available at http://www.oecd.org.
[17] OECD (2004b). Understanding economic growth, OECD Publication, available at http://www.oecd.org.
[18] Ramirez, A., Ranis, G., and Stewart, F. (1997). Economic growth and human development, Center Discussion Paper, 787, Yale University.
[19] Roy, B. (1968). Classement et choix en présence de points de vue multiples: La méthode ELECTRE, R.I.R.O, 8 (2), 57–75.
[20] Roy, B. (1996). Multicriteria methodology for decision aiding, Kluwer Academic Publishers, Dordrecht.
[21] Stockhammer, E., Hochreister, H., Obermayr, B., and Steiner, K. (1997). The Index of Sustainable Economic Welfare (ISEW) as an alternative to GDP in measuring economic welfare: The results of the Austrian (revised) ISEW calculation 1955–1992, Ecological Economics, 21 (1), 19–34.
[22] UNDP (1990). Human Development Report 1990, Oxford University Press, New York.
[23] UNDP (2004). Human Development Report 2004, Oxford University Press, New York.
[24] Venetoulis, J. and Cobb, C. (2004). The Genuine Progress Indicator 1950–2002 (2004 Update), Redefining Progress, available at www.rprogress.org/publications/2004/gpi_march2004update.pdf.
[25] Vincke, P. (1992). Multicriteria decision aid, John Wiley and Sons, New York.
[26] World Bank (2004). World development indicators 2004, available at http://www.worldbank.org.
Part 2 Social and Human System Management
Advances in Multiple Criteria Decision Making and Human Systems Management Y. Shi et al. (Eds.) IOS Press, 2007 © 2007 The authors. All rights reserved.
131
The Enlightenment, Popper and Einstein

Nicholas MAXWELL
Emeritus Reader and Honorary Senior Research Fellow
University College London, Gower Street, London WC1E 6BT, UK
E-mail: [email protected]

Introduction

I am delighted to be contributing to this volume in honour of Milan Zeleny. I hope he will forgive me for taking the reader, at least initially, on a journey back to the 18th century. I do this with the present very much in mind. For it is my view that the Enlightenment, despite its heroic qualities, also made serious mistakes – mistakes we still suffer from today, without realizing what they are or where they come from. The basic idea of the French Enlightenment of the 18th century was to learn from the progress of natural science how to achieve social progress towards an enlightened world. This profoundly important and immensely influential idea was passionately pursued by Voltaire, Diderot, Condorcet, and other philosophes. I shall call this idea The Enlightenment Programme. The philosophes had their hearts in the right place. Unfortunately, in developing the idea, the philosophes blundered. They sought to implement a damagingly defective version of The Enlightenment Programme. This version was further developed throughout the 19th century by all those concerned with social science, from Comte and Marx to Mill, and then built into the institutional structure of academic inquiry in the early years of the 20th century. The outcome is what we are suffering from today: a kind of academic inquiry damagingly irrational when judged from the standpoint of helping to promote human welfare. When assessed from this standpoint, academic inquiry today, I shall argue, violates three of the four most elementary rules of reason one can think of. Our global problems, I shall argue, are the outcome of this rarely noticed, severely irrational character of academic inquiry as it mostly exists today. We urgently need to bring about a revolution in the aims and methods of academe. I shall develop this argument by considering in turn four versions of The Enlightenment Programme:
1. The Traditional Enlightenment Programme.
2. The Popperian Version of the Enlightenment Programme.
3. The Improved Popperian Enlightenment Programme.
4. The New Enlightenment Programme.
As one goes down this list, defects are progressively corrected, the Programme is improved, until with The New Enlightenment one arrives at a version of The Enlightenment Programme well designed, rationally designed, to help humanity make progress towards a civilized, enlightened world. As I will explain, all too briefly, Einstein can be associated with a part of The New Enlightenment. He did not however
132
N. Maxwell / The Enlightenment, Popper and Einstein
explicitly advocate it as I shall expound it here, although I like to think that he would have approved of it. During the course of the argument I shall consider two kinds of inquiry, two conceptions of inquiry, which I shall call knowledge-inquiry and wisdom-inquiry. Knowledge-inquiry is to be associated with the first two versions of The Enlightenment Programme, wisdom-inquiry with the last two versions. Knowledge-inquiry, what we by and large have today, is so seriously irrational that it violates three of the four most basic rules of reason conceivable. Wisdom-inquiry is what results when knowledge-inquiry is modified just sufficiently to comply with all four rules of reason. The aim of achieving world enlightenment, world civilization, a good world, is of course deeply problematic. It is not just a question of how we get there; what we should be striving to achieve is in itself profoundly problematic. Most traditional ideas about what would constitute a good, a civilized world have amounted to various kinds of hells on earth, and in any case have been hopelessly unrealisable. What do I mean by an enlightened world? In the circumstances, the reader is right to be highly suspicious. All I can say, at the moment, is: please give me the benefit of the doubt for the time being. When it comes to discussing the fourth, New Enlightenment Programme I shall address this question of what we should mean by an enlightened or civilized world, what our basic aim ought to be in this context, and I hope that you will find what I say eminently sensible. The argument I am about to unfold is spelled out in much more detail in my books From Knowledge to Wisdom (Blackwell, 1984), What’s Wrong With Science? (Bran’s Head Books, 1976) and, most recently and lucidly Is Science Neurotic? (Imperial College Press, December 2004). Aspects of the argument are to be found also in The Comprehensibility of the Universe (Oxford University Press, 1998, pbk. 
2003), and The Human World in the Physical Universe (Rowman and Littlefield, 2001). See also my website: www.nick-maxwell.demon.co.uk.
1. The Traditional Enlightenment Programme

According to The Traditional Enlightenment Programme, in order to implement the basic Enlightenment idea of learning from the progress of natural science how to achieve social progress towards an enlightened world, what needs to be done is to create social science alongside natural science. Francis Bacon stressed the fundamental importance of improving our knowledge of nature in order to transform the human condition for the better. The philosophes, reasonably enough, held that it was also vitally important to improve knowledge of the social world. They, and their successors – Comte, Marx, Mill and many others – set about creating and developing social sciences: economics, anthropology, psychology, sociology, political science, history. These social sciences were then built into the institutional structure of academic inquiry in the early 20th century with the creation of departments of social science in universities all over the world. The outcome of this Traditional Enlightenment Programme is what we have, by and large, today: academic inquiry devoted to the acquisition of knowledge. First, knowledge is to be acquired; then it can be applied to help solve social problems. The intellectual aim of inquiry, of acquiring knowledge, is to be sharply distinguished from
the social or humanitarian aim of promoting human welfare. In the first instance, academic inquiry seeks to solve problems of knowledge, not social problems of living. Values, politics, expressions of feelings and desires, political philosophies and philosophies of life must all be excluded from the intellectual domain of inquiry to ensure that the pursuit of objective, factual knowledge does not degenerate into mere ideology or propaganda. In order to produce what is of real human value – genuine, objective factual knowledge – inquiry must, paradoxically, exclude from the intellectual domain of inquiry all expressions of human problems, suffering and values (although of course factual knowledge about these things can be developed). At the centre of knowledge-inquiry there is an even more restrictive conception of science. According to this orthodox view, claims to scientific knowledge must be assessed impartially with respect to the evidence, with respect to empirical success and failure. Metaphysical theses – theses which are neither empirically verifiable nor falsifiable – are to be excluded from science. (One form of this idea is Popper's famous demarcation criterion: a theory, in order to be scientific, must be falsifiable.) The Traditional Enlightenment and its outcome, knowledge-inquiry, were opposed. They were opposed by Romanticism. Whereas the Enlightenment valued science, reason, knowledge, evidence, method, the Romantic opposition found all this oppressive and dictatorial, and valued instead art, imagination, passion, inspiration, genius, self-realization. Blake, Keats, Coleridge, Kierkegaard, Nietzsche, Dostoevsky and many other poets, novelists, artists and thinkers opposed reason and science and instead put their faith in the liberating power of art, inspiration and imagination. Romanticism too had an impact on academic inquiry, on some aspects of social science, and the humanities.
It led to such movements as existentialism, phenomenology, structuralism, post-structuralism, post-modernism, and social constructivist conceptions of knowledge. Academia today might be said to consist of knowledge-inquiry – the outcome of putting The Traditional Enlightenment Programme into academic practice – plus the Romantic opposition which has influenced the fringes of academia in such areas as cultural studies, philosophy, and the history and sociology of science. Knowledge-inquiry – and especially modern science and technology – has indeed transformed the human condition for the good, just as Bacon and the philosophes hoped it would. It has led to an immense enrichment in the quality of human life, in industrially advanced countries at least. We are vastly healthier and wealthier than our ancestors of 200 years ago, thanks to modern science and technology. We have all the benefits of modern transport, communications, and other modern amenities made possible by science and technology. And science is of great value to us directly, in enhancing our knowledge and understanding of the universe and ourselves. But knowledge-inquiry has had bad effects as well. For modern science and technology have made possible modern industry and agriculture, the rapid growth of world population, which in turn have led to almost all our modern global problems:
1. Global warming.
2. The lethal character of modern war and terrorism – and the ill-conceived and dangerous “war on terrorism”. The threat posed by modern armaments, conventional, chemical, biological and nuclear.
3. Rapid population growth.
4. Gross inequalities of wealth across the globe.
5. Destruction of tropical rain forests and other natural habitats, the mass extinction of species, and the pollution of earth, sea and air.
6. Depletion of finite natural resources.
7. Dictatorial regimes (helped to stay in power by the resources of modern technology).
8. Annihilation of languages, cultures and traditional ways of life.
9. Aids epidemic (spread by modern transport, and even, possibly, by vaccination with dirty needles).
Given that world politics are run along the lines of a version of gang warfare writ large, the bad consequences of modern science and technology are all but inevitable. For modern science leads to an immense increase in our power to act (for some at least) via technology and industry. As I have indicated, this has been used for good, in countless ways but, almost inevitably, it will be, and has been, used for bad, either intentionally, as in the case of millions killed in war, or unintentionally (at least initially), as in the case of global warming and extinction of species. What has gone wrong? The source of the trouble is the profound, damaging irrationality of knowledge-inquiry, the profound defects in The Traditional Enlightenment Programme. Knowledge-inquiry is so irrational that, when judged from the standpoint of helping to promote human welfare it violates three of the four most elementary rules of reason conceivable. What do I mean by “reason”? Reason, as I use the term, appeals to the idea that there are general methods or strategies which, if put into practice, give us, other things being equal, our best chance of solving our problems, realizing our aims. Reason does not decide for us, it helps us to decide well for ourselves. Four absolutely basic rules of reason are the following:
(1) Articulate, and try to improve the articulation of, the problem to be solved.
(2) Propose and critically assess possible solutions.
(3) When necessary, break recalcitrant problems into easier-to-solve preliminary, subordinate, specialized problems.
(4) Interconnect basic and specialized problem-solving so that each may guide the other.[1]
In order to enhance the quality of human life, make progress towards an enlightened world, the problems we need to solve are, fundamentally, problems of living, problems of action, not problems of knowledge.
Even where new knowledge and technology are needed, as in agriculture or medicine for example, it is always what this enables us to do (or refrain from doing) that enables us to achieve what is of value (except, of course, in so far as new knowledge is in itself of value). Thus a kind of inquiry rationally devoted to promoting human welfare would give absolute priority to the tasks of (1) articulating our problems of living, and (2) proposing and critically assessing possible solutions, possible actions, policies, political programmes, legislation, philosophies of life. This knowledge-inquiry cannot do. The intellectual domain of knowledge-inquiry is restricted to tackling problems of knowledge. Intellectual priority cannot be given to
[1] We shall encounter these four rules of rational problem-solving again when we come to The Improved Popperian Enlightenment Programme below.
articulating, and trying to discover solutions to, problems of living within knowledge-inquiry, for problems of living and ideas for their solution require for their formulation expressions of human desires and aspirations, human suffering, values and ideals, proposals for action, political programmes and philosophies, all of which must be excluded from the intellectual domain of knowledge-inquiry. Knowledge-inquiry puts rule (3) into practice to a quite extraordinary extent. Modern academic inquiry consists of a vast maze of more and more specialized disciplines – sub-disciplines within disciplines within disciplines. But, because rules (1) and (2) are not, and cannot be, put into practice, rule (4) cannot be implemented either. If our basic problems of living, and ideas for their solution, are not articulated, specialized problem-solving pursued in accordance with rule (3) cannot guide and be guided by basic problem-solving, in accordance with rule (4). Thus rules (1), (2) and (4) are violated in a wholesale, structural way by knowledge-inquiry, by modern academic inquiry, and only rule (3) is implemented. It is this longstanding, wholesale, structural irrationality of modern academic inquiry that is at the root of our current global problems and our incapacity to tackle them effectively: the combination of an immensely successful natural science and associated technological research vastly increasing our power to act on the one hand, and the absence of inquiry rationally devoted to enhancing our power to resolve our conflicts and problems of living in increasingly cooperative ways on the other hand. Science without wisdom, one might say, is the crisis of our times, the one behind all the others. Where, exactly, did The Traditional Enlightenment Programme go wrong? 
It is important to appreciate that three steps are involved in putting the basic idea of the Enlightenment Programme into practice – the key idea, that is, of learning from scientific progress how to achieve social progress towards an enlightened world.
Step 1. Specify correctly what the progress-achieving methods of natural science are.
Step 2. Generalize these progress-achieving methods so that they become fruitfully applicable to any worthwhile, problematic human endeavour, and not just to science.
Step 3. Apply them to the highly worthwhile and problematic endeavour of achieving world enlightenment, world civilization.
The Traditional Enlightenment got (and gets) all three steps wrong. The big failure is step 3: instead of applying progress-achieving methods (generalized from those of science) to social life, to other institutions besides that of natural science, the philosophes in effect applied the methods they came up with to social science. Instead of progress-achieving methods being used to promote social progress towards an enlightened world, the methods they arrived at were used to promote knowledge of social phenomena. Academia as it exists today – knowledge-inquiry plus some Romantic opposition – is the outcome of putting into academic practice this botched version of The Enlightenment Programme, botched by the philosophes of the 18th century French Enlightenment.
2. The Popperian Version of the Enlightenment Programme

Karl Popper corrects some – but only some – of the blunders of The Traditional Enlightenment. His version of the Enlightenment Programme is to be found in his first
four books: The Logic of Scientific Discovery, The Open Society and Its Enemies, The Poverty of Historicism, and Conjectures and Refutations. Even though Popper did not present his work in this way, what one finds in these books is a line of argument that in effect amounts to a profound reformulation and improvement of The Traditional Enlightenment. The Popperian version of The Enlightenment Programme might be summed up like this: Step 1. Falsificationism. Step 2. Critical Rationalism. Step 3. The Rational Society = The Open Society. In The Logic of Scientific Discovery Popper points out that scientific theories cannot be verified, but they can be falsified. Scientific method consists in putting forward highly falsifiable conjectures, which are then subjected to ruthless attempts at empirical falsification. When a theory is falsified, scientists must think up an even more falsifiable conjecture, which predicts everything its predecessor predicts, is not falsified by the experiment that falsified its predecessor, and predicts additional phenomena as well. As a result of proceeding in this way, science is able to make progress because falsehood is constantly being detected and eliminated by this process of conjecture and refutation. As a result of discovering a theory is false, scientists are forced to try to think up something better. Popper then generalized this falsificationist conception of scientific method, in accordance with step 2 above, to form his conception of (critical) rationality, a general methodology for solving problems or making progress. As Popper puts it in The Logic of Scientific Discovery “inter-subjective testing is merely a very important aspect of the more general idea of inter-subjective criticism, or in other words, of the idea of mutual rational control by critical discussion” (p. 44). 
In The Open Society and Its Enemies and The Poverty of Historicism Popper applies critical rationalism to problems of civilization, in accordance with step 3 above of The Enlightenment Programme. From all the riches of these two books, I pick just two points, two corrections Popper makes to ideas inherited from the Enlightenment. First, there is Popper’s devastating criticism of historicism. Historicism can be viewed as the outcome of an especially defective attempt to put step 3 of The Enlightenment Programme into practice. If one seeks to develop social science alongside natural science, and if one takes the capacity of Newtonian science to predict states of the solar system far into the future as a paradigmatic achievement of natural science, one may be misled into holding that the proper task of social science is to discover laws governing social evolution. Historicism is the doctrine that such laws exist. Popper decisively demolishes historicism, and demolishes the above rationale for adopting historicism. In doing so, he demolishes one influential and especially defective version of the traditional Enlightenment Programme. Second, Popper’s revolutionary contributions to steps 1 and 2 of The Enlightenment Programme (just indicated) lead to a new idea as to what a “rational society” might be, one that is fully in accordance with liberal traditions, and not entirely at odds with such traditions. A major objection to The Enlightenment Programme is overcome. If one upholds pre-Popperian conceptions of science and reason, and construes reason, in particular, as a set of rules which determine what one must accept or do, the very idea of “the rational society” is abhorrent. It can amount to little more than a tyranny of reason,
a society in which spontaneity and freedom are crushed by the requirement that the rules of reason be obeyed. When viewed from the perspective of Popper’s falsificationism and critical rationalism, however, all this changes dramatically. Popper’s falsificationist conception of science requires that theories are severely tested empirically. But, in order to make sense of this idea of severe testing, we need to see the experimentalist as having at least the germ of an idea for a rival theory up his sleeve (otherwise testing might degenerate into performing essentially the same experiment again and again). This means experiments are always crucial experiments, attempts at trying to decide between two competing theories. Theoretical pluralism is necessary for science to be genuinely empirical. And, more generally (implementing step 2), in order to criticize an idea, one needs to have a rival idea in mind. Rationality, as construed by Popper, requires plurality of ideas, values, ways of life. Thus, for Popper, the rational society is the open society, the society in which diverse ways of life can flourish. In short, given pre-Popperian conceptions of science and reason, the Enlightenment idea of creating a rational society guides one towards a kind of tyranny of reason, the very opposite of a free or open society. Adopt improved Popperian conceptions of science and reason, and the Enlightenment ideal of the rational society is one and the same as the ideal of the free, open society. At a stroke, a major objection to the Enlightenment Programme is overcome. Despite the enormous improvements that Popper has made to The Traditional Enlightenment Programme, his version of the Programme is still defective. I now discuss two ways in which Popper’s version of the Programme needs to be improved. Both involve changing dramatically Popper’s conception of social science. It is important to note that Popper defends a highly traditional conception of social science. 
According to him, the methods of social science are broadly the same as those of natural science. But it is this key element of the 18th century Enlightenment, so profoundly influential over subsequent developments, that constitutes The Traditional Enlightenment’s greatest blunder. Popper endorses, and fails to correct, this blunder. In addition, Popper defends his falsificationist version of the orthodox view that evidence alone decides what theories are to be accepted in science: but all versions of this orthodox conception of science are untenable, as we shall see below. Furthermore, Popper defends a version of knowledge-inquiry: but this, as we have seen, is damagingly irrational. Popper’s philosophy is a step in the right direction, but further steps need to be taken.
3. The Improved Popperian Enlightenment Programme

The Improved Popperian Enlightenment Programme can be summarized like this:
Step 1. Science proceeds by implementing the four rules, (1) to (4), of rational problem-solving, indicated above. First, rules (1) and (2) are put into practice in an attempt to solve the fundamental problem: What is the nature of the universe? Unfalsifiable, metaphysical ideas are proposed and critically assessed, as attempted solutions to this fundamental problem. (This initial step was taken by the Presocratic philosophers.) Then rules (3) and (4) are put into practice. More precise, specialized, falsifiable theories are proposed, and then critically assessed from the two standpoints of (a) compatibility with the best metaphysical conjecture concerning the nature of the universe, and (b) the capacity successfully to predict empirical phenomena (criticism
138
N. Maxwell / The Enlightenment, Popper and Einstein
here taking the especially severe form of attempted empirical refutation). Almost all of science today is the outcome of implementing (3) and (4). Step 2. This problem-solving conception of scientific method is then generalized to form the four rules of rational problem-solving, (1) to (4), formulated above. Step 3. Academia is then transformed so that its basic task becomes to help humanity resolve its conflicts and problems of living in increasingly cooperatively rational ways, by putting the four rules of rational problem-solving increasingly into practice in personal, social and institutional life. Step 3 requires that knowledge-inquiry be modified just sufficiently so that the four rules of rational problem-solving are put into academic practice. If this were to be done, it would bring about a revolution in academic inquiry. The outcome would be a new kind of inquiry, which I shall call wisdom-inquiry. Let us now see, in general terms, what this new kind of inquiry would look like. First, two preliminary points. (a) In order to make progress towards a good, enlightened world, the problems we need to solve are, fundamentally, as I have already remarked, problems of living, problems of action, rather than problems of knowledge or technology. Even when new knowledge or technology is essential, as in medicine or agriculture, it is always what this enables us to do (or refrain from doing) that achieves what is of human value (except when knowledge is of value per se). (b) In order to make progress towards a good world we need to discover how to resolve our conflicts and problems of living in increasingly cooperative ways. 
There are degrees of cooperativeness, from its complete absence at one extreme – all-out annihilation of the opposition – to threat of war or murder, to threats of less extreme kinds as in industrial disputes, to manipulation, voting, bargaining, to extreme cooperation at the other extreme – the attempt being made to find that option that is in the best interests of all those involved by means of cooperatively rational discussion. In various contexts, there are limits to the extent to which cooperation is desirable. Nevertheless, in our world, plagued by brutal conflict, injustice, manipulation and threat, there is room for greater cooperation. Granted, then, that the task of academic inquiry is to put the four rules of problem-solving rationality into practice in such a way as to help humanity learn how to make progress towards a civilized, enlightened world, the primary intellectual tasks must be: (1) To articulate, and try to improve the articulation of, those social problems of living we need to solve in order to make progress towards a better world. (2) To propose and critically assess possible, and actual, increasingly cooperative social actions – these actions to be assessed for their capacity to resolve human problems and conflicts, thus enhancing the quality of human life. These intellectually fundamental tasks are undertaken by social inquiry, at the heart of the academic enterprise. Social inquiry also has the task of promoting increasingly cooperatively rational tackling of problems of living in the social world – in such contexts as politics, commerce, international affairs, industry, agriculture, the media, the law, education. Academic inquiry also needs, of course, to implement the third rule of rational problem solving; that is, it needs:
(3) To break up the basic problems of living into preliminary, simpler, analogous, subordinate, specialized problems of knowledge and technology, in an attempt to work gradually towards solutions to the basic problems of living. But, in order to ensure that specialized and basic problem solving keep in contact with one another, the fourth rule of rational problem-solving also needs to be implemented; that is, academic inquiry needs: (4) To interconnect attempts to solve basic and specialized problems, so that basic problem-solving may guide, and be guided by, specialized problem-solving. There are a number of points to note about this “rational problem-solving” conception of academic inquiry. Social inquiry is not, primarily, social science; it has, rather, the intellectually basic task of engaging in, and promoting in the social world, increasingly cooperatively rational tackling of conflicts and problems of living (see my Is Science Neurotic?, chs. 3 and 4 for further details). Social inquiry, so conceived, is actually intellectually more fundamental than natural science (which seeks to solve subordinate problems of knowledge and understanding). Academic inquiry, in seeking to promote cooperatively rational problem-solving in the social world, must engage in a two-way exchange of ideas, arguments, experiences and information with the social world. The thinking, the problem-solving, that really matters, that is really fundamental, is the thinking that we engage in, individually, socially and institutionally, as we live; the whole of academic inquiry is, in a sense, a specialized part of this, created in accordance with rule 3, but also being required to implement rule 4 (so that social and academic problem-solving may influence each other). Academic inquiry, on this model, is a kind of people’s civil service, doing openly for the public what actual civil services are supposed to do, in secret, for governments. 
Academic inquiry needs just sufficient power to retain its independence, to resist pressures from government, industry, the media, religious authorities, and public opinion, but no more. Academia proposes to, argues with, learns from, attempts to teach, and criticizes all sectors of the social world, but does not instruct or dictate. It is an intellectual resource for the public, not an intellectual bully. The basic intellectual aim of inquiry may be said to be, not knowledge, but wisdom – wisdom being understood to be the desire, the active endeavour and the capacity to realize what is desirable and of value in life, for oneself and others (“realize” meaning both “to apprehend” and “to make real”). Wisdom includes knowledge, know-how and understanding but goes beyond them in also including the desire and active striving for what is of value, the ability to experience value, actually and potentially, in the circumstances of life, the capacity to help realize what is of value for oneself and others, the capacity to help solve those problems of living that need to be solved if what is of value is to be realized, the capacity to use and develop knowledge, technology and understanding as needed for the realization of value. Wisdom, like knowledge, can be conceived of not only in personal terms but also in institutional or social terms. Thus, the basic aim of academic inquiry, according to the view being indicated here, is to help us develop wiser ways of living, wiser institutions, customs and social relations, a wiser world. Diagram 1 provides a cartoon sketch of wisdom-inquiry.
Diagram 1. Wisdom-Inquiry Implementing Rules of Rational Problem-Solving.
It is important to appreciate that the conception of academic inquiry that we are considering is designed to help us to see, to know and to understand, for their own sake, just as much as it is designed to help us solve practical problems of living. It might seem that social inquiry, in articulating problems of living and proposing possible solutions, has only a severely practical purpose. But engaging in this intellectual activity of articulating personal and social problems of living is just what we need to do if we are to develop a good empathic or “personalistic” understanding of our fellow human beings (and of ourselves) – a kind of understanding that can do justice to our humanity, to what is of value, potentially and actually, in our lives. In order to understand another person as a person (as opposed to a biological or physical system) I need to be able, in imagination, to see, desire, fear, believe, experience and suffer what the other person sees, desires, etc. I need to be able, in imagination, to enter into the other person’s world; that is, I need to be able to understand his problems of living as he understands them, and I need also, perhaps, to understand a more objective version of these problems. In giving intellectual priority to the tasks of articulating problems of living and exploring possible
solutions, social inquiry thereby gives intellectual priority to the development of a kind of understanding that people can acquire of one another that is of great intrinsic value. In my view, indeed, personalistic understanding is essential to the development of our humanity, even to the development of consciousness. Our being able to understand each other in this way is also essential for cooperatively rational action. And it is essential for science. It is only because scientists can enter imaginatively into each other’s problems and research projects that objective scientific knowledge can develop. At least two rather different motives exist for trying to see the world as another sees it: one may seek to improve one’s knowledge of the other person; or one may seek to improve one’s knowledge of the world, it being possible that the other person has something to contribute to one’s own knowledge. Scientific knowledge arises as a result of the latter use of personalistic understanding – scientific knowledge being, in part, the product of endless acts of personalistic understanding between scientists (with the personalistic element largely suppressed so that it becomes invisible). It is hardly too much to say that almost all that is of value in human life is based on personalistic understanding. (For further details see my From Knowledge to Wisdom, pp. 172–89 and 264–75, and my The Human World in the Physical Universe, chs. 5–7). The basic intellectual aim of the kind of inquiry we are considering is to devote reason to the discovery of what is of value in life. This immediately carries with it the consequence that the arts have a vital rational contribution to make to inquiry, as revelations of value, as imaginative explorations of possibilities, desirable or disastrous, or as vehicles for the criticism of fraudulent values through comedy, satire or tragedy. 
Literature and drama also have a rational role to play in enhancing our ability to understand others personalistically, as a result of identifying imaginatively with fictional characters – literature in this respect merging into biography, documentary and history. Literary criticism bridges the gap between literature and social inquiry, and is more concerned with the content of literature than the means by which it achieves its effects. Another important consequence flows from the point that the basic aim of inquiry is to help us discover what is of value, namely that our feelings and desires have a vital rational role to play within the intellectual domain of inquiry. If we are to discover for ourselves what is of value, then we must attend to our feelings and desires. But not everything that feels good is good, and not everything that we desire is desirable. Rationality requires that feelings and desires take fact, knowledge and logic into account, just as it requires that priorities for scientific research take feelings and desires into account. In insisting on this kind of interplay between feelings and desires on the one hand, knowledge and understanding on the other, the conception of inquiry that we are considering resolves the conflict between Rationalism and Romanticism, and helps us to acquire what we need if we are to contribute to building civilization: mindful hearts and heartfelt minds. All this differs dramatically from academic inquiry as it mostly exists at present, devoted primarily to the pursuit of knowledge. The differences all stem, however, from the simple demand that academic inquiry puts the above four rules of rational problem-solving into practice in seeking to help promote human welfare by intellectual and educational means. 
Popper stressed that science, and all rational discussion, employ one and the same method, namely that “of stating one’s problem clearly and of examining its various proposed solutions critically” (The Logic of Scientific Discovery, p. 16). How, then, does
The Improved Popperian Enlightenment Programme (IPEP) improve on Popper? It does so in four ways. First, IPEP requires that science arrives at a conjectural metaphysical (i.e. unfalsifiable) solution to the fundamental problem of the nature of the universe before the specialized scientific enterprise of conjecturing and testing empirically falsifiable theories can get underway. This point will be spelled out in more detail in a moment. Second, IPEP emphasizes the importance of specialization. Popper was too much opposed to specialization to stress its importance. He failed to appreciate that rule (4) of rational problem-solving, as formulated above, if implemented, keeps rampant specialization in check, and ensures that fundamental and specialized problem-solving guide each other. Third, IPEP holds that social inquiry is not primarily science, or even the pursuit of knowledge; its task ought to be to promote cooperatively rational tackling of problems of living in the real world. Popper, as I have already remarked, failed to correct the greatest blunder of The Traditional Enlightenment, in that he held that social inquiry is science, with methods essentially the same as those of natural science (although he did express some tentative doubts about this in his later writings). Finally, whereas IPEP requires that we bring about a revolution in the aims and methods of academic inquiry in the interest of reason and humanity, Popper continued to defend knowledge-inquiry to the end of his life.
4. The New Enlightenment Programme

The Improved Popperian Enlightenment Programme fails to do justice to the profoundly problematic character of the aims of science, and the aims of life. The New Enlightenment puts this right. It can be summarized like this. Step 1. Aim-oriented empiricism (a conception of the progress-achieving methods of science which does full justice to the profoundly problematic character of the aims of science). Step 2. Aim-oriented rationality, a conception of rationality arrived at by generalizing aim-oriented empiricism much as Popper generalized falsificationism to arrive at critical rationalism. This conception of rationality is designed to help us improve problematic aims as we live, as we act. Step 3. Aim-oriented rationality is put into practice in personal, social, institutional and political life in such a way that we make progress towards what is best in the profoundly problematic goal of a good, enlightened, civilized world. I take these three steps in turn. Most scientists and philosophers of science take for granted some version of the orthodox view that the basic aim of science is knowledge of truth, the basic method being to assess claims to knowledge impartially with respect to evidence. (Popper’s falsificationism is a version of this orthodox view.) The view is, however, untenable. In physics (where the relevant issues arise in their most naked form) two considerations govern selection of theories: (a) empirical success and failure, and (b) unity, or non-ad hoc character of the theory in question. Given any accepted, empirically successful physical theory, such as Newtonian theory, let us say, countless empirically more successful rival theories can easily be thought up by arbitrarily modifying Newtonian theory, in a wholly ad hoc way, to produce non-Newtonian predictions for unobserved phenomena (or correct predictions for phenomena that clash with Newtonian theory,
such as the orbit of Mercury). At the same time independently empirically confirmed hypotheses can be added on to such an arbitrarily modified version of Newtonian theory, thus increasing the empirical success of the resulting theory. There are endlessly many empirically more successful rivals to Newtonian theory which, quite properly, are not considered for a moment within science because they are all horribly ad hoc and disunified. They are what might be described as “patchwork quilt” theories, in that they are made up of different sets of laws, for different ranges of phenomena, arbitrarily stuck together, instead of just one set of laws applicable to all the phenomena with which the theory deals. Now comes the crucial point. The fact that, in physics, unified theories are persistently selected against the evidence, in preference, that is, to endlessly many empirically more successful patchwork quilt rivals, means that physics makes a persistent, substantial, influential, highly problematic but implicit metaphysical (i.e. untestable) assumption: the universe is such that no patchwork quilt theory is true. Some kind of underlying unity exists in nature, in that the same set of laws governs all phenomena. Theories in physics are only accepted if they are (a) sufficiently empirically successful, and (b) sufficiently in accord with the metaphysical assumption of unity of law. The orthodox conception of science, which holds that no substantial thesis about the world may be accepted as a part of scientific knowledge independently of evidence, is untenable. Furthermore, precisely because the thesis that there is unity in nature is an assumption which, in the form that it is accepted at any stage in the development of science, is more than likely to be false, it is essential that this assumption is made explicit so that it can be critically assessed, modified, and improved. 
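To make the idea of a “patchwork quilt” theory concrete, here is a schematic example (my own illustration, not one given in the text) of an ad hoc rival to Newtonian gravitation:

```latex
% A disunified rival T' to Newton's law of gravitation:
% identical to Newton's law for all observations made so far,
% with a different law arbitrarily grafted on for future phenomena.
F =
\begin{cases}
  \dfrac{G m_1 m_2}{r^2} & \text{for all events before 1 January 2050,} \\[6pt]
  \dfrac{G m_1 m_2}{r^3} & \text{for all events thereafter.}
\end{cases}
```

Such a rival agrees with all evidence gathered to date, and could be patched further to absorb known anomalies, yet it is rejected out of hand because it is disunified; that rejection appeals, implicitly, to the metaphysical assumption of underlying unity.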
Rigour, quite generally, requires that assumptions that are substantial, problematic and implicit be made explicit so that they can be criticized and, perhaps, improved. In order to facilitate the capacity of science to criticize, and improve, this problematic assumption, we need to construe science as making a hierarchy of assumptions, these assumptions becoming less and less substantial, and hence more and more likely to be true, as we go up the hierarchy: see diagram 2. At the top of the hierarchy, there is the assumption that the universe is such that we can acquire some knowledge of our local circumstances, sufficient to make life possible. Making this assumption can only help, and cannot hinder, the acquisition of knowledge whatever the universe is like. We are justified in accepting this as a permanent tenet of scientific knowledge. As we descend the hierarchy, from level 7 to level 3, the corresponding theses assert more and more about the world and thus are more and more likely to be false. At level 5 there is the thesis that the universe is comprehensible in some way or another, there being some overall explanation for phenomena. At level 4 there is the much more specific thesis that the universe is physically comprehensible, the same physical laws governing all phenomena. This is the thesis we have already encountered. At level 3 there is an even more specific thesis, which asserts that the universe is physically comprehensible in some specific way. This thesis is almost bound to be false. The thesis at this level has changed a number of times in the history of physics. In the 17th century it was the thesis that the universe is made up of corpuscles that interact by contact; then came the thesis that the universe is made up of point-atoms that interact by means of forces at a distance; then that it is made up of a unified field. Today there is the idea that
Diagram 2. Aim-Oriented Empiricism.
it is made up of minute quantum strings in ten or eleven dimensions of space-time. The mere fact that we have changed our ideas several times at this level as science has developed indicates that the latest version of this assumption, string theory, is quite likely to be false, and in need of further revision. Diagram 2 makes things look rather complicated. But the basic idea is extremely simple. By representing the problematic assumption of unity in nature as a hierarchy of assumptions which become increasingly problematic as we descend the hierarchy, going from an assumption at the top which we will never want to reject or revise, to an assumption at the bottom (at level 3) which is almost bound to be false, and thus in need of revision, we provide ourselves with a fixed framework of assumptions and associated methods within which much more specific and problematic assumptions and associated methods can be revised in the light of our improving scientific knowledge. As our knowledge improves, assumptions and methods improve: our knowledge about how to improve knowledge improves. There is something like positive feedback between improving knowledge, and improving knowledge about how to improve knowledge – the heart of scientific rationality, and the methodological key to the immense success of modern science.
All this can be recast in terms of aims and methods. For we can construe physics as having, as its aim, to make precise, in the form of a true, testable theory, the thesis at level 3. A somewhat less specific, and less problematic, version of this aim is to make precise, as a testable theory, the thesis at level 4. And so on up to level 7. Thus diagram 2 can be interpreted as representing the nested aims and methods of science, aims and associated methods becoming increasingly problematic, and increasingly likely to need revision, as we descend the hierarchy. This hierarchical view arises because the basic aim of science is profoundly problematic, and hence science needs to find the best way of improving its aim, and associated methods, as it proceeds. The first scientist explicitly to do science in this aim-oriented empiricist way was Albert Einstein. Einstein devoted his life to trying to discover unity in theoretical physics. In particular, he sought to unify Newtonian theory – a particle theory – and Maxwell’s theory of classical electrodynamics – a field theory. All his great contributions to physics in 1905 are variations on this theme of seeking unity in nature – unifying, somehow, particles and fields. In particular, special relativity arose out of the attempt to unify – to reconcile the principle that the laws of nature have the same form with respect to all inertial reference frames (which comes from Newton) with the principle that the constancy of the velocity of light is a law of nature (which comes from Maxwell). On the face of it, these two principles clash horribly, this clash highlighting the clash between Newton’s and Maxwell’s theories. (How can a pulse of light have the same velocity with respect to two people moving with respect to each other?) Einstein discovered that if we adjust our ideas about space and time, the two principles can be reconciled: this discovery is special relativity. 
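The reconciliation can be illustrated with the relativistic velocity-addition law (a standard result, added here by way of illustration rather than taken from the text), which replaces the Galilean rule u′ = u − v:

```latex
% Relativistic composition of velocities: something moving at speed u
% in one frame moves at speed u' in a second frame moving at speed v.
u' = \frac{u - v}{1 - uv/c^2}
% Setting u = c shows why a pulse of light has the same speed
% with respect to both observers:
u' = \frac{c - v}{1 - v/c} = \frac{c(c - v)}{c - v} = c
```

Adjusting our ideas about space and time here amounts to replacing the Galilean transformation with the Lorentz transformation, under which the speed of light comes out the same in every inertial frame.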
The two principles are the two basic postulates of the theory. But special relativity can also be interpreted as making a quasi-metaphysical assertion about the nature of space and time (space-time is Minkowskian); and it can be interpreted as a methodological principle: in order to be acceptable, a theory must comply with special relativity (it must be Lorentz invariant). It is noteworthy that Einstein always saw his theory in this light: he always called it “the relativity principle”. We thus have a theory, a contribution to physics, which emerges from the search for unity, which has a quasi-metaphysical aspect (space-time is Minkowskian), and which is also used, with extraordinary fruitfulness for subsequent physics, as a methodological principle. Furthermore, the methodological principle is subsequently revised: this happens with the advent of Einstein’s theory of general relativity, which asserts that space-time has variable curvature (contradicting the assumption of special relativity that space-time is flat). All this illustrates key features of aim-oriented empiricism. So does general relativity, in both its method of discovery and its character. And furthermore, Einstein came explicitly to stress key features of aim-oriented empiricism – for example, that nature is physically comprehensible – even though he did not advocate the fully fledged doctrine, with all the details indicated in diagram 2. (For further discussion see my “Induction and Scientific Realism, Part 3”, British Journal for the Philosophy of Science 44, 1993, pp. 275–305.) So much for the answer to the first step of The New Enlightenment Programme. What are the progress-achieving methods of science? Answer: aim-oriented empiricism. We come now to the second step. 
The crucial point, here, is that it is not just in science that aims are problematic; this is the case in life too, either because different aims conflict, or because what we believe to be desirable and realizable lacks one or other of these features, or both. Above all, the aim of creating global civilization is inherently and
profoundly problematic. Quite generally, then, and not just in science, whenever we pursue a problematic aim we need, first, to acknowledge the aim; then we need to represent it as a hierarchy of aims, from the specific and problematic at the bottom of the hierarchy, to the general and unproblematic at the top. In this way we provide ourselves with a framework within which we may improve more or less specific and problematic aims and methods as we proceed, learning from success and failure in practice what it is that is both of most value and realizable. Such an “aim-oriented” conception of rationality is the proper generalization of the aim-oriented, progress-achieving methods of science. We need to generalize diagram 2 in such a way that the hierarchy of aims might correspond to any problematic aim in life, and not just that of science. (For details see my books referred to above, in particular Is Science Neurotic?, chs. 3 and 4.) Any conception of rationality which systematically leads us astray must be defective. But any conception of rationality, such as Popper’s critical rationalism, which does not include explicit means for the improvement of aims, must systematically lead us astray. It will do so whenever we fail to choose that aim that is in our best interests or, more seriously, whenever we misrepresent our aim – as we are likely to do whenever aims are problematic. In these circumstances, the more “rationally” we pursue the aim we acknowledge, the worse off we will be. Such conceptions of rationality, which do not include provisions for improving problematic aims, are systematically a hindrance rather than a help; they are, in short, defective. (As we have seen, science specifically, and academia more generally, at present misrepresent basic intellectual aims. This is the central theme of Is Science Neurotic?.) 
Aim-oriented empiricism and its generalization, aim-oriented rationality, incorporate all the good points of The Improved Popperian Enlightenment Programme, and improve it further by being designed to help science and other worthwhile endeavours progressively improve problematic aims and methods. We come now to the third step of The New Enlightenment Programme. The task, here, is to help humanity gradually get more aim-oriented rationality into diverse aspects of social and institutional life – personal, political, economic, educational, international – so that humanity may gradually learn how to make progress towards an enlightened world. Social inquiry, in taking up this task, needs to be pursued as social methodology or social philosophy. What the philosophy of science is to science, as conceived by aim-oriented empiricism, so sociology is to the social world: it has the task of helping diverse valuable human endeavours and institutions gradually improve aims and methods so that the world may make social progress towards global enlightenment. (The sociology of science, as a special case, is one and the same thing as the philosophy of science.) And a basic task of academic inquiry, more generally, becomes to help humanity solve its problems of living in increasingly rational, cooperative, enlightened ways, thus helping humanity become more civilized. The basic aim of academic inquiry becomes, I have already said, to promote the growth of wisdom. Those parts of academic inquiry devoted to improving knowledge, understanding and technological know-how contribute to the growth of wisdom. The New Enlightenment Programme thus has dramatic and far reaching implications for academic inquiry, for almost every branch and aspect of science and the humanities, for its overall character and structure, its overall aims and methods, and its relationship to the rest of the social world (see From Knowledge to Wisdom and Is Science Neurotic?).
Diagram 3. Aim-Oriented Rationality Applied to the Task of Creating Civilization.
As I have already remarked, the aim of achieving global civilization is inherently problematic. This means, according to aim-oriented rationality, that we need to represent the aim at a number of levels, from the specific and highly problematic to the unspecific and unproblematic. Thus, at a fairly specific level, we might, for example, specify civilization to be a state of affairs in which there is an end to war, dictatorships, population growth and extreme inequalities of wealth, and in which democratic, liberal world government and a sustainable world industry and agriculture are established. At a rather more general level we might specify civilization to be a state of affairs in which everyone shares equally in enjoying, sustaining and creating what is of value in life in so far as this is possible. And at a more general level still, we might specify civilization to be that ideal, realizable state of affairs we ought to try to achieve in the long term, whatever it may be. A cartoon sketch of what is needed is indicated in diagram 3, arrived at by generalizing diagram 2 and applying the outcome to the task of creating a better world. As a result of building into our institutions and social life such a hierarchical structure of aims and associated methods, we create a framework within which it
becomes possible for us progressively to improve our real-life aims and methods in increasingly cooperative ways as we live. Diverse philosophies of life – diverse religious, political, economic and moral views – may be cooperatively developed, assessed and tested against the experience of personal and social life. It becomes possible progressively to improve diverse philosophies of life (diverse views about what is of value in life and how it is to be realized) much as theories are progressively and cooperatively improved in science. Aim-oriented rationality is especially relevant when it comes to resolving conflicts cooperatively. If two groups have partly conflicting aims but wish to discover the best resolution of the conflict, aim-oriented rationality helps in requiring of those involved that they represent aims at a level of sufficient imprecision for agreement to be possible, thus creating an agreed framework within which disagreements may be explored and resolved. Aim-oriented rationality cannot, of itself, combat non-cooperativeness, or induce a desire for cooperativeness; it can however facilitate the cooperative resolution of conflicts if the desire for this exists. In facilitating the cooperative resolution of conflicts in this way, aim-oriented rationality can, in the long term, encourage the desire for cooperation to grow (if only because it encourages belief in the possibility of cooperation). Einstein did not advocate The New Enlightenment Programme, as formulated here. He did however remark that “perfection of means and confusion of goals seem, in my opinion, to characterize our age”. I agree entirely. The New Enlightenment Programme, and the associated version of wisdom-inquiry, are designed specifically to help us put right what Einstein so correctly saw as the fundamental fault of our age.
Value Focused Management (VFM): Capitalizing on the Potential of Managerial Value Drivers

Boaz RONEN 1,a, Zvi LIEBER b and Nitza GERI c

a Faculty of Management, Tel Aviv University, Ramat Aviv, Tel Aviv 69978, Israel; Phone: +972-3-6441181, Fax: +972-3-6441267, Email: [email protected]
b Faculty of Management, Tel Aviv University, Ramat Aviv, Tel Aviv 69978, Israel; Phone: +972-3-6441181, Fax: +972-3-6441267, Email: [email protected]
c The Department of Management and Economics, The Open University of Israel, 108 Ravutski Street, P.O. Box 808, Raanana 43107, Israel; Phone: +972-9-7781911, Fax: +972-9-7780668, Email: [email protected]
Abstract. The goal of the firm is to maximize shareholder value. While most firms devote their main efforts to exploiting financial value drivers such as mergers and acquisitions, not enough attention is paid to managerial value drivers such as reducing time to market, increasing throughput, or improving logistics, operations and supply chain management, although these managerial drivers have a much greater potential for value creation. This paper focuses on managerial value drivers and presents Value Focused Management (VFM), a methodology for enhancing organization value by identifying the organization's value drivers, quantifying their estimated contribution, and prioritizing them according to their relative value creation potential and difficulty of implementation. VFM combines Value Based Management (VBM) and the Theory of Constraints (TOC) with practices such as the focusing matrix, and provides managers with a structured process that includes a focused diagnosis of the organization, followed by a comprehensive implementation plan which helps them direct their efforts towards the most promising value drivers. VFM has been successfully implemented in dozens of organizations worldwide. This paper analyzes a case study of a supermarket chain which demonstrates VFM's potential as an effective practical methodology to guide companies in their ongoing quest to increase shareholder value.

Keywords. Value Focused Management (VFM), Value Based Management (VBM), Theory of Constraints (TOC), Shareholder Value, Performance Measurement
1. Introduction

The goal of the firm is to maximize shareholder value (Copeland et al., 1996; Pitman, 2003; Sundaram & Inkpen, 2004). Therefore, management should focus on value creation by exploiting the firm's value drivers. A value driver is any important factor that significantly affects the value of the firm (Amit & Zott, 2001). There are two main sorts of value drivers: financial and non-financial, which are also termed managerial
1 Corresponding author.
value drivers. Financial value drivers include actions such as capital structure changes, mergers and acquisitions, public offerings or dividend distributions. These financial activities are performed by top management, and usually their impact on shareholder value can be evaluated ex-ante as well as ex-post. Managerial value drivers include actions such as strategic changes, reducing time to market, increasing throughput, or improving logistics, operations and supply chain management. While most firms devote their main efforts to exploiting financial value drivers, not enough attention is paid to managerial value drivers, although they have a much greater potential for value creation. This paper focuses on managerial value drivers, since they are generally disregarded in both research and practice, and presents an effective methodology for capitalizing on the considerable potential of these underutilized value drivers. While the goal of the firm seems clear, the challenges that organizations face are how to measure value creation and how to ensure that all decisions are made according to their impact on value. In this respect, managerial value drivers are much harder to manage and measure than the financial ones. The financial management and operations management fields provide organizations with various approaches to cope with these challenges. The financial approach most identified with value creation is value based management (VBM) (Copeland et al., 1996). Another performance measurement approach that has attracted much interest in recent years is Economic Value Added (EVA®) (Stewart, 1992), which is sometimes used together with VBM. Whereas these two approaches answer the issue of value creation measurement, they do not provide organizations with a satisfactory practical mechanism to ensure that all decisions are made according to their impact on value (Malmi & Ikaheimo, 2003).
From the operations management perspective, the Theory of Constraints (TOC) asserts that the goal of the firm is “to make more money now and in the future” (Goldratt & Cox, 1986). TOC offers a process that leads organizations towards fulfilling this goal and provides a set of performance measures to support decision making. Although TOC enhances value creating actions, it is not explicitly connected to financial performance measures such as EVA® and other VBM measures. Hence, it may be hard to evaluate the impact of these actions on value creation. Moreover, if management has to choose between several alternatives, TOC lacks a tool to evaluate the relative potential long-term impact of each action. In light of the advantages and drawbacks of the abovementioned approaches, managers need a focused methodology that will integrate the advantages of VBM and TOC, provide a common language across all functional areas and align all the organizational decision making with the goal. This paper suggests value focused management (VFM), as a methodology for enhancing the organization value by identifying its value drivers, quantifying their estimated contribution, and prioritizing them according to their relative value creation potential and difficulty of implementation. The next section presents the theoretical basis of VFM which combines VBM with TOC along with practices such as the focusing matrix, and reviews the relevant literature. The third section introduces the VFM methodology. The fourth section analyzes a case study of a supermarket chain which demonstrates VFM’s potential as an effective practical value creation methodology. The managerial value drivers which are examined in the case study are highly relevant to many organizations in the retail industry. The last sections discuss VFM’s contribution to value creation in light of the case study analysis, provide implications for implementation and conclude the article.
2. Theoretical Background

2.1. Value Creation from a Financial Management Perspective

Value Based Management is a management approach for measuring and managing businesses with the explicit objective of creating superior long-term value for shareholders (Ittner and Larcker, 2001). VBM's leading principle is that all the decisions at all organizational levels should be made according to their impact on value (Copeland et al., 1996). VBM provides managers with two principal tools: the first is discounted cash flow (DCF) valuation and the second is value driver analysis, which helps managers focus on the key drivers of corporate value. In the DCF approach, the value of a firm is defined as its future expected cash flow discounted at a rate which reflects the cash flow risk. Another framework for valuation is Economic Value Added (Stewart, 1992), which is a version of the residual income periodic performance measure (Otley, 1999). EVA® is defined as the net operating profit after tax (NOPAT) less the opportunity cost of the capital used by the business (Stern and Shiely, 2001). Conceptually, DCF and EVA® are equivalent formulas for estimating the continuing value of a firm (Copeland et al., 1996). However, EVA® is useful for evaluating the company's performance in a single period, such as a year. Ittner and Larcker (2001) define six basic steps which are usually included in VBM frameworks:
1. Choosing specific internal objectives that lead to shareholder value enhancement.
2. Selecting strategies and organizational designs consistent with the chosen objectives.
3. Identifying the specific value drivers.
4. Developing corresponding action plans, selecting performance measures, and setting targets.
5. Evaluating organizational and managerial performance.
6. Assessing and modifying the organization's VBM process in light of current results.
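The two valuation tools described above can be illustrated with a short sketch. All figures are hypothetical; the point is only the mechanics of discounting free cash flows (DCF) and charging a capital cost against NOPAT (EVA®).

```python
def dcf_value(cash_flows, rate):
    """Discounted value of a stream of future free cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def eva(nopat, wacc, invested_capital):
    """Economic Value Added for a single period."""
    return nopat - wacc * invested_capital

# Hypothetical firm: $10M free cash flow per year for 5 years, 10% discount rate.
value = dcf_value([10.0] * 5, 0.10)   # ~37.9 ($M)

# Single-period EVA®: $12M NOPAT on $100M invested capital at a 10% WACC.
residual = eva(12.0, 0.10, 100.0)     # ~2.0 ($M) of value created this year
```

A positive EVA® means the period's operating profit exceeded the capital charge; a full DCF valuation aggregates such periods over the explicit forecast horizon and a residual value.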
Nevertheless, both DCF and EVA® do not sufficiently support organizational decision making (Malmi & Ikaheimo, 2003). EVA® has two additional drawbacks which probably hinder its use as a dominant performance measure: its reliance on accounting data that can be manipulated, and its short-term focus (O'Hanlon & Peasnell, 1998). Ittner and Larcker (2001) regard VBM broadly and include the Balanced Scorecard (BSC) approach (Kaplan & Norton, 1992, 1996) as an integral part of the VBM perspective. Since the BSC incorporates non-financial measures, such as customer satisfaction, it might have been used to identify and evaluate managerial value drivers. However, the BSC relies on multiple objectives (Otley, 1999), which compete for people's attention and send confusing signals regarding the goal. Hence, the BSC is not considered a useful performance measurement framework (Pitman, 2003). Malmi and Ikaheimo (2003), who studied VBM utilization in six Finnish-based organizations, observe that VBM does not provide enough practical guidance for decision making, and suggest the following guidelines for improving VBM as a practical management approach:
• Aim to create shareholder value.
• Identify the value drivers.
• Connect performance measurement, target setting and rewards to value creation or value drivers.
• Connect decision making and action planning, both strategic and operational, to value creation or value drivers.
These guidelines will be used in Section 5 to evaluate the contribution of value focused management.

2.2. Value Creation from an Operations Management Perspective

The Theory of Constraints (TOC) (Goldratt and Cox, 1986; Goldratt, 1994) claims that the attention of management should be focused on the few constraints which prevent the organization from achieving its goal. TOC is gaining increasing acceptance among practitioners as well as academics (Rahman, 1998; Gupta, 2003), and its application has provided thousands of organizations worldwide with significant performance improvements, such as increased throughput, reduced inventory levels and shorter lead time (Mabin & Balderstone, 2000). While reports of successful TOC implementation come mainly from manufacturing organizations, especially aerospace, apparel, automotive, electronics, furniture, semiconductor, steel and heavy engineering (Mabin & Balderstone, 2003), TOC has also been implemented in diverse non-manufacturing industries, including financial institutions (Smith, 2004), enterprise software (Ioannou & Papadoyiannis, 2004), health services (Ronen et al., 2006) and the public sector (Shoemaker & Reid, 2005). Goldratt (1991) initially defined the five focusing steps of TOC for maximizing the performance of a system (see steps 3–7 below). Ronen and Spector (1992) enhanced the process by adding two preliminary steps (see steps 1–2 below). These two steps are particularly important regarding sub-systems, such as business units, each of which is considered a separate profit center, or in situations of dynamic constraints, when the binding constraint changes over time. Therefore, the seven focusing steps are (Ronen et al., 2001):
1. Define the system's goal.
2. Determine global performance measures.
3. Identify the system's constraints.
4. Decide how to exploit the system's constraint.
5. Subordinate the system to the constraint.
6. Elevate the system's constraint.
7. If, in the previous steps, a constraint has been broken, go back to step 3. Do not let inertia become the system's constraint.
Value Focused Management (VFM), which is presented in the next section, draws on TOC in two respects: first, the seven focusing steps serve as a conceptual framework for VFM and, second, the TOC performance measures are used to identify managerial value drivers. The performance measures profile is a tool to support global decision-making that examines alternative courses of action through the organization's global performance measures. It is a two-dimensional matrix in which the columns represent the alternative actions and the rows represent the performance measures, as shown in Table 1. The first three measures were defined by Goldratt & Cox (1986) and the other three were suggested by Eden et al. (1993). Each organization should modify the performance measures profile to its special needs by dropping or changing the suggested measures or adding new ones.
Table 1. The Performance Measures Profile

Performance Measures             Alternative A   Alternative B   …
T     Throughput
OE    Operating expenses
I     Inventory
LT    Lead time
Q     Quality
DDP   Due-date performance
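A performance measures profile like Table 1 can be sketched in code. The two alternatives and their directional scores below are purely illustrative assumptions (+1 means the measure improves, 0 neutral, -1 worsens), and an unweighted sum is just one naive way to compare columns.

```python
# Rows are the global TOC measures; columns are alternative actions.
# All scores are hypothetical, illustrative assessments.
measures = ["T", "OE", "I", "LT", "Q", "DDP"]
profile = {
    "Alternative A": {"T": +1, "OE": 0, "I": -1, "LT": +1, "Q": 0, "DDP": +1},
    "Alternative B": {"T": +1, "OE": -1, "I": 0, "LT": 0, "Q": +1, "DDP": 0},
}

# Compare alternatives by a simple unweighted sum across measures.
scores = {alt: sum(vals[m] for m in measures) for alt, vals in profile.items()}
```

In practice each organization would weight or drop measures to suit its needs, as the text notes.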
TOC provides organizations with tools and performance measures which ensure that all decisions are made according to their impact on value, and it supports decision-making at all organizational levels. The applicability of TOC has been demonstrated in hundreds of successful reports (Mabin and Balderstone, 2003). Therefore, TOC answers one of the two main challenges of value creation. Yet it does not offer a satisfactory answer to the value creation measurement challenge, since it is not explicitly connected to the financial performance measures which are commonly used to evaluate the firm. In particular, TOC lacks a tool for evaluating the relative potential long-term impact of alternative actions. The definition of performance measures has a crucial effect on value creation. Since people behave as they are measured, the measures must guide employees to act in ways that advance the overall goal of the organization (Otley, 1999). Appropriate performance measures should have the following attributes (Geri & Ronen, 2005):
• Global and effective, so that improving them significantly enhances value creation.
• Clear and simple, so people can understand them and act appropriately.
• Easy to measure. The people who use a specific measure should collect the required data, or it should be drawn from existing information systems.
• Satisfying. Searching for optimal, "perfect and accurate" measures may result in a heavy maintenance burden and over-precision. This, in turn, may lead to abandoning the system.
• Fit the specific organization. Attempting to adopt a proven successful performance measurement system "as is" may end in disappointment. Each organization has to gradually build up a measurement system that suits its needs. The measures in Table 1 may serve as a starting point.
These attributes guided us in the development of VFM, which is presented in the next section, and will be used to evaluate VFM in Section 5. However, the goal is to increase shareholder value and therefore this value should be used as the primary performance measure.
3. Value Focused Management Value focused management is a practical methodology for increasing shareholder value. VFM draws on VBM and TOC and provides a common language across all functional
areas; thus it enables aligning all the organizational decision making with the goal and creates a clear link between management actions and shareholder value. VFM adds the difficulty-of-implementation dimension to the decision-making process, through the focusing matrix which is detailed below. Hence, VFM considers the load on management attention, which according to Davenport and Beck (2000) is the scarcest resource in modern organizations, and focuses management attention on the most promising value drivers. VFM identifies the value drivers, quantifies their estimated contribution, and prioritizes them according to their relative value creation potential and difficulty of implementation. The stages of VFM are:
1. Define the goal.
2. Determine the performance measures.
3. Identify the value drivers and evaluate their potential impact.
4. Decide how to improve the value drivers.
5. Implement and control.
We now elaborate on each of these stages.

3.1. Stage 1: Define the Goal

The goal of the firm is to maximize shareholder value, and it should be clear to all managers and employees of the organization. The debate over whether a firm should maximize shareholder value or stakeholder value has been going on since the nineteenth century. However, legal as well as theoretical arguments stress that the objective function of the corporation is to maximize shareholder value (Copeland et al., 1996; Sundaram & Inkpen, 2004). Nevertheless, considering the interests of stakeholders such as employees, customers, suppliers and the community will advance the goal of the firm in the long term.

3.2. Stage 2: Determine the Performance Measures

VFM combines several financial and operational performance measures, but the firm value is the primary performance measure. We use the DCF approach (Copeland et al., 1996) and define shareholder value as the discounted cash flow available to shareholders. The financial statements provide the necessary data for valuation. Shareholder value is calculated as the value of operations, less net financial liabilities, plus excess assets (such as real estate not necessary for ongoing operations). The value of operations is the discounted value of expected future free cash flow (Copeland et al., 1996), and is separated into two time periods: an explicit forecast period (usually the first five years) and the value after the explicit period, which is referred to as the residual value (also termed the continuing value). The performance measures also include the global TOC measures: throughput, operating expenses, inventory, lead time, quality, and due-date performance, as well as specific relevant global measures. Finally, EVA® is used to measure the change in shareholder value during the period.

3.3. Stage 3: Identify and Evaluate the Value Drivers

This is the main stage of VFM, which differentiates it from other value creation methodologies, and it includes seven activities, which are detailed below.
3.3.1. Identify the Value Drivers

As already mentioned, a value driver is any important factor that significantly affects the value of the firm (Amit & Zott, 2001). The potential value drivers are identified by a focused review and analysis of the organization from four different approaches, which are detailed below. The review is carried out by interviewing the management team, key personnel, customers, suppliers, or other business partners; reviewing financial and management reports; visiting the premises; and benchmarking against similar organizations.

The financial statements approach. The financial statements are reviewed and benchmarked in order to identify potential value drivers such as high inventory levels or a decrease in revenues.

The functional review approach (bottom-up). All the organizational functions are systematically examined to find relevant value drivers. These functions include: the business strategy; marketing, sales and business development; human resources management; information systems; finance; research and development; quality; operations, logistics and procurement; cost accounting; organizational structure; risk management; customer service and support; and project management.

The performance measures approach. The value creation potential of improving each of the current and prospective performance measures of the organization is evaluated. These include the global TOC measures: throughput, operating expenses, inventory, lead time, quality, and due-date performance, as well as other specific relevant global measures. Sometimes, the use of inappropriate performance measures, such as traditional cost accounting measures, distorts decision making and reduces shareholder value. Hence, in these cases, modifying the performance measures can be a major value driver.

The core problem identification approach (top-down).
A current reality tree (Goldratt, 1994) is used to analyze the undesirable effects (UDEs) and identify the root problems of the organization. A UDE is any major issue that prevents the organization from achieving its goal. The UDEs may include problems and symptoms which were revealed by the other approaches, as well as new UDEs. Figure 1 presents an example of a current reality tree. The distinction between problems and symptoms is crucial, since the real value creation potential lies in solving the core problems.

3.3.2. Evaluate the Potential Impact and Difficulty of Implementation

About ten of the identified value drivers, those perceived as the most important, are selected. The potential impact of each value driver is estimated, as well as any additional required investments. For instance, insourcing the customer service call center will cost two million dollars during the first year, and will result in a one-time 2% increase in sales in the second year, due to improved customer retention. In the following years, sales will remain at this higher level. The cost of sales changes proportionately. The difficulty of implementation is evaluated on a scale from 1 (very hard) to 5 (easy). In this example, it is estimated as medium, 3. Additional examples are provided in Section 4.

3.3.3. Prepare a Base Valuation

The base valuation is the starting point of the value creation potential calculation. An example of a base valuation is presented in appendix A. It is based on the company's
financial statements, and can be easily prepared by using an electronic spreadsheet. Since the purpose of this valuation is to provide a point of reference for measuring relative changes, it does not have to be very accurate (e.g., the cost of capital may be rounded to whole percentage points).

3.3.4. Prepare a Pro Forma Valuation for Each Value Driver

A separate pro forma valuation, such as the two examples presented in appendix B, is prepared for each value driver, according to the assumptions regarding its impact (e.g., a one-time 3% increase in sales, and a proportionate increase in cost of sales).

3.3.5. Prepare a Focusing Table

The information regarding each value driver's importance (i.e., value creation potential) and difficulty of implementation is summarized in a focusing table, such as Table 2 below, and the total value creation potential of the organization is calculated.

3.3.6. Prepare a Focusing Matrix

The value drivers are presented in a focusing matrix (Pass & Ronen, 2003), like the one in Fig. 2 below. This presentation, which resembles an efficient frontier graph (Markowitz, 1952), helps identify the drivers that have the greatest value creation potential and require the least implementation effort.

3.3.7. Select the Value Drivers

Finally, top management has to choose and prioritize the value drivers which will be improved. Besides the dimensions of the focusing matrix, there may be other considerations; for instance, the implementation of a certain value driver may create options for further value creation, or there may be interdependencies between certain value drivers.

3.4. Stage 4: Decide How to Improve the Value Drivers

For each chosen value driver a detailed work plan will be drawn up, including a full description of the activities, an implementation schedule and the person responsible for each activity.
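The focusing table and focusing matrix described above can be sketched as follows. The drivers and figures are hypothetical, and ranking by the product of value potential and ease of implementation is just one simple heuristic for reading the matrix, not the method prescribed by VFM (top management may weigh other considerations, as noted).

```python
# Hypothetical focusing table: each value driver with its estimated value
# creation potential ($M) and difficulty of implementation (1 = very hard,
# 5 = easy). The figures are illustrative, not taken from the case study.
drivers = [
    ("Increase average customer purchase", 25.0, 4),
    ("Build a logistics center",           40.0, 2),
    ("Reduce lost sales",                  18.0, 3),
]

# Total value creation potential of the organization.
total_potential = sum(value for _, value, _ in drivers)

# Focusing-matrix reading: prefer high value and easy implementation.
prioritized = sorted(drivers, key=lambda d: d[1] * d[2], reverse=True)
```

Under this heuristic a moderate-value, easy driver can outrank a high-value, very hard one, which is exactly the trade-off the focusing matrix is meant to expose.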
The plans will be based on innovative management methods and techniques, such as TOC (Goldratt & Cox, 1986), Just-In-Time (Schonberger, 1986), the complete kit concept (Ronen, 1992) and others.

3.5. Stage 5: Implement and Control

Top management is responsible for the implementation and control of the value creation plan. Since the corporate mission is to increase shareholder value, top management should lead and participate in the steering committee of each value driver improvement project. The implementation process should be reported to the board of directors, which should discuss the encountered problems and ways to overcome them. It is of paramount importance that the value creation process be integrated with the organization's management and control processes; hence the performance measurement and reward systems, the information and control systems, and all other mechanisms should be used for this purpose.
4. Example: The Supermarket Chain

This section analyzes an example of a supermarket chain and demonstrates the effectiveness of VFM as a practical value creation methodology. The managerial value drivers examined below are highly relevant to many organizations in the retail industry. The following example is a modified, abridged version of the supermarket chain example which was analyzed by Eden and Ronen (2002). The example elaborates on the third stage of VFM: identifying the value drivers and evaluating their potential impact. The other stages are as described in the previous section. That is, the goal of the supermarket chain is to maximize shareholder value (stage 1). The performance measures (stage 2) are those indicated in Section 3.2: the DCF approach is used to measure shareholder value and the required data are taken from the financial statements (see appendix A), TOC global measures are used for performance measurement, and EVA® measures the shareholder value change during each period. The activities of the third stage are detailed in the subsections below. The two final stages, deciding how to improve the value drivers (stage 4) and implementation and control (stage 5), were carried out as described in Sections 3.4 and 3.5 respectively.

Appendix A presents the financial statements and the base valuation of the supermarket chain. Shareholder value is estimated at $154 million. The shareholders require a return of 10% per year, and the after-tax cost of debt is 3.2% (that is, 5% net of the 36% corporate tax rate). The equity is $182 million, and the debt is $63 million. Hence, the weighted average cost of capital (WACC) is 8.25%, calculated as follows:

WACC = 10% × 182/245 + 3.2% × 63/245 = 8.2514% ≈ 8.25%

EVA® is defined as:

EVA® = net operating profit after tax (NOPAT) − WACC × invested capital
Thus, the EVA® at the base year is negative:

EVA® = 17.92 − 8.2514% × 245 ≈ −2.296

Although the chain has an annual net income of $15.9 million at the base year, the EVA® is negative, and shareholder value is being eroded by approximately $2.3 million per year. Therefore, management must find ways to create value.

4.1. Identifying the Value Drivers

The four approaches described in Section 3.3.1 were used to identify the chain's value drivers.

4.1.1. The Financial Statements Approach

The chain's financial statements were analyzed and compared with those of the two leading competitors and with sector average data. Three potential value drivers were identified:
• Low profit rate compared to the sector average.
• High inventory levels.
• Relatively short supplier credit terms compared to the competition.
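The base-year WACC and EVA® reported above can be reproduced with a few lines, using only the figures given in the text (in $M):

```python
# Base-year figures from the supermarket chain case ($M).
equity, debt = 182.0, 63.0
cost_of_equity = 0.10                # shareholders' required return
cost_of_debt = 0.05 * (1 - 0.36)     # 5% pre-tax, 36% corporate tax -> 3.2%

capital = equity + debt              # invested capital: 245
wacc = cost_of_equity * equity / capital + cost_of_debt * debt / capital

nopat = 17.92                        # net operating profit after tax
eva = nopat - wacc * capital         # negative: value is being eroded
```

Note that the reported −2.296 follows from the unrounded WACC of 8.2514%; using the rounded 8.25% would give about −2.29.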
4.1.2. The Functional Review Approach (Bottom-Up)

All the organizational functions were reviewed in a focused process which included several visits to selected supermarket branches and to the chain's headquarters, on-site interviews and management workshops. The main findings were as follows:

The organization's strategy. The chain's strategy is outdated and not well defined. The branches are located mainly in the suburbs and there are no branches in prime locations. The chain does not have a private label. Furthermore, it lacks a logistics center which would enable better operations and control.

Marketing, sales and business development. A comparative study showed that the average customer purchase is 8% lower than that of the competitors.

Human resources management. There is high turnover of low- and middle-management personnel, especially among key branch employees. However, there is a strong sense of identification and loyalty among the branch staff members, who have been with the chain for many years, and labor relations are good.

Information systems. Branch managers complain that the information systems are inadequate and do not provide them with managerial information. For instance, sometimes they discover shortages only by physically checking the shelves.

Operations, logistics and procurement. The supermarket chain does not have a logistics center. On a typical day, more than 30 different suppliers arrive at a branch, which interferes with the branches' smooth operation.

Organizational structure. The organizational structure is centralized and the branch managers are allowed little freedom of action. All financial expenditures for branch maintenance, sales promotion, or hiring temporary or permanent personnel have to be authorized by the main office.

4.1.3. The Performance Measures Approach

The chain's performance measures were reviewed, and the identified potential value drivers are:

Inventory. There are about 11.9 inventory turns per year, meaning that the inventory level is enough for one month.

Lead time. The average time from a branch request until its fulfillment is four working days.

Lost sales. This is an important measure, which is commonly used by retailers. The average lost sales rate of the supermarket chain is estimated at 4.6%. It was calculated based on the assumption that in half of the cases when the required item is out of stock, the customer will buy a similar item on the same purchase occasion, or will postpone the purchase at the chain to a later occasion. In the remaining cases, the customer will buy the product elsewhere or not at all (particularly in cases of spontaneous buying).

4.1.4. The Core Problem Identification Approach (Top-Down)

A current reality tree (Goldratt, 1994) was built in order to identify the company's core problems and is presented in Fig. 1. The UDEs were elicited from interviews with managers and employees, and they also include problems and symptoms which were revealed by the previous three approaches. As shown in Fig. 1, the chain's core problems are: outdated strategy, ineffective operations and over-centralized management.
Figure 1. Current Reality Tree. [The tree links the undesirable effects (shareholder value is not sufficiently improved; profit after tax is too low; inventories are too high; the average purchase is too small; high rate of lost sales; no shelf space management; no private label; no autonomy for branch managers; no logistics center; inadequate information systems) to the three core problems: outdated strategy, ineffective operations and over-centralized management.]
4.2. Evaluating the Potential Impact and Difficulty of Implementation
Following the identification process, seven potential value drivers were chosen for further consideration. The expected impact and implementation difficulty of each of them is detailed below.

4.2.1. Value Driver 1: Increasing the Average Customer Purchase
Since the functional review revealed that the average customer purchase is 8% lower than that of the competitors, increasing it seems to be a promising value driver. It is assumed that increasing the average purchase by 5% will lead to a parallel increase in sales of 5% in the first year, after which sales will remain at the same level. This will be accomplished by seasonal sales promotions, cashier training, sampling and tasting promotions, advertising, lotteries, and so on. Service improvement also has a major impact on increasing the average customer purchase. Attention should be paid to enhancing the shopping experience, by trying to give the customers more than they expect with regard to service and courtesy. The checkout counters are a bottleneck during peak hours. The cashier's "complete kit" (Ronen, 1992; Ronen et al., 2006) at the start of the shift can greatly help in reducing non-effective time at the checkout counter. The cashier's kit includes coins and small bills for change, additional cash register rolls, up-to-date price lists for certain items, information on special offers, coupons, and so on. This will reduce the cashiers' wasted time and enable them to devote more time to special promotions and interaction with customers. These actions will require additional costs, estimated at 0.5% of sales, every year. The difficulty of implementation is considered relatively easy: 4 on a scale of 1 (very difficult) to 5 (relatively easy).
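As a rough check, the stated assumptions reproduce the operating-profit impact that appears later in the pro forma statements of Appendix B. Base-year figures are taken from Appendix A; the proportional behavior of the cost of sales is one of the general valuation assumptions of Section 4.4.

```python
# Sketch of value driver 1's impact on operating profit (thousands of dollars).
base_sales = 700_000
base_cogs = 512_000
base_ebit = 28_000

new_sales = base_sales * 1.05          # one-time 5% sales increase
new_cogs = base_cogs * 1.05            # cost of sales is assumed proportional to sales
promotion_cost = 0.005 * new_sales     # additional costs of 0.5% of sales

new_ebit = base_ebit + (new_sales - base_sales) - (new_cogs - base_cogs) - promotion_cost
```

The result, 33,725, matches the EBIT line of Table 5 in Appendix B.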
4.2.2. Value Driver 2: Establishing a Logistics Center
Establishing a logistics center is in line with contemporary management philosophies, especially just-in-time and TOC. Setting up a single logistics center which centrally distributes goods to the branches has the following advantages:
• Reducing the number of daily deliveries to a branch will relieve branch managers of handling deliveries and supervising unloading, leaving them more time for improving service and promoting sales.
• Reducing unloading time and the waiting time of delivery vehicles.
• Optimizing deliveries, since a single aggregate delivery is cheaper and more efficient than 10 or 20 deliveries from different suppliers. As both the suppliers and the chain benefit from the change, the suppliers may be charged for the additional service provided by the logistics center.
• Managing inventories from a global perspective, together with improved supervision and control, will result in cost savings, fewer shortages and reduced lost sales.
It is estimated that lost sales will fall by 50% (from 4.6% to 2.3%), and that the extra time branch managers will be able to devote to service improvement and sales promotion will lead to a 1% increase in sales, resulting in a total sales increase of 3.3%, starting from the second year. The estimated cost of constructing a logistics center is $10 million in the first year, with another $4 million per year for maintenance and operations, beginning from the second year. Aggregated delivery to the branches will allow an annual saving of $500,000, starting from the second year. That is, the net costs will increase by $3.5 million. Moreover, it will be possible to charge the suppliers 1% of sales, starting from the second year, for transportation and handling. Furthermore, total inventories will decrease by 10%, as of the second year. In the opinion of the chain's managers, implementation will be difficult – 2 on the 1–5 scale – since it requires establishing new business processes and fundamental changes in working with many suppliers.

4.2.3. Value Driver 3: Introducing a Private Label
The chain is considering launching its own private label and plans that in the first year private label sales of coffee, soft drinks and washing detergents will reach 5% of revenues. This percentage will increase to 7% in the second year, to 10% in the third, and to 15% in the fourth and fifth years. The chain's economists calculated that it is possible to procure private label products at 80% of the brand-name prices. This will also enhance the chain's bargaining power over the leading brand-name suppliers, though, following conservative practice, this benefit is not included in the calculation. The cost of introducing a private label is estimated as follows: $2 million in the first year; $1.5 million in the second year; and $1 million in each of the third, fourth and fifth years. The difficulty of implementation is 3 (moderate).

4.2.4. Value Driver 4: Shelf Space Management
Not enough attention is paid to shelf space management and product display. It is estimated that the implementation of a supportive software package can increase the average consumer purchase by 3%, resulting in a similar 3% growth in revenues each year. An additional 1% increase in sales can be obtained by applying the specific throughput
concept (Pass and Ronen, 2003), which is further explained below, and removing items with poor specific throughput from the shelves. Altogether, sales will increase by 4%. In a large branch, some 10,000 different items (stock keeping units – SKUs) are displayed on the shelves, while there are more than 100,000 potential SKUs that the suppliers would like to offer. Thus, there must be a system of strategic gating of the products (Pass and Ronen, 2003). Although choices are limited, management still has some flexibility over 20% of the shelf space. Since the system constraint is the shelf space, one of the considerations in displaying goods on the shelves or removing them is the throughput per unit of shelf space, i.e., the specific throughput. The costs of purchasing shelf space management software, applying the specific throughput concept, and additional advertising and sales promotion expenses are estimated as follows: a one-time expenditure of $500,000 will be required in the first year. In this year, the increase in sales will not yet be realized, due to the need to implement the system in the branches. Starting from the second year, there will be a variable cost increase of about 0.2% of sales, while sales will grow by 4% compared to the base year and will remain at this level in subsequent years. The implementation difficulty is 5 (relatively easy).

4.2.5. Value Driver 5: Improving the Quality of the Administrative and Operations Personnel
One of the main challenges of the chain is the need to replace some of its mid-managers. At the same time, the turnover of those managers the chain wishes to retain has to be reduced. Management recruiting, training and development programs, and plans to retain competent employees, are likely to induce the following results:
• In the first year, there will be no change in revenues.
• From the second to the fifth year, sales will be 3% higher relative to the base year due to better management and a further increase in the average customer purchase, as well as a 10% decrease in lost sales. These improvements are in addition to those described in the previous options.
• Due to increased efficiency, the inventories will remain at the base year levels, despite the increase in sales.
The expenses involved are about $1,000,000 for the first year, and include recruiting, manager training, and a program to retain competent employees. From the second to the fifth year the expenses will amount to $700,000 per year. The implementation difficulty is considered 3 (moderate).

4.2.6. Value Driver 6: Expanding the Product Display Area
The suggested improvement is to expand the product display area in the branches by reducing the storeroom area and increasing the frequency of deliveries. Currently, deliveries to the branches are mostly made once a week (except for fresh produce, which arrives daily), and about 25% of the branch area serves as internal storage space. It is proposed to double the delivery rate in order to halve the required storage area, so that the space thus freed can be transformed into additional display area. However, this plan is contingent on the establishment of the logistics center described above (see Section 4.2.2).
Pilot studies carried out by the chain showed that expanding the display area and displaying new product categories in the additional space resulted in a proportional sales increase. As of the second year, the display area will be expanded by 8%; hence sales will increase by 8% compared to the base year and will remain at this level from that year on. The additional expenses (beyond the cost of moving to the logistics center, which is accounted for in Section 4.2.2) will amount to $2 million per year, apart from the first year, when they will be $1 million. The implementation difficulty is 4 (relatively easy).

4.2.7. Value Driver 7: Increasing Supplier Credit Days
The chain's supplier credit terms are worse than those of its competitors. The intention is to increase credit days by 10%. Even then, they will still be lower than the terms of the competing chains. This will be carried out by negotiating with the suppliers, and therefore it does not involve any additional expenses. However, the implementation difficulty is estimated as 1 (very difficult), due to strong opposition, mainly from the leading suppliers.

4.3. Preparing a Base Valuation
The base valuation is presented in Appendix A. Shareholder value is estimated at $154 million.

4.4. Preparing a Pro Forma Valuation for Each Value Driver
Two examples of pro forma valuations are presented in Appendix B. All the valuations are based on the following general assumptions:
• Any change in sales volume results in a proportional change in the cost of sales, in the accounts receivable and the accounts payable, and in the inventories.
• The operating expenses are fixed.
• The supplier credit (i.e., accounts payable) is sufficient to finance the inventories and the customer credit (i.e., accounts receivable).
• Depreciation expense is $18 million per year.
• The annual capital expenditures are equal to the amount of depreciation.
• At the end of each year, all available cash flow is distributed to shareholders as dividends.
• The weighted average cost of capital (WACC) is 8.25% (as calculated above).
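A minimal sketch of how these assumptions yield the base valuation of Appendix A (figures in thousands of dollars). The residual value is modeled here as a discounted perpetuity of the constant cash flow, which reproduces the paper's figures (71,078 / 146,097 / 154,175) up to small rounding differences.

```python
# Discounted cash flow valuation of the base ("business as usual") scenario.
wacc = 0.0825
cash_flow = 17_920      # cash flow available to investors, constant every year
net_debt = 63_000       # net financial liabilities

pv_first_five = sum(cash_flow / (1 + wacc) ** t for t in range(1, 6))
residual = (cash_flow / wacc) / (1 + wacc) ** 5   # perpetuity after year 5, discounted to today

invested_capital_value = pv_first_five + residual
shareholder_value = invested_capital_value - net_debt   # about $154 million
```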
The assumption that the cost of sales changes proportionally to the change in sales implies that all these costs are treated as variable costs. This may not be the case, so the valuation can be regarded as conservative. Alternatively, one may assume that a certain portion of the cost of sales is fixed, and change the calculations accordingly.

4.5. Preparing a Focusing Table
The total potential additional value creation calculated in Table 2 is $382 million. As the base valuation is $154 million, the chain clearly has considerable room for improvement.
Table 2. The focusing table: calculating the total value creation potential

#   Value driver                                                        Importance:             Difficulty of implementation
                                                                        additional value ($M)   (1 – very difficult, 5 – relatively easy)
1   Increasing the average customer purchase                             44                     4
2   Establishing a logistics center                                      66                     2
3   Introducing a private label                                          97                     3
4   Shelf space management                                               43                     5
5   Improving the quality of administrative and operations personnel     35                     3
6   Expanding the product display area                                   93                     4*
7   Increasing supplier credit days                                       4                     1
    Total value creation potential                                      382

* Contingent on the establishment of a logistics center.
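For readers who want to work with the focusing table programmatically, it can be captured as plain data. The structure below is hypothetical; the names, values and difficulty scores are taken from Table 2.

```python
# The focusing table as data: driver number -> (name, additional value $M, difficulty 1-5).
drivers = {
    1: ("Increasing the average customer purchase", 44, 4),
    2: ("Establishing a logistics center", 66, 2),
    3: ("Introducing a private label", 97, 3),
    4: ("Shelf space management", 43, 5),
    5: ("Improving the quality of administrative and operations personnel", 35, 3),
    6: ("Expanding the product display area", 93, 4),   # contingent on driver 2
    7: ("Increasing supplier credit days", 4, 1),
}

total_potential = sum(value for _, value, _ in drivers.values())   # 382

# One possible ordering for the focusing matrix: easiest first, then most valuable.
ranked = sorted(drivers, key=lambda k: (drivers[k][2], drivers[k][1]), reverse=True)
```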
4.6. Preparing a Focusing Matrix
The focusing matrix presented in Fig. 2 maps the value drivers according to their relative importance and difficulty of implementation. The preferred value drivers are those nearest the top right of the matrix, since they have the greatest value creation potential and are the easiest to implement. However, the selection is not straightforward; for instance, value driver #4 is easier to implement than value driver #2, but the latter has more value creation potential.

4.7. Selecting the Value Drivers
At a meeting of the board of directors, it was decided to implement the first six potential value drivers. The seventh proposal, to increase supplier credit days, was rejected since its expected contribution is relatively low, it would require considerable management attention and effort, and moreover, it might jeopardize the full cooperation with suppliers which is required for the successful operation of the new logistics center. At the first stage, management will start working on value drivers 1, 2, 4, 5 and 6. The implementation of the third value driver, introducing a private label, was postponed due to the large amount of management time it requires, and in order to avoid disputes with leading suppliers. It should be mentioned that the board of directors had instructed management beforehand to focus its efforts on utilizing existing resources. Therefore, value drivers which involved further expansion, such as opening new branches, were not considered.

4.8. Summary
Supermarkets usually have low profit margins. However, as the above example demonstrates, managerial value drivers, which involve a relatively low investment, have great potential to improve the chain's shareholder value. VFM provides management
Figure 2. The focusing matrix. [The seven value drivers are plotted by importance (value added in $M, vertical axis, 0–100) against difficulty of implementation (horizontal axis, 1 = difficult to 5 = easy).]
with practical tools to identify, analyze and realize these managerial value drivers. The subsections above elaborated on the third stage of VFM, but this is just the beginning of the improvement process. Detailed plans should be prepared for each approved value driver (stage 4), and top management should lead and control the implementation (stage 5). The importance of management commitment and involvement cannot be overstated. Management's role is to ensure that the value creation process is integrated with the organization's management and control processes, and, if necessary, to change these processes. The EVA® in the base year is negative, so as long as nothing changes, the chain destroys shareholder value at a rate of $2.3 million per year. The implementation of just a single value driver from among drivers 1, 2, 3, 4 or 5 (driver 6 is not included, despite its great potential, since it is contingent on driver 2) is enough to generate a positive EVA®. The base valuation is $154 million, and if most of the suggested plans are fulfilled it may increase more than threefold.
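The EVA® figure quoted above can be checked directly from the Appendix A inputs (thousands of dollars). The paper's capital charge of 20,216 suggests a slightly more precise WACC than the rounded 8.25%, hence the small discrepancy below.

```python
# Base-year EVA(R): net operating profit after tax minus a charge on invested capital.
nopat = 17_920
invested_capital = 245_000    # equity of 182,000 plus debt of 63,000
wacc = 0.0825

capital_charge = wacc * invested_capital    # about 20,213 (the paper reports 20,216)
eva = nopat - capital_charge                # negative: the chain destroys value
```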
5. Discussion
Management's mission is to increase shareholder value. Sometimes, organizations try to advance numerous improvement initiatives simultaneously. However, the scarcest resource in organizations is attention (Davenport & Beck, 2000), so managers cannot handle all these initiatives successfully. Moreover, trying to do so may result in the failure of most or all of the initiatives. Managers may be aware of the undesirable effects of bad multitasking, but even when they choose to focus on a limited number of initiatives, they do not necessarily choose the critical issues which have the greatest
potential to increase shareholder value. VFM provides managers with a structured methodology that helps them identify the relevant value drivers. A further unique contribution of VFM is that it considers the load on management attention by adding the difficulty-of-implementation dimension to the decision making process, and, through the focusing matrix, it helps in choosing the most promising value drivers. Malmi and Ikaheimo (2003) suggested four guidelines, detailed in Section 2.1, for improving VBM so that it can become a more practical management approach. VFM fulfills all these guidelines. First, VFM aims to create shareholder value; however, this is not unique to VFM. Second, VFM provides a structured methodology for identifying the value drivers. VFM's main contribution, though, is that it connects decision making and action planning, both strategic and operational, to value creation or value drivers. The fifth stage of VFM emphasizes the importance of integrating VFM with the organization's management and control processes. Hence, VFM connects performance measurement, target setting and rewards to value creation or value drivers. As measures should guide management and employees alike to act in ways that advance the overall goal of the organization (Otley, 1999), the attributes of appropriate performance measures (Geri and Ronen, 2005) detailed in Section 2.2 are used to evaluate VFM. VFM's measures are global and effective, since the primary measure is shareholder value, which directly relates to the goal. Moreover, the measures are clear and simple, and their most important attribute is that they provide a common language, understandable by all. The implications of alternative operational improvements are translated into financial terms and compared through the focusing matrix, which also considers their difficulty of implementation.
The necessary financial data are based on the firm's financial statements and do not require additional measuring or data collection efforts. The base valuation, as well as the pro forma valuations of the value drivers' impact, are satisficing, and do not entail major efforts to find the most "perfect and accurate" data. The valuations are used to estimate the relative importance of the proposed improvements and are meant for internal purposes. Hence, the valuations need not be accurate, and they can be prepared by the organization's internal staff, without consulting external valuation experts. Stage 2 of VFM, determining the performance measures, allows fitting the measures to the organization's special needs, while at the same time it provides the main general global measures which should guide all business organizations in their decisions.
6. Conclusions
In most organizations, managerial value drivers are underutilized due to the lack of a clear connection between managerial improvements and value creation. Moreover, management attention is limited, and sometimes this scarce resource is wasted on less worthy improvement initiatives, while other important ones are overlooked or neglected. This paper presented value focused management, a practical methodology for increasing shareholder value. VFM provides managers with a structured process that includes a focused diagnosis of the organization followed by a comprehensive implementation plan, which helps them direct their efforts towards the most promising value drivers.
VFM draws on VBM and TOC and provides a common language across all functional areas, thus creating a clear link between management actions and shareholder value. VFM considers the load on management attention by adding the difficulty-of-implementation dimension to the decision making process, through the focusing matrix. VFM has been successfully implemented in dozens of organizations worldwide. The paper analyzed a case study of a supermarket chain which demonstrated VFM's potential as an effective, practical methodology to guide companies in their ongoing quest to increase shareholder value.
References
[1] Amit, R., & Zott, C. (2001). Value Creation in E-business. Strategic Management Journal, 22(6/7), 493–520.
[2] Copeland, T., Koller, T., & Murrin, J. (1996). Valuation – Measuring and Managing the Value of Companies (2nd ed.). New York, NY: McKinsey & Company, Inc., John Wiley & Sons.
[3] Davenport, T. H., & Beck, J. C. (2000). Getting the Attention You Need. Harvard Business Review, 78(5), 118–126.
[4] Eden, Y., & Ronen, B. (2002). It Costs Me More: Decision Making, Cost Accounting and Value Creation. Herzelia, Israel: Hod-Ami (Hebrew).
[5] Eden, Y., Ronen, B., & Spector, Y. (1993). Developing Decision-Support Tools for Costing and Pricing. Faculty of Management, Tel Aviv University, The Joseph Kasierer Institute for Research in Accounting, 53(3) (Hebrew).
[6] Geri, N., & Ronen, B. (2005). Relevance Lost: The Rise and Fall of Activity-Based Costing. Human Systems Management, 24(2), 133–144.
[7] Goldratt, E. M. (1991). The Haystack Syndrome. Great Barrington, MA: North River Press.
[8] Goldratt, E. M. (1994). It's Not Luck. Great Barrington, MA: North River Press.
[9] Goldratt, E. M., & Cox, J. F. (1986). The Goal (2nd revised ed.). Croton-on-Hudson, NY: North River Press.
[10] Gupta, M. (2003). Constraints Management: Recent Advances and Practices. International Journal of Production Research, 41(4), 647–659.
[11] Ioannou, G., & Papadoyiannis, C. (2004). Theory of Constraints-Based Methodology for Effective ERP Implementations. International Journal of Production Research, 42(23), 4927–4954.
[12] Ittner, C. D., & Larcker, D. F. (2001). Assessing Empirical Research in Managerial Accounting: A Value Based Management Perspective. Journal of Accounting and Economics, 32(1–3), 349–410.
[13] Kaplan, R. S., & Norton, D. P. (1992). The Balanced Scorecard – Measures that Drive Performance. Harvard Business Review, 70(1), 71–79.
[14] Kaplan, R. S., & Norton, D. P. (1996). Using the Balanced Scorecard as a Strategic Management System. Harvard Business Review, 74(1), 75–85.
[15] Mabin, V. J., & Balderstone, S. J. (2000). The World of the Theory of Constraints: A Review of the International Literature. Boca Raton, FL: St. Lucie Press/APICS Series on Constraints Management.
[16] Mabin, V. J., & Balderstone, S. J. (2003). The Performance of the Theory of Constraints Methodology: Analysis and Discussion of Successful TOC Applications. International Journal of Operations and Production Management, 23(6), 568–595.
[17] Malmi, T., & Ikaheimo, S. (2003). Value Based Management Practices – Some Evidence from the Field. Management Accounting Research, 14(3), 235–254.
[18] Markowitz, H. M. (1952). Portfolio Selection. Journal of Finance, 7(1), 77–91.
[19] O'Hanlon, J., & Peasnell, K. (1998). Wall Street's Contribution to Management Accounting: The Stern Stewart EVA® Financial Management System. Management Accounting Research, 9(4), 421–444.
[20] Otley, D. (1999). Performance Management: A Framework for Management Control System Design. Management Accounting Research, 10(4), 363–382.
[21] Pass, S., & Ronen, B. (2003). Management by the Market Constraint in the Hi-Tech Industry. International Journal of Production Research, 41(4), 713–724.
[22] Pitman, B. (2003). Leading for Value. Harvard Business Review, 81(4), 41–46.
[23] Rahman, S. (1998). Theory of Constraints: A Review of the Philosophy and its Applications. International Journal of Operations and Production Management, 18(4), 336–355.
[24] Ronen, B. (1992). The Complete Kit Concept. International Journal of Production Research, 30(10), 2457–2466.
[25] Ronen, B., Coman, A., & Schragenheim, E. (2001). Peak Management. International Journal of Production Research, 39(14), 3183–3193.
[26] Ronen, B., Pliskin, J. S., & Pass, S. (2006). Focused Operations Management for Health Services Organizations. San Francisco, CA: Jossey-Bass, John Wiley and Sons.
[27] Ronen, B., & Spector, Y. (1992). Managing System Constraints: A Cost/Utilization Approach. International Journal of Production Research, 30(9), 2045–2061.
[28] Schonberger, R. J. (1986). World Class Manufacturing: The Lessons of Simplicity Applied. New York, NY: Free Press.
[29] Shoemaker, T. E., & Reid, R. A. (2005). Applying the TOC Thinking Process: A Case Study in the Government Sector. Human Systems Management, 24(1), 21–37.
[30] Smith, K. (2004). Eastern Financial Florida Credit Union: Creating a Competitive Advantage in the Mortgage Lending Business. Presented at the TOC World 2004 Conference, Uncasville, Connecticut, April 13–16.
[31] Stern, J. M., & Shiely, J. S. (2001). The EVA® Challenge. New York, NY: John Wiley & Sons.
[32] Stewart, G. B., III (1992). The Quest for Value. New York, NY: Harper Collins.
[33] Sundaram, A. K., & Inkpen, A. C. (2004). The Corporate Objective Revisited. Organization Science, 15(3), 350–363.
Appendix A: Base Valuation of the Supermarket Chain

The valuation was prepared under the assumption of "business as usual". Since nothing changes in this scenario, the figures are identical for the base year and for each of years 1–5.

Table 3. Base valuation – financial statements summary (thousands $; base year and years 1–5)

Income statement summary
  Sales                                          700,000
  Cost of goods sold                             512,000
  Gross profit                                   188,000
  Sales, general and administrative              160,000
  Earnings before interest and taxes             28,000
  Interest expense                               3,150
  Earnings before income taxes                   24,850
  Income taxes (36%)                             8,946
  Net income                                     15,904

Balance sheet summary
  Current assets:
    Accounts receivable                          65,000
    Inventories                                  43,000
    Total current assets                         108,000
  Short-term noninterest liabilities:
    Accounts payable                             105,000
    Other current payables                       32,000
    Total short-term noninterest liabilities     137,000
  Net working capital                            –29,000
  Net property, plant and equipment              274,000
  Total capital required                         245,000
  Short-term bank credit                         11,000
  Long-term debt                                 52,000
  Total debt                                     63,000
  Shareholders' equity                           182,000
  Total capital resources                        245,000
Table 4. Base valuation (thousands $; figures are identical for the base year and years 1–5)

Cash flow available to investors
  EBIT (earnings before interest and taxes)      28,000
  Taxes on EBIT                                  10,080
  NOPAT (net operating profit after tax)         17,920
  Depreciation expense                           18,000
  Gross cash flow                                35,920
  Increase (decrease) in working capital         0
  Capital expenditures                           18,000
  Total gross investment                         18,000
  Cash flow available to investors               17,920

Economic value added calculation
  NOPAT (net operating profit after tax)         17,920
  WACC (weighted average cost of capital)        8.25%
  Debt/equity ratio                              0.35
  Invested equity (beginning of year)            182,000
  Financial liabilities (beginning of year)      63,000
  Total invested capital                         245,000
  Capital charge                                 20,216
  EVA® (economic value added)                    –2,296

Valuation at the base year
  Discounted cash flow, first five years         71,078
  Residual value                                 146,097
  Total valuation of invested capital            217,175
  Net financial liabilities                      63,000
  Excess assets                                  0
  Value of the company to its shareholders       154,175
Appendix B: Pro Forma Valuations of the Value Drivers' Impact

Value Driver 1: Increasing the Average Customer Purchase

The value driver is described in Section 4.2.1. The valuation assumes a one-time 5% increase in sales, after which sales remain at the same level, with additional costs estimated at 0.5% of sales, every year. The general assumptions are detailed in Section 4.4. Years 1–5 are identical and are therefore shown in a single column.

Table 5. Value driver 1 – financial statements summary (thousands $)

                                               Base year   Years 1–5
Income statement summary
  Sales                                        700,000     735,000
  Cost of goods sold                           512,000     537,600
  Gross profit                                 188,000     197,400
  Additional costs                             –           3,675
  Sales, general and administrative            160,000     160,000
  Earnings before interest and taxes           28,000      33,725
  Interest expense                             3,150       3,158
  Earnings before income taxes                 24,850      30,568
  Income taxes (36%)                           8,946       11,004
  Net income                                   15,904      19,563

Balance sheet summary
  Current assets:
    Accounts receivable                        65,000      68,250
    Inventories                                43,000      45,150
    Total current assets                       108,000     113,400
  Short-term noninterest liabilities:
    Accounts payable                           105,000     110,250
    Other current payables                     32,000      32,000
    Total short-term noninterest liabilities   137,000     142,250
  Net working capital                          –29,000     –28,850
  Net property, plant and equipment            274,000     274,000
  Total capital required                       245,000     245,150
  Short-term bank credit                       11,000      11,150
  Long-term debt                               52,000      52,000
  Total debt                                   63,000      63,150
  Shareholders' equity                         182,000     182,000
  Total capital resources                      245,000     245,150
Table 6. Value driver 1 – pro forma valuation Base year
Year 1
Year 2
Year 3
Year 4
Year 5
Thousands $ Cash flow available to investors EBIT (Earnings before interest and taxes)
28,000
33,725
33,725
33,725
33,725
33,725
Taxes on EBIT
10,080
12,141
12,141
12,141
12,141
12,141
NOPAT (Net operating profit after tax)
17,920
21,584
21,584
21,584
21,584
21,584
Depreciation expense
18,000
18,000
18,000
18,000
18,000
18,000
Gross cash flow
35,920
39,584
39,584
39,584
39,584
39,584
Increase (decrease) in Working capital
0
150
0
0
0
0
Capital expenditures
18,000
18,000
18,000
18,000
18,000
18,000
Total gross investment
18,000
18,150
18,000
18,000
18,000
18,000
Cash flow available to investors
17,920
21,434
21,584
21,584
21,584
21,584
172
B. Ronen et al. / Value Focused Management (VFM)
Table 6. (Continued.)

Thousands $                                 Base year    Year 1    Year 2    Year 3    Year 4    Year 5
Economic value added calculation
NOPAT (Net operating profit after tax)         17,920    21,584    21,584    21,584    21,584    21,584
WACC (Weighted average cost of capital)         8.25%     8.25%     8.25%     8.25%     8.25%     8.25%
Debt/equity ratio                                0.35      0.35      0.35      0.35      0.35      0.35
Invested equity (beginning of year)           182,000   182,000   182,000   182,000   182,000   182,000
Financial liabilities (beginning of year)      63,000    63,000    63,150    63,150    63,150    63,150
Total invested capital                        245,000   245,000   245,150   245,150   245,150   245,150
Capital charge                                 20,216    20,216    20,221    20,221    20,221    20,221
EVA® (Economic Value Added)                    –2,296     1,368     1,363     1,363     1,363     1,363

Valuation at the base year (Thousands $)
Discounted cash flow, first five years         85,479
Residual value                                176,059
Total valuation of invested capital           261,538
Net financial liabilities                      63,000
Excess assets                                       0
Value of the company to its shareholders      198,538
Value increase relative to the base year       44,364   (29%)
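The pro forma chain in Table 6 can be reproduced line by line: taxes are 36% of EBIT (the rate used in the income statements), NOPAT is EBIT less those taxes, gross cash flow adds back depreciation, the cash flow available to investors nets out total gross investment, and EVA® is NOPAT minus the capital charge on invested capital. A short Python sketch over the Year 1 column (thousands of dollars; the 36% rate and the capital charge are taken from the tables):

```python
# Year 1 column of Table 6 (value driver 1), in thousands of dollars.
ebit = 33_725
tax_rate = 0.36              # income tax rate used throughout the statements
depreciation = 18_000
capex = 18_000               # capital expenditures
wc_increase = 150            # increase in net working capital
capital_charge = 20_216      # WACC x invested capital, as given in the table

taxes_on_ebit = round(ebit * tax_rate)       # rounded to whole thousands
nopat = ebit - taxes_on_ebit                 # net operating profit after tax
gross_cash_flow = nopat + depreciation       # add back the non-cash charge
total_gross_investment = capex + wc_increase
cash_flow_to_investors = gross_cash_flow - total_gross_investment
eva = nopat - capital_charge                 # economic value added

assert taxes_on_ebit == 12_141
assert nopat == 21_584
assert gross_cash_flow == 39_584
assert cash_flow_to_investors == 21_434
assert eva == 1_368
```

The same chain with the base-year EBIT of 28,000 reproduces the base-year column, including the negative EVA® of –2,296.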
Value Driver 4: Shelf Space Management

This value driver is described in Section 4.2.4. The valuation assumes a one-time 4% increase in sales in the second year, after which sales remain at that level; a $500,000 cost in the first year; and additional costs estimated at 0.2% of sales from the second year onward. The general assumptions are detailed in Section 4.4.
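Under these assumptions the Year 2 income-statement lines of Table 7 follow directly: sales step up once by 4%, cost of goods sold keeps its base-year ratio to sales, and the ongoing extra cost is 0.2% of the new sales level. A small Python sketch (thousands of dollars, base-year figures from the tables):

```python
# Value driver 4 (shelf space): one-time 4% sales increase from Year 2 on,
# a 500 (thousand $) cost in Year 1, then 0.2% of sales per year thereafter.
base_sales = 700_000
base_cogs = 512_000
sga = 160_000                 # sales, general and administrative

sales_y2 = round(base_sales * 1.04)            # one-time 4% step-up
cogs_y2 = round(base_cogs * 1.04)              # COGS keeps its ratio to sales
gross_profit_y2 = sales_y2 - cogs_y2
additional_costs_y2 = round(sales_y2 * 0.002)  # 0.2% of sales
ebit_y2 = gross_profit_y2 - additional_costs_y2 - sga

assert sales_y2 == 728_000
assert gross_profit_y2 == 195_520
assert additional_costs_y2 == 1_456
assert ebit_y2 == 34_064
```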
Table 7. Value driver 4 – financial statements summary

Thousands $                           Base year    Year 1    Year 2    Year 3    Year 4    Year 5
Income statement summary
Sales                                   700,000   700,000   728,000   728,000   728,000   728,000
Cost of goods sold                      512,000   512,000   532,480   532,480   532,480   532,480
Gross profit                            188,000   188,000   195,520   195,520   195,520   195,520
Additional costs                              0       500     1,456     1,456     1,456     1,456
Sales, General and Administrative       160,000   160,000   160,000   160,000   160,000   160,000
Earnings before interest and taxes       28,000    27,500    34,064    34,064    34,064    34,064
Interest expense                          3,150     3,150     3,156     3,156     3,156     3,156
Earnings before income taxes             24,850    24,350    30,908    30,908    30,908    30,908
Income taxes (36%)                        8,946     8,766    11,127    11,127    11,127    11,127
Net income                               15,904    15,584    19,781    19,781    19,781    19,781

Balance sheet summary
Accounts receivable                      65,000    65,000    67,600    67,600    67,600    67,600
Inventories                              43,000    43,000    44,720    44,720    44,720    44,720
Current assets                          108,000   108,000   112,320   112,320   112,320   112,320
Accounts payable                        105,000   105,000   109,200   109,200   109,200   109,200
Other current payables                   32,000    32,000    32,000    32,000    32,000    32,000
Short-term noninterest liabilities      137,000   137,000   141,200   141,200   141,200   141,200
Net working capital                     –29,000   –29,000   –28,880   –28,880   –28,880   –28,880
Net property plant and equipment        274,000   274,000   274,000   274,000   274,000   274,000
Total capital required                  245,000   245,000   245,120   245,120   245,120   245,120
Short-term bank credit                   11,000    11,000    11,120    11,120    11,120    11,120
Table 7. (Continued.)

Thousands $                 Base year    Year 1    Year 2    Year 3    Year 4    Year 5
Long-term debt                 52,000    52,000    52,000    52,000    52,000    52,000
Total debt                     63,000    63,000    63,120    63,120    63,120    63,120
Shareholders’ equity          182,000   182,000   182,000   182,000   182,000   182,000
Total capital resources       245,000   245,000   245,120   245,120   245,120   245,120
Table 8. Value driver 4 – pro forma valuation

Thousands $                                Base year    Year 1    Year 2    Year 3    Year 4    Year 5
Cash flow available to investors
EBIT (Earnings before interest and taxes)     28,000    27,500    34,064    34,064    34,064    34,064
Taxes on EBIT                                 10,080     9,900    12,263    12,263    12,263    12,263
NOPAT (Net operating profit after tax)        17,920    17,600    21,801    21,801    21,801    21,801
Depreciation expense                          18,000    18,000    18,000    18,000    18,000    18,000
Gross cash flow                               35,920    35,600    39,801    39,801    39,801    39,801
Increase (decrease) in working capital             0         0       120         0         0         0
Capital expenditures                          18,000    18,000    18,000    18,000    18,000    18,000
Total gross investment                        18,000    18,000    18,120    18,000    18,000    18,000
Cash flow available to investors              17,920    17,600    21,681    21,801    21,801    21,801
Table 8. (Continued.)

Thousands $                                 Base year    Year 1    Year 2    Year 3    Year 4    Year 5
Economic value added calculation
NOPAT (Net operating profit after tax)         17,920    17,600    21,801    21,801    21,801    21,801
WACC (Weighted average cost of capital)         8.25%     8.25%     8.25%     8.25%     8.25%     8.25%
Debt/equity ratio                                0.35      0.35      0.35      0.35      0.35      0.35
Invested equity (beginning of year)           182,000   182,000   182,000   182,000   182,000   182,000
Financial liabilities (beginning of year)      63,000    63,000    63,000    63,120    63,120    63,120
Total invested capital                        245,000   245,000   245,000   245,120   245,120   245,120
Capital charge                                 20,216    20,216    20,216    20,220    20,220    20,220
EVA® (Economic Value Added)                    –2,296    –2,616     1,585     1,581     1,581     1,581

Valuation at the base year (Thousands $)
Discounted cash flow, first five years         82,494
Residual value                                177,811
Total valuation of invested capital           260,304
Net financial liabilities                      63,000
Excess assets                                       0
Value of the company to its shareholders      197,304
Value increase relative to the base year       43,130   (28%)
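The valuation bridge at the bottom of Table 8 works the same way as in Table 6: the value of the company to its shareholders is the total valuation of invested capital less net financial liabilities plus excess assets (zero here, as in Table 6), and the quoted percentage compares the value increase with the implied base-year shareholder value. A Python check (thousands of dollars):

```python
# Bottom of Table 8 (value driver 4), in thousands of dollars.
total_valuation = 260_304       # DCF of first five years plus residual value
net_financial_liabilities = 63_000
excess_assets = 0
value_increase = 43_130

value_to_shareholders = total_valuation - net_financial_liabilities + excess_assets
assert value_to_shareholders == 197_304

# The 28% is the increase relative to the base-year shareholder value.
base_year_value = value_to_shareholders - value_increase
pct = round(100 * value_increase / base_year_value)
assert pct == 28
```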
Zeleny’s Human Systems Management and the Advancement of Humane Ideals 1

Alan E. SINGER
Department of Management, University of Canterbury, Private Bag 4800, Christchurch, New Zealand
Email: [email protected]

1. Background

Milan Zeleny’s latest book, “Human Systems Management: Integrating Knowledge, Management & Systems,” is important not only for its role in advancing and upgrading business practices and principles, as was intended, but also for its considerable potential to inform and in some ways challenge the field of business ethics. The book (HSM) offers many prescriptions for running contemporary state-of-the-art competitive enterprises in service of society: the normative principles of meso-level business strategy and ethics. However, it also provides a very distinctive viewpoint on micro-level managerial ethics, as well as macro-level social and political systems. In the following section of this review, some of the main principles and tenets of HSM are described and briefly critiqued. Then (in Section 3) an augmentation of HSM is outlined. At that point, additional prescriptions for enterprise strategy are indicated in order to more fully accommodate the several known limitations of market-based systems. The final section of the article then traces the historical context of the HSM thesis, observing that if it is indeed “not an ideology,” as is claimed, it is certainly an informed and persuasive view…from somewhere.
2. HSM Principles

HSM advances distinctive ways of thinking about several familiar philosophical and social science constructs, by placing them squarely within the context of contemporary technological enterprise. These constructs include knowledge, wisdom, purposes, ethics, capital, and synthesis (where contributions by Zeleny to the theory of tradeoffs are well-known), as well as culture and ideology. The HSM contributions in these areas are summarized in Table 1. In many instances, a distinctive position is staked out in the book that quickly invites philosophical challenge. However, before taking that up, it definitely pays to remember that this is a thesis with a mission: to equip business managers with mental models and a lexicon that is appropriate to practical action within a culture of enterprise. Where similar constructs have been considered elsewhere within the spectrum of
1. This is an adapted reprint, with permission, of an article that originally appeared as: Singer, A.E., “Strategy and ethics: managing human systems and advancing humane ideals,” Business Ethics Quarterly 17(2) (2007): 341–363.
Table 1. Selected constructs with HSM contributions

CONSTRUCT               HSM CONTRIBUTIONS
Knowledge               coordination of action
Wisdom                  explication of purpose
Purpose of enterprise   self-production, service to society
Ethics                  mastery of micro-contexts
Synthesis               de novo programs, cognitive equilibrium
Capital                 multi-form, live & dead
Organisation            autopoiesis, organism, boundaries
Culture                 productive practices, convergence
Ideology                Capitalism with HSM is not an ideology
the social and managerial sciences, somewhat different purposes and epistemologies have usually been adopted.

2.1. Knowledge

Taking its cue from FA Hayek (1945), HSM starts by identifying the central problem of contemporary management as the division and re-integration of knowledge; or more precisely, the “co-ordination of complementary complexes of specific skills”. In this context, knowledge should be thought of as, indeed is, the coordination of action. Backing off only slightly, Zeleny subsequently describes knowledge as “the ability to coordinate action” and as “an embodied complex of action-enabling structures, externalized through purposeful coordination of action”. Put simply, knowledge equals know-how. It is not …a justified belief, or knowing-that. Accordingly, managers of productive enterprises should now think of all knowledge as tacit knowledge; the rest (encoded, explicit, externalized etc.) is simply information, or data; yet “it is the greatest truth of our age: information is not knowledge”. Indeed, as a society we have far too much data and information (see 3.5) but not enough “knowledge”; that is, not enough coordinated productive activity.

2.2. Wisdom

This action-emphasis in HSM is extended to a distinctive notion of management wisdom. It is “knowing-why” things are so. Implicitly adopting a Buddhist tenet, “know-why” is identified as a component of a larger wisdom of enterprise. In Buddhist writing (e.g. Dhammananda, 1999) it is said that “the knowledge of how things work is quite different from…wisdom, which is insight as to why it works, or why it is done”. Similarly, in HSM, the wisdom of enterprise refers to management understandings of why things work (the science), why particular activities are being carried out (the strategy) and especially, why particular purposes or missions have been adopted (1).
Traditionally, Buddhist philosophy has been criticized as non-productive economically: more to do with nothing than with bringing forth into the World something worthwhile. Somewhat paradoxically, HSM revitalizes this ancient philosophy of wisdom, conferring a completely new role on it in a predicted era of wise enterprise. The other component of wisdom is “the explication of purpose”. The wise entity fully communicates all its “know-why,” but it does this primarily through its actions. Elsewhere, Zeleny quotes Sir Fletcher Jones: “what a man says whispers, what he does thunders.” If we are wise, our actions will automatically communicate and explicate our purposes, although Zeleny appears to concede that the natural language of ethics and politics might also be deployed in order to refine, clarify and reinforce this explication. To be self-sustaining in the current environment, an entity’s purposes must be socially accepted and “validated by experience”; they must be credible and aligned with enterprise actions (2). Corporations can be informed, they can be knowledgeable; but in the global era they must also become wise. Although the trend from information-based to knowledge-based strategies still has some way to go, the next transition will be from knowledge to wisdom. Indeed, “it is already taking shape”.

2.3. Purposes

HSM endorses the established idea in the field of strategic management (and long before that, in philosophy, cf. Burrell, 1989) that an entity is not completely free to select or compose its purposes and goals. To some extent these emerge from its current circumstances and activities, so they reflect its capabilities. Accordingly, the logic of business strategy includes not only the traditional (ends → ways → means) sequence, but also its reversal (means → ways → ends), in which one has to search for the right goals or ends to “validate and enhance” the existing means. To the extent that an enterprise can still freely choose its purposes, these should not be looked for in the financial markets. HSM implicitly rejects the Milton Friedman maxim that the social responsibility of business is to increase its profits.
On the contrary, managers of productive enterprises should not concern themselves with share prices that are based upon speculative trades and deals, nor with the influence of quarterly financial reports on this type of speculation. Economic institutions should absolutely not encourage such a focus. Many prominent Western industrialists and entrepreneurs who have shared this rather negative view of financial markets are quoted in the book: “a business that exists to feed profits to people who are not engaged in it stands on a false basis” (Thomas Bata, a Czech entrepreneur); “the stock market is just a little show on the side” and it “has nothing to do with business” (Henry Ford); whilst according to JF Lincoln, “the stockholder should have the lowest priority”. According to HSM, monetary rewards should be earned exclusively by those who are actually engaged in production and coordination. The reward should go to the industrialist or entrepreneur who brings forth a living enterprise. As demonstrated by Bata Enterprises in the Czech Republic, not to mention much of Japanese industry in the 1980s and 1990s, business is all about earning money through the productive coordination of action. The mere trading of property rights, whether in shares or used cars, all too often involves dubious deals and swindles, so it just does not count as “earning,” in any upright or ethical sense. In nature, it is argued, such “deals” do not sustain productive networks, so for HSM they do not count as “earnings” and should be discouraged accordingly. (In financial reporting, such trades and deals are of course a major component of profit, income and earnings.)

It thus becomes apparent quite early on that HSM is siding firmly with the stakeholder model of enterprise, or stakeholder capitalism. The purpose of enterprise is to serve customers, employees, and society, certainly not to facilitate the accumulation of financial capital from a distance, through speculative trades. With regard to customers as stakeholders, HSM quotes JF Lincoln: “The proper responsibility of business is to build a better and better product at lower & lower price”. This point is frequently reinforced in HSM by demonstrations of how enterprises can eliminate cost-quality tradeoffs and respond “kinetically” to individual customer-related events. Modern enterprises (do and should) strive to be agile and kinetic. They should strive to deal with “markets of one” (one customer, that is, not one stock-trader), just as medical doctors have always tried to respond to each individual patient and each episode. With regard to employees as stakeholders, the doctrine of Thomas Bata is further endorsed: to “provide a satisfying environment, now and in future” whilst empowering employees. Long ago, Bata introduced the idea of employee share ownership plans (ESOPs) in his companies in the Czech Republic. He claimed at that time that this was a “simple” idea that “immobilizes defenders of the ideological struggle between capitalists and workers.” As if to pre-empt howls of outrage over Enron-style vanishing ESOPs, JF Lincoln is also quoted at this point regarding the “injustice” that takes place whenever “a worker loses his job and the manager is unpunished”. Finally, with regard to society as a whole, HSM invokes the “wisdom” of Sir Fletcher Jones: “every business enterprise should have as its very basic policy…to benefit society” and “the aim of enterprise is…better life for all” (emphasis added). By way of explication, Jones claimed that “only under such conditions can enterprise continue”.
However, HSM crucially stops short of working through the full implications of its view of business purpose as service to society, especially as it relates to the full set of limitations of market-based systems (see Section 3 of this article).

2.4. Ethics

Since the book was written, Zeleny has also proposed a “4E spine” of enterprise, in a somewhat similar vein to the clichéd 4P’s of marketing and 5P’s of strategy. The “spine” is: “efficiency, effectiveness, explicability (as mentioned)…and ethics”. Modern managerial action should possess all four qualities. The ethics component of the 4E spine is now likely to prove of particular interest to philosophers. It is a mixture of virtue ethics, pragmatism and egoism, in which ethics in business is held to be observable only at the micro-level. In general, ethical behaviour is a spontaneous inclination (rather than a planned response) and it stems “naturally” from a desire for gain. Ethics should not be thought of as a social imposition. On the contrary, it is part of the tacit knowledge of individuals, so that to be ethical, one simply “acts good”. Accordingly, business ethics is expressed by individuals as they strive for “mastery of the micro-context” and by “human coping with immediate circumstances.” It is a property of micro-level human behaviour and it operates in real time (or “online,” to quote Ken Goodpaster circa 1983). By implication, one need not attempt to obey explicit, encoded (and imposed) ethics rules. Ethics committees that struggle to compose even a few good principles (cf. Soule, 2002) are really not needed; they are barking up the wrong tree. Still more controversially, Zeleny argues that to be “truly” ethical (i.e. act good) one cannot be intentionally “ethical” (i.e. obey official rules). One should simply act out of an “informed sense of the good”. Significantly, however, there is no discussion in HSM of how well-informed this particular sense needs to be.
Another tenet of HSM that will catch the attention of moral philosophers is its indication that meso-level stakeholder integration strategies do not really need an independent justification, whether normative, instrumental or literary: these strategies are just plain “natural”. Philosophical arguments concerning the Naturalistic Fallacy are not entertained. The thesis just states (or persuasively claims) the scientific facts: enterprises are autopoietic systems, so they (do/should/must) act in ways that sustain and co-produce their own support network. The stakeholder model of strategy is thus literally alive and well, unlike the shareholder model (see “Capital” below). Finally, at the level of macro-ethics and politics, HSM simply issues an appeal for a “functioning democracy that is based on respect, with free market behaviour (i.e. exchanges between HSM-type enterprises and their customers)…that is based on trust”. Although Milan Zeleny has been personally active in national politics in the Czech Republic and in the USA, the HSM thesis does not really consider the possible ways of bringing about this necessary macro-environmental state of trust and respect. It simply implies that more and more HSM-type enterprise will be enough to do the trick, but it neglects to explore the possibility that businesses might be able to influence governments (heteropoietically) to also pursue that end more actively. Strikingly, governments in general are dismissed as being “least equipped” to promulgate morality in business and in society.

2.5. Capital

Given Zeleny’s seminal contributions in the 1970s and 1980s to the theory of multicriteria decision making (MCDM), it comes as no surprise that HSM also endorses a multi-dimensional view of the concept of capital. There “are” several distinct forms of capital, including human, social and ecological as well as manufactured and financial (which in turn has many sub-forms). To create prosperity at the national level, Zeleny believes that social capital is usually the most critical, although in reality it has often been the most neglected and ignored.
Social capital is “the enabling infrastructure of institutions and values” in a society. In line with Sen (1996), Zeleny also notes that Japan’s wealth is primarily due to its human and social capital investments and that “strong cultures with high levels of civic trust tend to produce higher economic performance…not the other way around”. Social capital cannot be engineered, but it can be deliberately cultivated. Once again, however, the possible ways of doing this are not very fully explained. According to HSM, trade-offs in which the level of social capital is reduced in a system in order to maximize the manufactured form of capital “are rarely sustainable”. Instead, enterprise strategies and national policies should both aim to achieve a balance or a harmony between the four forms, essentially by creating the right amounts of each one, rather than destroying one in order to create the other. This approach has been pursued in recent times by several well-known ecological thinkers (e.g. Hawkens, 1999; Porritt, 2005) as well as some World Bank studies that are discussed in the book. In contrast, financial-economic models routinely incorporate or subsume the distinctive forms of capital into a single overarching formal utility (wealth or profit) function. Although this is done strictly for the purposes of formal analysis, Zeleny wrote that the different forms of capital “cannot” be subsumed into a single measure in this way, by which he meant that such models mislead managers and policy makers (3).

Although the distinction between the “manufactured” and “financial” forms of capital was glossed over in the above discussion, it is really quite central to the HSM thesis. It re-emerges later when a distinction is drawn between “live capital,” which is the re-invested monetary earnings of a productive enterprise, vs. “dead” capital, which is the accumulated financial profit gained from speculative trades and deals (4). Put simply, live capital is a very good thing: it refers to the assets used by an enterprise to produce its valued market offerings and future-self. It takes its place within a natural productive cycle:

Production → Capital → more Production

“Dead” capital is not good. It “has as its main purpose the production of payments for owners” and it can be represented as:

Capital → Production → more Capital

Although it may be obvious that both of these sequences are embedded in a recurring means-ends chain (and are modeled in financial economics as dynamic dividend and investment decisions), the “accent” and implications for business strategy differ under the two different representations, or frames. The first sequence coheres with the Japanese management tradition, in which manufacturing and marketing are accorded a much higher priority than financial-market deals. On the other hand, within Anglo-US critical scholarship, one can find plenty of references to the notion of “dead” capital, or similar. For example, Manning (1988) noted that under the assumption of corporate moral agency, acquisition becomes the moral equivalent of murder (although she finds this “counterintuitive”); whilst at about the same time Burrell (1989) wrote trenchantly on “linearity and death” in his broad critiques of the engineering view of economics. All such references tend to reinforce the underlying point in HSM: the first, “live” action-sequence is good and we need more of it; the second sequence is lifeless and should not be encouraged.

2.6. Synthesis

The development of all four forms of capital (social, human etc.) in harmony exemplifies an ethos of “management without tradeoffs” (MWT), which is in turn a component of a larger Global Management Paradigm (GMP) that is discussed in the book. GMP combines ideas such as open books, customer and supplier integration, mass-customization, and horizontal (flat) organisation, within a mindset that continually focuses upon the elimination of tradeoffs.
In managerial decision making, nothing should be thought of as fixed or given (like the “given” constraints in mathematical programming problems). All tradeoffs are “perceived” or “apparent,” and everything can be reframed and potentially redesigned. Put differently, when trying to optimize a system, one should also take some time to explore possible ways of re-designing it. One should be similarly cautious about formal solutions to problems that locate an optimal point within a given set of constraints (remember this is from a well-known mathematician); instead one should try to “dis-solve” the original problem, through innovation. About 25 years ago, the de novo linear programming method (e.g. Zeleny, 1981) gave formal expression to this entire idea, and it has since been widely reported and discussed in management science and engineering journals. In de novo programming, the costs of incrementally re-designing the system are estimated ex ante and become part of the calculation of an optimum. In the new book, the discussion of the de novo method is confined mainly to cost-quality problems. Elsewhere it has been applied to perceived environmental costs. However, no one has yet taken the opportunity to apply the de novo method to the global social and moral problem of distributive justice: the “apparent” nature of all equity-efficiency tradeoffs, together with the associated “false choices” between profit and fairness (e.g. Kuttner, 1984).

The concept of synthesis re-appears in another distinctive theory of decision-making that is set out in the book: the theory of Cognitive Equilibrium (CE). This expands the de novo method at the conceptual-modelling level, but it also admits a formal representation in terms of fuzzy sets (e.g. Zeleny, 1991). Both the conceptual and formal versions of CE express the core idea that none of the components of a structured decision-making problem is really a “given,” or an exogenous condition. Instead, the various criteria, values, alternatives and their representations are all products of the mind; furthermore, they are all inter-dependent. When particular alternatives are under consideration, for example, selected criteria and constraints then tend to become relatively more apparent, and so on. Accordingly, decision-making should be depicted as a circular, equilibrium-seeking (dissonance-reduction, coherence-seeking) process in which descriptions, representations and frames are iteratively re-considered, alongside the relevant criteria and values. Although Zeleny’s CE theory was first set out in 1991, it retains some potential to contribute to several streams of philosophical thought. In rational choice theory, for example, Schick (1991) wrote that “we value things under the descriptions we put on them.” In CE, we also tend to create things “under” the values we adopt, and so on. In business ethics, those who are interested in the theory of moral imagination and the effects of framing on decision making (e.g. Werhane, 1999), or the notion of a reflective equilibrium within Contractarian ethics, might all find this theory of cognitive equilibrium to be relevant to their endeavours.
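Zeleny's de novo idea admits a compact toy illustration. Instead of optimizing within fixed resource stocks, the resource levels themselves become decision variables, purchasable at unit prices within a budget; the single-objective linear program then collapses to one budget constraint, and the optimum puts the whole budget behind the product with the highest profit per budget dollar. All numbers below are invented for illustration; only the reduction itself follows the de novo method:

```python
# De novo reduction, a toy sketch (all numbers invented for illustration).
# Ordinary LP:  max c.x  s.t.  A x <= b,  with resource stocks b FIXED.
# De novo:      b is also chosen, at resource prices p, within budget B:
#   max c.x  s.t.  p.(A x) <= B,  x >= 0
# i.e. one budget constraint with unit budget cost k_j = sum_i p_i * A[i][j].

c = [40, 30]                 # unit profit of each product
A = [[4, 2],                 # resource 1 used per unit of products 1, 2
     [2, 6]]                 # resource 2 used per unit of products 1, 2
p = [3, 5]                   # purchase price of one unit of each resource
B = 2_600                    # total budget for acquiring resources

# Unit budget cost of each product.
k = [sum(p[i] * A[i][j] for i in range(2)) for j in range(2)]   # [22, 36]

# Best profit per budget dollar decides where the whole budget goes.
best = max(range(2), key=lambda j: c[j] / k[j])
x = [0.0, 0.0]
x[best] = B / k[best]
profit = c[best] * x[best]

# The resource stocks b are then *designed* to fit the plan exactly,
# rather than treated as given constraints to optimize within.
b = [A[i][0] * x[0] + A[i][1] * x[1] for i in range(2)]

assert k == [22, 36]
assert best == 0                        # product 1: 40/22 > 30/36
assert abs(profit - 2_600 * 40 / 22) < 1e-9
```

The design choice, in Zeleny's terms, is that the constraint set is itself an object of choice: the "given" resource mix is dissolved rather than accepted.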
As Sen (1996) once mentioned, the same applies to the “promising” field of MCDM as a whole, within which the theory of CE was developed.

2.7. Organization

The term “synthesis” is also, of course, suggestive of dialectics. Ever since that latter notion was first articulated (by Plato) it has been associated with the sciences of life and mind. HSM treads that ancient pathway, but without mentioning dialectics per se. It not only develops synthesis as a theme within general management, but it also sets out a lengthy and persuasive thesis on living organisation. It is first noted that the modern business involves “adaptation, creativity, innovation and trust, none of which are particularly machine like,” but it is also claimed that productive enterprises answer directly to a particular description of living: one that involves the biological concepts of autopoiesis and a “natural” life-cycle of production, bonding and degradation. The idea of organisation-as-organism is thus held to be much more than a metaphor. In order to bring forth a product (a market offering) an enterprise also has to “produce its future-self”; that is, it must sustain and adapt itself over time intervals. Indeed, the latter type of production is “more important” than the former. The Amoeba system of Kyocera Corporation is used to illustrate and affirm this notion. The corporate entity consists of autonomous agents (associates) and teams (amoebae), linked by intra-company markets. The “amoebae” are embedded within inter-company networks of suppliers and communities of customers who are integrated into the production process and thereby extend the internal network into a functional and competitive whole, or a competitive complex. According to Zeleny (who is now working on a new book, Organisation as Organism), this “whole” is alive. It follows the cycle of production, bonding and degradation; it responds to action in its environment with action-feedback (kinetics). Its overall organization (amoeba, network) is conserved over time, whilst its detailed structure always remains open and changing. Its boundary is not legislated; it is instead a manifestation of the underlying organization and the networked structure. Within each “amoeba,” age, training and formal qualifications are irrelevant. The only essential qualification for moving to the head of an amoeba (be it an organization or a cellular slime mould) is the action-based know-how or competence to do the necessary job. The challenge for managers (if any) is therefore not “how can we sustain the system” but how we can help it to sustain itself, with its sub-systems (i.e. like a good doctor helping a patient). This challenge was well understood in the 1930s by Thomas Bata, who observed that “all ranks…realize personal growth and social development through (this) self-renewing corporate organism”. In the contemporary context, Zeleny believes that these biological principles continue to yield a natural, spontaneous and gainful way to arrange human affairs. He notes that “humans like change…so long as it preserves the support network that they are part of.” In contrast, the radical disruptions of productive organization that can occur under the dictates of bureaucratic (e.g. socialist) hierarchies and under financial-market capitalism (by virtue of M&A activity) are unnatural, undesirable and unnecessary. Whilst this critique may be rather familiar, it has always been evaded or suppressed by power seekers and ideologues.

2.8. Culture

Zeleny really dislikes ideology (see below), but he is also very critical of what he sees as an overemphasis on culture in academic studies of global business. He begins the assault on the burgeoning scholarship of culture by quoting Tolstoy: “happy families are all alike; every unhappy family is unhappy in its own way”.
Likewise, “good management is good management, anywhere in the world,” whilst “bad management has many forms but it is remarkably recognizable.” Furthermore, HSM presents modern knowledge-based business enterprise as the quintessential “cultural” institution. Economic organization and quality of production should never be thought of as something separate from culture; indeed, in many ways they are its most reliable and expressive manifestation. Surely, he argues, we should now think of a well-run enterprise as “a more potent and expressive cultural achievement than a hand-made mug or selfabsorbed painting”. In any case, there is plenty of casual and formal evidence that the “cultures” of business enterprise and management are everywhere converging (5). There is an obvious tension between this viewpoint and the more common understanding of an entity’s culture as its rituals, symbols and mental states (e.g. Doktor, 1990). However, in recent time, many of those symbols (including national identities) have been recast by entrepreneurs as dot.com marketing tools. Meanwhile, according to HSM, the only relevant “mental state” in a society is to be found in the know-how and know-why of its productive enterprises (6). So far as growing a vibrant global enterprise or network is concerned, local “cultural” products such as handicraft and assorted mugs are now just another sideshow. Indeed, they tend to “bring misery”. On a much happier note, Zeleny predicts that modern market-driven work practices such as teleworking and work-cloning will continue to create new communities of geographically dispersed peers; a development, he says, that is “sorely needed”.
A.E. Singer / Zeleny’s Human Systems Management and the Advancement of Humane Ideals
2.9. Ideology

What is definitely not needed any more is hierarchical management: it is ideological, “hopeless” and its consequences are decline and corruption. HSM is particularly critical of top-down state involvement in enterprise operations and ownership, as well as the historically related notion of social class. On the first of these, Bata is once again quoted with approval: “It is enterprises (i.e. the HSM-type, not the state, not unions) that are… bringing quality of life and possibilities of education to the nation”. With regard to “social class,” Zeleny argues that each employee or citizen should be thought of as a repository of know-how and certainly not as a representative of a class that is in need of solidarity. By implication, trades unions are no longer necessary (but see next section). It is perhaps worth recalling at this point that HSM has also objected strongly to the mainstream Anglo-US variant of financial-market capitalism (FMC), where there are too many “swindles” and too much dead capital. Accordingly, it is essentially advocating a variant of capitalism, a kind of Third Way, in which (i) enterprises are structured and viewed as self-sustaining and learning entities, (ii) stakeholders and “live capital” are fully integrated into the network, and (iii) education is for enterprise but with substance. A startling claim is then made that, unlike both Socialism and FMC, such a system “is not an ideology”. Zeleny insists that HSM has simply described and depicted a natural and spontaneous system: one that “comes to life when ideological pressures and limitations cease”.
3. Humane Ideals

Despite that insistence, HSM often reads more like a political manifesto (e.g. Zeleny, 1988). Indeed, it moves unambiguously into the political arena when it argues against state ownership and suggests that “the state is least equipped to promulgate morality” (for an opposing view on that, see for example Casson, 1998). On such a reading, there is a risk that the thesis might then become fuel for the political far right: the modern-day Spencer or Hegel. Not only are enterprises modelled as social and biological systems, but their core dynamic is “human striving” to become a valued member, or a “master”. The domain of HSM is effort, earning and the struggle to achieve. The problem with all of these, when viewed as core components of a political doctrine, is that only those individuals who can “earn” (or who can pay) will be able to sustain themselves. The ethic of care comes under pressure, and the idea of moral minima applied to global society as a whole is implicitly challenged, or at best neglected. Accordingly, there remains another question for contemporary managers, concerning humane ideals. It is as follows: can enterprises also play a role in influencing governments so that they jointly promote all aspects of morality more effectively? Put differently, enterprises might be able to act autopoietically and heteropoietically, co-producing not only the future enterprise and its market offerings, as is specified in HSM, but also fostering moral progress in a much wider sense. To do this credibly and effectively, it is surely not sufficient (as HSM implies) to aim only for a culture of enterprise. It is also necessary (a) to know (i.e. know-that) that all market-based systems confront specific limitations regarding the ways in which they can “serve society” and “benefit all,” and then (b) to try to compensate systematically for those known limitations.
Table 2. Limitations of market-based systems & compensatory strategies in variants of capitalism

Known limitations of market-based systems | Financial Market Capitalism | Human Systems Management | Humane Ideals Management
Monopolistic tendencies | Anti-trust | Knowledge as power, hypercompetition | Rights & empowerment through unions
Distributive justice & fairness | Government policy | Individual contribution & community needs, educate to self-sustain | Individual needs, public goods, educate to help
Ability to pay | Government policy | No handouts, paying customer is king | Expand the network, Keynesian approaches
Information | Speculative trades | Authentic conversations, open books | Same
Preference vs. well-being | Creation of desire, framing, positioning, PR | Products incorporate human goods | Same
Alienation | Production for market | Re-integration of labour, knowledge & price | Integrate non-valued members
Associations | Democracy & markets, mechanistic | Democracy & markets, biological | Care, rights, moral minima
3.1. Market Limitations

There is a standard set of known limitations of market-based systems (KLMBS) and these can be considered one at a time in relation to the HSM prescriptions, as well as to FMC-type business strategies (i.e. business as usual). The limitations include, inter alia, the monopolistic tendencies of producers, distributive justice concerns, the inability to pay, the availability and credibility of market information, the difference between consumers’ revealed preferences and their well-being, together with various forms of social alienation (Table 2, column 1). As a matter of strategy, most FMC-type companies routinely try to exploit all of these limitations. They not only attempt to maximize their market power (see below), but they also try to create desire (e.g. Crisp, 1987), to conceal information, to glamorize social alienation, and so on. HSM has set out several elements of a more enlightened strategy that have the effect of overcoming particular limitations (e.g. by catering honestly to markets-of-one, by re-integrating price and value, etc.). However, it has remained silent on several other important limitations of markets: especially those where the compensatory strategies of enterprise would involve an ethic of care, the protection of rights, or an assurance that all peoples’ needs for income and opportunity are met to a reasonable level (refer to Table 2, columns 3 & 4). It has often been noted that businesses are able to exploit some of the market limitations, as a matter of strategy. However, what has hardly ever been stated clearly and explicitly is the simple idea that all types of enterprise might now be able to serve society much better by strategically compensating for all of these limitations, especially if they are willing and able to act jointly in partnership with governments, to this end (7). Accordingly, an alternative system of enterprise (Humane Ideals Management, or
[Figure 1. Variants of capitalism and enterprise service to society. The figure contrasts enterprise actions under each variant: FMC enterprise actions exploit all KLMBS; HSM enterprise actions overcome some KLMBS; HIM enterprise actions overcome or compensate for all KLMBS, in service to society and the common good.]
HIM) can now be more clearly envisaged, in which this type of “compensation” strategy becomes the norm for ethical business. HIM is a hypothetical (but arguably achievable) way to arrange human affairs and it can be thought of as a potential augmentation or adaptation of HSM (Fig. 1). Its elements are briefly outlined in the remainder of this article.

3.2. Monopolistic Tendencies

In FMC-type companies, it is standard practice to consider the strategic uses of market power that knowingly deny benefits to others (e.g. Boddewyn & Brewer, 1994). As Quinn & Jones (1995) and Prakash-Sethi (2003) have both noted, this implies that there is little concern for others’ rights (8, 9). To compensate for this (refer to Table 2, row 1), ideals-based HIM strategies would work within the spirit of anti-trust laws, but they would also work with governments to help ensure (for reasons of social service) that there is an adequate dispersion of market opportunities. The mainstream strategy literature has notably shied away from considering this type of “ideal” approach, but it has also neglected another obvious way of counterbalancing excessive corporate power; that is, partnerships with trades unions. On this point, HSM has sided strongly with FMC: trades unions are seen as the dysfunctional embodiment of a social class, rather than a way of building social capital. Zeleny curtly dismissed “the unrealistic dreams of the working class…”, noting that “the power is not in the class, but in the knowledge”. He then added that “autonomous, independent and self-motivated workers or citizens have never been good material for unionisation” (10). In contrast, HIM-enterprise would encourage a legitimate role for trades unions: the upholding of rights and the provision of security for those who are not yet valued members of any productive complex, or who lack the capacity to attain autonomy, or who need care.

3.3. Distributive Justice

According to HSM, modern businesses must strive to become valued members of a productive complex, whilst individuals who do not add value “should not take part”. However, we are not told what they should do instead. According to Bata, people “must be taught to help themselves”; that is, they must find ways of qualifying for network membership. This imperative is endorsed by Sir Fletcher Jones: “We shall succeed when we educate people to manage and direct their own work…”. Indeed, one hears this message again and again… particularly from those who have already succeeded in business; but much less from medics, social entrepreneurs and those on the front line of global social welfare. Accordingly, ideals-based HIM-type enterprises would work with governments and NGOs to ensure that immediate survival needs everywhere are met to a reasonable level and that public goods are well-funded, whilst people are generally encouraged and educated to help each other directly.

3.4. Ability to Pay

There have always been a variety of reasons why some individuals lack the “ability to pay” for a dignified standard of living. HSM makes no recommendation on overcoming this particular limitation (Table 2, row 3). However, it does quote successful entrepreneurs about what not to do: “By dispensing gifts of money… people become dependent on handouts and ignore their abilities of self-reliance.” “Charity does not help people... the goal of philanthropy is to foster self-help,” and so on. On the other hand, FMC and HSM both appear to accept the proposition that when a dollar is spent on market offerings, the paying customer is king: “The customer is always right, even when he is wrong,” said Bata. At this juncture (i.e. when referring to consumption, rather than production), HSM is notably silent as to whether a regal customer’s dollars were earned, endowed, inherited, the fruits of speculative trades, or simply “swindled”. HIM, in contrast, sees that there are certain types of charitable action that are not only consistent with many of the core principles of HSM, but that also endow more people with an ability to pay and participate. Just as HSM has self-sustaining enterprises integrating their customers into the production cycle, HIM-enterprises would integrate their employees more fully into the consumption cycle.
They would take direct and indirect action (through stakeholder strategies, political lobbying and partnership) aimed at sustaining and growing the network of customers that have the ability to pay. The most obvious direct action of this type is simply to lobby to increase wages, across a broad front. Recently, a senior executive of the (much criticized) Walmart corporation mentioned this very notion. He said on TV that he was in favour of raising the national minimum wage “because minimum wage earners are our customers”. A second approach to pro-actively expanding the customer network involves influencing governments to have more tax revenues channelled into poverty alleviation programs. These include social safety nets (charitable social welfare) as well as bottom-of-pyramid micro-credit programs for enterprise. HSM indirectly and unintentionally invites further consideration of such approaches in an intriguing section that discusses birth and death processes in nature, with particular reference to the works of the Russian systems theorist Bogdanov (1927 & 1984). In his theory of Tektology (a precursor to the Chilean theory of autopoiesis), the decline and death of a biological species was interpreted as a signal or affirmation of the persistence of all other species. The event of death was thus described as “the most exquisite assurance of life yet to come”. One cannot help but wonder how Bogdanov would see, in today’s world, the linkage between the death of an individual human being and the best way of “affirming” or assuring the “persistence” of the remainder. The death of a multi-billionaire, for example, releases a pool of (dead?) financial capital that can directly assist with the survival of a great many others. It is only necessary to re-distribute the wealth. Although it seems inconsistent with HSM, a properly thought-through (global) estate tax might thus become an “exquisite assurance” to many and a very effective way to “meet community needs,” well beyond the level that is achievable through autopoietic enterprise combined with voluntary philanthropy (11).

3.5. Information

In HSM (and in HIM), “speculative trades” and “un-earned gains” are associated with dead capital and they are discouraged accordingly. This includes gains from playing the market from a distance, as well as the exploitation of insider information (Table 2, row 4). Instead, people everywhere should be encouraged to implement their tacit knowledge: the type that brings forth effective coordinated actions in the service of society. For example, human-systems accountants would ensure that the internal management accounts and external financial reports are as open and clear as possible, so that everyone can understand them and see how they relate to the explicated strategy of the enterprise. Similarly, human-systems marketers would ensure that “the product speaks for itself” (again, action, not encoded information). They would engage in authentic conversations with stakeholders and see to it that the company acts quickly (kinetics) in response to any revealed product weaknesses.

3.6. Preferences & Well-Being

Within FMC consumer markets, encoded information is routinely used to create desires, manipulate frames and obscure any known product weaknesses. “Industrial-era” public-relations (“PR”) departments routinely provide spin that, according to HSM, “no longer relates” to the public. Coupled with shock-transitions from Socialism to Capitalism (in E. Europe), this type of abuse has already misled many young people into “flying around the flashing lights of empty promises of hope, until they end up totally exhausted with their wings already burned”. In contrast, HSM (and HIM) enterprises do not try to trick people. Like Bata enterprises, they “endeavour to fulfil even the unexpressed wishes” of their customers. They produce artefacts (e.g.
cars, cameras, clothing, etc.) that are infused with the human goods (beauty, quality, harmony, etc.) and that will immediately be recognized by customers as fostering their genuine well-being.

3.7. Alienation

Whilst they strive to meet their paying customers’ authentic requirements, HSM-type enterprises are also collectively laying to rest the 19th-century Marxist idea of the alienation of the worker-producer. Under the conditions that prevailed when Marx was writing, factory workers might have been mentally numbed (made stupid) by the mechanical division of labour, whilst craftsmen might have experienced a sense of alienation as a result of having their expressive values and identities subsumed into a singular monetary measure (price). However, in contemporary production we observe instead a re-integration of labour, task and knowledge. Labour is once again becoming a craft, a profession and a skill. Furthermore, HSM-type enterprises have always thought of price as an expression of their collective integrity. In a spectacular inversion of Marxist thought, price becomes an integral part of the multi-attribute market offering. That is why Bata declared that “bargaining does not exist” and “our first word is our last”. Put simply, in HSM (and HIM) the product is good and so the price is correct.
[Figure 2. A framework for enterprise strategies that compensate for market limitations. HIM enterprise actions exert direct and indirect influence on government actions (enterprise encouraged, markets regulated, ethics promoted), so that the KLMBS are overcome or compensated, in service to society and the common good.]
3.8. Political Associations

Finally, at the political level, all these variants of capitalism (FMC, HSM, HIM, etc.) have at times been linked with democracy and individual freedom. In comparison with other systems (or ideologies), they are held to foster a negative freedom from state oppression, as well as the positive freedom associated with individual achievement. For FMC, such claims often confront documented episodes that attest to the contrary, involving extractor corporations and the tacit acceptance of human rights abuses. HSM joins FMC in making the strongest possible case for freedom, but it sees that freedom as dependent upon productive enterprises adopting integrator stakeholder strategies. HSM also appeals for a macro-level “democracy based on respect” (i.e. respect for every citizen, certainly not for authority derived from hierarchy and positional power), yet it dismisses governments as being “least equipped” to help bring about such social conditions. In contrast (Fig. 2), HIM enterprises attempt to influence governments to this end, directly through lobbying, as well as indirectly through their communications. This is in accordance with (a) their explicated mission of service to society and (b) their sense of the good that is fully informed of the limitations of markets.

3.9. An Illustration

The principles of HIM can be illustrated from a distance with reference to Canon Corporation. That company already follows many tenets of HSM. For example, in 1984, Canon’s Japanese chairman Mr. Mitarai (like Bata in Czechoslovakia before him) advanced the idea that “profitability alone is not enough” and that a company also had an obligation to lend its strength “to society’s betterment” (Sandoz, 1997, p. 25). Canon then explained its corporate philosophy of Kyosei, or living and working together for the common good. Currently, the company remains profit-focused, yet its website also states that “it is the presence of imbalance in our World… (that) hinders the achievement of Kyosei.” However, Canon’s well-documented history appears to indicate some wavering between an HSM-endorsed “service to society” (Kyosei) and FMC-type profitability and shareholder wealth-creation (e.g. Kaku, 1996; Sandoz, 1997; Granstrand, 2001; www.Canon.com; Business Week, 2002). An HIM ideals-based strategy would not only have Canon shift its strategic orientation firmly towards Kyosei, but would also have it act (i.e. explicate) to try to correct that observed “imbalance”. There
Table 3. Elements of HSM strategy with ideals-based augmentations

Human Systems | Humane Ideals
Explicate purpose | Promote the re-balancing of wealth
Just treatment of workers | Partnership with unions
Philanthropy for self-help | Political support for public goods
Customer is right | Guidance on common-good applications
Knowledge as capital | IPR-free race-to-the-top (hyper-competitive)
Environmental engineering | Partnership against root causes
are several possibilities. With regard to the re-balancing of wealth, for example, the company might take a lead in moderating its pay scales whilst also recognizing and supporting properly-motivated unions. Then (Table 3, rows 2 & 3) its corporate philanthropy program could be augmented with political action aimed at promoting the funding of public goods by the state, or by trans-governmental bodies. FMC and HSM companies generally adopt quite the opposite stance, with executives arguing at every turn for lower taxes and less government. Other possibilities for ideals-based strategy include (i) criticizing product applications that obviously do not serve the “common good” (as opposed to accepting that “the customer is always right”), (ii) explaining the ambivalent relationship between strong IPR regimes and social benefits, whilst adjusting IPR management strategies accordingly (12), and (iii) supplementing environmental engineering programs with partnerships aimed at tackling the root causes of pollution.
4. A View from Somewhere

Canon’s expressed mission, like the entire HSM thesis, has been shaped by the distinctive personal experiences of its author. At Canon, it was a single executive who proposed the Kyosei mission. Mr. Kaku’s views were no doubt influenced by the fact that he was a survivor of the Nagasaki atomic bomb (Sandoz, 1997) and spent his childhood in “straitened circumstances” in China. In HSM one can observe quite different influences: the formative effects of F.A. Hayek’s tutelage, the history of the Czech nation, visits to the Asian “Tiger” nations and an immersion in the financial-market culture of the USA. The remarkable claim that HSM “is not an ideology” can then be re-assessed in the light of an awareness that Bata’s human-oriented businesses were severely damaged by the Nazis in 1939 and vilified after 1948 by the Communist party. Reading HSM (indeed, all of management theory) in light of its author’s personal background thus helps us to see its limitations more clearly. It also confirms the relevance to Business Ethics of the philosophical concept of positional relativity (and the corresponding notion of strategy-as-perspective). Mr. Kaku’s view of business purposes was shaped in Japan and China in the 1940s. Zeleny has been “positioned” in the USA, the Czech Republic, Beijing and Taipei. His downbeat view of government capabilities contrasts with that of Casson (1998), an English economist, who has suggested that governments are “well-equipped” to promote morality (emphasis added), at least for the purpose of improving corporate governance within FMC. More generally, it seems that ethicists and managers should be cautious about prescriptions that look
Table 4. HSM claims and unacknowledged counter-claims

Theme | HSM claim | Counter-claim
Class struggle | Dream | Valid dictum
Benevolence | Restrict | Stretch
Justice | Implicit | Imperative, standard
Serving shareholder | False basis | Financial economics
universal and insightful, since these are evidently influenced by the life experiences and the consequent political stances of their authors. Given the forceful tone of HSM, it is reassuring to find that Zeleny is indeed very well aware of this precise point. More than once in the book he has quoted the Spanish philosopher José Ortega y Gasset: “I am myself and my circumstances”.

4.1. Lacunae

Even though we do tend to “see the world… as we are” (Koehn, 2006, p. 395, citing Anaïs Nin), we can also endeavour to detect and compensate for our own blind spots. In HSM, unfortunately, there are quite a few. The failure to “see” and consider the full set of limitations of market-based systems has already been noted (Table 2). In addition, HSM has also passed over several clear opportunities to acknowledge and engage with well-known counter-claims (Table 4). For example, the assertion that Marx’s working class has been awakened from its “dreams” by the current re-integration of labour with knowledge finds a perfect counterpoint in Rorty’s sustained views on that very same question: “Nothing that has happened in the last hundred years would lead Marx to revise his dictum that the history of the human race is the history of the class struggle” (2006, p. 379). It thus seems that the reader of HSM has at the very least been deprived of a spectacular critique. Part of the problem here (as indicated at the outset) is that HSM focuses throughout upon the management of modern enterprise, while Rorty and many companion thinkers are gazing steadily at the continuing worldwide social injustices. As a result, HSM lacks precisely those exercises in “imaginative sympathy” and “stretching of benevolence” that many see as essential to general moral progress. In particular, HSM endorses only a restricted form of charity (cf. Section 3.4), along with a strict subset of the human goods (e.g. beauty, quality and harmony).
Indeed, the words “justice” and “fairness” appear only a very few times in the book (13). Many business managers will be quite unconcerned about this blind spot, but they will be highly troubled by another omission in the thesis. There is a failure to mention or acknowledge that the proper functioning of capital markets confers distinctive benefits on society, despite any “swindling.” Capital accumulated by disengaged investors might well be “dead” in a sense, but it can quickly be resurrected and revitalized. So long as the institutions of FMC function properly, it is redeployed into other enterprises (including HSM-type companies) and productive individuals. This rather basic principle of financial economics is not mentioned in HSM, but it too is “remarkably recognizable” and has recently expanded its institutional expression, almost everywhere in the world.
Table 5. Consensus in philosophy and HSM

Philosophy theme | Consensus | HSM
Narratives | Exemplary figures | Bata & others
Imagination | Envision new possibilities | Designing new alternatives, de novo
Contingent beliefs | Historical, similar packages | Remarkable recognizability
Coherence | Important, but limited | Cognitive equilibrium, poetic license
4.2. Reinforcements

There is much else in HSM that qualifies as independent discovery and reinforcement of current trends in philosophy (Table 5). This aspect of HSM is indeed rather striking and it accords fully with Rorty’s view that disciplines other than philosophy can now contribute to applied ethics. In a sense, therefore, HSM can be regarded as an intellectual parallel universe; one that business ethicists probably ought to visit. Such visits will quickly reveal that Zeleny has been narrating and re-telling the story of Thomas Bata and other exemplary entrepreneurs, exactly as several philosophers (e.g. Rorty, Duska, Freeman) have been urging business ethicists to do. Also, in respect of the view of imagination as a primary instrument of moral progress, Zeleny’s 25-year-old de novo method, with its accompanying meta-mathematical assertions (“everything can be re-designed to achieve human purposes”), seems impressively prescient. On the other hand, many philosophers now accept the historical contingency of human beliefs, yet Zeleny just plain knows that we live in a world of universally legitimate distinctions (e.g. good vs. bad management; market vs. hierarchy). For him, there is no need to mention others’ discoveries that “the same packages are recognized by independent cultures” (S.J. Gould; cited in Koehn, 2006, p. 395). Furthermore, because social systems “are” biological systems, this type of universality extends naturally to categories within management and politics. Finally, Zeleny might have informed us more fully of his views on the importance of “coherence” in prescriptive works involving social systems. In the formal mathematics of HSM, non-coherence (contradiction) is of course accepted as proof of an error or of a false formal proposition. In addition, the conceptual model of cognitive equilibrium, which is presented as having normative force, is essentially a depiction of coherence-creation.
On the other hand, when one considers the book as a whole, a few latent inconsistencies do seem quite apparent (e.g. the one about “always-right” customers). It seems that the reader is supposed to accept such things as the product of “circumstance” and hence to grant a Whitman-like poetic license (cf. Koehn, 2006, p. 393). This is, after all, the work of a prolific and innovative scholar.
5. Conclusion

From the field’s inception, Zeleny has made sustained, major contributions to the mathematics of MCDM. About 10 years ago, Amartya Sen (1996) described that field as a “promising approach” to linking business ethics with economics. Meanwhile, Zeleny was setting out his integrative and conceptual HSM thesis (the bibliography cites over 60 of his 350+ journal articles). The fact that he was recently ranked #1 among Czech economists suggests that this work will be recognized quickly and widely. The principles of humane ideals management (Section 3) might take a little longer, although there are some signs that they too are “already taking shape”. In any case, HSM, together with its limitations and augmentations, can immediately be viewed from anywhere as a kind of re-integration of knowledge within the ethical, economic and political domains, following an historical division along quite different lines.

Notes

(1) Several other components of wisdom have been identified elsewhere (e.g. Kekes, 1983), including the selection of suitable purposes.
(2) The wise organisation is “totally-aligned,” in the sense used by Rossouw & Van Vuuren (2003).
(3) As with Game Theory (e.g. Solomon, 1999; Binmore, 1999), the concern is with the effects of applying mathematical models. In this HSM multi-capital case, however, the user-warning comes from the mathematician, rather than the ethicist.
(4) Under generally accepted accounting principles, the item retained-earnings combines these “live” and “dead” forms. Human capital is reflected to some extent in the judgemental valuation of intangibles, whilst triple-bottom-line reporting also incorporates the social and ecological forms.
(5) A recent statistical study (Munusuamy et al., 2006) has indicated that national cultures of business management have indeed converged.
(6) This HSM thesis on culture could be invoked in support of the pharmaceutical industry practice of paying for knowledge of the medicinal properties of local fauna. According to this thesis, they are not disrupting culture; they are expressing the local and the enterprise-based forms in ways that are “recognizable” and “potent”.
(7) Strategies that compensate for market limitations implicitly recognize the moral duty for a company to act “when it co-creates bad conditions, or when there exists unjust conditions from which the company benefits” (Margolis and Walsh, 2003).
(8) Like Quinn & Jones, HSM also challenges the I/O-Economic framework for business strategy, but in a different way. HSM invokes elements of the hyper-competition framework (D’Aveni, 1994) such as agility, technical know-how and minimal government intervention. Unlike Quinn & Jones, neither D’Aveni, nor HSM, nor Boddewyn & Brewer considered the social and moral significance of the market power wielded by global hypercompetitive entities.
(9) Prakash-Sethi (2003) argued for an accountability-based approach to this problem, suggesting that corporations should be “held accountable for a more equitable distribution” whenever groups “were deprived… because of market imperfections and corporate power.”
(10) Zeleny has also suggested that objections to teleworking might be motivated by union leaders’ concerns about their own “loss of influence” over the remote workers. They might instead be concerned that bosses would not care for invisible workers, in the good way that Bata cared for his employees, who were all on-site.
(11) HSM does not discuss political proposals of this sort. The only exception is its endorsement of merit-based (rather than kinship-based) job-recruitment (cf. Section 2.7).
(12) Canon has had many victories in the corporate-initiated patent wars. An alternative “race to the top” strategy would have it (a) lobbying for weaker IPR regimes (because these are often associated with injustices, e.g. Collier, 2000), whilst (b) reducing dependence upon IP law (e.g. through hypercompetitive moves).
(13) If, as Adam Smith wrote, “the prevalence of injustice must utterly destroy… society” (cited in Werhane, 2006, p. 406), or if, as Koehn (2006, p. 391) attributed to Rorty, “the standard for assessing an account… is whether it… efficaciously enables us to achieve social justice,” then this omission in HSM needs to be remedied.
References

[1] Binmore K. (1999) Game theory and business ethics, Business Ethics Quarterly 9(1) pp. 31–35. [2] Boddewyn J. & T. Brewer (1994) International business political behaviour: new theoretical directions, Academy of Management Review, 19(1) pp. 119–143. [3] Bogdanov A.A. (1927) Bor’ba za zhizniesposobnost (The Struggle for Viability), Moscow.
A.E. Singer / Zeleny’s Human Systems Management and the Advancement of Humane Ideals
[4] Bogdanov A.A. (1984) Essays in Tektology, trans. by G. Gorelik, Intersystems: Seaside CA. [5] Burrell G. (1989) The absent centre: the neglect of Philosophy in Anglo-American management theory. Human Systems Management 8 pp. 307–311. [6] Casson M. (1998) An entrepreneurial theory of the firm (manuscript). University of Reading, England. [7] Collier J. (2000) Globalisation and ethical global business, Business Ethics: a European Review, April, pp. 71–76. [8] Crisp R. (1987) Persuasive advertising, autonomy and the creation of desire. Journal of Business Ethics 6, pp. 413–8. [9] D’Aveni R. (1994) Hyper-competition: Managing the Dynamics of Strategic Maneuvering. Free Press, NY. [10] Dhammananda K. Sri (1999) Food for the Thinking Mind, Buddha Educational Foundation: Taipei. [11] Doktor, R.H. (1990) The Myth of the Pacific Century, FUTURES, 22(1) pp. 78–82. [12] Hawken P., A. Lovins & L.H. Lovins (1999) Natural Capitalism: Creating the Next Industrial Revolution, Little Brown & Co. [13] Hayek F.A. (1988) The Fatal Conceit, University of Chicago Press: Chicago. [14] Hayek F.A. (1945) The use of knowledge in society, Economica, Feb 1937, pp. 33–45. [15] Kaku (1996) Address on Kyosie and Canon’s strategy at the World Congress of Business Ethics and Economics. Reitaku University, Chiba, Tokyo. [16] Kekes J. (1983) “Wisdom” American Philosophical Quarterly 20(3) July, pp. 277–286. [17] Koehn D. (2006) A response to Rorty, Business Ethics Quarterly 16 (3), pp. 391–399. [18] Kuttner R. (1984) The Economic Illusion: False Choices Between Prosperity & Social Justice. Houghton Mifflin. [19] Manning R. (1988) Dismemberment, divorce and hostile takeover, Journal of Business Ethics, 7, pp. 639–643. [20] Margolis J. & P. Walsh (2003) Misery loves companies: rethinking social initiatives by business, Administrative Science Quarterly 48(2) pp. 268–306. [21] Munusuamy, V.P., Valdez, M.E., Lo, K.D., Budde, A.E.K., Suarez, C.M., and R.H.
Doktor, “Economic Growth and Cultural Convergence: Evidence from Hofstede and GLOBE Studies,” presented at Academy of Management meetings, Atlanta, 2006. [22] Porritt J. (2005) Capitalism as if the World Matters, James & James/Earthscan. [23] Prakash Sethi, S. (2003) Globalisation and the good corporation. A need for proactive co-existence. Journal of Business Ethics, 43(1) pp. 21–31. [24] Quinn D. & T. Jones (1995) An agent morality view of business policy, Academy of Management Review, 20(1) pp. 22–42. [25] Rorty R. (2006) Is Philosophy relevant to applied ethics? Business Ethics Quarterly, 16 (3), pp. 369–380. [26] Rossouw G.J. & L.J. van Vuuren (2003) Modes of managing morality: a descriptive model of strategies for managing ethics, Journal of Business Ethics 46(4) pp. 389–402. [27] Sandoz P. (1997) Canon: Global Responsibilities and Local Decisions. In the series “Japanese Business: the Human Face”. Penguin: London. [28] Schick F. (1991) Understanding Action: An Essay on Reasons. CUP: Cambridge. [29] Sen A. (1996) Economics, business principles and moral sentiments. Business Ethics Quarterly 7, pp. 5–16. [30] Soule E. (2002) Management moral strategies: in search of a few good principles, Academy of Management Review 27(1) pp. 114–124. [31] Solomon R. (1999) Game theory as a model for business and business ethics, Business Ethics Quarterly, 9(1) pp. 11–29. [32] Werhane P. (2006) A place for philosophy in Applied Ethics and the role of moral reasoning in moral imagination. Business Ethics Quarterly 16 (3), pp. 401–8. [33] Zeleny M. (2005) “Human Systems Management: Integrating Knowledge, Management & Systems” Singapore: World Scientific Publishing. [34] Zeleny M. (1991) Cognitive equilibrium: a knowledge based theory of fuzziness and fuzzy sets, General Systems 19(4) pp. 359–381. [35] Zeleny M. (1988) Beyond capitalism and socialism: human manifesto, Human Systems Management 7(3) pp. 185–188. [36] Zeleny M.
(1981) On the squandering of resources and profits via linear programming, Interfaces, 11, pp. 101–7.
Advances in Multiple Criteria Decision Making and Human Systems Management Y. Shi et al. (Eds.) IOS Press, 2007 © 2007 The authors. All rights reserved.
Outsourcing Risks Analysis: A Principal-Agent Theory-Based Perspective
Hua LI a and Haifeng ZHANG b
a Professor and Vice Dean, School of Economics and Management, Xidian University, Xi’an, 710071, China, E-mail: [email protected]
b School of Economics and Management, Xidian University, Xi’an, 710071, China
Abstract. There are four risk problems under a principal-agent system, namely information asymmetry, responsibility non-equivalence, incentive incompatibility and contract incompleteness. The paper first introduces these four problems and then, based on principal-agent theory, puts forward an analysis framework for outsourcing risks. It comprehensively analyses the risk factors in outsourcing, aiming to provide a reference for enterprises making outsourcing decisions. Keywords. Principal-agent theory, outsourcing, risk
Since the 1990s, driven by intense market competition and rapid changes in the business environment, more and more enterprises have outsourced their non-core businesses to gain competitiveness and competitive advantage. Outsourcing, also called external purchasing, refers to the transfer of certain business activities from within the enterprise to outside providers. It can bring many benefits to enterprises, such as a simpler administrative structure, lower costs, access to professional services and faster response to changes in market demand. However, outsourcing is in fact a hybrid arrangement between in-house provision and market purchasing, and this special relationship exposes enterprises to more unforeseeable risks. As outsourcing develops rapidly, outsourcing risks have attracted the attention of scholars at home and abroad, and the related theoretical research and case studies have gradually deepened, analyzing the risks from different aspects and with different theories. Christine Koh, Cheryl Tay and Soon Ang [1] took a psychological perspective and conducted a survey: by analyzing answers given by CEOs and CIOs in the IT industry, they determined the mutual obligations expected by both parties in outsourcing transactions and analyzed the risks caused by either party’s failure to fulfill those obligations. Hazel Taylor [2] took Hong Kong IT outsourcing as the research object and analyzed six categories of risks undertaken by providers; her research thus takes the provider perspective. Ravi Aron, Eric K. Clemons and Sashi Reddi [3] divided the risks of outsourcing into four categories: strategic risks, operational risks, intrinsic risks of atrophy and intrinsic risks of location. They then discussed in detail how to manage and reduce strategic risks. Cui Nanfang, Kang Yi and Lin Shuxian [4] divided the whole outsourcing process into two phases according to the
time point of signing the contract, and they analyzed the risks in the two phases based on transaction cost and principal-agent theories. Benoit A. Aubert, Michel Patry, Suzanne Rivard and colleagues [5–8] conducted systematic research in this area. They took IT outsourcing as the research object and defined risk as the product of the probability of an undesirable outcome and the loss caused by that outcome. They gave a risk assessment procedure and classified the undesirable outcomes of IT outsourcing into four categories: hidden costs, contractual difficulties, service debasement and loss of organizational competencies. They also investigated the risk factors in IT outsourcing based on transaction cost and agency theories from three perspectives (agent, principal and transaction), discussed the links between undesirable outcomes and risk factors, and gave a framework for risk management. Furthermore, they put forward risk management strategies for IT outsourcing via related case studies. Building on the research of Aubert et al., Bouchaib Bahli and Suzanne Rivard [9], drawing mainly on transaction cost theory, identified four main risk scenarios that can be associated with outsourcing and the related risk factors, and also discussed risk mitigation mechanisms. On the whole, outsourcing risks have become one of the hottest research issues for scholars at home and abroad, and some scholars have already done research based on principal-agent and transaction cost theories. The authors argue that outsourcing risk analysis based on principal-agent theory remains to be deepened further. Essentially, outsourcing embodies a collaborative relationship among supply chain enterprises, and the principal-agent relation is the true reflection of the relationship between an outsourcing enterprise and a contractor. It is therefore necessary to analyze the related risk factors under this pattern in depth.
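The risk definition used by Aubert and colleagues, risk as the product of the probability of an undesirable outcome and the resulting loss, can be made concrete with a short sketch. This is only an illustration: the four outcome categories follow their classification, but every probability and loss figure below is an invented assumption, not data from the cited studies.

```python
# Illustrative sketch of the risk definition used by Aubert et al.:
#   risk exposure = P(undesirable outcome) * loss(undesirable outcome)
# All probabilities and loss figures are invented for illustration only.

def risk_exposure(probability: float, loss: float) -> float:
    """Expected loss from a single undesirable outcome."""
    return probability * loss

# Hypothetical assessment of one IT outsourcing deal; the category names
# follow Aubert et al., the numbers are assumptions.
outcomes = {
    "hidden costs":                        (0.30, 200_000),
    "contractual difficulties":            (0.20, 150_000),
    "service debasement":                  (0.25, 100_000),
    "loss of organizational competencies": (0.10, 500_000),
}

total_exposure = sum(risk_exposure(p, loss) for p, loss in outcomes.values())

for name, (p, loss) in outcomes.items():
    print(f"{name}: expected loss = {risk_exposure(p, loss):,.0f}")
print(f"total risk exposure = {total_exposure:,.0f}")
```

Summing the per-outcome exposures gives a single figure that an outsourcing enterprise can compare against the expected benefits of the deal.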
Unlike Aubert et al. and Cui et al., this paper analyzes outsourcing risks starting from the sources of risk problems under a principal-agent system; the four risk problems proposed here are universal in any principal-agent system.
1. Principal-Agent Theory

The principal-agent relationship is in fact a form of division of labor that emerges when the social economy develops to a certain level, and a reconfiguration of economic rights. The separation of ownership and control gives rise to the principal-agent system. The principal-agent problem was first put forward by Ross in 1973; later Mirrlees (1974, 1976) and Stiglitz (1974, 1975) did further research on it. Principal-agent theory has gradually become an important branch of economics research, and in essence it is a development of modern contract theory. An agent is a person employed by a principal who takes actions on the principal’s behalf. The definition by Stiglitz is representative: “how a principal (e.g. the employer) designs a compensation system (the contract) to drive another person (his agent, e.g. the employee) to behave in the interests of the principal [10].” In outsourcing, the outsourcing enterprise is the principal and the contractor is the agent; they complete outsourcing transactions by signing outsourcing contracts. Because information between a principal and an agent is incomplete, in order to avoid the agent’s deviation from the principal’s target and to maximize expected utility, the principal monitors the behavior of the agent through actions such as signing contracts, and this creates the agent cost. Precisely because of the agent cost, the principal must design an effective incentive and disciplinary mechanism
(the contract) not only to reduce the agent cost but also to maximize his own utility. The ultimate aim of the principal is to maximize the agent income, the source of which lies in the efficiency of the division of labor derived from the separation of ownership and control. The agent effect coefficient can be taken as the index for measuring the effectiveness of the principal-agent system: the agent effect coefficient (a) is the agent income (Q) divided by the agent cost (C), i.e. a = Q/C. (i) a > 1 indicates that the principal-agent system is effective: Q > C means that the existence of the principal-agent relationship is reasonable. (ii) a ≤ 1 indicates that the principal-agent system is ineffective: Q ≤ C means that problems exist in the principal-agent relationship. In this case it is necessary to innovate the system and perfect the incentive, monitoring and disciplinary mechanisms so as to adjust the principal-agent relationship, with reduced agent cost and increased agent income as the result. Once an enterprise chooses outsourcing, it adopts the principal-agent system at the same time. Four risk problems exist under this system; they constrain the cost and income of the principal-agent system and give rise to principal-agent risks. (1) Information asymmetry. The agent possesses more private information. Because of this asymmetry, adverse selection and moral hazard may occur before and after the market transaction respectively, undermining the efficiency of the market mechanism. (2) Responsibility non-equivalence. Under a principal-agent system the agent loses at most a work opportunity, but the principal may lose the substantial assets committed to the agent; this is also called the lock-in problem. (3) Incentive incompatibility. The interest maximization of the principal contradicts that of the agent. (4) Contract incompleteness.
In actual transactions, because of the bounded rationality of the principal (he seeks rationality subjectively but can achieve it only to a limited extent objectively) and the complexity and uncertainty of the external environment, the drafting and implementation of the contract are usually incomplete. These four risk problems in a principal-agent system give the agent both the motive and the opportunity to damage the interests of the principal, and it is hard to ensure that the agent faithfully acts in the principal’s interests. Increased agent cost and decreased agent income then result; a may fall from a > 1 to a ≤ 1, rendering the principal-agent system ineffective. Hence great risks exist under a principal-agent system.
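The agent effect coefficient described above can be expressed as a small function. This is a minimal sketch: the threshold logic (a > 1 effective, a ≤ 1 ineffective) follows the text, while all monetary figures are invented assumptions for illustration.

```python
# Minimal sketch of the agent effect coefficient from Section 1:
#   a = Q / C  (agent income divided by agent cost)
# a > 1  : the principal-agent system is effective (Q > C);
# a <= 1 : the system is ineffective and its incentive, monitoring and
#          disciplinary mechanisms need adjustment.
# All monetary figures below are invented for illustration.

def agent_effect_coefficient(agent_income: float, agent_cost: float) -> float:
    """Compute a = Q / C."""
    if agent_cost <= 0:
        raise ValueError("agent cost must be positive")
    return agent_income / agent_cost

def is_effective(agent_income: float, agent_cost: float) -> bool:
    """True exactly when a > 1, i.e. the relationship pays off."""
    return agent_effect_coefficient(agent_income, agent_cost) > 1

# A healthy outsourcing relationship: a = 1.5, so the system is effective.
print(is_effective(agent_income=120_000, agent_cost=80_000))   # prints True

# The four risk problems raise agent cost and lower agent income,
# pushing a below 1 and making the same relationship ineffective.
print(is_effective(agent_income=90_000, agent_cost=110_000))   # prints False
```

The second call illustrates the shift from a > 1 to a ≤ 1 that the risk problems can cause.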
2. Outsourcing Risks Analysis

This paper starts from the four risk problems existing in a principal-agent system and comprehensively analyses the risk factors in outsourcing under this theory. Table 1 shows the analysis framework for outsourcing risks based on principal-agent theory.

Table 1. Analysis framework for outsourcing risks based on principal-agent theory

Information asymmetry (opportunism of the contractor): adverse selection; moral hazard.
Responsibility non-equivalence (potential lock-in risk of the outsourcing transaction): asset specificity; small number of contractors; outsourcing enterprise’s lack of expertise in outsourcing contracts.
Incentive incompatibility (opportunism of both the outsourcing enterprise and the contractor): imperfect commitment; coordination problems of the outsourcing transaction.
Contract incompleteness (potential costs after transaction): measurement problems of outsourcing performance; contractual amendments costs; disputes and litigation costs.

2.1. Risk Factors Caused by Information Asymmetry

Generally speaking, because of information asymmetry the outsourcing enterprise is usually in a more disadvantageous position than the contractor. In terms of content, the asymmetric information may concern the knowledge and information of the contractor or the behavior of the contractor, called hidden information and hidden behavior respectively. In terms of timing, adverse selection and moral hazard may occur before and after the signing of the contract respectively. (1) Adverse selection of the outsourcing enterprise caused by hidden information of the contractor. Hidden information refers to the fact that the contractor already holds private information, unknown to the outsourcing enterprise, before the signing of the contract, and that information may be disadvantageous to the outsourcing enterprise; the contractor thus signs a contract advantageous to himself. The outsourcing enterprise, at an information disadvantage, is in a weak position and its interests are easily damaged. This is opportunism in the contract-signing phase. The problem of hidden information is universal in the process of choosing contractors. Because of information asymmetry, the contractor usually knows more about his own credit and his real technical and personnel strength than the outsourcing enterprise does, and he may exaggerate his capabilities and provide the outsourcing enterprise with insufficient or untrue information. Decisions made under information asymmetry lead to adverse selection: the outsourcing enterprise mistakenly chooses a contractor that does not match its real needs. In this case the agent effect coefficient (a) may already be no more than 1 before the outsourcing transaction formally commences, and the principal-agent system may lose its effectiveness. (2) Moral hazard caused by hidden behavior of the contractor.
Hidden behavior assumes that information is symmetrical between the outsourcing enterprise and the contractor when they sign the contract, but that after signing the outsourcing enterprise cannot observe some of the contractor’s behavior, or changes in the outside environment are observed only by the contractor. In this case the contractor may act against the intent of the outsourcing enterprise and damage its interests. This is opportunism in the contract-implementation phase. The problem of hidden behavior is likewise universal in the process of managing contractors. Once the relationship between the contractor and the outsourcing enterprise is fixed in the form of a contract, the outsourcing enterprise can no longer observe the whole operation process of the contractor as thoroughly as before; observing the contractor’s behavior completely would incur prohibitive costs [5]. Hidden behavior leads to moral hazard: service debasement and increased agent costs caused by the contractor, which make a ≤ 1 and render the principal-agent system ineffective.
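Because complete observation of the contractor incurs prohibitive costs, the outsourcing enterprise in effect weighs monitoring expense against the expected loss from hidden behavior. The following is a hedged numerical sketch of that trade-off; the decision rule, the function names and all figures are illustrative assumptions, not part of the cited studies.

```python
# Illustrative trade-off behind the moral-hazard discussion: monitor the
# contractor only while the expected loss averted exceeds the cost of
# monitoring. All figures are assumptions made for illustration.

def expected_moral_hazard_loss(p_hidden_action: float, damage: float) -> float:
    """Expected loss if the contractor's hidden behavior goes unchecked."""
    return p_hidden_action * damage

def worth_monitoring(monitoring_cost: float, p_hidden_action: float,
                     damage: float, detection_rate: float) -> bool:
    """Monitor only when the loss it is expected to avert exceeds its cost."""
    loss_averted = detection_rate * expected_moral_hazard_loss(
        p_hidden_action, damage)
    return loss_averted > monitoring_cost

# Light-touch monitoring averts more loss than it costs ...
print(worth_monitoring(monitoring_cost=10_000, p_hidden_action=0.2,
                       damage=300_000, detection_rate=0.5))   # prints True

# ... but near-complete observation at prohibitive cost does not pay.
print(worth_monitoring(monitoring_cost=80_000, p_hidden_action=0.2,
                       damage=300_000, detection_rate=0.9))   # prints False
```

The second call mirrors the point in the text: pushing monitoring toward complete observation raises its cost past the loss it can avert.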
2.2. Risk Factors Caused by Responsibility Non-Equivalence

Because the responsibilities of the two parties in an outsourcing transaction are not equivalent, the agent loses at most a work opportunity while the outsourcing enterprise may lose the substantial assets associated with the transaction; hence the lock-in problem, also called the hold-up problem, will result. The lock-in effect refers to the fact that the outsourcing enterprise cannot escape the transaction relationship with the contractor unless it is willing to bear high transfer costs. The contractor can “threaten” the enterprise when the time comes to renew the contract by exploiting the lock-in effect, leaving the outsourcing enterprise in a dilemma: either accept disadvantageous contractual terms or pay high transfer costs. Lock-in risk directly increases agent costs, decreases the agent effect coefficient (a) and can render the principal-agent system ineffective. Three main risk factors lead to the lock-in problem [9]. (1) Asset specificity. Asset specificity refers to the degree to which an asset can be redeployed without sacrificing its productive value if the contract is interrupted or prematurely terminated [5]. If a durable investment is made for a particular transaction, the assets possess specificity. Because the “next best use” value of a specific asset is much lower, the outsourcing enterprise would lose part of its investment if the transaction were not completed. An outsourcing enterprise that invests a large amount in transaction-specific assets faces the potential risk of lock-in. Even leaving specific assets aside, changing contractors is costly: no matter how experienced the new contractor is, the enterprise may face an almost entirely new outsourcing environment. (2) Small number of contractors. The bargaining power of the contractors increases as their number decreases.
Often the lack of alternative contractors is the primary cause of the enterprise’s dependency on its contractor, and insufficient competition among contractors leads to increased agent costs. (3) Outsourcing enterprise’s lack of expertise in outsourcing contracts. This mainly refers to the outsourcing enterprise’s lack of the expertise needed to sign contracts, so that it signs a long-term contract lacking adjustability. An enterprise with little expertise may thus make decisions that lead directly to a lock-in situation.

2.3. Risk Factors Caused by Incentive Incompatibility

Since the interest maximization of the contractor contradicts that of the outsourcing enterprise, related risk factors are bound to arise. A basic assumption of principal-agent theory is that opportunism is an inherent characteristic of such a relationship. (1) Imperfect commitment of both the contractor and the outsourcing enterprise. Imperfect commitment is the imperfect capacity of both parties to commit themselves to the relationship: they may deviate from the terms of the contract and may be tempted to renege on their promises and commitments. No contract is immune from such behavior. For instance, technology develops continuously and the technology specified in the contract will become outdated as time passes, but the contractor may refuse to adopt new techniques in his own interest, claiming that such adaptations had not been foreseen or that the clauses of the outsourcing contract are not clear. Such adaptations are often necessary, however, and the outsourcing enterprise will lose competitiveness if the contractor does not adopt
new techniques in the long run. This could increase agent costs and reduce agent income. (2) Coordination problems of the outsourcing transaction. There are risks in the communication between the two parties. The contractor and the outsourcing enterprise are two independent economic entities whose interest maximization conflicts. Because of differences in strategic targets, management thinking and business culture, communication obstacles and misunderstandings will arise. The contractor and the outsourcing enterprise may mistrust and criticize each other, making continuous effective cooperation hard to sustain. Under these conditions agent costs increase dramatically, and at worst the outsourcing transaction can even abort [4].

2.4. Risk Factors Caused by Contract Incompleteness

Enterprises face various uncertainties in their business environment. Because of the bounded rationality of the outsourcing enterprise, it is unlikely to gather all information related to the outsourcing contract, and it is also impossible for it to predict future changes. As a result, the contract is usually incomplete. Contract incompleteness not only increases costs after the transaction but also objectively facilitates opportunism by the contractor; both increase agent costs and reduce agent income. (1) Measurement problems of outsourcing performance. In information economics, goods are usually divided into search goods and experience goods. Generally speaking, search goods are those whose properties can be touched, observed and assessed when customers buy them, such as clothing. In contrast, experience goods are those whose properties can be distinguished and understood only after extended use.
In fact, the service offered to the outsourcing enterprise by the contractor is a typical experience good, and it is subject to measurement problems. The contract loses its force when the transaction ends; if the enterprise later finds that the quality is inadequate, redoing poorly done work will increase agent costs. (2) Contractual amendment costs. Because of the uncertainty of the transaction and the incompleteness of the contract, the two parties will bargain over amendments and refinements to the contract, which inevitably increases agent costs. (3) Dispute and litigation costs. In outsourcing, the two parties may interpret some terms of the contract differently; disputes will arise between them and even litigation may result. The primary cause of disputes and litigation is the measurement problem of outsourcing performance. If an enterprise’s outsourcing fails because of litigation, its business processes may be disrupted, giving rise to great losses.
3. Discussion and Conclusions

Outsourcing an enterprise’s non-core business processes to other enterprises will undoubtedly bring many benefits. However, outsourcing risks are bound to arise at the same time: if the enterprise cannot identify and control them, it not only fails to benefit from outsourcing but may also face great losses. Research on outsourcing risks is therefore of great necessity. Essentially, the principal-agent relation is the true relationship between the outsourcing enterprise and the contractor. This paper
starts from the four risk problems prevailing in a principal-agent system and proposes an analysis framework for outsourcing risks based on principal-agent theory. It analyses the risk factors in outsourcing comprehensively, aiming to provide a reference for enterprises making outsourcing decisions. Much work remains to be done to improve the framework proposed in the paper. For instance, more risk factors should be added to the lists in the framework, and case studies could be conducted to provide a first validation of those risk factors. It is also necessary to study mitigation mechanisms for the four risk problems, for example how to turn asymmetric information into symmetric information, etc.
Acknowledgements

The research is supported by the Shaanxi Province Science and Technology Research Program (2005KR16). The first author would like to thank Prof. Milan Zeleny, vice director of the Academic Committee of the School of Economics and Management, Xidian University, for his contribution to the development of the School.
References

[1] Christine Koh, Cheryl Tay, Soon Ang. Managing vendor-client expectations in IT outsourcing: a psychological contract perspective [J]. Proceedings of the 20th international conference on information systems. January 1999. pp. 512–517. [2] Hazel Taylor. The Move to Outsourced IT Projects: Key Risks from the Provider Perspective [J]. Proceedings of the 2005 ACM SIGMIS CPR conference on computer personnel research. April 2005. pp. 149–154. [3] Ravi Aron, Eric K. Clemons, Sashi Reddi. Just Right Outsourcing: Understanding and Managing Risk [J]. Proceedings of the 38th Hawaii International Conference on System Sciences. 2005. [4] Cui Nanfang, Kang Yi, Lin Shuxian. Analysis and Control of the Outsourcing Risks [J]. Chinese Journal of Management. January 2006. pp. 44–49. [5] Benoit A. Aubert, Michel Patry, Suzanne Rivard. Assessing the Risk of IT Outsourcing. CIRANO Working Papers 98s-16, CIRANO. May 1998. [6] Benoit A. Aubert, Sylvie Dussault, Michel Patry, Suzanne Rivard. Managing the Risk of IT Outsourcing. CIRANO Working Papers 98s-18, CIRANO. June 1998. [7] Benoit A. Aubert, Michel Patry, Suzanne Rivard, Heather Smith. IT Outsourcing Risk Management at British Petroleum. CIRANO Working Papers 2000s-31, CIRANO. September 2000. [8] Benoit A. Aubert, Michel Patry, Suzanne Rivard. A Framework for Information Technology Outsourcing Risk Management [J]. The DATA BASE for Advances in Information Systems. Fall 2005. Vol. 36, No. 4. pp. 9–28. [9] Bouchaib Bahli, Suzanne Rivard. The information technology outsourcing risk: a transaction cost and agency theory-based perspective [J]. Journal of Information Technology. September 2003. pp. 211–221. [10] Fama E. Agency problems and the theory of the firm [M]. Chicago: Louis Putterman, 1986. p. 154.
Advances in Multiple Criteria Decision Making and Human Systems Management Y. Shi et al. (Eds.) IOS Press, 2007 © 2007 The authors. All rights reserved.
Mobile Technology: Expanding the Limits of the Possible in Everyday Life Routines
Christer CARLSSON a and Pirkko WALDEN b
a IAMSR/Abo Akademi University, ICT House, 20520 Abo, Finland, Email: [email protected]
b IAMSR/Abo Akademi University, ICT House, 20520 Abo, Finland, Email: [email protected]
However beautiful the strategy, you should occasionally look at the results. – Sir Winston Churchill

Abstract. When the short message service (SMS) was first introduced in 1992, nobody could foresee its tremendous popularity. Simple in design, easy to adapt and effortless to use, it rapidly became a profitable, matchless, globally used mobile service, one that changed the lives of European teenagers. Ever since, the quest for the next mobile service “killer application” has continued. Year after year the mobile service markets produce new services and applications that, because of complexity or lack of relevance, fail to meet consumers’ expectations. Nevertheless, the future growth of mobile telephony revenues is expected to rely on mobile services. The use of mobile services is expected (or hoped) to generate a significant part of the revenues of 3G mobile networks. This may yet be true, but the adoption of new mobile services has been much slower than expected, especially in Europe. Several reasons, ranging from culture to business models, have been suggested for the slow adoption rate. In this paper we focus on the Finnish market, where there are several reasons to expect that rapid adoption could take place, including a relatively long history of mobile services, low costs, positive attitudes to the use of high technology and a rather good supply of mobile services. We discuss three mobile services that have commonly been described as promising and innovative: mobile games, mobile television and snapshots with mobile phones, in an attempt to understand their potential for becoming successful services. If we for a moment forget the quest for “the mobile killer application” – the dream of quickly making lots of money with a new technology – there is no way to deny that mobile technology has had, and continues to have, a profound impact on our everyday routines.
Things become easier to manage, time-consuming routines a bit faster to handle, and there are a number of things we are able to accomplish which were not possible before – or which we had not even thought possible. Professor Milan Zeleny first visited us in 1978, when he was working on building breakthrough theories for multiple criteria decision making, work that had a significant impact on management science research in Finland for the next 1–2 decades. He then started Human Systems Management, a journal devoted to the integration of knowledge, management and systems. It differs from many other research journals in that it blazes a trail for innovative thought and less tested theories and models, and serves as a platform for a critical re-examination of established truths and results that are no longer questioned. Over the years we have had the opportunity to test our new insights and first systems constructs in Human Systems Management, and we have been encouraged to continue work on
C. Carlsson and P. Walden / Mobile Technology: Expanding the Limits of the Possible
improved constructs through the constructive criticism we received – one of the trademarks of Human Systems Management. Professor Milan Zeleny is not unlike his journal: he has always been at the forefront of new ideas, new theories and new systems constructs in management science (and in a number of other areas of research). He is critical, he questions your premises and he argues with you about your conclusions, but he is very knowledgeable and generous with his ideas and advice. He is always challenging his friends and collaborators to take one more step and find the insight which is there but not yet seen – until you have had one of those discussion sessions with Milan. We salute a visionary researcher and a great scientist on his birthday, and we wish for many more birthdays to come!
1. Introduction

There has been a lot of discussion about why certain mobile services have not been successful in the European markets, even though the devices making them possible are finding their way to consumers. Many of these services also seem to be mere extensions of the original services rather than true innovations. Even the successful innovations are sometimes developed more by accident than by design; SMS, for example, was intended as a new kind of pager, not as the communication medium it is today.1 The mobile service markets have been turbulent throughout recent years in Europe, especially in Finland. The rapid emergence of new mobile technologies, innovations and mobile services has produced a scenario in which industry estimates were quite promising (Knutsen, 2005), especially in the long run. 3G in particular is seen as having great promise as the new standard platform for the widespread development and adoption of new services (UMTS Forum, 2003; Pagani, 2004). Most users, nevertheless, remain satisfied with very basic mobile services, such as voice and SMS. Here we concentrate on the Finnish market, generally seen (at least outside Finland) as a technologically advanced market with a population ready and willing to adapt to new services. As we have found in our empirical studies (cf. Carlsson et al., 2005, 2006), there is a supply–demand mismatch for mobile services in Finland; even in Japan and Korea, considered forerunners in adoption rates of mobile services, rather basic services such as messaging and ring tones have been the most successful (Funk, 2005; Srivastava, 2004; Kim et al., 2004). Basic services have also been popular in Europe in recent years (Carlsson et al., 2005, 2006; Mylonopoulos & Doukidis, 2003), but more advanced services have not yet found their way into the everyday lives of consumers. In short, mobile service users appear to be conservative and slow in adapting their everyday lives to the possibilities of new technology.
In the mobile service markets, in Finland and elsewhere, the technology has advanced and new innovative services have been and are being launched (mobile television, graphical information services, etc.). Ongoing price competition in several countries, but especially in Finland, has made the costs of using mobile services very affordable, yet there has been no widespread adoption of new services. There are a number of reasons for this development, but they are so far not well understood. There has been a lack of common standards since the early days of mobile communications. Europe and North America took divergent approaches in managing the
1. An earlier version of this paper was presented at the 19th Bled eCommerce Conference, June 5–7, 2006.
spectrum for wireless voice and data services. This was especially visible in the 2G and post-2G markets (Gandal et al., 2003; Steinbock, 2003). While the European approach to 2G (GSM) was to put its faith in mandated standards developed by the EU, the American approach, which created several standards (CDMA/IS-95, GSM, TDMA, iDEN), relied on letting the market determine the standards (Gandal et al., 2003). The third way, taken by the Japanese, was to trust their own proprietary standard, called PDC (Gandal et al., 2003; Steinbock, 2003). A number of technological advances took place alongside the introduction of GPRS (General Packet Radio Service, an enhancement to GSM often referred to as 2.5G) in Europe. For example, colour screens, cameras and the Multimedia Messaging Service (MMS) became available. Such features were first pioneered in high-end smart phones with the Symbian OS, which supports third-party services. Java became more mature, which led to the birth of a market for downloadable applications – in particular games. Even streaming video to mobile phones became functional (Repo et al., 2004). At the moment we are moving to the 3G standards, which offer high-speed communication and multimedia data services. The standard supports the concurrent use of multiple services and bridges the gap between mobile phones and computing (GSM World, 2004; UMTS Forum, 2003). Two main standards have been proposed and are currently in use: UMTS (or WCDMA) and CDMA2000; the former is a European standard and the latter is preferred in the USA and certain parts of Asia, such as Korea. Both standards are available in Japan (Gandal et al., 2003), and to make things more interesting, the Chinese have their own standard, called TD-SCDMA (McGinty & Bona, 2004). In these "standards wars" there is no clear answer to the question of which approach (mandated or market-driven) is the best one.
GSM is clearly a triumph for the mandated approach, but market-driven standards have worked for the Americans in the case of high definition television (HDTV) and for the Japanese in the 3G markets (Steinbock, 2003; Gandal et al., 2003). Basic services evolved along with and after the standards. A good example is SMS, which became surprisingly popular after 1995 as users began sending messages to each other (which was actually not the planned function, since SMS was meant as a pager). Later, SMS became a major platform for a wide range of services that reach an extensive clientele but are – compared to more advanced applications – cumbersome to use, as they require typing and the memorizing of service numbers. The Wireless Application Protocol (WAP) was introduced to enhance usability and availability, as it was meant to bring the Internet and Internet browsing for services to mobile phones. From the perspective of service development, WAP was a step forward. The Japanese i-Mode was, and still is, a big success in Japan (Steinbock, 2003). It is therefore rather surprising that the introduction of i-Mode in the UK, Germany and the Netherlands has been very slow and that it will probably not become an alternative to WAP, GPRS or UMTS. One reason for this appears to be that most technologies and markets grow in tandem, and that the development of mobile markets is evolutionary, i.e. new products and services have a backbone and a basis in existing products and services (and i-Mode did not have this basis in Europe). Recent history shows that a number of applications which were first introduced in the GSM networks and then discarded because they were slow and cumbersome have now reappeared in the GPRS networks and are gaining acceptance. WAP was a failure in the GSM era, but is
obtaining increasing approval in the GPRS networks. News, information and entertainment services that were built as WAP applications are now gaining more acceptance. From a historical point of view, 3G mobile telephony seems more evolutionary than revolutionary (Nokia Networks, 2003). Much of the infrastructure needed for service provision had already been developed and commercialized before 3G. Vesa (2005) illustrates the current era by using technology- and user-centric approaches as polar ends. The technology-centric approach (WAP, Digi-TV, 3G) relies on investments in technology and marketing, but not on an understanding of the services or user needs. The user-centric approach (WiFi, WLAN, Internet and SMS) focuses on the number of users as the source of revenue growth. Naturally, neither is optimal, but the former approach and its shortcomings may explain why service adoption has been slow. There is also the real possibility that the user-centric technologies (WiFi, WLAN) will change the mobile service markets by offering alternative channels to mobile services and bypassing the mobile network operators (Cheng et al., 2003; Lehr & McKnight, 2003). Although many promised new services have been attributed to 3G (Robins, 2003; UMTS Forum, 2003), they have in fact already reached mature stages. Even services which rely on graphical browsing or multimedia messaging have approached basic availability for regular users. Information services, ticketing and different forms of entertainment are maturing services which can be used over a number of mobile technologies, including SMS (Short Message Service, i.e. text messaging). Studies of the mobile Internet tend to neglect this because they do not consider SMS an Internet technology (cf. Ishii, 2004; Funk, 2005). Jenson (2005) openly criticizes the mobile industry for adopting "default thinking", which leads to failed consumer products and services.
He illustrates this with an industry comment: "MMS is an extension of SMS and therefore a natural progression for the industry". What is missing here is that MMS is a much more complicated service to use, and most users do not see enough added value over SMS to adopt MMS; what is needed is a value-adding usage context. Mobile phones with wireless data capacity are another "inbred" design. Jenson claims that the industry looked backward and saw the web, which led to the following equation: "the Web is hot, phones are hot, and therefore web + phone have got to be hotter". The basic challenge is to understand how and why people adopt or do not adopt mobile services. Jenson's approach suggests that industry aims and consumer needs do not match; consumers are not part of the content/use-context design process. Technology development is often seen as the key to service adoption, but as Anckar and D'Incau (2002) pointed out, more is needed. Sarker and Wells (2003) note that a clear understanding is missing of the motivations and circumstances which guide consumers to adopt and use mobile devices. As they point out, since there cannot be any business applications without (cf. p. 36) "widespread proliferation of wireless devices and related applications, there is a clear need to comprehend how and why individuals (potential m-commerce consumers) adopt such devices". Knutsen (2005) shows that even if research on culture, infrastructure, inter-firm collaboration and business models may shed light on the phenomenon, the basic recurring theme suggested for further research is the value of services for the user. The present study is more explorative than validating/verifying, which is why we use a simple theoretical framework. We have chosen to apply the Braudel Rule (cf. Section 4) as a theoretical framework to find out why and how mobile services can make sense as a basis for viable business. The paper is structured as
follows: in Section 2 we give a brief summary of the Finnish mobile services market and contrast it with some material from other markets; in Section 3 we work through the results of the 2004 and 2005 Finnish consumer surveys on mobile services and compare them with the expert surveys carried out in the same years; in Section 4 we use the results from the empirical studies to discuss the future potential of three mobile value services – mobile games, mobile television and snapshots with mobile phones; Section 5 is a summary and offers some conclusions.
2. The Finnish Mobile Services Markets

In the early 1990s Finland had a unique position in the global mobile telecommunications market. It was the first country to open a GSM network and the first to introduce a mobile service, the Short Message Service (SMS). Since then the Finnish mobile telecommunications market has shifted from regulation to deregulation. Today there are three network operators providing 3G mobile services (two Finnish and one international, which is the market leader), together with some small service providers that offer mobile communication and services as a way to extend their hamburger and night club brands. From the consumer point of view the situation is ideal, as there is a variety of mobile service providers to choose from at very competitive prices (Ministry of Transport and Communications, 2005a). For the mobile telecommunications industry the situation in Finland, and in Western Europe in general, is far from ideal. The mobile phone market reported 15% growth during the first half of 2005 (Paul Budde Communications, 2006). According to industry reports, about 30% of mobile phones sold in June 2005 had Bluetooth functionality, while 20% of GSM and 95% of 3G phones had MP3 or similar technology on board. Technology innovations, such as mobile TV, have been launched in several markets, but ARPU (Average Revenue Per User) declined for much of 2005, particularly in Scandinavia (Paul Budde Communications, 2006). In addition, even industry analysts agree that SMS (and not any of the new services) remains the most successful data service, accounting for 15–20% of many operators' revenues in 2005, and up to 95% of data service revenues. In Finland the price competition between the service operators has accelerated since the introduction of mobile number portability in July 2003. As a result, voice communication charges have decreased notably – by some 20.5% during 2005 (Ministry of Transport and Communications Finland, 2006).
In addition to heavy price competition in the basic service markets, the introduction in April 2006 of the possibility to bundle 3G mobile devices with mobile subscriptions will force the operators to build customer loyalty from new and different sources. Looking at the Finnish market over a broader time-frame in terms of subscribers, mobile devices and mobile services, Finland has experienced quite rapid growth over the last decade. The number of mobile phone subscriptions was around 5.4 million in 2005, which by far exceeds the number of fixed-line subscriptions and represents a mobile phone penetration rate of more than 100%. There were 1.8 million mobile phones with GPRS, WAP, MMS and Java features at the end of 2004, representing about one third of all mobile phones. About 75% of the phones had the features required for using new mobile services, such as WAP or MMS services (cf. Ministry of Transport and Communications Finland, 2005b; The Association of Electronic Wholesalers, 2005). Technologically advanced mobile phones encourage users to try out
new services, but the adoption rate of mobile services has so far not grown as rapidly as could be expected (Carlsson et al., 2005). Though the market shows positive signs of growth, consumers are not willing to pay for the available mobile services (Rantanen, 2006; Carlsson et al., 2005, 2006): a recurring theme in Finnish consumer surveys has been that consumers claim that both the start-up costs and the costs of using the services are too high. The total value of the Finnish mobile services market in 2004 was 246 M€ (a growth of 11% from 2003) and was estimated to be 258 M€ in 2005 and 267 M€ in 2006 (a slowing yearly growth of only 3–5%). When categorizing the mobile services into person-to-person messaging (PPM), content services and data services, it was found that all three categories grew in 2004. PPM was the largest category by revenue (64% of the mobile services market value), and the market grew despite declining average revenue per message, as volume growth outpaced the average revenue decline. In 2004 the value of the content service market was 67 M€ (a 16% increase over 2003), with ring tones, directory services and chat services the largest revenue providers. The market value of premium voice services was 120 M€, with directory services and taxi orders the most used services. Nevertheless, the emerging mobile services are not even close to SMS, and Finland is typical of Western Europe in this sense. Finland's 5.3 million people sent 2241 million text messages in 2004 (an average of about 35 text messages/month/person) with a market value of 203 M€. In comparison, there were 7.4 million MMS messages, representing a market value of 1.7 M€. It appears that acquired habits have a strong influence on the medium of choice and that new mobile services have to give users a clear added value in order to make a breakthrough.
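The market figures quoted above can be sanity-checked with simple arithmetic. The following sketch transcribes the reported values; the variable and function names are ours, chosen for illustration:

```python
# Back-of-envelope check of the Finnish mobile services market figures.
# Values are transcribed from the text; names are illustrative only.

market_value_meur = {2004: 246, 2005: 258, 2006: 267}  # market value, M EUR

def yearly_growth_pct(values, year):
    """Growth over the previous year, in percent."""
    return (values[year] / values[year - 1] - 1) * 100

growth_2005 = yearly_growth_pct(market_value_meur, 2005)  # ~4.9%
growth_2006 = yearly_growth_pct(market_value_meur, 2006)  # ~3.5%

# 2241 million text messages sent by 5.3 million people in 2004:
sms_per_person_per_month = 2241e6 / 5.3e6 / 12  # ~35 messages/month
```

Both growth rates fall inside the reported 3–5% band, and the SMS volume works out to roughly 35 messages per person per month.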
The development of mobile services has slackened in Finland relative to many other countries, and the optimistic and experimental mood of five years ago has become more cautious and in favour of more conservative market operations. Fewer risks appear to be taken and fewer resources are used for developing and launching new services. A case in point is that Finland was among the last countries in Western Europe to launch 3G services, which may be significant as Finland was a pioneering country for mobile services in 2000–2002 (Ishii, 2004; Ministry of Transport and Communications Finland, 2004). The World Cellular Information Service reported in February 2006 that the fastest growth in mobile data services is found in Asia (Indonesia, Japan and Korea) and that Europe is lagging in producing value-adding data services.
3. The 2005 Finnish Consumer Survey on Mobile Services

3.1. Sample Profile

The 2005 Finnish consumer survey on mobile services was conducted from mid-April to mid-June 2005. The target group consisted of 1000 inhabitants of mainland Finland together with residents of the archipelago region. Random sampling was used to select the survey participants, who were required to be between 16 and 64 years of age and to have Finnish or Swedish as their mother tongue. All in all, 462 filled-in questionnaires were returned by regular mail, resulting in a 46.2% response rate. Slightly over half of the respondents were female (56.8%), and those between 36 and 50 years of age formed the largest age group (36.8%). Most of the survey participants had vocational school as their highest level of education (24.2%), an annual income between €20 000 and €30 000 (30.2%), and belonged to the socio-economic group of manual workers (33.9%). A total of 94.6% of the respondents had a mobile phone in use and, in contrast to the results of our previous consumer surveys in 2002–4, in a majority of cases (52.1%) it was a more advanced type of mobile phone – a device that at a minimum had GPRS (general packet radio service) functionality.

3.2. What the Finnish Consumers and Experts Say About the Mobile Services

3.2.1. Finnish Consumers

Even though a majority of the surveyed consumers in 2005 had a mobile phone that was technically equipped for the use of more sophisticated mobile services such as MMS (multimedia messaging service) and the mobile Internet, the most popular services utilized on a regular basis were the simple ones: SMS (92.8%) and search services (37.0%); a search service is, for example, requesting a phone number by either sending a text message or employing a WAP (wireless application protocol) service (cf. Table 1). It is interesting to note the differences in consumer response (cf. Table 1) when compared with what the experts believe the consumer response will be (cf. Table 2). It appears that in the Finnish mobile services market the prevailing ideology has been to push services to the market in the hope that consumers can be activated/trained/persuaded to adopt the services before they have to be withdrawn because of a lack of cash flow. As this policy is driven by the experts, who appear not to be concerned with actual consumer demand, at least a partial explanation can be found for the slow uptake of mobile services. The mobile services the respondents had been most curious to try were ring tones (45.0%) and icons, logos or wallpapers (42.8%), all of which are also accessible by SMS. The mobile services the consumers would be most willing to try in the future were MMS (56.9%), mobile e-mail (55.6%), location based services (48.7%) and checking timetables (45.5%).
Datamonitor in February 2006 predicted that mobile e-mail will be the next fast-growth service: there are now 650 million corporate mailboxes in use worldwide, of which Datamonitor claimed that 35% could be mobilised (Eazel, 2006; Cellular News, 2006). Results similar to those shown above have been obtained in surveys previously conducted by the National Consumer Research Centre (NCRC) in Finland. In 2003 and 2004 some 1000 NCRC panellists listed the mobile services they use on a regular basis as well as the ones they have just tried. In both years the communication service SMS was the most commonly employed mobile service: 97.1% in 2003 and 96.4% in 2004, respectively (Hyvönen and Repo, 2005).

3.2.2. Finnish Experts

IAMSR has carried out the expert surveys in Finland annually since 2001. The primary goal when the series was started was to gain insight into the actual status of mCommerce and its progress. The target group was set to include (i) 50 industry experts and decision makers from companies offering m-commerce products/services, and (ii) managers of companies providing consulting, financing and/or infrastructure for mCommerce, since they were seen to have sufficient expertise and knowledge. The expert surveys have been carried out with web questionnaires; the potential respondents were contacted via e-mail and/or by phone. In order to increase the response rate, and as
Table 1. Current and future use of mobile services by Finnish consumers in 2005 compared with the results from the Finnish consumer survey 2004

Services | Regular use (%) | Only tried (%) | Aggregate 2005 (%) | Aggregate 2004 (%) | Would use in future 2005 (%) | Would use in future 2004 (%)
SMS | 92.8 | 3.5 | 96.3 | 96.1 | 89.9 | 86.7
Search services (address, tel.) | 37.0 | 29.5 | 66.5 | 62.8 | 62.0 | 59.2
Ring tones | 12.8 | 45.0 | 57.8 | 57.6 | 39.2 | 36.4
Icons, logos or (wallpapers)* | 11.1 | 42.8 | 53.9 | 58.2 | 36.0 | 35.2
MMS | 21.0 | 18.9 | 39.9 | 18.2 | 56.9 | 46.2
News and weather | 12.0 | 19.9 | 31.9 | 24.7 | 32.7 | 23.2
Reading/sending mobile e-mail | 16.4 | 13.0 | 29.4 | 22.6 | 55.6 | 49.3
Internet surfing/browsing | 11.8 | 13.7 | 25.5 | 20.3 | 31.6 | 29.1
Humorous messages | 5.2 | 17.3 | 22.5 | 24.6 | 11.1 | 15.2
Checking flight/train timetables | 6.4 | 12.3 | 18.7 | 13.2 | 45.5 | 45.5
Payment (micro, midi) | 8.7 | 14.4 | 23.1 | 14.8 | 35.8 | 29.5
Routine m-banking | 6.6 | 9.2 | 15.8 | 12.0 | 32.1 | 40.0
Games | 3.3 | 12.5 | 15.8 | 8.0 | 8.4 | 6.1
Buying and downloading music | 5.0 | 12.5 | 17.5 | 14.3 | 22.0 | 17.7
Event specific services | 5.2 | 10.2 | 15.4 | 12.0 | 14.0 | 9.6
Shopping | 5.0 | 11.3 | 16.3 | 11.9 | 20.4 | 18.4
Reservation of a movie, etc. tickets | 5.9 | 7.7 | 13.6 | 13.6 | 37.9 | 36.6
LBS (restaurant, hotel, etc.) | 5.4 | 7.3 | 12.7 | 8.4 | 48.7 | 43.4
Chat | 2.8 | 7.1 | 9.9 | 6.8 | 6.4 | 3.3
Health care services | 6.6 | 5.9 | 12.5 | 8.0 | 31.7 | 26.5
Video calls | 6.6 | 3.8 | 10.4 | - | 23.6 | -
Checking stock rates | 3.5 | 2.4 | 5.9 | 3.7 | 10.4 | 8.6
Hotel presentation a/o making a reservation | 3.3 | 5.2 | 8.5 | 6.8 | 33.4 | 31.6
Insurance services (info search) | 4.0 | 3.5 | 7.5 | 4.2 | 23.2 | 21.2
Wireless alerting/security system | 7.5 | 2.6 | 10.1 | 6.3 | 45.0 | 40.1
Mobile TV | 3.5 | 4.7 | 8.2 | - | 15.4 | -
Making a reservation a/o buying flight/train tickets | 5.0 | 5.0 | 10.0 | 7.6 | 38.7 | 39.5
Locating family members | 5.7 | 3.3 | 9.0 | 5.2 | 38.5 | 36.8
Lotto, tote etc. | 3.8 | 3.3 | 7.1 | 4.3 | 22.6 | 21.8
Adult entertainment | 1.9 | 3.1 | 5.0 | 3.7 | 3.6 | 2.1

Consumer survey 2005 Finland: n = 418–426 (current use) and n = 396–421 (future use). Consumer survey 2004 Finland: n = 419–438 (current use) and n = 407–427 (future use). * = wallpapers were not included in the group of icons and logos in 2004; - = not included in the 2004 survey.
a token of appreciation, summary reports of the results were made available to the respondents. If we compare Table 1 with Table 2, a number of interesting observations can be made:
• On SMS: the experts believe that the saturation level has been reached; the consumers intend to increase their use of SMS – this is also shown in the volume data collected from the market.
• On data services ["search services" in our survey]: both experts and consumers expect and report increasing use, which the market data shows is increasing slowly.
Table 2. The likelihood of firms achieving a satisfactory level of revenue for the listed mobile services in the next 18 months, as estimated by Finnish mobile service industry experts in 2004–5

Services | Mean 2005 | Mean 2004
SMS | 4.10 | 4.13
Search services (address, tel.) | 3.95 | 3.80
Adult content | 3.86 | 3.67
Games | 3.67 | 3.87
Downloading/purchasing music | 3.67 | 3.93
Ring tones | 3.52 | 3.80
Mobile payment (micro/midi) | 3.48 | 3.60
Ticketing (e.g. movie tickets) | 3.48 | 3.47
Weather services | 3.43 | 3.40
Flight/train timetables via e.g. SMS, WAP [2005] | 3.40 | x
Icons, logos a/o wallpapers | 3.33 | 3.60
Hotel info a/o room reservation via e.g. SMS, WAP [2005] | 3.32 | x
Flight time tables, check-in, hotel reservation [2004] | x | 3.33
News services | 3.19 | 3.33
Chat | 3.19 | 2.93
Mobile e-mail | 3.10 | 3.00
Event-specific services | 3.00 | 2.73
LBS (e.g. locating a restaurant) | 3.00 | 2.73
Lotto, pool betting etc. | 2.90 | 3.20
Mobile surfing/browsing | 2.86 | 2.87
Brokerage services (e.g. stock rates) | 2.76 | 2.67
Shopping | 2.59 | 2.53
MMS | 2.52 | 2.80
m-Banking services (routine) | 2.52 | 2.80
Mobile TV | 2.36 | 2.60
Mobile video call | 2.05 | 2.13

The Finnish expert survey 2005: n = 22; the Finnish expert survey 2004: n = 15. A 5-point scale was used, where 5 = Very good and 1 = Poor; x = not included in that year's survey.
• On ring tones, icons and logos: the experts rate these as less interesting and decreasing; the consumers display a growing demand.
• On games and music: the experts believe more in these services than the consumers, but the experts downgraded them from 2004 to 2005; market data shows that the uptake is rather slow, as the consumers – except for the youngest consumer group – need to find a context for using these services.
• On adult content and entertainment: the experts expect these to be fast growing mobile services; the consumers rate them as of no interest, which may be a result of the fact that few survey participants will admit to an interest in adult content (even if the survey is totally anonymous).
• On news and weather: the experts are not keen on this service, but the consumers show a growing interest, with a very visible strengthening of the interest from 2004 to 2005 (probably caused by increasing availability in a user-friendly form).
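The expert-side shifts noted above can be read directly off the Table 2 means. A small sketch with a few values transcribed from Table 2 (the function and variable names are ours):

```python
# Expert ratings on a 5-point scale (5 = Very good, 1 = Poor), from Table 2.
expert_means = {
    # service: (mean_2005, mean_2004)
    "SMS": (4.10, 4.13),
    "Games": (3.67, 3.87),
    "Downloading/purchasing music": (3.67, 3.93),
    "Ring tones": (3.52, 3.80),
}

def delta(service):
    """Change in the expert rating from 2004 to 2005 (negative = downgraded)."""
    mean_2005, mean_2004 = expert_means[service]
    return round(mean_2005 - mean_2004, 2)

# Services the experts rated lower in 2005 than in 2004:
downgraded = [s for s in expert_means if delta(s) < 0]
```

All four of these services were rated lower in 2005 than in 2004, consistent with the games/music and ring tones observations above.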
Here we will not go more deeply into the comparison and analysis of the cause-effect relations, but it appears safe to conclude that (i) consumers are much more conservative than expected in adopting new mobile services, (ii) they do not start using mobile services even if their phones have the technical capability to support them, and (iii) experts predict the adoption and growth of services which are not visibly relevant for the consumers. In the following we briefly discuss three potentially important mobile services using the empirical data as a background. The three services we selected (out of the 30 services studied) have been identified as an emerging generation of (more) advanced mobile services in several studies (cf. Section 4).
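As a reading aid for Table 1: the "Aggregate" columns are simply the sum of the regular-use and only-tried shares, i.e. the share of respondents who have used a service at least once. A sketch with a few 2005 rows transcribed from the table (names are ours):

```python
# A few services from Table 1, 2005 survey: (regular use %, only tried %).
table1_2005 = {
    "SMS": (92.8, 3.5),
    "Search services": (37.0, 29.5),
    "Ring tones": (12.8, 45.0),
    "MMS": (21.0, 18.9),
}

def aggregate(regular_pct, tried_pct):
    """Share of respondents who have used the service at least once."""
    return round(regular_pct + tried_pct, 1)

aggregates = {s: aggregate(r, t) for s, (r, t) in table1_2005.items()}
```

Computed this way, the aggregates reproduce the published column values (e.g. 92.8 + 3.5 = 96.3 for SMS).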
4. Three Mobile Value Services

The Braudel Rule (cf. Keen & Mackintosh, 2001) is a useful instrument for judging whether a mobile service qualifies as a mobile value service: it should expand the limits of the possible in the structure of everyday routines – in which case it will become part of the everyday lives of the users and be adopted as a routine, one which will be sorely missed if for some reason it is no longer available. This simple idea, which is intuitively easy to support, has proved very useful for evaluating mobile services (cf. Carlsson et al., 2005, 2006).

4.1. Mobile Gaming

There is currently a great deal of variety and choice for gamers: computers; consoles such as the Nintendo GameCube, MS Xbox and Xbox 360, and Sony PlayStation; and hand-held gaming devices, either dedicated, such as the Nintendo Game Boy variants and DS, or game-capable, such as mobile phones and PDAs. Positioned somewhere between these is the Nokia N-Gage, which could be described as a dedicated hand-held gaming platform with built-in phone features. The most recent trend is multiplayer gaming, either over the Internet (World of Warcraft, RuneScape, etc.) and Internet-capable consoles (PlayStation, Xbox Live, etc.) or device-to-device, as with the Game Boy. Thus we have two different gaming contexts: human(s) gaming with a device and humans gaming together. When we add the fact that most games are published for different kinds of devices – for instance, game X is available for the console and for hand-held devices, with the possibility to continue gaming when switching between devices – the chain starts to look complicated. Add to this the fact that mobile (phone) gaming requires a game to be adapted to multiple phone types, mainly due to differing screen features. Thus, if we want to turn mobile gaming into a mobile value service, we have to cope with a number of features which are not well understood individually and even less understood in combination.
The features need to be worked out before it is possible to construct a viable business model for mobile gaming. In our empirical studies mobile gaming did not gain much support among the general population, but the number of respondents who had tried mobile games increased from 2004 to 2005. The experts rated games high among potential mobile services, but the rating decreased from 2004 to 2005. In terms of the Braudel Rule, mobile gaming will change the limits of the possible for the gamers who switch from consoles to hand-held devices, but it appears that gamers still form a small minority of the general population. The N-Gage (http://www.n-gage.com/) unearthed an interesting phenomenon through the N-Gage Arena: gamers form a virtual community in which they assume artificial identities and compete to gain status among their peers. The value-forming mechanisms of virtual communities are not yet well understood, and it may be worthwhile to take a closer look at the secondary value-added features which may become part of mobile value-added services. Nevertheless, many industry experts believe in mobile gaming. As a typical example, Juniper Research (2005a, 2005b) predicts that money games and mobile games will become the second most important source of data revenues (after music) worldwide by 2009. In Asia mobile games are already a revenue-generating service, one which has overtaken personalization services. The literature shows (see for example Steinbock, 2003; Orange, 2006) that the adoption of a payback model that leaves a large slice of the individual purchases to the publishers is a key driver of this development. There is some further insight worth mentioning: the mobile game is an almost ideal digital product – it has low transaction costs (delivery via networks), its copying costs are near zero and it extends the main brand cost-effectively (see for example Shapiro and Varian, 1999, or Schwartz, 1999).

4.2. Mobile Television

The Mobile TV service is becoming visible as a new technology to be pushed to the consumer market. We tested the interest in Mobile TV in our 2005 consumer survey (cf. Table 1), but found only about 15% interested in trying it in the future – the main reason was probably that the concept and the technology were unknown to the consumers. Siemens, in a recent study in 8 countries (February 2006)2, found that 59% of the respondents indicated an interest in Mobile TV, and that in Korea the indicated interest was 90%. As usual this is not the whole truth – high numbers of "indicated interest" are a long way from turning an offered or (in this case) proposed service into a revenue-generating mobile value service. In terms of the Braudel Rule, Mobile TV should be successful in changing one of our more established routines – that of watching TV – into a service (or a system of services) which will expand the limits of the possible in the blissful enjoyment of being entertained by TV programs. This will probably be a tall order, and there are a number of challenges to be met and overcome. The Mobile TV device should produce an image quality which is not significantly inferior to the standard established by regular TV, even if the screen size is much smaller. The network coverage and the signal strength should be sufficiently good to give the viewer an uninterrupted service of comparable quality with regular TV. As most Mobile TV networks are still little more than prototypes, there is some way to go before the transmission quality is sufficiently good.
Another crucial feature of Mobile TV is the programming: it is reasonable to assume that the standard TV format cannot and should not be used as such for mobile TV. The standard format is designed and implemented for a sizeable screen, today typically 32–40”, for watching indoors in well-lit and quiet surroundings in a comfortable setting. Mobile TV, on the other hand, was announced as planned for public transportation or otherwise on the move, for breaks during a working day, when waiting for some activity to start during free time, when waiting at traffic lights, or when there is a need to follow breaking news or the key action of some sporting event. Nevertheless, the programming format for the 500 volunteers in the FinPilot study of Mobile TV in the summer of 2005 was standard TV, and it appears that the user context for Mobile TV was neither thought out nor planned. In the Mobile TV setting the standard format does not work: if you want to follow a comedy or a similar program you need at least 10 minutes, which is the wrong format for Mobile TV, where you share your attention with other things. It appears that on-demand news or non-stop loop programming with a maximum duration of 30–45 minutes would be better than the traditional TV format; mobile TV users could have, for example, 20 minutes of time at a specific moment – then the service would be value-adding. Part of the non-stop loop programming could be to store the programs on the Mobile TV device, and there is some indication that this will be one of the solutions for offering an on-demand service (Seagate recently launched a 12 GB hard drive for mobile phones). Some of the ideas for mobile value services are programs which can be
activated when there is time and opportunity to watch, and specially designed programs (news, sports, cartoons, documentaries, etc.) where the core/key information can be absorbed in less than 10 minutes. Mobile TV is an interesting service when travelling, when the user is cut off from his/her normal TV watching routines; the value is in the potential to follow something which otherwise would be missed. Thus Mobile TV should be made available throughout the country, which is in opposition to the present business model thinking that Mobile TV is a better business in the cities than in the countryside. The counterargument is that the opportunity to use TV services is much better in the cities, and that the mobile service will there be harder to position and turn into a value service. If we return to the Braudel Rule, the crucial point of success is when Mobile TV becomes a freedom for the users and part of their everyday routines, a part which is important and which makes life harder if it is missing. Nevertheless, this will not happen at any price – consumers prefer affordable, fixed monthly costs, which is a recurring theme in our consumer surveys. The early reports on business models promoted the notion that the networks would make a windfall with premium value services (but regular consumers never buy anything at any price for very long). It might make sense to use DVB-H (Digital Video Broadcast – Handheld) only for mobile TV and to offer all support services as regular 3G or GPRS services, which will keep the costs down and will offer a good software challenge: to integrate the two forms of services for various mobile devices.

4.3. Mobile Phones with an Integrated Camera

By the end of 2004 the number of mobile phones with an integrated camera had grown by half a million units, to around 620 000 such mobile devices, i.e. 12% of all mobile subscriptions in Finland.
At the same time smart phones had a penetration rate of 4% (Ministry of Transport and Communications Finland, 2005). The camera phone is basically a phone equipped with a lens, the essential part of a digital camera. The quality (resolution etc.) has improved markedly, but even the best camera phones do not match the snapshot quality of a standard digital camera in terms of resolution, usability, picture transfer capabilities and photo settings. The camera phone nevertheless has a key advantage: it is usually available, because it is a personal accessory. People generally carry their mobile phones with them, and as an article in the Washington Post (Noguchi, 2005) aptly illustrates, there are a number of user contexts for the camera phone: natural or man-made catastrophes are now being stored in digital form and reported first by the people experiencing them (which actually is a good illustration of the Braudel Rule). Celebrities are the targets of instant paparazzi, who can earn a good deal of money by sending their snapshots to magazines and TV channels which thrive on news about well-known people – who are probably not too keen to be captured in company not suited to their public image, or when having a few beers (some of them a few too many). As a direct competitor to the digital camera, the camera phone faces the same problem as the gaming phone: being effective and competitive at something which is a secondary task. Recent articles in Forbes magazine (Lidor, 2005) show that there still is a problem with building a viable business model: consumers are taking more photos with their camera phones, but the ratio of turning them into actual printed photographs is low. Key photo market actors like Fuji and Hewlett-Packard are optimistic about this ratio changing and about turning camera phones into platforms for a new (mostly MMS-based) graphics and imaging industry. The problems and possibilities are the same as with mobile games: seen theoretically, the snapshot taken with a camera phone is an almost ideal digital product/service. It is stored in digital form, it can be shipped to printing via a network at a very low cost, and it can be shared with friends via a network at a marginal cost.
5. Summary and Conclusions

Quite a few mobile services have been launched without success in the European markets, despite the fact that the advanced mobile phones which enable the services are both widespread and accepted among consumers. As the resources and the work invested in the services are quite significant, it is interesting to find out why some services fly and others fail. We have tackled the issue of mobile services in three ways: (i) through the results of our 2004–05 consumer surveys of the Finnish mobile services market, (ii) through insights from our 2004–05 Finnish expert studies on mobile commerce, and (iii) through a discussion of three mobile services that have commonly been described as promising and innovative: mobile games, mobile television and snapshots with mobile phones. A selection of our results shows that:
• On SMS: the experts believe that the saturation level has been reached; the consumers intend to increase their use of SMS – this is also shown in the volume data collected from the market.
• On ring tones, icons and logos: the experts rate these as less interesting and decreasing; the consumers display a growing demand.
• On games and music: the experts believe more in these services than the consumers do, but the experts downgraded them from 2004 to 2005; market data shows that the uptake is rather slow, as the consumers – except for the youngest consumer group – need to find a context for using these services.
We used the Braudel Rule as an instrument to judge whether a mobile service qualifies as a mobile value service: it should expand the limits of the possible in the structure of everyday routines. We found that the mobile game qualifies as an almost ideal digital product – it has low transaction costs, its copying costs are near zero and it extends the main brand cost-effectively. Mobile gaming satisfies the Braudel Rule for gamers, and it appears that there are secondary value-added features derived from the virtual community formed by the gamers. Mobile TV should be successful in changing the limits of the possible in the enjoyment of TV programs, which may be a tall order. The crucial point of success is when Mobile TV becomes a freedom for the users and becomes part of their everyday routines. We found that the camera phone competes with the digital camera – a specialized device for specific tasks – and therefore has to be effective and competitive at something which, for a phone, is a secondary task. Here the camera phone has a key advantage, as it is usually available.
These results show that all three services have the potential to become useful mobile value services, but that they will have to go through 2–3 more cycles of evolution before becoming full-fledged implementations of the Braudel Rule. Thus they merit further research and systematic empirical studies. Finally, if we broaden the scope a bit and take a closer look at mobile technology and its applications, we will probably have to accept a fundamental insight: there is no way to deny that mobile technology has had, and continues to have, a profound impact on our everyday routines. Things become easier to manage: we do not have to fix appointments in place and time and then try to keep them to the minute (as is our custom in Finland), but can agree on an approximate place and time and then coordinate by mobile phone as we get close to the agreed time. Time-consuming routines become a bit faster to handle, as we can simplify parts of them by relying on access to context-relevant data and information when we need it. There are a number of things we are now able to accomplish – such as getting updated information on our flight as we are struggling to get to the airport, finding out that the flight is 20 minutes late (which will reduce the stress level), and being able to check in for the flight while en route (yes, a good secretary could do the same for us and then call us, but that is not an option for ordinary people). A few years ago we could not even think about this possibility – and mobile technology will bring about a hundred services like this. Mobile technology and its applications have important consequences for almost all parts of management research, and for the everyday lives of people and the companies they work for. We will probably have to rewrite significant parts of the classical management literature in order to adjust it, and management practice, to the possibilities and the challenges of mobile technology. This is a quest worthy of Human Systems Management.
References

[1] Anckar, B., D’Incau, D., (2002): Value creation in mobile commerce: Findings from a consumer survey, Journal of Information Technology Theory and Application, 4, 43–65.
[2] Balasubramanian, S., Peterson, R.A., Jarvenpaa, S.L., (2002): Exploring the Implications of M-Commerce for Markets and Marketing, Journal of the Academy of Marketing Science, 30(4), 348–361.
[3] Carlsson, C., Carlsson, J., Puhakainen, J., Walden, P., (2006): Nice Mobile Services Do Not Fly. Observations of Mobile Services and Finnish Consumers, Proceedings of the 19th Bled eCommerce Conference, Bled, Slovenia, June 5–7, 2006.
[4] Carlsson, C., Walden, P., Bouwman, H., (2006): Adoption of 3G+ services in Finland, International Journal of Mobile Communications, 4(4), 369–385.
[5] Carlsson, C., Hyvönen, K., Repo, P., Walden, P., (2005): Adoption of Mobile Services across Different Technologies, Proceedings of the 18th Bled eConference, Bled, Slovenia, June 6–8, 2005.
[6] Cellular News, (2006): Mobile Email on the Verge of Mass Market Adoption. Available at: www.cellular-news.com/story/15925.php. Last accessed February 3, 2006.
[7] Cheng, J.Z., Tsyu, J.Z., Yu, H.-C.D., (2003): Boom and gloom in the global telecommunications industry, Technology in Society, 25, 65–81.
[8] Eazel, W., (2006): Mobile email set to explode, vnunet.com. Available at: http://www.vnunet.com/vnunet/news/2149865/mobile-email-set-explode. Last accessed February 11, 2006.
[9] Funk, J.L., (2005): The Future of the Mobile Phone Internet: An Analysis of Technological Trajectories and Lead Users in the Japanese Market, Technology in Society, 27, 69–83.
[10] Gandal, N., Salant, D., Wawerman, L., (2003): Standards in wireless telephone networks, Telecommunications Policy, 27, 325–332.
[11] GSM World, (2004): Available at: http://www.gsmworld.com/index.shtml.
[12] Hyvönen, K., Repo, P., (2005): Mobiilipalvelut suomalaisten arjessa (Mobile services in the everyday life of Finns), in: Leskinen, J., Hallman, H., Isoniemi, M., Perälä, L., Pohjoisaho, T., Pylvänäinen, E. (Eds.), Vox consumptoris – Kuluttajan ääni, Kuluttajatutkimuskeskus, Helsinki. In Finnish.
[13] Ishii, K., (2004): Internet Use via Mobile Phone in Japan, Telecommunications Policy, 28(1), 43–58.
[14] Jenson, S., (2005): Default Thinking: Why Mobile Services are set up to fail, in: Harper, R., Palen, L., Taylor, A. (Eds.), The Inside Text: Social, Cultural and Design Perspectives on SMS, Springer. Available at: http://www.jensondesign.com/DefaultThinking.pdf. Last accessed February 3, 2006.
[15] Juniper Research, (2005a): Gambling on Mobile. Available at: http://www.juniperresearch.com/pdfs/white_paper_mgambling2.pdf. Last accessed February 14, 2006.
[16] Juniper Research, (2005b): Mobile Fun & Games. Available at: http://www.juniperresearch.com/pdfs/white_paper_mgames2.pdf. Last accessed February 14, 2006.
[17] Keen, P.G.W., Mackintosh, R., (2001): The Freedom Economy: Gaining the mCommerce Edge in the Era of the Wireless Internet, Osborne/McGraw-Hill, New York.
[18] Kim, J., Lee, I., Lee, Y., Choi, B., (2004): Exploring E-Business Implications of the Mobile Internet: A Cross-National Survey in Hong Kong, Japan and Korea, International Journal of Mobile Communications, 2(1), 1–21.
[19] Knutsen, L.A., (2005): M-service Expectancies and Attitudes: Linkages and Effects of First Impressions, Proceedings of the 38th Hawaii International Conference on System Sciences (HICSS-38), Island of Hawaii, USA, January 3–6, 2005.
[20] Lehr, W., McKnight, L.W., (2003): Wireless Internet access: 3G vs. WiFi?, Telecommunications Policy, 27, 351–370.
[21] Lidor, D., (2005): Fuji Helps Develop Pics From Your Handset, Forbes.com. Available at: http://www.forbes.com/2005/09/14/fuji-cameraphone-photofinish-cx_dl_0914fuji.html. Last accessed February 10, 2006.
[22] McGinty, A., Bona, D.D., (2004): 3G Licensing in China: a waiting game, Computer Law and Security Report, 20(6), 480–481.
[23] Ministry of Transport and Communications Finland, (2004): Mobiilipalvelumarkkinat Suomessa 2003 (Mobile services market in Finland in 2003), Helsinki, Finland. In Finnish. Available at: http://www.mintc.fi/oliver/upl545-24_2004.pdf. Last accessed February 14, 2006.
[24] Ministry of Transport and Communications Finland, (2005a): Matkaviestinverkkojen tulevaisuus (Future of mobile telecommunications networks), Helsinki, Finland. In Finnish. Available at: http://www.mintc.fi/oliver/upl569-Julkaisuja%2040_2005.pdf. Last accessed April 13, 2006.
[25] Ministry of Transport and Communications Finland, (2006): Suomen telemaksujen hintataso 2005 (Price level of telecommunications charges in 2005), Helsinki, Finland. In Finnish. Available at: http://www.mintc.fi/oliver/upl266-Julkaisuja%2019_2006.pdf. Last accessed April 13, 2006.
[26] Ministry of Transport and Communications Finland, (2005b): Mobiilipalvelumarkkinat Suomessa 2004 (Mobile services market in Finland in 2004), Helsinki, Finland. In Finnish. Available at: http://www.mintc.fi/oliver/upl497-Julkaisuja%2034_2005.pdf. Last accessed February 14, 2006.
[27] Mylonopoulos, N.A., Doukidis, G.I., (2003): Introduction to the Special Issue: Mobile Business: Technological Pluralism, Social Assimilation, and Growth, International Journal of Electronic Commerce, 8(1), 5–22.
[28] Noguchi, Y., (2005): Camera Phones Lend Immediacy to Images of Disaster, WashingtonPost.com. Available at: http://www.washingtonpost.com/wp-dyn/content/article/2005/07/07/AR2005070701522_pf.html. Last accessed February 11, 2006.
[29] Nokia Networks, (2003): A History of Third Generation Mobile, Nokia Networks, Espoo.
[30] Orange, (2006): The Promise of Mobile Games, Orange Partner Newsletter. Available at: http://www.orangepartner.com/site/enuk/home/p_home.jsp. Last accessed February 8, 2006.
[31] Pagani, M., (2004): Determinants of Adoption of Third Generation Mobile Multimedia Services, Journal of Interactive Marketing, 18(3), 46–59.
[32] Paul Budde Communication, (2005): Western European Mobile Communications Market 2006, MarketResearch.com. Available at: http://www.market-research.com/product/display.asp?productid=1187841&g=1. Last accessed April 13, 2006.
[33] Rantanen, E., (2006): Operaattorit hoi, järkeviä palveluja (Operators ahoy, reasonable services), Talouselämä. In Finnish. Available at: http://www.talouselama.fi/doc.te?f_id=851290&s=u&wtm=te10022006. Last accessed February 10, 2006.
[34] Repo, P., Hyvönen, K., Pantzar, M., Timonen, P., (2004): Users Inventing Ways to Enjoy New Mobile Services – The Case of Watching Mobile Videos, Proceedings of the 37th Hawaii International Conference on System Sciences (HICSS-37), Island of Hawaii, USA, January 5–8, 2004.
[35] Robins, F., (2003): The Marketing of 3G, Marketing Intelligence & Planning, 21(6), 370–378.
[36] Sarker, S., Wells, D., (2003): Understanding mobile handheld device use and adoption, Communications of the ACM, 46(12), 35–40.
[37] Schwartz, E.I., (1999): Digital Darwinism: 7 Breakthrough Business Strategies for Surviving in the Cutthroat Web Economy, Broadway Books, New York.
[38] Shapiro, C., Varian, H.R., (1999): Information Rules: A Strategic Guide to the Network Economy, Harvard Business School Press, Boston, MA.
[39] Srivastava, L., (2004): Japan’s Ubiquitous Mobile Information Society, Info, 6(4), 234–251.
[40] Statistics Finland, (2005): Telecommunications, Helsinki, Finland. Available at: http://www.tilastokeskus.fi/til/tvie/index.html. Last accessed February 8, 2006.
[41] Steinbock, D., (2003): Wireless Horizon – Strategy and Competition in the Worldwide Mobile Marketplace, American Management Association, New York.
[42] The Association of Electronic Wholesalers, (2005): Matkapuhelinten myynti kaupalle 1985–2005e (Sales to trade 1985–2005e). In Excel format and in Finnish. Available at: http://www.etkry.com/tilast1.htm. Last accessed February 14, 2006.
[43] The World Cellular Information Service, (2006): Available at: http://www.wcisdata.com/?proceed=true&MarEntityId=20001154670&entHash=2541cb0c8&UType=true.
[44] UMTS Forum, (2003): Mobile Evolution: Shaping the Future. Available at: http://www.umts-forum.org/servlet/dycon/ztumts/umts/Live/en/umts/Resources_Papers_index. Last accessed March 31, 2004.
[45] Vesa, J., (2005): Mobile Services in the Networked Economy, IRM Press, London.
Advances in Multiple Criteria Decision Making and Human Systems Management Y. Shi et al. (Eds.) IOS Press, 2007 © 2007 The authors. All rights reserved.
Informed Intent as Purposeful Coordination of Action

Malin BRÄNNBACK
Professor, International Business, Åbo Akademi University, Henriksgatan 7, FIN-20500 Åbo, Finland
E-mail:
[email protected]

Abstract. This paper develops a conceptual framework for understanding how individually driven intentions emerge into collective action. While real action is the concern of practicing business, it seems to have received insufficient attention from the research community; a wealth of models has yet to explicate how intentions, strategies, visions, and missions get enacted. The majority of the management literature has taken the collective perspective. This paper argues for a reversed causation starting with the individual – called supervenience, where the collective is a function of its parts. People create ventures, people innovate, people act.
Introduction

In his last, award-winning article, Sumantra Ghoshal stated that what makes the social sciences distinct from science, the arts and the humanities is that their basic unit of explanation was (and is) that intentions guide action (Ghoshal, 2005; see also Dennett, 1989, Elster, 1989, Malle and Knobe, 1997). This paper discusses the attitude–behavior, or intention–action, link through the lens of entrepreneurial intentionality. The paper will argue that entrepreneurial activity is a knowledge-based process with a reversed causal directionality between individuals and the collective. We argue that efforts to increase entrepreneurial activity in a nation or a region will require a thorough understanding of the intentions of the individual actors, i.e. the entrepreneurs and the entrepreneurs in waiting. Merely dealing with the issue on a macro level (national or regional) will not sufficiently improve the situation. We introduce the notions of mereology and supervenience, which have been discussed extensively in the philosophy of science. Here supervenience is presented as an alternative perspective for approaching knowledge and value creation in society through enhancing entrepreneurship. The paper conceptualizes purposeful coordination of action, i.e. the process of getting from might and maybe to action. This paper will argue that many of our frameworks and theoretical constructs in management fail to adequately explain how intentions, decisions, and strategies get enacted, because they are inherently viewed as collective constructs, yet behavior is first and foremost an individual activity that can, but does not have to, have collective consequences. While research has been very busy describing organizations and processes, developing ways to make decisions more effective and efficient, enhancing evaluation processes, designing better strategies, and modeling key factors driving intentions, little concern seems to have been shown for whether real action is the consequence of all these activities.
Research in social psychology has shown that attitudes cause behavior, that behavior causes attitudes, i.e. that reciprocal causation exists (Kelman, 1974), that the two are unrelated, or that the two are caused by something else (McBroom and Reed, 1992). It has also been shown that the link between attitude and action is anything but consistent. However, Kelman (1974) argues that the inconsistency is a consequence of researchers not accounting for the social constraints of the situations in which the action is observed and the attitudes are assessed. Early entrepreneurship research realized that understanding the link between ideas and action was critical for understanding the entrepreneurial process (Bird, 1989, Krueger, 1993). Studies of entrepreneurial intentionality have shown that modeling intentions is indeed important, as intentions show the highest accuracy in predicting behavior (Ajzen and Fishbein, 1980, Shapero, 1982, Krueger et al., 2000). Consequently, entrepreneurship researchers have understood that if entrepreneurial intentionality can be measured, it is then possible to predict the rate at which new firms emerge. Since the turn of the century the Global Entrepreneurship Monitor (GEM) has annually measured total entrepreneurial activity (TEA) in over 30 countries. The data inform us that TEA is very low in most western European countries, higher in eastern European countries, and extremely high in, for example, Ecuador and Peru¹. The fact that TEA is high in Ecuador or Peru reflects the distinction between opportunistic and necessity entrepreneurship; the latter is driven by the need for survival and the necessity of feeding a family. Necessity entrepreneurship is almost non-existent in western European nations. While GEM indeed provides us with an impressive database, this database provides no information on what ever became of these intentions, e.g. what ever happened to the intending 4.4% in Finland? Hence, while real action appears to be treated as less important, it should be the real concern and interest of researchers and practitioners.
Most researchers and policy makers, however, appear to be satisfied with monitoring TEA, since research has shown that the model measuring entrepreneurial intentions is surprisingly robust even when researchers have taken considerable liberties with model specification or measurement. Path analysis has confirmed that the correlation between attitudes and behavior is fully explained by the attitude–intention and intention–behavior links (Kim & Hunter, 1993). And besides, formal intentions models have been applied successfully to entrepreneurial behavior (e.g., Davidsson, 1991, Krueger, 1993, Krueger & Brazeal, 1994, Krueger et al., 2000). So, is this apparently the best we can get? No! Because we still do not have ways of sufficiently explaining how intentions get enacted. This conundrum is also addressed in a recent book by Pfeffer and Sutton (2000), who asked (p. 1): “Why do so much education and training, management consulting, and business research and so many books and articles produce so little change in what managers and organizations actually do?” Zeleny (2006) points to the same issue by arguing that in many organizations strategies (including mission statements and visions) remain symbolic descriptions of future activities; they remain floating above the cloud line, failing to diffuse into the organization and thus failing to be enacted. Strategies are not communicated down to the people who act and who ensure that strategies get enacted. In other words, strategies are crafted on a macro level, reflecting ideas of what a collective ought to do or will do. Such a view assumes that the locus of knowledge and value creation lies at the firm level. This view assumes that individuals
¹ TEA measures the proportion of the population aged 18–64 who are considering starting a company, i.e. the proportion of nascent entrepreneurs in a country. The data inform us that in 2004 the figure was 4.4 for Finland, 3.5 for Sweden, 12.8 for the US, and close to 40 for Ecuador and Peru. In other words, 4.4% of the Finnish population may consider becoming entrepreneurs. For more information consult www.gemconsortium.org.
are a priori homogeneous, infinitely malleable, or randomly distributed into organizations (Felin and Hesterly, 2007). Following this rationale, it is somehow expected that once the strategy is formulated and mission statements and visions are declared or published, they will magically be understood by the individuals in an organization. The perspective is a top-down one exhibiting downward causation, where the collective determines the behavior of the lower levels (Sawyer, 2001, Felin and Hesterly, 2007). Hence, according to this view, organizational knowledge is emergent and can even be seen as independent of the individuals (Levitt and March, 1988), and the collective cannot be understood by merely studying its parts. This paper will argue for a reversed perspective called supervenience, where the collective is a function of its parts (Kim, 1993, Sawyer, 2001, Felin and Hesterly, 2007). Accordingly, it is argued that firms do not innovate; people innovate! Entrepreneurs create firms, not innovation systems, regional or national. The paper will argue that collective action is thus a function of individual intentions to act, which are driven by personally perceived desirability and feasibility.
1. Contemporary Models of Intentions and Action

Understanding why and when attitudes affect intentions in such a way that intentions transfer into behavior has been the focal interest of researchers in many different areas, such as consumer research (Ajzen and Driver, 1992, Bagozzi, 2000a, b, Bagozzi et al., 2003), health care, e.g. weight loss (Bagozzi and Warshaw, 1990), organization behavior, everyday decision making (Mathur, 1998), adoption of new technologies (Davis et al., 1989, Bagozzi, 1992), career choice, entrepreneurship (Davidsson, 1991, Krueger, 1993, 2000, Krueger and Brazeal, 1994, Krueger et al., 2000), and above all social psychology (Liska, 1984, Fazio & Williams, 1986, McBroom & Reed, 1992, Taylor & Gollwitzer, 1995, Brunstein & Gollwitzer, 1996, Gollwitzer & Brandstätter, 1997, Gollwitzer & Schaal, 1998, Scheeran et al., 2005). Common to all these studies is that they draw on the theoretical frameworks explaining social action presented by Ajzen and Fishbein (1980). As pointed out by Bagozzi (1992), one criterion of the strength of a model or theoretical framework is its remarkable persistence over many years. The initial model, the theory of reasoned action (TRA) (Fig. 1a), which assumed that attitudes and social norms predict intentions (Fishbein and Ajzen, 1975, Ajzen and Fishbein, 1980), was enhanced through the addition of variables with moderating effects such as experience, attitudinal confidence and attitude accessibility (Fazio and Williams, 1986), and was ultimately augmented by Ajzen (1991) with a variable that directly predicts intentions: perceived behavioral control. The new model has become known as the theory of planned behavior (TPB) (Fig. 1b) and has dominated attitude research for the past fifteen years. The fundamental thesis of TPB is that attitudes impact intentions, and intentions are the strongest predictors of behavior.
Despite the dominance of TPB, both TRA and TPB have been criticized for failing to adequately predict actual behavior, i.e. when intentions get enacted. It was argued that intentions are an insufficient impetus for action (Bagozzi & Warshaw, 1990, Bagozzi, 1992, McBroom & Reed, 1992). TRA applies only to volitional behavior, where nothing prevents action from taking place (Bagozzi, 1992). TPB is less limited and is suitable for action under partial volitional control; possible impediments arising from personal deficiencies are incorporated in perceived behavioral control. To deal with the possibility that something unexpected may come in the way, Bagozzi and Warshaw (1990) presented a refinement, the theory of trying (TT) (Fig. 1c), in which final performance is assumed to be preceded by a series of attempts – trials – the outcome of which can be success or failure.

[Figure 1. Contemporary attitude theories (adapted from Bagozzi, 1992, p. 179): (a) Theory of Reasoned Action – attitude and subjective norm drive intention, which drives action; (b) Theory of Planned Behavior – attitude, subjective norm and perceived behavioral control drive intention, which drives action; (c) Theory of Trying – attitudes toward success, failure and the process, expectations of success and failure, the subjective norm toward trying, and the frequency and recency of past trying drive the intention to try, which drives trying.]
The theory of trying distinguishes between three kinds of attitudes, which are also found in goal pursuit: attitudes toward success, failure and process, which Bagozzi and Warshaw (1990) found valid. It has been argued that these three attitudes are incorporated in perceived behavioral control, and indeed they are close. Bagozzi and Warshaw point out that the expectations of success and failure capture the reality that either success or failure will follow; the expectation is an estimate of the likelihood of either outcome or of goal attainment. Perceived behavioral control expresses the subject’s belief that she or he can do it (Bagozzi, 1992), i.e. has the means to act. This corresponds with Bandura’s self-efficacy and outcome beliefs, which are necessary for outcome and end-state goal attainment (Bandura, 1982). Finally, the theory of trying differs from the previous two in that the model includes past action. Although Ajzen apparently saw little explanatory value in past behavior, Bagozzi (1992) did.
2. Entrepreneurial Intentionality

The entrepreneurial intentionality model draws on two theories: (i) the theory of planned behavior (Ajzen, 1987), and (ii) Shapero's entrepreneurial event (Shapero, 1982), which have been shown to be equally powerful in predicting entrepreneurial activity (Krueger et al., 2000). The model draws on TPB as shown in Fig. 1b and has been somewhat modified, as shown in Fig. 2, but as can be seen the two are essentially the same. More important, the model (Fig. 2) held in virtually every study, even where researchers took considerable liberties with model specification or measurement. Path analysis confirms that the correlation between attitudes and behavior is fully explained by the attitude-intention and intention-behavior links (Kim & Hunter, 1993). Moreover, formal intentions models have been applied successfully to entrepreneurial behavior (e.g., Davidsson, 1991; Krueger, 1993; Krueger & Brazeal, 1994; Krueger et al., 2000). For those readers who are less familiar with the intentions model, let us review the critical components of the entrepreneurial intentions model, i.e. how TPB has evolved in the context of entrepreneurship. According to the model, entrepreneurial intentions depend on personally perceived desirability and feasibility. Desirability in turn is influenced by social norms, although social norms have not always been shown to have a significant impact (Krueger et al., 2000). Hence, as shown in Fig. 2 and in accordance with Ajzen's TPB, perceptions of desirability and feasibility explain (and predict) intentions significantly. Intentions toward pursuing an opportunity are best predicted by three critical perceptions: that the entrepreneurial activity is (a) perceived as personally desirable, (b) perceived as supported by social norms, and (c) perceived as feasible. Feasibility is affected by perceived self-efficacy, also termed perceived behavioral control in the literature.
In the Ajzen-Fishbein framework, personal attitude depends on perceptions of the consequences of outcomes from performing the target behavior: their likelihood as well as magnitude, negative consequences as well as positive consequences, and especially intrinsic rewards as well as extrinsic (in short, an expectancy framework). It is also argued that these perceptions are learned. Social norms represent perhaps the most interesting component of the Ajzen-Fishbein framework. This measure is a function of the perceived normative beliefs of significant others (e.g., family, friends, co-workers, etc.) weighted by the individual's motivation to comply with each normative belief. Measuring social norms does require identifying the appropriate reference groups. The reference group for a potential entrepreneur need not be family and friends; rather, it is the perceived beliefs of their colleagues (including those who have already started a venture), and it likely entails multiple stakeholders.

[Figure 2 depicts perceived social norm and perceived self-efficacy influencing perceived desirability and perceived feasibility, respectively, which in turn lead to entrepreneurial intentions.]

Figure 2. The Entrepreneurial Intentions Model (adapted from Shapero, 1982; Krueger, 1993; Krueger & Brazeal, 1994; Krueger et al., 2000).

Self-efficacy (or perceived behavioral control) is our sense of competence, the belief that we can do something specific (Bandura, 1997, 2001), and self-efficacy is a strong driver of goal-oriented behavior (Baum and Locke, 2004; Bandura, 1997, 2001). The concept reflects an individual's innermost thoughts on whether they have the abilities perceived as important to task performance, as well as the self-confidence that they will be able to effectively convert those skills into a chosen outcome (Bandura, 1989, 1997). Self-efficacy is related to one's choice of activities, one's tenacity, and one's emotional reactions when failing (Bandura, 1997, 2001). Thus, self-efficacy is concerned with one's judgment of what one can do with whatever skills one possesses, not with the actual skills one has (Chen et al., 1998; Markman et al., 2002) – which at times may seem like a mindset of positive illusions (Taylor and Gollwitzer, 1995). Hence taking action requires consideration of not just outcome expectancies (i.e., desirability) but also perceived self-efficacy (i.e., feasibility).
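The Ajzen-Fishbein components just described follow a standard expectancy-value formulation: attitude is a sum of belief strengths weighted by outcome evaluations, and the subjective norm is a sum of normative beliefs weighted by motivation to comply. The sketch below is purely illustrative; all item names and scores are hypothetical survey-style values, not data from the studies cited above.

```python
# Expectancy-value sketch of the Ajzen-Fishbein components (illustrative only).

def expectancy_value(pairs):
    """Sum of (belief strength x evaluation) pairs."""
    return sum(b * e for b, e in pairs)

# Attitude: perceived likelihood of each outcome (0..1) x its valuation (-3..+3).
attitude = expectancy_value([
    (0.8,  3),   # "venture gives independence": likely, valued highly
    (0.5, -2),   # "venture risks my savings":   possible, valued negatively
])

# Subjective norm: normative belief of each referent x motivation to comply.
subjective_norm = expectancy_value([
    (2, 0.9),    # family approves, strong motive to comply
    (-1, 0.3),   # a colleague disapproves, weak motive to comply
])

print(attitude, subjective_norm)
```

On these made-up numbers, the attitude score is about 1.4 and the subjective-norm score about 1.5; the point is only that each component aggregates beliefs weighted by evaluations, as the text describes.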
Knowledge is Action – Action as Informed Intent

Drawing on insights from prior scholars, specifically Polanyi (1967), who claimed that all knowledge was tacit and that we may know more than we are able to explicate, and Maturana and Varela (1987), for whom all knowing is doing and all doing is knowing, Zeleny (1989a,b, 1996) defined knowledge as purposeful coordination of action and showed that information was merely a symbolic description of action. The one who knew was also able to act. A truly knowledge-based organization was one that could coordinate
its actions purposefully, or in such a way that it excelled over its competitors. Therefore, an organization claiming to be engaged in knowledge management should pay attention to its actions. Only those who were able to act were possessors of knowledge; others were merely loaded with information. While Zeleny presented his conceptualization of knowledge in the late 1980s, the business community appears to have jumped on the knowledge buzz a few years later, often inspired by Nonaka's article in Harvard Business Review titled "The knowledge creating company" (Nonaka, 1991). Nonaka presented an interesting framework in which knowledge, through a spiral-like effect, transcends from tacit to explicit and back to tacit. Later the model was expanded to fit different ontological levels, where Ba was conceptualized as an inter-organizational knowledge-creation space (Nonaka et al., 2000; Nonaka, 2003; Brännback, 2003). Implicitly this spiral carries the notion of action, but the model is on a level of abstraction that, for many practicing managers trying to solve daily problems of action, remained an intellectual exercise rather than real guidance. Contrary to Nonaka's framework, Zeleny's conceptualization of knowledge is straightforward and speaks the language of the everyday manager. To be blunt: whatever knowledge-based activities are undertaken in a firm, unless they result in purposeful coordination of action, the activity is redundant; it is no action, it is doing nothing. In a recent article and a forthcoming one, Zeleny (2006, 2007) develops the argument that strategy is what a company does and what the company does is its strategy; all else is talk. Strategic management thus becomes knowledge management, and strategy is the blueprint of purposeful coordination of action.
Zeleny argues that the reason strategies do not get enacted is a real knowledge-doing gap in organizations: strategies (including a firm's mission statement and visions) are formulated above the cloud line, detached from the real actors. While knowledge is consensually social, i.e. embedded in a social context, it is also deeply connected to language, verbal or non-verbal. One could even argue that knowledge is language. Thus, while many organizations spend endless effort developing new strategies or restating their visions and rewriting their mission statements, even in the most correct language, there is no guarantee that the language is understood by other actors in the organization. Strategy, mission statement and visions thus remain symbolic descriptions of intentions that, however well intended, run a very real risk of not getting enacted. Strategy, mission statement, and vision remain insights, good intentions, and brilliant epigrams that never become achievements (Drucker, 2001). The knowledge process requires effective communication. There is a striking resemblance between knowledge as purposeful coordination of action and getting from strategy to action on the one hand, and entrepreneurial intentionality on the other. In both cases we wrestle with the challenge of getting from intentions to real action. Thus intentions that fail to be acted upon remain symbolic descriptions of action, i.e. information. Intentions that do get enacted can be defined as informed intent, or knowledge.
Discussion: Informed Intent as Purposeful Coordination of Action

The strategy process has been, and in many organizations still is, a top-down process. Many will argue that it should remain so, as letting ordinary persons craft strategies is just sheer madness. The notion of strategy as a top-down process is perhaps a valid description in
[Figure 3 contrasts two causal directions between the macro (collective) level and the micro (individual) level: (a) supervenience, running upward from individuals to the collective, and (b) downward causation, running from the collective down to individuals.]

Figure 3. Mereology and Causal Directionality between the parts and the whole (Felin and Hesterly, 2007, p. 200).
a large organization, as only those at the top have a grasp of the big picture. Strategy is thus seen as a collectivist concept. However, in small firms or start-ups it may well be necessary to reverse the causation. Strategy and entrepreneurial activity are supervenient (Felin and Hesterly, 2007). Supervenience is a philosophical concept (Kim, 1993) not often found outside the discipline of philosophy. It belongs to an important philosophy-of-science dimension termed mereology, which deals with issues of causal directionality and the part-whole relationship (Felin and Hesterly, 2007). In a recent article, Felin and Hesterly (2007) show the relevance of mereology and supervenience for value and knowledge creation. Drawing on Simon's argument that all organizational learning takes place inside human heads (Simon, 1991, p. 125), Felin and Hesterly (2007) suggest a supervenient approach to knowledge and value creation (Fig. 3). This corresponds with the claims in the introduction that innovators innovate and entrepreneurs create ventures (Krueger, 2000). Applying the rationale depicted in Fig. 3 to the contexts of entrepreneurial intent and firm strategies, we obtain the framework shown in Fig. 4a on the left. Figure 4b on the right represents the traditional conceptualization of knowledge, value creation and strategy processes in organizations: a collective locus of knowledge, which has been the primary focus in the literature. It is this rationale that Zeleny (forthcoming) criticizes, arguing that it is the source of the knowledge-doing gap. We propose the reversed approach, supervenience, and argue that it depicts the true nature of the entrepreneurial process, which moves from individual intent through purposeful coordination of action to the creation of a collective – that we call a venture. This venture may remain an individual entity, as in the case of self-employment entrepreneurship, but in most cases it comes to include a few more individuals.
We argue that the entrepreneurial process thus is a primary example of supervenience. In reality this process often starts with a personally perceived desirability and feasibility that forms the basis of an intention to act. At this point the individual may choose not to act. But, given the choice is to act, most nascent entrepreneurs of today become involved in the process of formulating a business plan. It is still possible to create a venture without any formal planning and indeed it is done. However, if external funding, for example, is required, a business plan becomes mandatory. Today, not even banks provide financial means without first having seen a business plan.
[Figure 4 applies the rationale of Fig. 3 to entrepreneurship: (a) supervenience, in which individual entrepreneurial intent rises through purposeful coordination of action to the collective level; (b) downward causation, in which collective strategy/intention is expected to translate into individual action/knowledge.]

Figure 4. Supervenience, entrepreneurial intent and strategy.
However, not even a good business plan will ensure action. A business plan is merely a symbolic description of a future activity that seeks to justify and show the feasibility of acting on an idea. It is not a guarantee of success, but it is commonly agreed that it increases the probability of success. A business plan will also be a statement of the venture's strategy and its mission statement, and it should include an operational plan (Carsrud and Brännback, 2007). A business plan becomes informed intent when it is enacted – realized as purposeful coordination of action. From the perspective of philosophy of science, supervenience provides a rationale for thinking about how to get from intentions to action. We also argue that the same rationale should be applied within firms for carrying out their strategy process, thus firmly anchoring the strategy, mission statement and visions with the individual actors in the organization. This is an approach to informed intent, or purposeful coordination of action, as a function of individual intent. It is an extension of the insight that attitudes cause behavior. While it can be argued that this is an idealistic view, it can equally well be argued that it is no more idealistic than the view assuming that individual action will follow collective symbolic descriptions.
References

[1] Ajzen, I. (1991) Theory of Planned Behavior: Some Unresolved Issues, Organizational Behavior and Human Decision Processes, 50: 179–211.
[2] Ajzen, I., Driver, B.L. (1992) Application of the Theory of Planned Behavior to Leisure Choice, Journal of Leisure Research, 24: 207–224.
[3] Ajzen, I., Fishbein, M. (1980) Understanding Attitudes and Predicting Social Behavior, Prentice-Hall: Englewood Cliffs, NJ.
[4] Bagozzi, R.P. (1992) Self-Regulation of Attitudes, Intentions, and Behaviour, Social Psychology Quarterly, 55(2): 178–204.
[5] Bagozzi, R.P. (2000a) On the Concept of Intentional Social Action in Consumer Behaviour. Journal of Consumer Research, 27: 388–396.
[6] Bagozzi, R.P. (2000b) The poverty of economic explanations of consumption and an action theory alternative. Managerial and Decision Economics, 21: 95–109.
[7] Bagozzi, R.P., Warshaw, P.R. (1990) Trying to consume. Journal of Consumer Research, 17: 127–140.
[8] Bagozzi, R.P., Dholakia, U., Basuroy, S. (2003) How Effortful Decisions Get Enacted: The Motivating Role of Decision Processes, Desires, and Anticipated Emotions. Journal of Behavioral Decision Making, 16: 273–295.
[9] Bandura, A. (2001) Social Cognitive Theory: An Agentic Perspective, Annual Review of Psychology, 52: 1–26.
[10] Bandura, A. (1997) Self-efficacy: The Exercise of Control. Freeman: New York.
[11] Bandura, A. (1989) Human Agency in Social Cognitive Theory, American Psychologist, 44(9): 1175–1184.
[12] Bandura, A. (1982) Self-efficacy Mechanism in Human Agency, American Psychologist, 37(2): 122–147.
[13] Bird, B. (1989) Entrepreneurial Behavior. Scott Foresman & Company: Glenview, IL.
[14] Brunstein, J.C., Gollwitzer, P.M. (1996) Effects of failure on subsequent performance: the importance of self-defining goals. Journal of Personality and Social Psychology, 70(2): 395–407.
[15] Brännback, M. (2003) R&D Collaboration: Role of Ba in Knowledge-Creating Networks, Knowledge Management Research & Practice, 1(1): 28–38.
[16] Carsrud, A.L., Brännback, M. (2007) Entrepreneurship. Greenwood Press: Westport, CT, in press.
[17] Chen, C.C., Greene, P.G., Crick, A. (1998) Does Entrepreneurial Self-efficacy Distinguish Entrepreneurs from Managers? Journal of Business Venturing, 13(4): 295–316.
[18] Davidsson, P. (1991) Continued Entrepreneurship, Journal of Business Venturing, 6(6): 405–429.
[19] Davis, F.D., Bagozzi, R.P., Warshaw, P.R. (1989) User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Management Science, 35: 982–1003.
[20] Dennett, D.C. (1989) The Intentional Stance. MIT Press: Cambridge, MA.
[21] Drucker, P.F. (2001) The Essential Drucker. HarperCollins: New York.
[22] Elster, J. (1989) Nuts and Bolts for the Social Sciences. Cambridge University Press: Cambridge.
[23] Fazio, R.H., Williams, C.J. (1986) Attitude accessibility as a moderator of the attitude-perception and attitude-behavior relations: an investigation of the 1984 presidential election. Journal of Personality and Social Psychology, 51: 504–514.
[24] Felin, T., Hesterly, W.S. (2007) The Knowledge-Based View, Nested Heterogeneity, and New Value Creation: Philosophical Considerations on the Locus of Knowledge. Academy of Management Review, 32(1): 195–218.
[25] Fishbein, M., Ajzen, I. (1975) Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research. Addison-Wesley: Reading, MA.
[26] Gollwitzer, P.M., Brandstätter, V. (1997) Implementation intentions and effective goal pursuit. Journal of Personality and Social Psychology, 73(1): 186–199.
[27] Gollwitzer, P.M., Schaal, B. (1998) Metacognition in action: the importance of implementation intentions, Personality and Social Psychology Review, 2(2): 124–136.
[28] Ghoshal, S. (2005) Bad management theories are destroying good management practices. Academy of Management Learning & Education, 4(1): 75–91.
[29] Kelman, H.C. (1974) Attitudes Are Alive and Well and Gainfully Employed in the Sphere of Action. American Psychologist, 29: 310–324.
[30] Kim, J. (1993) Supervenience and Mind. Cambridge University Press: Cambridge.
[31] Kim, M., Hunter, J. (1993) Relationships among attitudes, intentions, and behaviour, Communication Research, 20: 331–364.
[32] Krueger, N. (1993) The impact of prior entrepreneurial exposure on perceptions of new venture feasibility and desirability. Entrepreneurship Theory & Practice, 18(1): 5–21.
[33] Krueger, N. (2000) The cognitive infrastructure of opportunity emergence, Entrepreneurship Theory & Practice, 24(3): 5–23.
[34] Krueger, N., Brazeal, D. (1994) Entrepreneurial potential and potential entrepreneurs, Entrepreneurship Theory & Practice, 18(3): 91–104.
[35] Krueger, N., Reilly, M., Carsrud, A. (2000) Competing models of entrepreneurial intentions. Journal of Business Venturing, 15(5/6): 411–432.
[36] Levitt, B., March, J. (1988) Organizational Learning. Annual Review of Sociology, 14: 319–340.
[37] Liska, A.E. (1984) A critical examination of the causal structure of the Fishbein/Ajzen attitude-behavior model. Social Psychology Quarterly, 47: 61–74.
[38] Malle, B.F., Knobe, J. (1997) The Folk Concept of Intentionality, Journal of Experimental Social Psychology, 33(2): 101–121.
[39] Markman, G., Balkin, D., Baron, R. (2002) Inventors and new venture formation: The effects of general self-efficacy and regretful thinking. Entrepreneurship Theory and Practice, 27(2): 149–165.
[40] Maturana, H.R., Varela, F.J. (1987) The Tree of Knowledge. Shambhala Publications: Boston, MA.
[41] McBroom, W.H., Reed, F.W. (1992) Toward a Reconceptualization of Attitude-Behavior Consistency. Social Psychology Quarterly, 55(2): 205–216.
[42] Nonaka, I. (1991) The Knowledge Creating Company, Harvard Business Review, 69: 96–104.
[43] Nonaka, I. (2003) The knowledge-creating theory revisited: knowledge creation as a synthesizing process, Knowledge Management Research & Practice, 1(1): 2–10.
[44] Nonaka, I., Toyama, R., Konno, N. (2000) SECI, Ba, and Leadership: a Unified Model of Dynamic Knowledge Creation, Long Range Planning, 33(1): 5–34.
[45] Pfeffer, J., Sutton, R.I. (2000) The Knowing-Doing Gap. Harvard Business School Press: Boston, MA.
[46] Polanyi, M. (1967) The Tacit Dimension. Routledge & Kegan Paul: London.
[47] Sawyer, R.K. (2001) Emergence in Sociology: Contemporary Philosophy of Mind and Some Implications for Sociological Theory. American Journal of Sociology, 107: 551–585.
[48] Sheeran, P., Webb, T.L., Gollwitzer, P.M. (2005) The interplay between goal intentions and implementation intentions. Personality and Social Psychology Bulletin, 31(1): 87–98.
[49] Shapero, A. (1982) Social dimensions of entrepreneurship. In C. Kent, D. Sexton, K. Vesper (Eds.) The Encyclopedia of Entrepreneurship. Prentice Hall: Englewood Cliffs, NJ, 72–90.
[50] Simon, H.A. (1991) Bounded Rationality and Organizational Learning, Organization Science, 2: 125–134.
[51] Taylor, S.E., Gollwitzer, P.M. (1995) Effects of mindset on positive illusions, Journal of Personality and Social Psychology, 69(2): 213–226.
[52] Zeleny, M. (1989a) Knowledge as a new form of capital, Part 1: Division and reintegration of knowledge, Human Systems Management, 8(1): 45–58.
[53] Zeleny, M. (1989b) Knowledge as a new form of capital, Part 2: Knowledge-based management systems, Human Systems Management, 8(2): 129–143.
[54] Zeleny, M. (1996) Knowledge as coordination of action, Human Systems Management, 15(4): 211–213.
[55] Zeleny, M. (2006) Knowledge-information autopoietic cycle: towards the wisdom systems. International Journal of Management and Decision Making, 7(1): 3–18.
[56] Zeleny, M. (2007) Strategy and strategic action in the global era: overcoming the knowing-doing gap. International Journal of Management and Decision Making, forthcoming.
Advances in Multiple Criteria Decision Making and Human Systems Management Y. Shi et al. (Eds.) IOS Press, 2007 © 2007 The authors. All rights reserved.
Competence Set Analysis and Effective Problem Solving

Po-Lung YU a and Yen-Chu CHEN b

a Distinguished Chair Professor, Institute of Information Management, National Chiao Tung University, 1001, Ta Hsueh Road, HsinChu City, Taiwan, [email protected]
b Institute of Information Management, National Chiao Tung University, 1001, Ta Hsueh Road, HsinChu City, Taiwan, [email protected]
Abstract. In this article we introduce the concepts of competence set analysis as an extension of habitual domain theory. Specifically, we discuss the cores of habitual domains; learning processes to build competence; classification of decision problems in terms of competence sets; decision quality, confidence, risk taking and ignorance; and effective decision making. Further decomposition of competence set analysis and many research topics are also provided.

Keywords. Competence set analysis, habitual domains, learning processes, classification of decision problems, routine problems, fuzzy problems, challenging problems, decision quality, confidence, risk taking, effective decision making, elements of competence set
1. Introduction

For each decision problem E, there is a perceived competence set CS*(E), a collection of ideas, knowledge, skills, efforts and resources for its effective solution. When the decision maker believes that he/she has already acquired and mastered CS*(E), he/she would feel comfortable and confident about making a decision. Otherwise, he/she would hesitate to make a decision, especially when it involves high stakes. The perception, acquisition and mastery of CS*(E) thus play important roles in determining how effectively a decision is made and executed. These topics and their applications will be the focus of study in this article. In particular, we shall discuss the core of habitual domains in Section 2; learning processes to build competence in Section 3; classes of decision problems in Section 4; decision quality, confidence, risk taking and ignorance in Section 5; and effective decision making in Section 6; in Section 7, further decomposition of competence set analysis and research topics will be provided. In order to facilitate our discussion, let us consider the following example.

Example 1: Archimedes

Archimedes, the great scientist, was summoned by the King of Syracuse to verify whether his new crown was made of pure gold. Of course, in the verification process, the beautiful
crown should not be damaged. The problem was a great challenge and created a very high level of charge in Archimedes. The scientist's curiosity was aroused and his reputation was at stake. The burning desire to solve the problem kept Archimedes awake day and night. One day, when Archimedes was in his bathtub watching the water fill up and overflow, a solution suddenly struck him. He rushed out of the bathtub shouting "eureka" ("I have found it" in Greek), and in his excitement he even forgot to put on his clothes. His discovery, the well-known displacement principle, states that the volume of the displaced water is equal to the volume of the immersed body. Thus the crown, when immersed in water, displaces its own volume. By comparing the weight of the crown with the weight of pure gold of the same volume, one can verify whether the crown is made of pure gold. What a relief to Archimedes!
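The verification described in the story reduces to a density comparison: weigh the crown, measure the volume of water it displaces, and compare the resulting density with that of pure gold (roughly 19.3 g/cm³). A minimal sketch; the measurements and the tolerance are hypothetical, not from the text:

```python
# Density check in the spirit of Archimedes' test (numbers are hypothetical).
GOLD_DENSITY = 19.3  # g/cm^3, approximate density of pure gold

def is_pure_gold(mass_g, displaced_volume_cm3, tolerance=0.05):
    """Compare the crown's density with pure gold, within a relative tolerance."""
    density = mass_g / displaced_volume_cm3
    return abs(density - GOLD_DENSITY) / GOLD_DENSITY <= tolerance

# A 1000 g crown of pure gold displaces about 51.8 cm^3 of water;
# an alloyed crown of the same weight displaces noticeably more.
print(is_pure_gold(1000, 51.8))   # density close to gold's
print(is_pure_gold(1000, 60.0))   # too much volume for its weight: alloyed
```

The tolerance parameter stands in for measurement error; in practice the King's goldsmith was undone by exactly this kind of comparison.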
Effective Decision Making: Challenging problems are solved with good ideas that flash into our existing domain.
2. Habitual Domains and Cores

Each person has his/her own set of ideas and operators, consisting of his/her memory and habitual ways of thinking, judging, responding and handling problems. It has been shown and mathematically proved that this set gradually stabilizes within a certain domain. This set of ideas and operators, together with its formation, interaction and dynamics, is called his/her habitual domain (HD). Wherever one goes, one's HD goes along, just like the turtle's shell always follows the turtle. One's HD is reflected in one's personality, attitude and way of living, and has a great impact on one's behavior, decision making and problem solving. Refer to [18–21] for details.
There are four basic elements in an HD:

(1) The potential domain (PD): the collection of all ideas and operators that can potentially be activated in one's mind.
(2) The actual domain (AD): the collection of ideas and operators that are actually activated at a particular time and place.
(3) The activation probability (AP): the probability that the ideas and operators in the potential domain are actually activated.
(4) The reachable domain (RD): the collection of ideas and operators that can be generated from a given initial actual domain.

Concepts and ideas are represented by circuit patterns in our brain. The circuit patterns can be activated depending on our charge structures, attention allocation and the attended events. Through association and analogy within our HDs, given that an event has our attention, some ideas and concepts can be activated and some cannot. For instance, talking about your boyfriend or girlfriend may trigger the activation of his/her name, image and some special memories about him or her; it is much less likely to activate the concepts of George Washington or your grandfather. Talking about an upcoming job interview may immediately activate the concepts of being neat, knowledgeable and a good listener; you would be far less likely to activate the concept of icebergs or rooster fights. Given an event or a decision problem E which catches our attention at time t, the probability or propensity for an idea I to be activated is denoted by Pt(I,E). As with a conditional probability, 0 ≤ Pt(I,E) ≤ 1; Pt(I,E) = 0 if I is unrelated to E or I is not an element of PDt (the potential domain at time t); and Pt(I,E) = 1 if I is automatically activated in the thinking process whenever E is presented.
Note that in Example 1, with E as verifying the crown's content and I as the concept of the displacement principle, Archimedes' Pt(I,E) was 0 before taking the bath and quickly became 1 after I was discovered, because of the high level of charge of the problem. Empirically, as with probability functions, Pt(I,E) may be estimated by its relative frequency. For instance, if I is activated 7 out of 10 times whenever E is presented, then Pt(I,E) may be estimated at 0.7. Probability theory and statistics can then be used to estimate Pt(I,E). Note that the larger the number of observations, the more accurate the estimate will be. Let us define the α-core of HD at time t, denoted by Ct(α,E), to be the collection of ideas or concepts that can be activated with a propensity larger than or equal to α. That is, Ct(α,E) = {I | Pt(I,E) ≥ α}. The α-core is depicted in Fig. 1 for illustration. Note that, in the abstract, we can regard a package of computer hardware and software as a special HD. For this special kind of computer HD, an idea is either activated or remains silent. Thus the corresponding Pt(I,E) is either 0 or 1 (a step function), and Ct(α1,E) = Ct(α2,E) whenever α1 > 0 and α2 > 0. Thus, no matter how much we lower the value of α, the α-core remains the same as long as α > 0.
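The relative-frequency estimate of Pt(I,E) and the α-core definition Ct(α,E) = {I | Pt(I,E) ≥ α} translate directly into code. A small sketch; the idea names and activation counts are hypothetical, echoing the job-interview illustration above:

```python
# Estimating activation propensities P_t(I, E) by relative frequency,
# then forming the alpha-core C_t(alpha, E) = {I : P_t(I, E) >= alpha}.

def estimate_propensities(activation_counts, presentations):
    """activation_counts maps each idea to how often it was activated
    over the given number of presentations of E."""
    return {idea: n / presentations for idea, n in activation_counts.items()}

def alpha_core(propensities, alpha):
    return {idea for idea, p in propensities.items() if p >= alpha}

# Hypothetical record: E (a job interview) was presented 10 times.
counts = {"neat appearance": 10, "good listener": 7, "icebergs": 0}
p = estimate_propensities(counts, 10)
print(p["good listener"])             # 0.7, as in the text's example
print(sorted(alpha_core(p, 0.5)))     # ideas with propensity >= 0.5
```

Lowering α enlarges the core (or leaves it unchanged), which matches the nesting property implicit in the definition: Ct(α1,E) ⊆ Ct(α2,E) whenever α1 ≥ α2.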
Figure 1. α-Core of HD.
By the core of HD for E (with α absent), denoted by Ct(E), we mean the collection of ideas or concepts that would almost surely be activated when E is presented. In other words, it is the α-core with α→1. Sometimes, for convenience and to avoid confusion, the core of HD may simply mean the α-core with a high value of α. Thus if I is an element of the core of HD for E, then Pt(I,E) is large (close to the limit of 1) for most times t when E is present. Note that when one has worked for a long time on a certain problem E, one develops a core of HD for E. Thus a baseball player can sense and run very quickly to catch a fly ball because of his/her core of HD. Such a core of HD may be called intuition; thus, management intuition makes sense. Indeed, we all have a large number of instincts. Now recall that CS*t(E) is the perceived competence set for solving E, where the subscript t is used to emphasize its dynamics. Suppose that Ct(α,E) ⊃ CS*t(E) with a large value of α (that is, α close to its upper limit 1). In this case the decision maker would feel comfortable with the problem and could solve it with a high degree of efficiency, because he/she has acquired and almost mastered CS*(E). If the above inclusion holds with α = 1, then the decision maker has the needed spontaneity to solve the problem. Note that when CS*t(E) \ Ct(α,E) ≠ ∅, the decision maker may need further learning or training to acquire and master the new ideas in order to achieve a certain degree of proficiency or confidence in solving problem E. The relative size of CS*(E) \ Ct(α,E) with respect to CS*(E) can be a measure of how much more needs to be learned or trained. It may also be a relative measure of the subjective proficiency or confidence in making the decision for E.
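The comparison at the end of this section, how much of the perceived competence set CS*t(E) lies outside the α-core, can serve as a rough proficiency measure. A sketch under the simplifying assumption that both sets are finite collections of named ideas (the names are hypothetical):

```python
# Relative gap between the perceived competence set CS*(E) and the
# alpha-core C_t(alpha, E): |CS* \ C| / |CS*|. A smaller gap suggests
# higher subjective proficiency/confidence in solving E.

def competence_gap(perceived_cs, core):
    missing = perceived_cs - core          # ideas still to be learned/mastered
    return len(missing) / len(perceived_cs)

cs_star = {"domain knowledge", "negotiation", "financial modeling", "contacts"}
core = {"domain knowledge", "negotiation"}

gap = competence_gap(cs_star, core)
print(gap)   # half the needed competence is not yet readily activated
```

When the gap is 0, CS*(E) ⊆ Ct(α,E) and the decision maker can act with confidence; when it is 1, none of the perceived competence is readily activated and substantial training is needed.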
3. Learning Processes for Building Competence

In this section we discuss how competence sets are acquired and mastered. This is a learning process which includes implanting, nurturing and habituating.

3.1. Implanting

Given a decision problem E, suppose that from an expert's point of view an idea or skill I ∈ CS*(E). Now suppose that Pt(I,E) = 0; that is, the decision maker does not associate I with E. Two cases are possible: (i) the decision maker has the circuit pattern of I in his/her potential domain, that is, I ∈ PDt; or (ii) the decision maker has not learned I, that is, I ∉ PDt. The purpose of implanting is to make a positive association between I and E, that is, Pt'(I,E) > 0 for t' > t. This can be achieved through teaching, suggestion and/or training. Note that to be effective, we must understand the decision maker's HDt and make a strong connection between I and HDt and/or between I and E. To make a connection one may try an indirect approach by presenting a sequence of information which connects to I. For instance, in a job interview setting, we may ask the decision maker, "Would you hire someone for $100,000 if he/she could make you $1,000,000?" and "Would you hire someone for just $30,000 if he/she could only make you $10,000?" The answers to these questions may facilitate a good connection for implanting the idea that in interviewing, you should emphasize what contributions you can make to the company, not how much you can get paid by the company. Note that in Example 1, Archimedes got the indirect connection between I and E by watching the water overflowing from the bathtub. Without a good connection, the idea I may be rejected right away. With a strong connection, however, the idea can be accepted more easily. Information which can increase and release our charges will usually catch our attention. For more details, refer to [18,20].
Once the idea I is accepted, we still need to make an effort to ensure that I is sufficiently rehearsed and practiced so that it acquires a strong circuit pattern representation. Otherwise, I may be stored in a remote area and be difficult to retrieve, which could prevent us from reaching Pt'(I,E) > 0 for all t' > t.

3.2. Nurturing

Once the idea I is implanted, Pt(I,E) can be positive, yet still low. In order for I to have an impact on the decision maker, Pt(I,E) needs to be high enough. To achieve this goal, we need to nurture the idea through training, practice and rehearsal. Like the seedlings of a tree, newly implanted ideas will wither and disappear without nurturing. According to the association and analogy hypothesis, the existing HDt, or memory, has great influence on the nurturing process. Like trees planted in soil, new ideas are planted in HDt. To be lasting and influential, the new ideas must be connected and integrated with the existing HDt. The process of making connections and integrating with HDt is an essential part of nurturing, which can be achieved by self-discipline, training and practice.
To help this nurturing process, some environmental control and support systems are needed so that attention is not too distracted by other events. For instance, in the example of buying a house, once the idea of resale value has been implanted, the potential buyer may nurture this idea by asking about the resale value of the available houses he/she visits. After a number of visits, the idea of resale value may be strongly ingrained in his/her HDt. Similarly, in the job interview example, once the idea of focusing on your potential contributions to the company is implanted, and we repeatedly discipline ourselves to use the idea over a number of interviews, we will find that the idea is more easily activated whenever we talk about job interviews. Finally, we note that experiencing and self-suggestion, in addition to information inputs, are two important ways to strengthen our circuit patterns of new ideas. Our mind may not distinguish the sources, but both physical experience and mental exercise (or suggestion) are important in the nurturing process. Thinking without doing may not push the ideas down to the sensory and motor sections of the brain, so the ideas may be less concrete. On the other hand, experiencing without thinking may not integrate the ideas extensively with the knowledge encoded in the existing HDt. Thus the ideas may not be as strong as they could be; they may even be rejected occasionally by part of the existing HDt.

3.3. Habituating

Through repeated practice and nurturing, a new idea I can gradually become an element of the core of HDt on the decision problem E. Thus the propensity of activation of I is very high; that is, Pt(I,E) → 1. Whenever our attention is paid to E, I will almost surely be activated. When we reach this stage for I, we say that I is a habituating element of HDt on E.
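The progression from implanting through nurturing to habituating can be caricatured by a simple update rule (an illustrative model of our own, not from the text): each rehearsal moves the propensity Pt(I,E) a fixed fraction of the way toward 1.

```python
def rehearse(p, rate=0.3):
    """One rehearsal or practice session nudges the activation
    propensity Pt(I, E) a fraction `rate` of the way toward 1."""
    return p + rate * (1.0 - p)

def habituate(p, threshold=0.99, max_reps=1000):
    """Repeat rehearsal until I is almost surely activated when E is
    presented, i.e. Pt(I, E) -> 1 (a habituating element of HDt on E)."""
    reps = 0
    while p < threshold and reps < max_reps:
        p = rehearse(p)
        reps += 1
    return p, reps

# A freshly implanted idea (low propensity) becomes habituated after a
# modest number of rehearsals under this toy dynamic.
p_final, n_reps = habituate(0.2)
```

The geometric form of the update is an assumption chosen only to exhibit the qualitative behavior: early rehearsals raise the propensity quickly, later ones refine it toward 1.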
In Example 1, E (verifying the crown) created a very high level of charge, but once I (the displacement principle) was discovered, it was strongly imprinted and became a strong circuit pattern, a core element of HD. Thus I was quickly habituated in Archimedes' mind. Note that habituating elements have a strong influence on our decisions and behavior, consciously or subconsciously. Their influence may be insidious and so strong that we cannot escape their reach. One occasionally needs to detach oneself from E and those habituating elements, escaping from HDt, in order to develop creative and innovative ideas. Finally, we note that the learning process of implanting, nurturing and habituating is applicable not only to self-learning, but also to making suggestions to other people and/or training others to acquire the competence set CS*(E).
4. Competence Sets and Classes of Decision Problems

Depending on the perceived availability of the competence set CS*t(E) and the core or α-core of HD on E (Ct(E) or Ct(α,E)), we can classify decision problems into four categories: routine problems, mixed routine problems, fuzzy problems and challenging problems. We discuss these in four subsections and suggest how the competence sets for solving the various problems can be acquired.
To facilitate our discussion, recall Example 1. For Archimedes, taking a bath was a routine or mixed routine problem; before the discovery of I (the displacement principle), E (verifying the crown) was a challenging problem; after the discovery of I, E became a fuzzy problem. The same holds for the problem of reporting the result to the king. Note that the perception of CS*t(E) may be unique to each individual. Let CS**t(E) be the collection of all individuals' CS*t(E) (including all experts'). Note that by definition CS**t(E) ⊃ CS*t(E). For convenience, we shall call CS**t(E) the collective competence set, while CS*t(E) shall be called the individual (decision maker's) competence set. Note that what is unknown to the individual but possibly known to some people is given by CS**t(E) \ CS*t(E).

4.1. Routine Problems

These are familiar problems for which satisfactory solutions are readily known and routinely used. More precisely, E is a routine problem to the individual decision maker if CS*t(E) is well known to him/her and Ct(E) ⊃ CS*t(E), or Ct(α,E) ⊃ CS*t(E) with α→1. The first condition means that the competence set is well known; the second means that the decision maker has mastered the set. Take learning to eat a steak or to drive a car as examples. For most adults, these are routine problems, but for babies they are definitely not. Likewise, buying groceries may be a routine problem for many people, yet for people who do not have enough money, it is not a routine problem at all. Note that there may be problems in which CS*t(E) is well known but the decision maker has not yet mastered it. That is, Ct(α,E) ⊃ CS*t(E) does not hold for some large value of α. We shall call these potentially routine problems because, through training and practice, they can become routine problems.
For instance, picture yourself trying to buy groceries in Japan or some other country with which you are not familiar. You know you could find a grocery shop and that you could buy what you want, but you do not yet have the proficiency. Under these circumstances, because you lack familiarity with the situation, grocery shopping is not a routine problem. Likewise, you may know how to change your car's oil or assemble a simple gadget, yet because you have not done it frequently enough, you may not do the job proficiently enough for it to count as routine. From a societal viewpoint, we say a problem E is a collective routine problem if CS**(E) is well known and can be acquired and mastered by people through training, teaching and practice. On auto assembly lines, workers are trained to perform a fixed sequence of jobs which can be classified as collective routines. Similarly, typing, simple machine operations and simple accounting procedures are also collective routine problems. Note that collective routine problems may not be individual routine problems. They become individual routine problems after the individual has acquired CS**t (i.e. when CS*t ⊃ CS**t) and mastered it (i.e. once Ct(E) ⊃ CS**t). Finally, we note that when Ct(α,E) ⊃ CS*(E) with α→1, or Ct(E) ⊃ CS*(E), the decision maker can respond to the problem instantaneously and has the spontaneity to solve it. This spontaneity is achieved through training, hard work and practice. Once spontaneity is reached, making a decision becomes easy. It
will not cause much stress or charge on our system. Fortunately, many of our daily problems are routine.

4.2. Mixed Routine Problems

A decision problem E is a mixed routine problem if it consists of a number of routine subproblems. Buying groceries, playing basketball, preparing simple accounting reports, and cooking are all mixed routine problems. Although the decision maker can solve each subproblem proficiently and readily, he/she may not solve the entire problem effectively and readily. Because a number of routine subproblems need to be solved, the decision maker must decide how to allocate his/her time and which of the subproblems should be addressed first so that the entire problem can be solved effectively. Training, teaching and practice are again very important ways to acquire and master the competence set needed to solve the entire problem efficiently and effectively. When the decision maker reaches such a state of proficiency that he/she can solve the entire problem readily and efficiently, the entire problem can be regarded as a routine problem. Consider driving a car, eating a steak, and buying groceries: have they not gradually become routine problems from originally mixed routine problems?

4.3. Fuzzy Problems

A decision problem E is a fuzzy problem if its competence set CS*t(E) is only fuzzily known. That is, the ideas, concepts and skills needed to successfully solve E are roughly, but not clearly, known. This implies that the decision maker has not yet mastered the skills and concepts necessary for solving these problems. For instance, in purchasing a house or participating in a job interview, unless you have had a number of similar experiences, you may find that you are not sure which competence set can guarantee a successful solution. You may be aware of a set of ideas, concepts and skills which are commonly but fuzzily known to be good for solving the problems.
This implies that you have not yet mastered the ideas, concepts and skills required to solve the problem. That is, CS*t(E) is not contained in your core or α-core of HD on E for a high value of α. In Fig. 2, we depict the case in which CS*t(E) is only fuzzily known. Note that even though CS*t(E) is only fuzzily known, its ideas, concepts and skills are elements of the potential domain, PDt, even if they may not be elements of the α-core, Ct(α,E). As CS*t(E) may not be contained in the α-core for a high value of α (see Fig. 3), in order to recognize and acquire the competence set CS*t, one should occasionally relax and lower the value of α. If possible, the decision maker should try to detach himself/herself from the problem so as to expand his/her HD. In Fig. 3, we see that if we lower the value of α sufficiently, we can capture most of CS*t(E).
Figure 2. CS*t(E) Is Fuzzily Known.

Figure 3. α-Core and Competence Set.
In general, for fuzzy decision problems, the α-core with a large value of α is usually not adequate for solving the problem. In fact, rigid and inflexible HDs (i.e. an α-core that is almost fixed even when we lower the value of α) may prove detrimental to solving fuzzy problems. For instance, in the house purchasing or job interview examples, if we are novices and are not willing to be flexible and open-minded, we most likely could not acquire the competence set CS*t(E). To expand our HD, we could benefit from open-minded discussions with some experts, or from relaxing a little in order to have time to think over the problems. Please refer to [19,21] for methods and principles to expand HD. Once the competence set is gradually defined and clarified, we can again use practice, rehearsal and training to obtain the degree of proficiency needed to solve our problems. When we repeat the process enough times, fuzzy problems may gradually become routine problems.
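The effect of relaxing (lowering α) described above can be sketched numerically; the propensity values below are illustrative assumptions only:

```python
def coverage(perceived_cs, propensity, alpha):
    """Fraction of CS*t(E) captured by the alpha-core Ct(alpha, E)."""
    core = {idea for idea, p in propensity.items() if p >= alpha}
    return len(perceived_cs & core) / len(perceived_cs)

# Hypothetical propensities for a fuzzy problem (e.g. a first house purchase)
P = {"negotiation": 0.8, "market_data": 0.5, "legal_check": 0.2}
CS_star = set(P)

# Lowering alpha monotonically enlarges the alpha-core, so more of the
# fuzzily known competence set is captured, as Fig. 3 suggests.
assert coverage(CS_star, P, 0.9) <= coverage(CS_star, P, 0.5) <= coverage(CS_star, P, 0.1)
```

With these numbers, an α of 0.9 captures none of CS*, α = 0.5 captures two thirds, and α = 0.1 captures all of it, mirroring the claim that sufficiently lowering α lets us capture most of CS*t(E).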
Figure 4. Competence Set of Challenging Problems.
From a societal viewpoint, a decision problem is fuzzy if the collective competence set CS**t is only fuzzily known. Many problems, such as national education policy, trade policy, labor policy and defense policy, belong to this class. Other problems, such as corporate strategic planning, decision making, human resource management, career management and conflict prevention and resolution, are also fuzzy problems. The above acquiring and mastering processes for individual problems apply to collective problems as well. However, since many collective problems (such as those listed above) have been dealt with and managed over thousands of years of human history without clearly known competence sets, we may expect such problems to remain fuzzy for a long time to come. In fact, fuzziness may be needed to maintain the flexibility of a policy. Many people survive on, or even take advantage of, the fuzziness.

4.4. Challenging Problems

A decision problem E is a challenging problem if its competence set CS*t(E) is unknown or only partially known to our existing HD. This implies that CS*t(E) contains some elements outside the existing potential domain, which in turn implies that CS*t(E) cannot be contained in any α-core Ct(α,E), no matter how small α is. Figure 4 depicts such a relationship. Innovative research and development problems which challenge existing technical assumptions (for instance, designing an airplane which could fly around the earth in two hours), market restructuring problems, complex conflict resolution problems and the management of traumatic disasters are some examples of challenging problems. Similarly, the following problems are also challenging. In buying groceries, how would you pay if you did not have any money or credit? In purchasing a house, how would you proceed without any money for a down payment?
In a job interview, how would you get the desired job if you apparently lacked qualifications compared to a number of other candidates? Challenging problems can be solved only by expanding and restructuring our HD. A fixed mind (a fixed HD) usually becomes a major resisting force against solving challenging problems. Refer to [18–21] for methods and principles of expanding our HD.
Table 1. A Classification of Decision Problems
Problem Type               Characteristics
1. Routine Problems        (i)  CS*t is well known
                           (ii) Ct(α,E) ⊃ CS*t
2. Fuzzy Problems          (i)  CS*t is fuzzily known
                           (ii) CS*t\Ct(α,E) ≠ ∅ and PDt ⊃ CS*t
3. Challenging Problems    (i)  CS*t is unknown or partially known
                           (ii) CS*t\PDt ≠ ∅ and CS*t\Ct(α,E) ≠ ∅
Through learning and hard work, CS*t(E) may gradually be recognized and acquired. Then, through practice, training and rehearsal, the decision maker may master CS*t(E), and the problem E becomes a routine problem to him/her. From a societal viewpoint, a collectively challenging problem is one where CS**t(E) is unknown or only partially known to the existing collective HD, which implies that some of CS**t(E) is not readily available. For instance, the problem of designing an airplane which could fly around the earth in two hours is such a problem. Collectively challenging problems can be very difficult to solve. But, as human history has demonstrated, they are not impossible. A number of breakthroughs have occurred when we left our existing domains. Steam engines for trains and boats, jet propulsion for airplanes, nuclear power plants, missiles, computers and lasers are just some of the products of innovations that solved challenging problems. Finally, we note that what is a challenging problem to one person may be a fuzzy or routine problem to someone else. For instance, if you get sick and want to find out what is wrong with you, the problem may be a challenging one unless you are a physician. For some doctors, the problem can be either routine or fuzzy. Likewise, a mechanical problem with an automobile may be routine or fuzzy to a professional mechanic, yet it can be a challenging problem to a large number of physicians. Through learning and experience, each one of us acquires and masters a variety of competence sets for routinely and effectively solving a variety of problems. These competence sets become our intangible assets and/or niches for survival and success in a world of complex decision problems. One cannot master all competence sets, yet more is better than less. The value of the competence sets depends on how unique they are and how well they relieve other people's charges.
Before we close this section, let us distinguish between routine, fuzzy and challenging problems in Table 1.
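The rules of Table 1 can be encoded as a small predicate function. This is a sketch under our own encoding: since how well CS*t(E) is known is itself subjective, we pass that knowledge in as a flag rather than compute it.

```python
def classify(cs_knowledge, cs, alpha_core, potential_domain):
    """Classify a decision problem E following Table 1.
    cs_knowledge: "well" | "fuzzy" | "partial" -- how well CS*t(E) is known.
    cs, alpha_core, potential_domain: CS*t(E), Ct(alpha,E), PDt as sets."""
    if cs_knowledge == "well" and cs <= alpha_core:
        return "routine"
    if cs_knowledge == "fuzzy" and (cs - alpha_core) and cs <= potential_domain:
        return "fuzzy"
    if (cs - potential_domain) and (cs - alpha_core):
        return "challenging"
    # e.g. CS*t well known but not yet mastered: a potentially routine problem
    return "potentially routine"
```

For example, `classify("well", {"steer", "park"}, {"steer", "park"}, {"steer", "park"})` returns `"routine"`, while a set containing a skill outside the potential domain is classified as challenging.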
5. Confidence, Risk Taking and Ignorance

Why is it said that easy promises are usually difficult to keep? This is true when the promiser underestimates the difficulties of the problem and/or overestimates his/her ability to handle it, assuming there is no other motivational problem. This raises a series of questions on decision quality, confidence, risk taking and ignorance. We shall address these questions through the following three critical concepts related to decision problems:

(1) The set of acquired and/or mastered skills, concepts and knowledge for dealing with the decision problem E, denoted by Skt(E), will be called the acquired skill set on E. Note that Skt(E) is closely related to the α-core, Ct(α,E). Depending on how urgently the problem needs to be solved, a particular α, say αo, may be chosen. For instance, in hunting or boxing, because spontaneity is needed, αo may be chosen to be 1; in purchasing a house with no time constraints, αo may be chosen to be 0.2 or something close to 0. One can then set Skt(E) = Ct(αo,E).

(2) The perceived competence set for problem E is CS*t(E). As discussed before, this is a subjective perception of the skills, concepts and knowledge needed to solve problem E successfully.

(3) The true competence set for problem E is denoted by Tr(E). This is the set of skills, concepts and knowledge that are truly needed to solve problem E, including the ability to respond to all unknown parameters (in the house purchasing example, these included the ability to detect hidden structural problems, to handle house defects and to manage the transaction should a big earthquake occur and destroy the candidate house).

Note that CS*t(E) and Skt(E) are habitual domains. They can evolve with time, yet they can also stabilize over time. If necessary, they can also be expanded. The following are worth mentioning:

(1) The set of skills or ideas which are in Tr(E) but not in CS*(E) [i.e.
Tr(E)\CS*(E)] represents what is needed, yet unknown to the decision maker, to successfully solve E. The larger the set Tr(E)\CS*(E), the more ignorant the decision maker is of the problem E.

(2) If the acquired skill set Sk(E) contains the perceived competence set CS*(E), then the decision maker has full confidence in making a decision on E. Otherwise, the decision maker will hesitate or lack full confidence to take on E.

(3) The results of a decision and its implementation for nontrivial problems depend not only on the decision itself, but also on some unknown or uncertain factors. For instance, in boxing or hunting, the results will, to a large extent, depend on your opponents. In purchasing a house, your satisfaction depends, to a large extent, on which houses are available on the market. Thus, a successful decision does not necessarily imply that the decision maker has full competence on E (i.e. it does not mean Sk(E) ⊃ Tr(E)). Due to the unknown factors, Sk(E) may be adequate in some situations to successfully solve E, but when the unknown factors change, Sk(E) may no longer be adequate and the results of the decision may be undesirable. For instance, in the above boxing or hunting examples, if the opponents are replaced by higher-caliber substitutes, winning may become doubtful.
Due to the analogy and association hypothesis [18], repetitive winning or success can inflate our confidence so quickly that we may overlook the importance of unknown factors. For instance, consider Napoleon, a great soldier in human history, who led the French army through a number of military successes and established a French empire. Undoubtedly, he was a brilliant military strategist. But his numerous military successes may have inflated his confidence so greatly that he regarded himself as invincible in all situations. He may, therefore, have underestimated the importance of unknowns and uncertainty, which may have led to his military venture into Russia and to his strong army being ambushed and destroyed by the icy cold weather (whose severity was not adequately known to Napoleon). Thus, we may say that Napoleon's eventual defeat was built upon his numerous successes: overconfidence blinded him from dealing with the unknowns successfully.

(4) Suppose that Tr(E) = CS*(E), so the decision maker fully understands the problem, but assume that Sk(E) is much smaller than CS*(E). Then the decision maker may not be able to handle the problem under all possible unknown or uncertain situations. Nevertheless, after careful deliberation of the possible consequences should uncontrollable, unknown or uncertain situations occur, the decision maker may decide to go ahead and make a decision on E even though his/her acquired skill set is not adequate. In this case, we say that the decision maker is taking a calculated risk.

(5) In general, we say a decision maker is risk taking if CS*(E) ⊃ Sk(E) and CS*(E) ≠ Sk(E). Here we notice that it is possible that Tr(E)\CS*(E) ≠ ∅. That is, the decision maker may be ignorant or partially ignorant about E in his/her risk taking. In the example of purchasing a house, and in a number of other nontrivial decision problems, we may simply have to take some risk.
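The relations among Sk(E), CS*(E) and Tr(E) in points (1)–(5) can be collected into a single diagnostic sketch; the set names follow the text, while the example sets are hypothetical:

```python
def assess(sk, cs_star, tr):
    """Diagnostics from Section 5, given the acquired skill set Sk(E),
    the perceived competence set CS*(E) and the true set Tr(E)."""
    return {
        "ignorance": tr - cs_star,         # (1) needed yet unknown to the DM
        "full_confidence": cs_star <= sk,  # (2) Sk contains CS*
        "risk_taking": sk < cs_star,       # (5) CS* strictly exceeds Sk
        "full_competence": tr <= sk,       # (3) Sk contains Tr
    }

# A novice house buyer: has acquired only budgeting, perceives that
# negotiation is also needed, while the true set additionally requires
# structural inspection.
report = assess({"budget"},
                {"budget", "negotiation"},
                {"budget", "negotiation", "inspection"})
```

Here the buyer is risk taking (Sk falls short of CS*) and partially ignorant (inspection lies in Tr(E)\CS*(E)), matching the discussion above.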
6. Effective Decision Making

From the previous sections we know the classification of decision problems, the processes of learning and decision making, and decision quality relative to confidence, risk taking and ignorance. Let us apply these concepts to our decision making. The following can enhance our effectiveness in solving decision problems:

(1) Identify the features of the problem, denoted by E. We need to look into the five decision elements (decision alternatives, decision criteria, decision outcomes, decision preferences and decision information inputs) and the four environmental facets (decisions as part of our behavior mechanism, stages in the decision processes, players involved in the decision making process, and the unknowns and uncertainty involved in the process) of the problem in order to specify its features [18]. This work is important if we wish to avoid blind spots, especially when the stakes are high. Careful canvassing of the decision elements and environmental facets and their interrelationships not only helps us understand the problem better, but also allows us to identify the vital solution and its effective implementation. In terms of HD, such canvassing forces us to look into the nine dimensions (five decision elements and four environmental facets) of the problem, which allows remote but relevant ideas, knowledge and information to be more easily activated to help solve the problem. This is especially important for fuzzy or challenging problems.
(2) When solving problems, we may see just a small part of the problem domain (including all parameters and their possible variations over time). The portion of the problem domain that we cannot see is our decision blind. Suppose our alerted domain (those parameters and their variations currently under our consideration) is fixed on only a small part of the problem domain. Then we could very likely end up with a serious mistake. This situation is known as a decision trap. A systematic scheme based on habitual domain theory can help us reduce decision blinds and avoid decision traps so that we can make decisions of good quality. Please refer to [26] for details.

(3) Expand the perceived competence set CS*t(E) and the skill set Skt(E) as effectively and as quickly as possible. Recall that if CS*t is smaller than Tr(E) we have ignorance, and if Skt is smaller than CS*t we will be short of confidence, or of comfort in taking risks. Consulting credible experts and professional books can be an effective means of expanding the related HDs. There are eight ways and nine principles for expanding our HDs which may be helpful; please see [19–21].

(4) Repeat and rehearse the learned skills, knowledge and information so that they become part of the core of our HD for solving the problem. Once Skt, containing CS*t, becomes part of the core, we will be full of confidence and have the instinct to solve the problem quickly and effectively (if CS*t contains Tr(E)). This development can make us more efficient, especially for repetitive and/or routine problems.

(5) Do not fail to consider the implementation problems, especially when the problem is of a dynamic nature involving a number of stages of decisions, a number of players and a number of uncertain factors or unknowns. Watch for and anticipate changes in the problem domains. The players' HDs, perceptions and attitudes can change with time and circumstance.
They can impose new constraints, situations and conditions on the solution. What is the optimal solution today may not be the optimal solution tomorrow. Keeping a high degree of alertness and vigilance is important, especially when there are antagonistic players.

(6) If the problem is repetitive and/or becoming routine, we may need to periodically revise or renew our way of solving it. As we become more efficient in solving the problem, clearly specified methods to release the charge created by the problem become readily available, and we may not spend enough time and effort seeking a better way to do the job. Unwittingly, our Skt and CS*t gradually become stable and rigid, and may not readily accept better methods, which can cost us competitive strength. After all, time continuously advances, so the problem domains, in terms of time and states, are never the same at different times, even though the differences may not be detectable without conscious effort. Maintaining the same methods or solution for the same problem for all time can be a serious mistake. To revise and renew our concepts of the problem and its solution is equivalent to expanding (revising and renewing) our HDs for the problem.
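Point (2) above can be made concrete with a toy measure of the decision blind; the parameter names are invented for illustration:

```python
def decision_blind(problem_domain, alerted_domain):
    """The decision blind is the part of the problem domain lying outside
    the alerted domain; a large blind fraction signals a potential
    decision trap."""
    blind = problem_domain - alerted_domain
    return blind, len(blind) / len(problem_domain)

# Hypothetical: the DM attends to price and timing, but not to rivals
# or weather, which also belong to the problem domain.
blind, fraction = decision_blind({"price", "timing", "rivals", "weather"},
                                 {"price", "timing"})
```

Expanding the alerted domain (e.g. by the HD-expansion methods of [19–21]) shrinks `fraction` toward zero, which is the goal of the systematic scheme mentioned in point (2).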
7. Competence Set Analysis and Research Topics

The concept of the competence set pervades our daily life. In order to increase our competence, we go to school to study and receive diplomas or degrees when
[Figure 5 depicts the true competence set Tr(E), the perceived competence set Tr*(E), the DM's acquired skill set Sk(E) and the perceived acquired skill set Sk*(E); the gaps between them correspond to ignorance, uncertainty, illusion, confidence, risk taking and decision quality.]

Figure 5. The Interrelationships among Four Elements of Competence Set.
we graduate from schools with satisfactory performance. Diplomas or degrees symbolize that we possess a certain set of competences. In order to help people reduce uncertainty and unknowns, or to verify that certain professionals or organizations possess certain competence sets, various certificates are issued through examinations to certify qualifications. Hundreds of billions of dollars are spent annually on acquiring and verifying competence sets. Given a problem, different people might perceive the needed competence set differently. Indeed, a competence set for a problem is an HD (habitual domain) projected onto that particular problem. Note that competence sets are dynamic and can change with time t. Where no confusion arises, in the following discussion we shall drop the subscript. In order to understand CS more precisely, we shall distinguish the "perceived" and "real" CS, and the "perceived" and "real" skill set Sk. Thus, there are four basic elements of the competence set for a given problem E:

(i) The true competence set, Tr(E): the ideas, knowledge, skills, information and resources that are truly needed for solving problem E successfully;

(ii) The perceived competence set, Tr*(E): the true competence set as perceived by the decision maker (DM);

(iii) The DM's acquired skill set, Sk(E): the ideas, knowledge, skills, information and resources that have actually been acquired by the DM;

(iv) The perceived acquired skill set, Sk*(E): the acquired skill set as perceived by the DM.

Note that, for clarity, we replace CS*(E) of the previous sections by Tr*(E). The interrelationships among the above four elements are shown in Fig. 5. Note that these four elements are special subsets of the HD of a decision problem E. Where no confusion arises, we shall drop E in the following general discussion. According to the different relations among the four elements, we have the following observations:
Figure 6. Two Domains of Competence Set Analysis.
(i) The gaps between the true sets (Tr or Sk) and the perceived sets (Tr* or Sk*) are due to ignorance, uncertainty and illusion;

(ii) If Tr* is much larger than Sk* (i.e. Tr* ⊃⊃ Sk*), the DM will feel uncomfortable and lack confidence in making good decisions; conversely, if Sk* is much larger than Tr* (i.e. Sk* ⊃⊃ Tr*), the DM will be fully confident in making decisions;

(iii) If Sk is much larger than Sk* (i.e. Sk ⊃⊃ Sk*), the DM underestimates his/her own competence; conversely, if Sk* is much larger than Sk (i.e. Sk* ⊃⊃ Sk), the DM overestimates his/her own competence;

(iv) If Tr is much larger than Tr* (i.e. Tr ⊃⊃ Tr*), the DM underestimates the difficulty of the problem; conversely, if Tr* is much larger than Tr (i.e. Tr* ⊃⊃ Tr), the DM overestimates the difficulty of the problem;

(v) If Tr is much larger than Sk (i.e. Tr ⊃⊃ Sk), and the decision is based on Sk, then the decision can be expected to be of low quality; conversely, if Sk is much larger than Tr (i.e. Sk ⊃⊃ Tr), then the decision can be expected to be of high quality;

(vi) Three further concepts of competence need to be clarified. First, the core competence is the collection of ideas or skills that will almost surely be activated when problem E is presented. To be powerful, the core competence should be flexible and adaptable. Next, the ideal competence set (similar to the ideal HD, see [19,21]) is one from which a suitable subset can be retrieved instantly to solve each arriving problem successfully. Finally, a competence is competitive if it is adequately flexible and adaptable, and can be easily integrated or disintegrated as needed to solve arriving problems faster and more effectively than the competitors can.

Research Problems of Competence Set Analysis

Competence set analysis has two inherent domains: the competence domain and the problem domain, as depicted in Fig. 6.
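Observations (i)–(v) compare the four sets pairwise. A rough sketch follows, in which "much larger" (⊃⊃) is arbitrarily operationalized as containment with at least twice the cardinality; both that threshold and the example sets are our own assumptions:

```python
def much_larger(a, b, factor=2):
    """A crude stand-in for A much larger than B: B is a subset of A
    and |A| >= factor * |B|."""
    return b <= a and len(a) >= factor * max(len(b), 1)

def diagnose(tr, tr_star, sk, sk_star):
    """Apply observations (ii)-(v) to the four competence-set elements."""
    notes = []
    if much_larger(tr_star, sk_star): notes.append("lacks confidence")
    if much_larger(sk_star, tr_star): notes.append("fully confident")
    if much_larger(sk, sk_star):      notes.append("underestimates own competence")
    if much_larger(sk_star, sk):      notes.append("overestimates own competence")
    if much_larger(tr, tr_star):      notes.append("underestimates the problem")
    if much_larger(tr_star, tr):      notes.append("overestimates the problem")
    if much_larger(tr, sk):           notes.append("low decision quality expected")
    return notes

# Hypothetical DM who perceives only half of the true competence set and
# has acquired even less of it.
notes = diagnose(tr={"a", "b", "c", "d"}, tr_star={"a", "b"},
                 sk={"a"}, sk_star={"a"})
```

For this DM the diagnostics flag a lack of confidence, an underestimated problem and an expected low decision quality, which is the pattern behind "easy promises are usually difficult to keep."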
In the following subsections, we discuss some research problems of competence set analysis.

7.1. Given a Problem or Set of Problems, What Is the Needed Competence Set? And How to Acquire It?

For example, how to produce and deliver a quality product or service to satisfy customers' needs is a central problem of supply chain management. To solve this problem successfully, each participant in a supply chain, including suppliers, manufacturers, distributors and retailers, must provide the chain with its own set of competences, so that the collective competence can effectively achieve the goal of satisfying customers' needs. We may illustrate this concept by the following figure.
Figure 7. Competence of a Supply Chain. (The figure depicts customers' needs being met by the competence of the supply chain as a whole, composed of the supplier's, manufacturer's, distributor's, and retailer's competences.)
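The idea behind Figure 7 — that the chain's collective competence is the union of its members' competence sets, and must cover the customers' needs — can be sketched as follows (a hypothetical illustration; the member roles and skill names are invented):

```python
# Toy model of Figure 7: the supply chain's collective competence is the
# union of its members' competence sets; any uncovered customer need is
# a gap the chain must still acquire.

members = {
    "supplier":     {"raw materials", "quality control"},
    "manufacturer": {"assembly", "quality control", "testing"},
    "distributor":  {"warehousing", "logistics"},
    "retailer":     {"sales", "after-sales service"},
}

customers_needs = {"assembly", "logistics", "sales", "after-sales service"}

chain_competence = set().union(*members.values())
gap = customers_needs - chain_competence   # competence still to acquire
print("needs covered:", not gap)
```

If `gap` is non-empty, the chain must expand its collective competence set — the expansion problem discussed next.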
For further discussion on supply chain operations, please refer to [1]. How can the existent competence set be expanded to the needed competence set in the most effective and efficient way? Under suitable assumptions, this can be formulated and solved by graph theory, spanning trees, spanning tables and mathematical programming [5,7,10,11,13]. Earlier research focused only on the deterministic situation; however, this assumption could be removed to include uncertainty, fuzziness, and unknowns. In recent studies, heuristic methods, such as genetic algorithms and multi-objective evolutionary algorithms (MOEA), as well as data mining technology, have also been incorporated into the analysis of competence set expansion [6,8,12].

7.2. Given a Competence Set, How to Locate a Set of Problems to Solve so as to Maximize the Usage and Value of the Competence for Individuals or Organizations?

In subsection 7.1, we focused on analyzing the needed competence for a given problem. Conversely, given a competence set, what is the best set of problems that can be
solved by the competence set so as to maximize the value? If someone has already acquired a particular competence set, which problems should he/she focus on solving so as to maximize its value? Failing to do so leads to lost opportunities and business. A notorious example occurred at Xerox Corporation. Xerox was the leader in computer user-interface technologies such as pull-down menus and the mouse, but it did not effectively exploit this competence for its true value (which was instead realized by Apple and Microsoft). The concept of Innovation Dynamics [9] can be introduced here. It describes the dynamics of how to solve a set of problems with our existent or acquired competence (to relieve the pains or frustrations of certain customers or decision makers in certain situations) so as to create value, and how to distribute the created value so that we can continuously expand our competence set to solve more challenging problems and create more value. Let us take an example from supply chains. With its vast transportation and distribution competence, Federal Express plays an important role in the supply chain of Dell, which runs the business of selling computers directly to customers. Federal Express is also an active participant in other supply chains, so as to maximize the value of and return on its competence. Once a firm is aware of having a certain competence set, it may want to use it effectively to maximize its returns. With a certain competence set, a firm may run many businesses. Which set of businesses can be best served by the competence set? Given a competence set, what is the set of products or services it can produce so that the usage of the competence set is maximized under the capacity limitation? This can be viewed as a problem of selecting the best portfolio of products under a capacity constraint.

7.3. How to Enrich and Expand the Existent Competence Set?
And How to Effectively Manage the Competence Set so that It Can Produce Maximum Value Over Time?

In order to survive and prosper, the competence set must be rich, flexible, liquid, and adaptable to changes. Without continuous enrichment and expansion of its competence set, an individual or organization will become rigid, stagnate, decline and eventually die. For each individual or organization, there are two ways to enrich or expand the existent competence set: internal and external expansion. Internal expansion means acquiring new skills, knowledge, etc. through self-learning, such as employees' on-the-job training. The three tool boxes for expanding HD [19,21] are especially useful here. External expansion means attaining the enrichment of the competence set by hiring new personnel, outsourcing, or forming a strategic alliance with external entities. A strategic alliance is defined as a long-term cooperative arrangement between two or more independent firms that engage in business activities for mutual economic gain [14]. Forming an alliance enables a firm to focus on its core skills, resources, and competence, while acquiring the capabilities it lacks from other members. By bringing together firms with different skills, knowledge, and competence, alliances create unique learning opportunities for expanding the habitual domains and competence sets of the participating firms. Please see [2–4] for related applications. An example of external expansion of a competence set is the TV program "Super Voice Girl" in China. It is an annual national singing contest for female contestants, organized by Hunan Satellite Television since 2004. In 2005, a new strategy was introduced to the program by the sponsor, China Mengniu Dairy Company Limited, and since then "Super Voice Girl" has become one of the most watched TV entertainment shows in mainland China, with tens of millions of viewers. The program successfully integrated diverse competence sets from different parties: the contestants, the audiences, the TV station, the telecom companies, and the sponsor. By combining competence sets to form a strategic alliance with external parties, "Super Voice Girl" created enormous value [2]. Forming alliances may be studied by second-order game theory [15–18,20]. More formal mathematical analysis and applications are needed in this important area.

Figure 8. Collective Competence Sets when a Merger Results in Cooperation or Conflict. (The figure shows two firms' competence sets A and B merging into a collective set C.)

In addition to enriching and expanding existent competence sets, we also want to manage competence sets effectively to produce maximum value. Statistically, seventy percent of mergers are not successful. Also note that collaboration does not always provide the opportunity to internalize a partner's skills. A "psychological barrier" may exist between partners, stemming from the fear that one may out-learn or de-skill the other. The above discussion may be illustrated by Fig. 8. Symbolically, let A and B be the competence sets of two firms. CSconflict(A ∪ B) is the collective competence set when a merger results in conflict, which is usually smaller than A ∪ B; while CScooperation(A ∪ B) is the collective competence set when a merger results in cooperation, which can be larger than A ∪ B. How can a merger create some C = CScooperation(A ∪ B), a larger competence set through cooperation, and avoid CSconflict(A ∪ B), a smaller competence set resulting from conflict and infighting?
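As a toy rendering of Fig. 8 (our own sketch, not from the chapter; all competence names are invented), cooperation can be modeled as the union A ∪ B enlarged by synergy, and conflict as the union eroded by infighting:

```python
# Toy model of Fig. 8: CScooperation(A ∪ B) can exceed the plain union
# because cooperation creates new joint capabilities, while
# CSconflict(A ∪ B) falls short of the union because infighting
# destroys part of the pooled competence.

A = {"design", "branding", "retail network"}
B = {"manufacturing", "logistics", "retail network"}

def cs_cooperation(a, b, synergy):
    # cooperation keeps everything and may create new competence
    return a | b | synergy

def cs_conflict(a, b, lost):
    # conflict erodes part of the pooled competence
    return (a | b) - lost

coop = cs_cooperation(A, B, synergy={"integrated supply chain"})
conflict = cs_conflict(A, B, lost={"branding", "logistics"})
print(coop > (A | B))      # cooperation: larger than the union
print(conflict < (A | B))  # conflict: smaller than the union
```

The research problems below then ask under which agreements the merged set lands on the cooperative side.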
Therefore, the following problems need further study:
(i) Under what circumstances can maximal value be created when two or more firms or individuals form a strategic alliance by pooling their competence and efforts to pursue a set of agreed-upon goals?
(ii) How can firms be put together via different agreements so as to create "synergy effects" for all members in the alliances of competence sets?
7.4. How to Create a Path for the Competence Set to Grow so as to Maximize the Value of Existence of Individuals, Firms, or Supply Chains?

Life span has a limit. As time ticks away, our remaining lifetime diminishes. Therefore, how to make our lives most fulfilling is a main issue of career management [18,20]. Related to career management are the following questions:
(i) What are our ideal career paths from now to the deadline?
(ii) Time is a scarce resource. How do we allocate it to reach our ideal career path?
(iii) If we regard "competence" as personal capital, what is the optimal path, with respect to time, for investing this personal capital?
In other words, we want to acquire a sequence of competence sets over time, CS(1), CS(2), …, CS(t), …, by which we may create maximal added value and reach our career goals, with minimal cost if possible. From these questions, we could apply the idea of competence set transformation to human resource management [25]. A firm's life span may also have a limit. In fact, fully one-third of the companies listed in the 1970 Fortune 500 had disappeared just 13 years later, due to mergers, acquisitions, or being broken apart. But just because most companies do not live very long does not mean they cannot. To be a long-lived and continuously prosperous company, a company should be sensitive to its environment and know how to develop its competence sets to adapt to changes, or even control the changes. The following questions still need further study:
(i) What is the firm's ideal path of competence development?
(ii) How do we allocate resources to reach the ideal development path?
(iii) What is the optimal path, with respect to time, for investment by acquiring new competence?

7.5. How Can Firms Release Customers' Charge by Providing Products or Services in Terms of Their Competence Sets?
To be competitive, a firm or supply chain must be able to deliver the right quality products or services, which satisfy the target customers' needs or desires, or release their charge, pain or frustration, ahead of its competitors. In the abstract, each product or service has capabilities or attributes that release customers' pain or frustration, or satisfy their needs; these may be regarded as a composition of competence sets. Using this concept, a firm must have the insight to deploy or reorganize its competence sets so that it can deliver a quality product or service to its customers at a better price, ahead of its competitors. As customers' desires and needs can be motivated by external forces, advertisement becomes important. For example, customers' needs for personal computers (PCs) have progressed from desktops and laptops to personal digital assistants (PDAs). A successful PC manufacturer must have this insight so as to design and produce new products to satisfy customers' needs. The following two kinds of problems need further study:
(i) How to develop the insight (a competence) to design and produce a new product or service (a composition of attributes or a competence set) to satisfy customers' newly emergent needs;
(ii) Given a product or service, how to motivate customers so that they are willing to buy it.
8. Conclusion

For each decision problem, there is a competence set in the decision maker's mind. If the decision maker already owns the competence set, he/she will be confident and comfortable in solving the problem; otherwise, he/she may tend to avoid the problem, or try to expand and acquire the needed competence set so that the problem can be solved effectively. In this article, we first discussed the core of habitual domains, and how competence sets are acquired and mastered. We then introduced the classes of decision problems and how we can acquire the competence sets for solving the various problems and making decisions effectively. Finally, we presented a number of research problems and directions of competence set analysis for future studies.
References
[1] Chopra, S. and Meindl, P., Supply Chain Management: Strategy, Planning and Operation, Prentice-Hall, New Jersey, 2001.
[2] Chen, Y.C. and Yu, P.L., Value Creation by Using Habitual Domain Theory: A Case Study on "Super Voice Girl", Proceedings of the 13th Conference on Habitual Domains, Taipei, Taiwan, pp. 73–82, 2006.
[3] Chiang-Lin, C.Y. and Yu, P.L., Value Creation by Corporate Alliance in Resource Allocation, Market Distribution and Manufacturing Process – Mathematical Approaches, accepted by Journal of Management, 2006.
[4] Cho, C.S. and Yu, P.L., A Study on Self Production or Outsourcing for the Needed Part, Proceedings of the 13th Conference on Habitual Domains, Taipei, Taiwan, pp. 16–34, 2006.
[5] Feng, J.W. and Yu, P.L., Minimum Spanning Table and Optimal Expansion of Competence Set, Journal of Optimization Theory and Applications, 99, pp. 655–679, 1998.
[6] Hu, Y.C., Chen, R.S., Tzeng, G.H. and Chiu, Y.J., Acquisition of Compound Skills and Learning Costs for Expanding Competence Sets, Computers and Mathematics with Applications, Vol. 46, No. 5–6, pp. 831–848, 2003.
[7] Huang, G.T., Wang, H.F. and Yu, P.L., Exploring Multiple Optimal Solutions of Competence Set Expansion Using Minimum Spanning Table Method, Proceedings of the 11th Conference on Habitual Domains, HsinChu, Taiwan, pp. 163–175, 2004.
[8] Huang, J.J., Ong, C.S. and Tzeng, G.H., Optimal Fuzzy Multi-Criteria Expansion of Competence Sets Using Multi-Objective Evolutionary Algorithms, Expert Systems with Applications, Vol. 30, Issue 4, pp. 739–745, 2006.
[9] Lai, T.C. and Yu, P.L., Knowledge Management, Habitual Domains, and Innovation Dynamics, Lecture Notes in Computer Science, pp. 11–21, 2004.
[10] Li, H.L. and Yu, P.L., Optimal Competence Set Expansion Using Deduction Graphs, Journal of Optimization Theory and Applications, 80, pp. 75–91, 1994.
[11] Li, J.M., Chiang, C.I. and Yu, P.L., Optimal Multiple State Expansion of Competence Set, European Journal of Operational Research, 120, pp. 511–524, 2000.
[12] Opricovic, S. and Tzeng, G.H., Multicriteria Expansion of a Competence Set Using Genetic Algorithm, in Tanino, T., Tanaka, T. and Inuiguchi, M. (eds.), Multi-Objective Programming and Goal Programming: Theory and Applications, Springer, pp. 221–226, 2003.
[13] Shi, D.S. and Yu, P.L., Optimal Expansion and Design of Competence Set with Asymmetric Acquiring Costs, Journal of Optimization Theory and Applications, 88, pp. 643–658, 1996.
[14] Tsang, W.K., A Preliminary Typology of Learning in International Strategic Alliances, Journal of World Business, 34, pp. 211–229, 1999.
[15] Yu, P.L., Second-Order Game Problem: Decision Dynamics in Gaming Phenomena, Journal of Optimization Theory and Applications, 27, No. 1, pp. 147–166, January 1979.
[16] Yu, P.L., Introduction to Decision Dynamics, Second Order Games and Habitual Domains, in M. Zeleny (ed.), MCDM: Past Decade and Future Trends, A Source Book of Multiple Criteria Decision Making, JAI Press, Greenwich, Connecticut, pp. 26–49, 1984.
[17] Yu, P.L., Second Order Games and Habitual Domain Analysis, in X.J.R. Avula, G. Leitmann, C.D. Mote Jr. and E.Y. Rodin (eds.), Mathematical Modeling in Science and Technology, Pergamon Journals Limited, pp. 7–12, 1987. (A keynote lecture at the Fifth International Conference on Mathematical Modeling, July 1985, Berkeley, California.)
[18] Yu, P.L., Forming Winning Strategies: An Integrated Theory of Habitual Domains, Springer-Verlag, Berlin, Heidelberg, 1990.
[19] Yu, P.L., Habitual Domains: Freeing Yourself from the Limits on Your Life, Highwater Editions, Kansas City, Kansas, 1995.
[20] Yu, P.L., Application of Habitual Domains – Becoming a Great Winner (in Chinese), Hong Educational and Cultural Foundation, Taipei, Taiwan, 1998.
[21] Yu, P.L., Habitual Domains and Forming Winning Strategies, NCTU Press, Hsin Chu, Taiwan, 2002.
[22] Yu, P.L., HD: Habitual Domains, Human Software, beyond IQ and EQ, that Determines Our Life (in Chinese), China Times Publishing Company, Taipei, Taiwan, 1998.
[23] Yu, P.L. and Chiang, C.I., Competence Set Analysis – An Effective Means to Solve Non-trivial Decision Problems, in Multiple Criteria Decision Making in the New Millennium, Springer-Verlag, pp. 142–151, 2001.
[24] Yu, P.L. and Chiang, C.I., Decision Making, Habitual Domains and Information Technology, International Journal of Information Technology and Decision Making, Vol. 1, pp. 5–26, 2002.
[25] Yu, P.L., Chiang-Lin, C.Y. and Chen, J.S., Transforming from a Researcher into a Leader in High-Tech Industries, International Journal of Information Technology & Decision Making, Vol. 3, No. 3, pp. 379–393, 2004.
[26] Yu, P.L. and Chiang-Lin, C.Y., Decision Traps and Competence Dynamics in Changeable Spaces, International Journal of Information Technology and Decision Making, Vol. 5, No. 1, pp. 5–18, 2006.
Part 3 Information, Knowledge and Wisdom Management
Information and Knowledge Strategies: Towards a Regional Education Hub and Highly Intelligent Nation

Thow Yick LIANG
Singapore Management University, Lee Kong Chian School of Business, 50 Stamford Road, Singapore 178899, Republic of Singapore
E-mail: [email protected]

Abstract. Singapore is a small nation with a very high population density. One of her key priorities for survival is to nurture a well-educated, continuously learning workforce that can function competitively in a knowledge-intensive economy and fast-changing environment. In this respect, effective information and knowledge strategies are highly significant. To achieve this vision, Singapore has mapped out and exploited several different IT/ICT strategic plans/policies and integrated them systematically into the national education system. It is recognized that the continuous and strategic exploitation of appropriate ICT in education is indispensable for continuous and effective knowledge structure updating at all levels (individual, organization, community) through a smoother learning dynamic. As world activities become more knowledge-focused, the ability to provide higher-level education to a larger proportion of the population is a critical factor for all organizations/nations to survive. In addition, it is equally crucial for everyone in the workforce to be supported by a comfortable lifelong learning environment. Concurrently, it must be noted that the education (learning and knowledge acquisition) dynamic of human beings is highly complex and nonlinear. This study on information and knowledge strategies reveals that ICT and education (encompassing e-learning, e-library and e-landscape), and intelligence and collective intelligence, are closely associated. Effective strategies must encompass an emergent component. The integration of human learning and ICT is synergetic in the current environment, and the output can be affected by the butterfly effect.

Keywords. National ICT and education policy, knowledge-intensive economy, regional knowledge hub, knowledge worker, e-learning, lifelong learning, e-library, intelligent nation, adaptive, nonlinearity, complexity
Introduction

Singapore: A Knowledge-Intensive and ICT-Savvy Nation

With an area of about 660 sq km and a population of around 4.2 million people (a resident population of about 3.5 million), Singapore has to be highly vibrant, well informed and adaptive to survive in the current fast-changing environment. In particular, effective information and knowledge strategies are vital in the new competition. For Singapore, the preparation started in the late 1970s. For various reasons, the government recognized that the strategic exploitation of IT (information technology) is a critical factor for economic, social and national development. Consequently, four strategic IT/ICT
(information and communications technology) plans/policies have been mapped out and implemented over the past twenty-five years. Concurrently, the national education system has also been allocated high priority. Thus, the information and knowledge perspective has always been recognized as a key strategic domain that shapes Singapore's competitiveness today. Singapore's current IT/ICT achievements are due to the insights and foresights embedded in the four strategically developed national policies. The later policies encompass a special focus on the requirements for competing effectively in the emerging global trend of a knowledge-intensive economy and the fast-changing environment (Cusumano and Markides, 2001; Dankbaar, 2003; Kelly, 1988; Liang, 1992, 1993, 2000; Porter, 1985; Porter and Miller, 1985; Senge et al., 1994; Zeleny, 1985, 1989, 2000, 2005). According to two recent international studies, Singapore is at the moment the third most infocomm-savvy nation in the world after Finland and the United States, and second in e-governance (4.2 million hits a month) after Canada. Over the same period, the fundamental aim in education has been re-directed at nurturing more knowledge workers rather than factory workers. Education at all levels of training and re-training, ranging from pre-school education to research and development in industry and post-doctoral programs in research institutions, has always been provided with abundant support. Integration with ICT to create a highly automated learning environment is one of the key strategic approaches. In general, Singapore is moving towards the objective of being a highly integrated and yet diversified education and knowledge hub, strongly supported by a focus on innovation and creativity, and world-class achievements. At the moment, the ICT objective in many higher institutions of learning is moving towards wireless local area networks, ultra wideband networks and grid computing.
The budget allocated to education in general has also increased substantially, from S$11 billion in 2005 to S$14 billion in 2006. A significant point that has to be handled with greater subtlety is the highly complex, nonlinear and fast-changing learning dynamic, encompassing knowledge structure creation, updating, sharing and even elimination.

The Study

This study analyzes the information and knowledge strategies that Singapore has adopted, focusing on the challenges encountered and the transformational processes of changing from a third-world manufacturing-intensive industry (in the 1960s) to a first-world knowledge and expertise-oriented economy (since the 1990s). The research is based on a literature review, an examination of historical records, interviews with authorities/people who participated in the mapping out and implementation of the various strategic plans, and some existing strategic models. In particular, three different aspects are examined in depth. The two perspectives that are highly responsible for shaping Singapore into a more intelligent and knowledge-based nation, and into a regional education hub, are investigated first.

a) The mindsets, foresights, objectives, strategies and expectations of the various IT/ICT/Infocomm policies: The focal point of this component is the changing strategies and emphases of the various plans, as the thinking of the government, the economy and the people in the IT/ICT/Infocomm sector gradually matured. The dissimilar impacts and contributions of the various plans are examined and compared. A key result to note in this investigation is the various paths by which information and knowledge have been creating niches for Singapore to achieve its current first-world economy status over the different periods.

b) The development, implementation and integration of IT/ICT/Infocomm into the education system and Singapore's emergence as a regional education hub: The objectives of the two master plans for IT/ICT/Infocomm in education, the new education strategies adopted, and the benefits derived from the development of the integrated plans are examined. This analysis also investigates how IT/ICT/Infocomm is supporting the overall needs associated with nurturing more knowledge workers, the continuous lifelong education of the population as a whole, and the emergence of Singapore as a strategic regional education hub. The strategic impacts of integrating ICT and education are also examined.

c) Singapore's emergence as an intelligent nation: Associated with the first two parts, the third component is directed at the path that leads to Singapore's emergence as an intelligent nation. The characteristics of such a human organization are best examined with respect to chaos and complexity theory. The increasingly nonlinear and complex environment and its co-relationship with the emergent strategy are investigated. The dynamic of a highly intelligent complex adaptive system (iCAS), in particular with respect to knowledge acquisition and learning (individual and organizational), is studied.
1. Criticality of ICT Strategies

1.1. Changing Mindsets and Objectives

In Singapore, the automation of information systems and the exploitation of IT, and subsequently ICT/infocomm, during the last two and a half decades have been implemented systematically. However, the details of the various national IT/ICT/Infocomm policies are rather dissimilar. First, the mindset, and hence the focus, of the four plans has differed. In addition, the scope of the plans, the strategies adopted, the technologies involved, and the expectations also deviate significantly from one plan to another. Consequently, the impacts and contributions of the four plans vary substantially in certain aspects. Across all the plans, however, information was made available more swiftly, which accelerated the nurturing of appropriate knowledge structures and the rate of decision-making. The CSCP (Civil Service Computerization Program) was an internal program for automating information systems within the civil service. At that point in time, the government of Singapore had little or no knowledge of IT. All the ministries, including the central bank (the Monetary Authority of Singapore), were not computerized. So its initial objective was to automate the civil service, and its success was to serve as a model for the private sector. However, the subsequent NITP (National IT Plan) and IT2000 were strategic plans on a nationwide basis, and in IT2000 the communications aspect was incorporated and exploited extensively. When Infocomm 21 was mapped out and implemented, Singapore's level of ICT maturity had been elevated significantly. Infocomm 21 was a flexible, emergent plan that evolved with the infocomm technological frontiers. The leadership role was transferred to the industrial community, while the government agencies acted more as facilitators.
The second half of Infocomm 21 (the Connected Singapore plan) was a "regional plan" that aimed at linking Singapore more closely with the other infocomm hubs in the Asia-Pacific region. In this respect, the mindsets, objectives, strategies and expectations have been modified, updated, improved upon, and rendered more focused and effective with each successive plan. Therefore, this study is highly significant to countries that would like to exploit or further utilize ICT as a key strategic instrument in social and economic development. A more detailed analysis of the shifts in the respective plans is captured below.

1.2. CSCP (1980–1985)

In May 1980, the Civil Service Computerization Group was formed to formulate a master plan to develop computerized information systems for the civil service in Singapore. The Civil Service Computerization Program (CSCP, 1981) was mapped out, and this program may be regarded as Singapore's first IT policy, although the focus was only on the civil service. Among the recommendations were the formation of the National Computer Board (NCB), and that all ministries were to embark upon their own computerization programs immediately using a phased approach. Originally, the CSCP was very much internally focused, emphasizing the automation of back-office systems. The systems computerized were mainly transaction processing systems aimed at saving costs and reducing manpower. However, after the first five years, the program was gradually expanded from internal productivity to encompass quality external customer services, such as e-Citizen (CSCP, 1980–1999; e-Government Action Plan, 2000–2003; e-Government Action Plan II, 2003–2006; iGov2010, 2006–2010). Currently, the different government bodies are highly proficient with e-government services supported by integrated service systems. Thus, Singaporeans are now provided with a one-stop service when they need to deal with any government agency.
In 1993, the CIO Top 100 award recognized the CSCP as the best in the world. Subsequently, in 1999, e-Citizen was cited by the US government as the most developed example of an integrated computer service in the world. Apparently, the CSCP has been continuously modified, updated and expanded even after subsequent IT/ICT plans were conceived and implemented.

1.3. NITP (1986–1991)

Recognizing the initial success of the CSCP within the civil service, the Singapore government decided to map out an IT plan for the entire economy. Thus, the National IT Plan was formulated in 1986 (Liang, 1993; NITP, 1986). The NITP was an extensive strategic IT plan at the national level encompassing two main objectives. The first objective was to exploit IT as a tool for increasing productivity in all economic sectors. The other objective focused on the IT sector as a means to elevate the standard of living of the society by creating new, innovative IT-related products and services. The main contributors to the NITP were the NCB, Singapore Telecom and several other industry participants. A seven-pronged approach was recommended, and a plan containing five strategic levels was formulated using IT frameworks that were current at that point in time (Benjamin et al., 1984; McFarlan and McKenny, 1983; Porter, 1985; Porter and Miller, 1985; Rockart, 1979; Rockart and Crescenzi, 1984). The idea of nurtur-
ing an information society was proposed. Structural changes were carried out, and the focus of economic revamping was very much on the tangible and physical domain. The primary aim was to evolve into an information society in which information networks and databanks became the heart and arteries of all aspects of economic and social activity in Singapore. Arising from the NITP, Singapore took a big step towards being an IT-savvy nation.

1.4. IT2000 Plan (1992–1999): Into Broadband Technology and E-Commerce

Subsequently, recognizing that communication technology was becoming a vital partner of IT and that the more developed nations were moving deeper into the knowledge-based structure, the NCB and Singapore Telecom, together with representatives from the different industries, again formed a committee to examine the further exploitation of ICT into the next millennium. The IT2000 plan (IT2000, 1992), covering specific application opportunities in eleven industrial sectors, was developed. A primary focus was to better integrate the fast-developing communication technology with IT. This plan put in place the basic structure of a nationwide ICT architecture in Singapore, and included a transformation in mindset on IT usage, such as the introduction of e-commerce. A first objective of IT2000 was to develop Singapore into an intelligent island and the first country in the world to have an advanced nationwide information and communication infrastructure, including a broadband network. This infrastructure was to facilitate the interconnection of computers in every home, office, school and factory. The action was supported by the belief that information was becoming an increasingly valuable resource, and that IT could change the way Singaporeans live by providing a better lifestyle and more leisure. With this more encompassing approach, Singapore hoped to become a world leader in ICT applications and systems.
Simultaneously, the government also strongly believed that Singapore had the potential to serve as a total business hub for IT/ICT MNCs (multi-national corporations). It went to the extent of funding some IT2000 projects that were not commercially viable but were strategic and highly beneficial to the country. Those that were commercially viable were open to participation by private organizations. The components/systems put in place by the government included the National Information Infrastructure, encompassing telecommunication networks, Common Network Services, technical standards, national IT applications, and the policy and legal framework. As indicated above, the new ICT application opportunities encompassed eleven sectors, namely: construction and real estate; education and training; financial services; government; healthcare; the IT industry; manufacturing; media, publishing and information services; retail, wholesale and distribution; tourist and leisure services; and transportation. The various local communities were linked together internally using networks such as the Community Telecommuting Network, and internationally to the other global hubs using the Singapore International Network. Eventually, the application systems implemented included services such as one-stop, non-stop government and business; teleshopping; cashless transactions; more leisure; easy commuting; telecommuting; better healthcare; and intelligent buildings. During the second half of IT2000, there was a sudden increase in e-commerce activities. Singapore intended to serve as a competitive hub for international e-commerce development and utilization. An e-commerce master plan was conceived and incorpo-
258
T.Y. Liang / Information and Knowledge Strategies
rated into the existing IT2000 plan. Consequently, an internationally linked e-commerce infrastructure was constructed to strengthen Singapore’s position as an e-commerce hub. To jump-start Singapore as an e-commerce hub, especially in business-to-business services, incentive schemes and other support programs were introduced to attract local as well as international organizations (digital infrastructure providers, online service developers, trading and distribution companies, retailers and wholesalers, and reservation systems service providers) to base their e-commerce activities in Singapore. The financial and logistics sectors played an important role in driving this thrust. 1.5. Infocomm 21 (2000–2003) / Connected Singapore (2003–2006): Global Information Hub The fourth ICT policy was Infocomm 21 (Infocomm 21, 2000) conceptualized in the year 2000. Its objective was to develop Singapore into a vibrant and dynamic global infocomm capital with a thriving and prosperous e-economy and a pervasive and infocomm-savvy e-society. Supported by two decades of IT/ICT strategic planning experience, this plan embedded an emergent characteristic. In this respect, Infocomm 21 had a more flexible strategic framework and implementation path as compared to the earlier master plans. It was supposed to evolve with the changing global economics, technological and social environment. Again, the industry participants led the implementation of this new policy and the government agencies only acted as the catalyst. Briefly, the Infocomm 21 focused on following three different frontiers: a) b) c)
From intelligent island to global infocomm capital Strategic thrusts of Infocomm 21 Critical success factors of Infocomm 21
1.5.1. (a) From Intelligent Island to Global Infocomm Capital

With the nationwide broadband infrastructure in place, and high-speed connectivity to twenty countries established, Singapore's next objective was to emerge as a premier global infocomm hub in the Asia-Pacific region. Numerous strategies were adopted, including developing a globally competitive telecommunications industry cluster, creating an interactive broadband multimedia industry cluster, spearheading the development of a wireless industry cluster, positioning Singapore as an intellectual property rights center, building new competitive capabilities and nurturing new local enterprises, and fostering strategic partnerships and alliances overseas. The intention of the strategy was to attract more global organizations into Singapore and to nurture a highly dynamic infocomm environment with activities encompassing research and development, venture capital, intellectual capital, education, and thought leadership.

1.5.2. (b) Strategic Thrusts of Infocomm 21

The six strategic thrusts adopted were, namely: Singapore as a premier infocomm hub; Singapore business online; Singapore government online; Singaporeans online; Singapore as an infocomm talent capital; and a conducive pro-business and pro-consumer environment. In particular, the Singapore government aimed to elevate its e-governance so as to achieve better connectivity with all her citizens. Today, the e-Citizen network is very well established, covering almost all aspects of daily activities in which Singaporeans need to interact with the government. More specifically, the areas e-Citizen covers include the arts and heritage, business, defence, education, employment, elections, family, health, housing, law, library, recreation, safety and security, sports, transport, and travel.

1.5.3. (c) Critical Success Factors of Infocomm 21

In Infocomm 21, the Singapore government recognized that certain factors were vital for the e-economy. The critical factors identified included speed-to-market, creativity and innovation, intellectual capital, technopreneurship, and access to venture capital and human capital. Although some of these factors were recognized earlier in IT2000 and in the NITP, other factors such as creativity and innovation, and technopreneurship, were given new priority. Concurrently, the importance of creativity and innovation was greatly emphasized in the education system, leading to tremendous changes being introduced into the school curriculum and the teaching pedagogy being updated. Similarly, technopreneurs were provided with greater financial support, publicity and recognition. An interesting point to note at this juncture is the shift from merely effective physical connection to the incorporation of talent (the mind).

1.6. iN2015 (2006–2015)

Currently, the government, through the IDA (Infocomm Development Authority), is mapping out the next infocomm strategic path, iN2015 (Intelligent Nation 2015), for Singapore over the next ten years. It is an open-ended infocomm master plan that will accommodate the new emerging infocomm technologies over this period. It will concentrate on new frontiers such as wireless and sensor technologies, and networks that are able to transport high-volume files, for instance, computer-generated medical graphics of chronic patients. In this respect, information and in particular knowledge (with larger chunk size) can be communicated and shared more swiftly.
Inputs were sourced from three different groups, namely, the public sector, the private sector, and general users. Interestingly, the last group comprises consumer focus groups and schools. The plan aims to create 80,000 new jobs, and to raise infocomm revenue from S$22 billion to S$60 billion by 2015. However, at the moment, the immediate interest is the Singapore government's S$1.5 billion tender called the standard operating environment (SOE). A primary aim of the SOE is to standardize the brands of PCs, software, networks and email across the entire civil service in Singapore, except the Ministry of Defence. This is a highly significant strategic information and knowledge synchronization for the nation. This enormous exercise, through better integration and uniformity encompassing both physical effectiveness and a smoother intangible dynamic, will transform Singapore's infocomm environment to an even more advanced stage of knowledge development.

1.7. Strategic Trends and Achievements of the Infocomm Plans

The strategic development and the changes in mindset and objectives observed across the different IT/ICT/Infocomm policies are interesting and highly significant. They not only reflect the innovative and creative exploitation of IT/ICT/Infocomm over time but also reveal the associated changes in leadership and management strategies (see Table 1). The intensive use of infocomm technologies, the swift emergence of a knowledge-intensive environment, and the change in leadership and management
Table 1. Some significant achievements of the first four ICT policies

CSCP (1980–1985) and subsequent e-Government plans: This plan introduced computerized information systems into the civil service. The basic aim was to reduce manpower costs. However, the plan was extended beyond 5 years, and the projects from subsequent plans included e-governance. The Singapore government is now an e-government expert in the region.

NITP (1986–1991): This was the first actual nationwide IT plan. Its first objective was to exploit IT as a productive tool for the entire economy. The second objective was to raise the standard of living of the people through creative use of new IT products and services. Consequently, Singapore became a more IT-savvy nation.

IT2000 (1992–1999): A basic objective of this plan was to integrate the fast-developing communication technology with IT. Through this plan, a nationwide communication network was erected. This infrastructure enables computers to be connected in every home, office, school and factory. Singapore became a more intelligent nation and an e-commerce centre. ICT was also integrated into the education system.

Infocomm 21 (2000–2003) / Connected Singapore (2003–2006): This was a flexible strategic plan with an emergent characteristic, changing with the economic, technological and social environment. One of its objectives was to nurture Singapore into a premier global infocomm hub in the Asia-Pacific region. Another was the extensive and innovative use of ICT in education.
strategies are highly interrelated. In general, the strategic trends and achievements observed include the following aspects:

a) Infocomm leadership diffuses from the Singapore government and civil service to industrial and ordinary users.
b) Effective integration of IT and communication (infocomm) technology.
c) From a manpower-saving strategy to broadband high-volume networks (a knowledge-intensive strategy), such as supporting the transmission of MRI and PET scan images in the healthcare sector.
d) From localized information system clusters to extensive regional and international connections, thus emerging as a key global infocomm hub.
e) Establishment of a one-stop e-Citizen service cultivating a smoother intangible dynamic, and recognition as a regional expert in e-government services.
f) Towards a more intensive e-commerce, e-economy and e-savvy nation.
g) Singapore is now a highly infocomm-savvy nation supported by a nationwide network infrastructure.
h) Extensive usage of wireless and e-landscape technologies and their strategic exploitation in business and education.
i) Towards nationwide standardization of all infocomm usage/connectivity and technologies.
j) Information and knowledge chunks have increased in size and are more swiftly communicated, shared and effectively exploited.
The extensive utilization of infocomm technologies, the introduction of the e-landscape, the increasingly knowledge-intensive economy, and the fast-changing environment have rendered the human world more complex, nonlinear and unpredictable. Concurrently, over the last twenty years, a new mindset in management, leadership and strategy has been analysed and developed by many researchers from various disciplines (Gleick, 1988; Kauffman, 1992; Langton, 1989; Levy, 1994; Liang, 1998, 2004, 2004a; Liebowitz, 1996; McMaster, 1996; Merry, 1995; Overman, 1996; Perry, 1995; Stacey, 1991, 1995, 1996; Thietart and Forgues, 1995; Waldrop, 1992; Westley and Vredenburg, 1997; Zeleny, 1989, 2000, 2005). Consequently, learning, knowledge, intelligence and collective intelligence are assuming a higher status. Humanity is entering a new era that focuses more on the human brain/mind as a complex adaptive system (using bio-logic instead of machine logic). In this respect, there is a greater need to nurture more educated/informed individuals and intelligent human organizations (Liang, 1998, 2001, 2002, 2004a, 2004b) at different levels, including education institutions, businesses, economies, communities, nations, and even regional groupings such as ASEAN. Thus, elevating the intelligence of individuals and the collective intelligence of groups is highly crucial for present and future competition. An integrated focus on learning, information dissemination, knowledge acquisition, and training and education is inevitable. This is an important aspect that all new effective information and knowledge strategies must now encompass.
2. Strategic IT/ICT/Infocomm Integration into Education

2.1. The Changing Singapore Education System

Being a country with absolutely no natural resources except her people, it is natural that Singapore has to rely heavily on the ability of her population to create a competitive edge over the rest of the world. The knowledge economy has changed the way human beings live, work and socialize. This fresh development is an advantage for Singapore. Effective information and knowledge usage and its associated strategies are vital in the new situation. In this environment, quick decisions and speedy actions are the basic needs for businesses as well as many other daily activities. Thus, the new competition can best be supported by automated information systems, swift communications networks, and effective education and training systems. These are the resources that enable individuals as well as organizations to attain the new competitive intelligence advantage. The niche now is the availability and sharing of larger knowledge chunks and expertise.
Consequently, the Singapore government places significant emphasis on education and skills training, as well as on integrating ICT with this development. This is a natural course of action, and yet it is strategic. The knowledge structures that each individual carries in the brain are vital in the knowledge-intensive environment. Concurrently, Singaporeans must also be well equipped with the necessary ICT skills to exploit and manage information and knowledge effectively. Thus, a high level of computer, information and knowledge literacy is essential. The new learning and working dynamic has become more complex and nonlinear. The swift and revolutionary advances in ICT are transforming the entire human world, and are also presenting new challenges to all nations. Such a tremendous rate of change has never been observed in any other technological domain. In order to build a knowledge-rich community and to stay ahead in the new context, Singapore has to ensure that her people also engage in lifelong learning and re-training. Therefore, a key strategic path that education must adopt is to prepare the new generations to participate constructively in the knowledge-intensive environment. In this respect, the following information and knowledge actions are crucial to Singapore's future competitiveness:

a) The education system must ensure that the knowledge structures of each and every Singaporean are developed to the optimum.
b) The system must also possess the characteristics and ability to sustain lifelong learning so that the knowledge structures of individuals can be updated whenever necessary.
c) The importance of ICT literacy must be greatly emphasized, and ICT utilized in the education process.
d) In this respect, e-learning is a highly significant feature of the Singapore education system, especially in higher education and adult education.
e) Individuals must be taught to learn how to learn in the fast-changing environment, and to innovate whenever possible.
2.2. The First IT Masterplan in Education

In order to meet the above objectives and to optimize the output, education commences from very early childhood. ICT is integrated into the entire education journey, starting from pre-primary education. This approach, which seeks to encompass the new challenges of education in the information age, is captured in the Masterplan for IT in Education, in force since 1997. The four goals of the first IT Masterplan in Education (1997–2002) were as follows:

a) To enhance linkages between the schools and the world around them, so as to expand and enrich the learning environment;
b) To encourage creative thinking, lifelong learning, and social responsibility;
c) To generate innovative processes in education; and
d) To promote administrative and management excellence in the education system.
Emerging from the first IT Masterplan in Education (1997–2002), all students in Singapore were equipped with the necessary basic ICT skills centred on thinking, learning and communications. The strategy was to create an ICT-oriented teaching and learning environment in school early, so that students could acquire computer and information literacy at a young age, and thus could support the emerging knowledge-based industry more effectively in the future. This broad-based approach to ICT-related education elevated the status and value of information and knowledge in the entire nation. By 2002, students spent up to thirty per cent of curriculum time using automated systems. The use of ICT revolutionized the concept of learning; that is, learning would shift from information receiving towards an emphasis on finding relevant information, applying information to solve problems, and communicating ideas effectively. E-libraries emerged in many schools. Students were taught to learn how to learn. Such an approach led to a better and more effective exploitation of information and knowledge by individuals. Two of the major projects implemented by the Ministry of Education (MOE) during this period were the Accelerating the Use of IT in Primary Schools (AITP) project and the Students' and Teachers' Workbench (STW). The foundation of an automated and well-integrated teaching and learning environment in schools was created by these two systems, which paved the way for the emergence of more advanced systems.

2.3. The Current Education IT Masterplan II

The second masterplan (2003–2007) consolidates and builds upon the achievements of the first. A more holistic approach is adopted to ensure that all essential parts of the education system are supported and integrated by ICT. Schools are provided with more autonomy to plan, execute and fund IT usage. Thus, the decision-making role is passed from the ministry to the schools. IT is even more deeply integrated with the curriculum to further enhance the teaching and learning processes. For instance, a substantial amount of resources is now channelled into research on more effective exploitation of ICT in education. The new vision is towards Thinking Schools, Learning Nation. E-learning gets a special boost in this plan.
A world-class national e-learning infrastructure is being implemented. The e-learning industry in the Asia-Pacific region is expected to grow to S$500 million over the next few years. The Singapore government has also set up a lifelong learning fund of S$10 billion. Correspondingly, some business organizations are also setting aside corporate e-training funds. E-learning in schools is developing rapidly through the Edu.QUEST project, using advanced multimedia technology. Similarly, some institutions of higher learning have also ventured in this direction. In fact, because of the rapidly increasing data volume due to media-rich content, some of these institutions are looking at new ways of managing massive storage. E-education in general is the fundamental pillar of Singapore's continuous lifelong education structure.

2.4. Strategic Impacts and Achievements of the IT Education Masterplans

Substantial benefits have been derived from the integration of ICT into the national education system. The understanding and learning of young students have been elevated, and their creativity and critical thinking ability has also been enhanced. These have been made possible by special effects (animation and simulation in automated systems) and more learner-centered activities that are only available in automated systems. Consequently, people become more independent learners, thus achieving the objective of a more effective lifelong education for all Singaporeans.
From the teacher and school perspective, an ICT-enriched culture enables them to stimulate knowledge sharing and innovation more conveniently. Communications between teachers and students, and among teachers, have increased substantially. Other aspects of school activities, such as counselling and administration, are also better supported. According to a global survey in 2002, Singapore is ranked second on the availability of the Internet in schools, indicating clearly that an ICT-enriched environment has already been established in the Singapore education system. All the programs in the higher institutions of learning are also supported by e-learning. In particular, wireless e-learning and education through the e-landscape have created a more conducive and convenient system in which the continuous learning process can be sustained at all times and at any venue. However, a highly significant factor to note is that ICT, and especially the e-landscape, has not only elevated the usable intelligence of individuals but also enhanced the collective intelligence of some human organizations through better connectivity. This is a crucial aspect of the evolution dynamic.
3. Towards an Intelligent Nation

3.1. ICT, Learning and Knowledge Acquisition Strategies

Apparently, in such a knowledge- and information-intensive environment, the support provided by the automated education systems and ICT in general is immense. This analysis reveals that ICT is important for full-time education in many respects. Besides traditional learning, it supports the enhancement of innovation and creativity in young minds. Its role is even more significant for part-time education, as working adults need to learn at their own pace because of time constraints. In this respect, the role of e-education and e-learning has increased, as continuous lifelong learning is becoming a norm in all competitive societies. The continual updating of knowledge and skills in individuals as well as their organizations is inevitable in the new context. For Singapore, her ability to venture into and adapt to the knowledge-intensive environment is due significantly to the foresight of the government regarding the effective and organized use of ICT in general, and its integration into education in particular. The journey has been rendered substantially smoother by the various IT/ICT policies that started two and a half decades ago. The transformation from localized organization-based automation, to industry-based networks such as electronic data interchange systems, and then to a high-volume nationwide information network that integrates different applications, has collectively moved Singapore closer to the objective of being an intelligent nation. The nationwide hardware connectivity, and subsequently the wireless networks, have provided the basis for a better intangible dynamic to evolve more effectively. In the school system, the information and knowledge strategies that focus on individuals' education have been very successful.
For each cohort of students, the current target is to achieve twenty per cent with university education, forty per cent with diploma education, and twenty-five per cent with technical skills training. Thus, the building/nurturing of knowledge structures, expertise and sophisticated technical skills in every individual citizen has been accelerating. A nationwide strategy that places a special emphasis on exploiting innovation and creativity as a niche, greatly needed in knowledge-intensive activities such as research and development and entrepreneurship, is also emerging. This strategic path is guided by the mindset and bravery to venture into complex and nonlinear territories; territories of unexplored knowledge. For instance, a substantial amount of resources (human and monetary) has been channelled into achieving new bio-medical breakthroughs and the generation of pure water good enough for human consumption. Consequently, ICT, which started as a supporting tool for better business transactions, has become an integral part of life in Singapore. This ambition is further supported by a highly automated and integrated public library system on a national basis; in addition, libraries exist in all schools and higher institutions of learning (Library2000, 1994). Besides, with the presence of the e-landscape, a well-integrated library system will further equip Singaporeans to move faster into the intelligence era. The latest aim of Singapore's public library network is to elevate itself into a research library system incorporating users in the South East Asian region as well. Thus, Singapore is ready to serve as a knowledge powerhouse for regional researchers. At the moment, Singapore is also beginning to share her expertise in this domain with some other countries in the region, including in the application of e-government. In this respect, Singapore has acquired the knowledge to be a knowledge-working nation. It has gained the recognition and status of possessing knowledge to effectively exploit knowledge. This marks the commencement of the next phase of development for Singapore: the knowledge master status and its related dynamic/activities and development (the knowledge of exploiting knowledge). Consequently, Singapore will emerge as a highly intelligent nation possessing the following characteristics:

a) A nation with a large number of highly intelligent interacting agents.
b) A nation with a high collective intelligence.
c) A continuous learning society.
d) A nation with a high-quality collective knowledge structure.
e) A society that is well connected internally and externally.
f) A nation that can quickly self-organize when facing unpredictable problems.
g) A highly innovative and creative society.
h) A society that exploits the intelligent advantage.
i) A society whose interacting agents constantly exploit the edge of chaos.
j) A knowledge powerhouse for the region.
Apparently, based on the above examination, the nurturing of a highly intelligent organization or nation needs more than just an effective physical structure. A significant amount of attention must be allocated to the intangible dynamic. This shift in mindset is crucial. In this respect, collective intelligence, the right type of culture and the other associated attributes are highly important in nurturing an intelligent nation.

3.2. Complex Adaptive Characteristics and Dynamic

Many researchers and practitioners now recognize that all forms of human organizations are higher-order complex adaptive systems (CAS), because their constituents, the human thinking systems, are themselves CAS. Thus, in human organizations such as schools, business corporations, economies and nations, both order and complexity co-exist. The complex and adaptive dynamics of such systems will vary with time, depending on changes in attributes/entities such as memberships, interacting processes, and the internal/external environment. In this respect, the five core properties of Chaos, namely, consciousness, connectivity, complexity, emergence and dissipation, are extremely significant parameters where the education and management of all human systems and their members are concerned. For instance, the teacher-student relationship, and the teaching and learning dynamic, are no longer linear in the new environment. The quality knowledge structure embedded in the individual human thinking systems becomes the most vital asset for establishing quality higher-level structures, such as corporate or national knowledge structures. Similarly, learning at different levels is equally crucial. Thus, a high level of collective intelligence is the most critical element for driving quality knowledge management (KM) and learning processes in human organizations. Intelligent organizations and their members must work as partners. As the world population becomes more educated, their values and expectations are also modified. Monetary rewards alone may not be sufficient. Social recognition is a new perspective that must be satisfied. The effective combination of KM and learning, together with the right mindset, forms an intelligent advantage that intelligent human organizations (iCAS) must exploit in a knowledge-intensive environment. Nurturing a new mindset that encompasses the 3C-OK (collective intelligence, connectivity, culture, organizational learning and knowledge management) framework (Liang, 2004a) is inevitable.

4. Conclusion

As humanity ventures deeper into the knowledge-intensive era, successful competition depends increasingly on intelligence, collective intelligence, and the knowledge structures embedded in the minds of the individuals and the organizations concerned. In this respect, appropriately redefined information and knowledge strategies are vital.
Continuous education is definitely becoming more crucial for the survival of an individual as well as of any of his/her associated organizations, such as a nation. The advanced nations have recognized that the high value-added economic sectors are those that deal intensively with the knowledge and expertise of individuals and groups. Inevitably, the people of a highly competitive nation must be well educated and informed at all times to remain at the frontiers of their respective competition. This dynamic is greatly enhanced by the usage of ICT. Thus, this study reveals the high interdependency among ICT, education, and the new information and knowledge strategies. Apparently, learning and knowledge acquisition is a fundamental activity of every intelligent entity, whether the entity is an individual or a human organization. Intelligent beings must learn continuously to survive and evolve. They must be self-learners, experts in their own knowledge domains, and swift and effective decision-makers. This is the primary requisite for being an effective competitor in the new context. It is a basic attribute of nature, a pre-requisite for effective evolution. In this respect, a deeper understanding of the interdependency among all the components in the new dynamic is vital. The ability to nurture intelligent individuals and human organizations is a new critical advantage. Such entities inherently encompass continuous learning and knowledge acquisition, and the effective use of appropriate information and knowledge strategies, as their fundamental approach. Therefore, comprehending and exploiting the complex adaptive systems dynamic, with a special focus on the intelligence of the interacting agents and the collective intelligence of the system, will continue to benefit Singapore as well as all other nations substantially.
References

[1] Benjamin, R.I., Rockart, J.F., Scott Morton, M.S. and Wyman, J., 1984, Information Technology: A Strategic Opportunity, Sloan Management Review, 25(3), 3–10.
[2] Cusumano, M.A. and Markides, C.C., 2001, Strategic Thinking for the Next Economy. San Francisco: Jossey-Bass.
[3] CSCP, 1981, Civil Service Computerization Program: A Strategy Study.
[4] Dankbaar, B., 2003, Innovation Management in the Next Economy. London: Imperial College Press.
[5] Gleick, J., 1988, Chaos: Making a New Science. New York: Penguin.
[6] Infocomm 21: Singapore Where the Digital Future Is, 2000.
[7] IT2000, 1992, A Vision of an Intelligent Island: The IT2000 Report.
[8] Kauffman, S.A., 1992, Origins of Order: Self-Organization and Selection in Evolution. Oxford: Oxford University Press.
[9] Kelly, K., 1998, New Rules for the New Economy. New York: Viking.
[10] Langton, C.G., 1989, Artificial Life. New York: Addison-Wesley.
[11] Levy, D., 1994, Chaos Theory and Strategy: Theory, Application and Managerial Implications, Strategic Management Journal, 15, 167–178.
[12] Liang, T.Y., 1992, Electronic Data Interchange Systems in Singapore: A Strategic Utilization, International Business Schools Computing Quarterly, Spring, 43–47.
[13] Liang, T.Y., 1993, Organized and Strategic Utilization of Information Technology: A Nationwide Approach, Information and Management, 24, 329–337.
[14] Liang, T.Y., 1998, General Information Theory: Some Macroscopic Dynamics of the Human Thinking Systems, Information Processing and Management, 34(2–3), 275–290.
[15] Liang, T.Y., 2000, The E-Landscape: An Unexplored Goldmine of the New Millennium, Human Systems Management, 19, 229–235.
[16] Liang, T.Y., 2001, Nurturing Intelligent Human Organizations: The Nonlinear Perspective of the Human Minds, Human Systems Management, 20(4), 281–289.
[17] Liang, T.Y., 2002, The Inherent Structure and Dynamic of Intelligent Human Organizations, Human Systems Management, 21(1), 9–19.
[18] Liang, T.Y., 2003, The Crucial Roles of the Information Systems Web in Intelligent Organizations, Human Systems Management, 22(3), 115–124. [19] Liang, T. Y., 2004, Intelligence Strategy: The Evolution and Co-evolution dynamics of Intelligent Human Organizations and their Interacting Agents, Human Systems Managemnt, 23(2), 137–149. [20] Liang, T. Y., 2004a, Intelligence Strategy: The Integrated 3C-OK Framework of Intelligent Human Organizations, Human Systems Management, 23(4), 203–211. [21] Library2000: Investing in a Learning Nation, Report of the Library 2000 Review Committee, SNP Publishers, Singapore, 1994. [22] Liebowitz, J., 1996, Building Organizational Intelligence. New York: CRC Press. [23] McFarlan, F.W. and McKenny, J.L., 1983, Corporate Information Systems Management: The Issues Facing Seminar Executives, Irwin, Homewood. [24] McMaster, M.D., 1996, The Intelligence Advantage: Organizing for Complexity. Boston: ButterworthHeinemann. [25] Merry, U., 1995, Coping with Uncertainty. Connecticut: Prager. [26] NITP, 1986, National IT Plan: A Strategic Framework. [27] Overman, E.S., 1996, The New Science of Management: Chaos and Quantum Theory and Method, Journal of Public Administration Research and Theory, 6(1), 75–89. [28] Perry, T.S., 1995, Management Chaos allow more Creativity, Research Technology Management, 28(5), 14–17. [29] Porter, M.E., 1985, Competitive Advantage. New York: Free Press. [30] Porter, M.E. and Miller, V.E., 1985, How Information gives you Competitive Advantages, Harvard Business Review, 65(1), 149–160. [31] Rockart, J.F., 1979, Chief Executives Define Their Own Data Needs, Harvard Business Review, 57, 81–93. [32] Rockart, J.F. and Crescenzi, A.D., 1984, Engaging Top Management in Information Technology, Sloan Management Review, 25(4), 3–16. [33] Senge, P., et al, 1994, The Fifth Discipline: The Art and Practice of the Learning Organization. London: Nicholas Bredley. 
[34] Stacey, R.D., 1991, The Chaos Frontier, Oxford: Butterworth-Heinemann. [35] Stacey, R.D., 1995, The Science of Complexity: An alternative perspective for Strategic Change Processes, Strategic Management Journal, 16, 477–495.
268
T.Y. Liang / Information and Knowledge Strategies
[36] Stacey, R.D., 1996, Complexity and Creativity in Organizations. San Francisco: Berrett–Kochler Publishers. [37] Thietart, R.A. and Forgues, B., 1995, Chaos Theory and Organization, Organization Science, 6(1), 19–31. [38] Waldrop, M.M., 1992, Complexity: The Emerging Science at the edge of Order and Chaos. New York: Simon and Schuster. [39] Westley, F. and Vredenburg, H., 1997, Interorganizational Collaboration and the Preservation of Global Diversity, Organization Science, 8(4), 381–403. [40] Zeleny, M., 1985, Spontaneous Social Orders, General Systems 11(2), 117–131. [41] Zeleny, M., 1989, Knowledge as a new form of capital, Part 1: Division and integration of knowledge, Human Systems Management, 8(1), 45-58; Knowledge as a new form of capital, Part 2: Knowledgebased management systems, Human Systems Management, 8(2), 129–143. [42] Zeleny, M., 2000, IEBM Handbook of Information Technology. London: Thomson. [43] Zeleny, M., 2005, Human Systems Management. Singapore: World Scientific Publishing.
Advances in Multiple Criteria Decision Making and Human Systems Management Y. Shi et al. (Eds.) IOS Press, 2007 © 2007 The authors. All rights reserved.
Needed: Pragmatism in KM

Zhichang ZHU
The University of Hull Business School, UK
Japan Advanced Institute of Science and Technology, Japan
[email protected]

Abstract. This paper, taking a pragmatic perspective, reflects on the current situation in our knowledge management (KM) research community and suggests a possible way forward.

Keywords. Knowledge management, theory, practice, pragmatism
1. More, Bigger Models: For the Sake of What?

Our community currently appears obsessed with producing concepts, theories, models and methodologies. Creativity and novelty have come to mean more, bigger, heavier conceptual constructs. Here are some examples at hand: a famous Japanese KM model, once focused and stimulating, has now rapidly 'synthesised' almost everything in – knowledge ba, spiral, vision, asset, routine, incentive system, leadership, etc.; we recently received a 'prescriptive or normative' seven-step spiral model, which first differentiates and then combines 'market-' vs. 'academic-oriented' knowledge spirals, with each step further embedding sub-level spirals; we also encounter a 'creative space' model that consists of 'at least' '10 dimensions', '3 levels', '3^10 = 59,049 nodes' and '59,049 × 59,048 = 3,486,725,352 possible knowledge transitions'; I, the author of this paper, was invited by an international conference to review three submissions by the same co-authors – each submission proposing a distinctive set of new concepts and/or frameworks; and so on. We have become so skilful and productive in generating concepts and models, at such a pace, that even we ourselves cannot properly absorb and remember them. What are all these conceptual creatures and daily sophistications good for? What do they mean to situated change agents? Is this a wise way to consume our creativity and resources?
2. KM: Theorizing or Experiencing?

I was naïve to raise concerns. At an international conference Q&A session, I asked how we could help managers make proper sense of these numerous, complicated management models; I was told not to bother: the job of us scientists is to produce models, whilst applying them is the managers' business. During a group discussion, I mentioned that I intended to offer a critical analysis of Nonaka's recent theoretical borrowing; I was advised, kindly and in a friendly manner, that, whilst my critique might be well grounded, it was certainly not appropriate to criticize openly, to disturb harmony.
It appears that we have become more interested in what we produce in theorising than in how we relate to managers and knowledge workers. We are more comfortable generating normative models on our laptops than coping with messy uncertainties and surprises on the shop floor or in laboratories. Knowledge is to us created by generalized, complicated, well-ordered spirals and roadmaps, not something emerging from continuing, unpredictable interactions between situated local agents. On the one hand, we work hard to seek universal principles, rules and methodologies out of differences, diversity and variety; on the other, we are too busy celebrating, accepting and printing whatever concepts, models and theories we receive to raise questions, to discern coherence and tensions. Do we lack the skill, or lack the will? Surely we now have more, thicker conference proceedings on the shelf, more models and theories on display. However, do we have relevant knowledge that makes a real difference? Are we more competent in knowledge creation/management, with convincing practical evidence?
3. Towards a Pragmatic Sensibility

My personal view is that there is currently in our community too much interest in theorising and publishing concepts and models, too little concern about the practical consequences that make our research relevant, and too little intent to engage in dialogues that nurture intellectual rigour. To overcome this problematic situation, I propose to restore a pragmatic sensibility. Pragmatism can be understood as a theory of knowledge, a methodology for action and a philosophy for life. It is an inherent intellectual and cultural sensibility in the Confucian tradition, shared by, among many others, the Chinese and the Japanese, as well as in indigenous American thought (Hall and Ames 1999) and in the Aristotelian 'phronesis' of practical wisdom (Nonaka and Toyama 2006). It has long been caricatured by some as anything-goes, as distasteful of any theory, as an instrumental kind of thinking, distinctively non-intellectual, altogether uninformed and unrefined.
However, a genuine pragmatic sensibility is to me marked by a refusal to treat ideas and actions as disjunctively related, a rejection of 'the spectator theory of knowledge', a commitment to endow experience with learning rather than seeking 'truth', a willingness to take action without knowing how things might unfold in the future, a readiness to embrace uncertainty and surprises, an eagerness to capitalise on the unanticipated and unexpected, a conviction that the validity of knowledge should be judged by the consequences of acting upon it, an enjoyment of conversation with situated agents about possibilities for change, a proposition viewing temporal conversations in a community, not any extra-historical Archimedean point, as our only source of guidance for action, and a belief that participative consensus, if ever achievable, is often achieved at the aesthetic and cultural levels rather than with regard to the claims of Reason (Dewey 1934, Rorty 1982). Related to the current situation in our KM research community, a pragmatic sensibility has at least two immediate practical implications. One: since knowledge is about coping with practical problems in our situated living present rather than mirroring the inner essence of an external realm, be it 'the outer world' or 'the first principle' for human action (Baert 2005), our research should be dedicated to describing how our concepts and models affect change agents in making practical differences.
The other: as 'it is only by submitting our hypotheses to public critical discussion that we become aware of what is valid in our claims and what fails to withstand critical scrutiny' (Bernstein 1991: 328), we KM researchers should participate in, not shy away from, open dialogues with each other, as well as with situated change agents. I am not saying that we should ban model or theory building. I am objecting to theorising for theory's sake. I am suggesting that we need to heighten our pragmatic sensibility, focus our attention on how our theories are working, or not working, in situated contexts, on whether and how our research helps practitioners solve real-world problems, and on how to promote and engage in open, critical intellectual conversations. Pragmatism was once our distinctive competence and advantage, but we lost it on our way to becoming 'modern', or 'postmodern'. It is now time for a renewal. We have been living with our enlightening friends Kant, Hegel and Foucault for too long; we need to welcome our intellectual ancestors Confucius and Dewey, and invite them back home from political exile.
References
[1] Baert, P., Philosophy of the Social Sciences: Towards Pragmatism. Cambridge: Polity Press, 2005.
[2] Bernstein, R.J., The New Constellation: The Ethical-Political Horizons of Modernity/Postmodernity. Cambridge: Polity Press, 1991.
[3] Dewey, J., A Common Faith. New Haven, CT: Yale University Press, 1934.
[4] Hall, D.L. and Ames, R.T., The Democracy of the Dead: Dewey, Confucius, and the Hope for Democracy in China. Chicago: Open Court, 1999.
[5] Nonaka, I. and Toyama, R., Strategy as Distributed Phronesis. Institute of Management, Innovation and Organisation Working Paper IMIO-14, University of California, Berkeley, 2006.
[6] Rorty, R., Consequences of Pragmatism. Sussex: Harvester Press, 1982.
Knowledge Management Platforms and Intelligent Knowledge Beyond Data Mining¹

Yong SHI a,b,c and Xingsen LI a,c

a Chinese Academy of Sciences Research Center on Fictitious Economy and Data Science, Graduate University of Chinese Academy of Sciences, Beijing 100080, China, E-mail: [email protected]
b College of Information Science and Technology, University of Nebraska at Omaha, Omaha, Nebraska 68182, USA
c Management School, Graduate University of Chinese Academy of Sciences, Beijing 100080, China, E-mail: [email protected]
Abstract. There has been a continuing pursuit of wisdom since the beginning of humankind. Knowledge is a base of wisdom and takes various forms. The rapid development of data technology, such as data mining applications and Internet growth, generates a large volume of knowledge for the business community at both the individual and organizational levels. How to manage such knowledge and update wisdom is a challenging problem and a bottleneck for the further application of data mining. This paper explores the framework of management platforms for knowledge of human beings, knowledge of data mining, and knowledge from data mining. After integrating them, it proposes a concept of "intelligent knowledge" (IK) towards wisdom. Intelligent knowledge consists of two parts: a knowledge entity and meta-data. IK develops on a knowledge management platform in an Internet environment. Such a knowledge management system, supported by a Man-Computer Cooperative method, can effectively help users obtain useful knowledge. Meanwhile, IK can manage itself on the knowledge management platform so that users can find and utilize knowledge efficiently. Through IK, an organization may gradually reach a stage of being business-wise.

Keywords. Human Knowledge, Intelligent Knowledge, Wisdom, Knowledge Management, Data Mining, Knowledge Management Platform
Introduction

We live in an era of information and knowledge. Computing technology allows us to generate masses of data and information. Data mining as a knowledge resource has been widely accepted, especially in large enterprises, government, and financial departments. A lot of new knowledge can be discovered by data mining techniques (Shi, 2002; Han and Kamber, 2006; Olson and Shi, 2007). The World Wide Web, for example, has also become a common and huge information source for most people. Billions of pages are publicly available, and the Web is still growing dramatically. Special data mining methods, such as Web mining and text mining, can discover a lot of
¹ This paper is in honor of Prof. Milan Zeleny's 65th birthday. Corresponding author: Xingsen Li.
knowledge from huge amounts of web pages, including web documents, hyperlinks between documents, usage logs of web sites, local text files, etc. (Srivastava et al., 2003; Tan and Yu, 2003). In recent years, some scholars have observed that we face a large volume of information overload (for example, Shi, 2000). Although the ways of converting such knowledge into practical use have increased dramatically, the amount of available knowledge is still overwhelming. As the era moves from information overload to knowledge overload (http://www.trecc.org/), how to manage all kinds of knowledge and make human beings ever wiser becomes a challenging problem. To address this problem, Zeleny (2006) presented a management system that witnesses a cumulative progression from data processing, through information technology, to the current knowledge management, with wisdom as the next step. According to this idea, through a wisdom system people can know why, do right and be good. However, realizing such a wisdom system is hard work. It requires much fundamental research on knowledge management mechanisms, including platforms towards wisdom. It has been understood that knowledge has two main sources: one from human beings, the other from data or information. The objective of this paper is to explore three kinds of knowledge management models, for human knowledge, data mining knowledge and knowledge from data mining. The paper then proposes a new concept called Intelligent Knowledge (IK), which serves as a bridge from knowledge to wisdom. The rest of the paper is organized as follows: Section 1 discusses a classification of knowledge and introduces the notion of IK. Section 2 describes a knowledge management platform for human knowledge sharing. Section 3 studies a management framework for sharing data mining knowledge, so as to make data mining projects simple and easy for new users. Section 4 investigates a knowledge acquisition process from data mining.
Section 5 presents an IK management framework which could lead to a stage of being business-wise. Finally, Section 6 concludes the paper with future research directions.
1. Knowledge Classification

Researchers have extensively discussed various definitions and classifications of knowledge. For instance, knowledge can be primarily classified as tacit or explicit on the basis of ease of transfer or codification/formalization (Nonaka, 1994). Biggam (2001) further proposed a "Knowledge Types" matrix which divides knowledge into four categories: tacit vs. explicit; personal vs. organizational; dynamic vs. static; and internal vs. external. On the other hand, Alavi and Leidner (2001) integrated the results of many experts and scholars and viewed knowledge from five perspectives: a state of mind (knowledge is the state of knowing and understanding); an object (knowledge is an object to be stored and manipulated); a process (knowledge is a process of applying expertise); access to information (knowledge is a condition of access to information); and a capability (knowledge is the potential to influence action). Accordingly, knowledge may take many forms, such as equations, contingency tables, taxonomies, decision trees, rules, graphs, concepts, exceptions from patterns, and many more. In this paper, knowledge is loosely defined as any knowledge related to certain aspects of human beings' interests. Knowledge is rooted in two main sources: one from human beings, the other from data or information. By data mining, many hidden patterns are discovered, and certain knowledge can be represented as rules, scoring formulas or models with parameters, etc. We call this knowledge "primary knowledge"; it is only part of all knowledge in applications, and it is rough knowledge with impurity. Under this definition, knowledge related to data mining can be classified, according to the particular business usage of human beings, as primary, useful and intelligent knowledge. Primary knowledge is all the knowledge from data mining, of which only a part is connected to the particular interests of human beings. Useful knowledge is the part of primary knowledge, obtained by auditing against the characteristics of the particular interest, that is useful for a certain business. Intelligent knowledge (IK) is the interface of useful knowledge and human knowledge that can automatically support human beings in achieving their particular interests. It is a new form of knowledge with a series of intelligent features such as memory, recognition, reasoning, automatic adaptation, self-updating, dissemination, etc. This classification of knowledge is given in Fig. 1.

Figure 1. A Classification of Knowledge.

Wisdom can be generated from knowledge but is more far-reaching and intelligent than knowledge. It integrates all available knowledge with information and induces wise decisions. Therefore, intelligent knowledge is a bridge from knowledge towards wisdom. To start the study of intelligent knowledge, we first introduce how to collect human knowledge and useful knowledge in the following sections.
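The three-tier classification just described (primary, useful, intelligent) can be sketched in code. Everything below — the KnowledgeItem class, the numeric relevance score and the 0.5 audit threshold — is an illustrative assumption of ours, not a formalism from the paper:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the primary -> useful -> intelligent pipeline.
@dataclass
class KnowledgeItem:
    entity: str                               # the knowledge itself, e.g. a mined rule
    relevance: float                          # audited fit to a business interest (0..1)
    meta: dict = field(default_factory=dict)  # meta-data enabling IK self-management

def useful(primary, threshold=0.5):
    """Audit primary knowledge: keep items relevant to the business interest."""
    return [k for k in primary if k.relevance >= threshold]

def to_intelligent(item):
    """Wrap useful knowledge with meta-data (source, usage count) as IK."""
    item.meta.update({"source": "data mining", "uses": 0})
    return item

primary = [
    KnowledgeItem("IF income > 50k THEN low credit risk", 0.9),
    KnowledgeItem("IF page = home THEN exit rate 40%", 0.2),
]
ik = [to_intelligent(k) for k in useful(primary)]
print(len(primary), len(ik))  # 2 primary items, 1 survives the audit
```

The point of the sketch is the pipeline shape: mining yields primary knowledge, an audit against a particular interest filters it to useful knowledge, and the attached meta-data is what would later let IK "self-manage" on the platform.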
2. Management Platform for Knowledge of Human Beings

Since human knowledge is the most common and always available, it is important for corporations in the competitive global economy. A 1998 survey of European corporations found that almost half of them had suffered a significant setback from losing key staff; 43% experienced impaired client or supplier relations, while 13% faced income loss because of the departure of a single employee (Alavi and Leidner, 2001). A key reason for the problem is the lack of a knowledge and resource sharing platform. As observed, knowledge is stored in some key staff members' brains and resources are controlled in their hands. This phenomenon is called the Able Person problem, meaning that "the operation of the company is ruled by some key staff's random orders instead of rules"; it overemphasizes the influence of the "able person" and ignores the rule system and knowledge system. The "able person" is usually not reliable, and companies that rely on "able persons" often let the function of their company system grow weaker and weaker. To change this Able Person system into a knowledge sharing system, knowledge management should focus on how to convert outstanding employees' tacit knowledge into explicit knowledge. In the following, this section discusses two knowledge sharing models for constructing a management platform for knowledge of human beings.

2.1. A Business Process-Driven Knowledge Management Model

Despite growing interest in a strategic perspective on knowledge management, there is still no adequate procedure or method to guide the implementation of the strategies. In order to utilize the free resources and skilled persons in society for middle and small companies, Li et al. (2006a) proposed a knowledge management model which consists of project disassembling, task issuing, bidding, signing, monitoring, assembly testing, etc. It makes good use of the human resources available in society and turns knowledge management into a workflow. This idea can also be used to collect human knowledge inside a company. Accordingly, a business-process-driven model of knowledge management was designed as in Fig. 2. It closely combines the company's operation management system with a uniform knowledge management system.

Figure 2. A Business Process-Driven Knowledge Management Model.

Working with the operation system – plans and action measures, key responsibilities, achievement tracking, performance evaluation, etc. – this knowledge management platform collects knowledge from both the business process and the data in its information system, and then applies the knowledge to standardize the company's operation management. This includes six aspects:
(1) At the strategy management level, record the external and internal competitive elements determined by company strategy planning. Decompose the company goal to the operation level and choose executing approaches.
(2) At the operation management level, set department objectives and decompose them to sub-units. All units make their plans and propose action measures to their leaders. Allow employees to be involved in company operation and realize the promised goals. Instruct how to translate the strategic plan into an executable program, put it into effect in the relevant departments, and set up key action measures.
(3) Define the responsibility of each position, set up an inquiry platform, report performance results, periodically question leading officials about achievements based on facts and data, and propose improvement methods to ensure the implementation of the yearly operation objective.
(4) Performance evaluation: check and evaluate each employee's achievement and performance, including the knowledge contribution rate. Then grade evaluation results by defined criteria, and grant rewards or punishments.
(5) Knowledge management: accumulate management and business operation knowledge throughout the process and from the data in business management systems by OLAP or data mining. The knowledge is saved in the knowledge base after being audited and then applied to daily work, such as sharing experiences in the operation process, finding answers from the knowledge base, refining a flow process, and using the knowledge platform to make employees more professional.
(6) Knowledge feedback: the knowledge and the operation performance are fed back to the strategy level for optimizing the strategy and the workflow. Gradually, most work processes can be standardized and optimized.

This platform leads employees to share their knowledge in daily work such as budgeting and planning, defining key responsibilities, tracking achievements, and performing performance evaluations.
Everyone knows his/her authority and responsibility clearly and will therefore work wisely.

2.2. Knowledge Collection from the Working Process

Collecting knowledge from the working process has five main steps:
• Step 1. Business target setting. According to its development prospects and internal and external competitive elements, the company sets down its strategic goal, allocates it to each month and each business unit, and further assigns responsibility for key tasks to each person.
• Step 2. KPI (Key Performance Indicator) definition and assignment. Define each KPI – including the KPI code, name, unit code, computing formula and detailed description – according to the unit's specific strategy and work target within a period (normally a year), and then assign KPIs as part of employees' duties.
• Step 3. Knowledge collection from work planning. Monthly or weekly plans are made according to the monthly targets assigned from the company goal. Key measures, important actions, resource plans and risk analyses are recorded in the system. Leaders can check an employee's plan and give suggestions if it is off track.
• Step 4. Knowledge collection from working reports.
At the end of the month or week, working results are reported in the system, such as the achieved KPI value, experiences or reason analysis, and key action analysis. Leaders can monitor at any time and provide opinions if necessary.
• Step 5. Knowledge collection from the action-improving process. If the working results are not as good as planned, the leader can send a query instruction to the person in charge. The improvement actions and their key measures are then planned, and the leader can also guide that person.

Necessary knowledge can be collected from the Web, from data mining, or from one's experience throughout the whole process, and is saved in the system. The knowledge can be indexed, queried, marked, updated or deleted by knowledge officials through a management process. The above model has been applied in a nationwide engineering machinery company, and the result shows that this knowledge management platform can help small and middle businesses build a knowledge-based management system at low cost. The company can gradually accumulate knowledge in the operation process and transform the "Able Person" system into an institutional operation system.
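Steps 2–5 above amount to a define/plan/report/query loop around KPIs. The following minimal sketch illustrates that loop; the field names (such as the KPI code) and the simple off-track rule are our assumptions, not the paper's implementation:

```python
# Illustrative sketch of the KPI define/report/query cycle described above.
kpis = {}      # KPI definitions: code -> name, unit code, formula description
records = []   # monthly plan/actual records collected as knowledge

def define_kpi(code, name, unit, formula):
    """Step 2: define a KPI with code, name, unit code and computing formula."""
    kpis[code] = {"name": name, "unit": unit, "formula": formula}

def report(code, planned, actual, notes):
    """Step 4: report working results against the plan, keeping the analysis."""
    rec = {"kpi": code, "planned": planned, "actual": actual,
           "notes": notes, "off_track": actual < planned}
    records.append(rec)
    return rec

define_kpi("S01", "Monthly sales", "unit01", "sum of invoiced orders")
rec = report("S01", planned=100, actual=80, notes="two key accounts delayed")

# Step 5: an off-track result triggers a query instruction to the person in
# charge, and the recorded explanation becomes reusable knowledge.
if rec["off_track"]:
    print("query instruction sent:", rec["notes"])
```

The design point the sketch makes is that knowledge collection is a by-product of the routine plan/report cycle: the "notes" explaining a deviation are captured at the moment they are produced, not in a separate documentation effort.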
3. Management Platform for Knowledge of Data Mining

Data mining refers to extracting or "mining" knowledge hidden in large amounts of data (Han and Kamber, 2006; Olson and Shi, 2007). The importance of collecting data to achieve competitive advantage is now widely recognized. Data can be one of the most valuable assets of a corporation if we know how to discover the valuable knowledge hidden in raw data. Data mining can help us discover knowledge hidden in data and turn this knowledge into a crucial advantage. It can also support the corporation in optimizing its business decisions, increasing the value of each customer and communication, and improving customer satisfaction. Demand for data mining has increased with the development of information systems. However, under the rule of "garbage in, garbage out", the quality of the data affects the quality of decisions. Data mining needs high-quality data to produce useful knowledge, but many enterprises do not have enough good-quality data. According to a PricewaterhouseCoopers survey conducted in New York in 2001, 75% of 599 companies had suffered economic losses because of data quality problems (Pierce, 2003). Commonly, data cleaning, Extraction-Transformation-Loading tools or tolerance algorithms have been used to mine low-quality data (Hernandez and Stolfo, 1998; Lee et al., 1999; Galhardas et al., 2000). In order to derive credible consequences, data cleaning and processing may consume up to 80–90% of the workload of a data mining project (Johnson and Dasu, 2003). That makes data mining difficult work that most small and medium businesses cannot afford. In addition, newly created data from the enterprise information system may make the database dirty again. Moreover, data cleaning conceals the sources of dirty data, so not enough action is taken to improve the system. This forms a vicious circle, as shown in Fig. 3.
The data quality problem has become an important factor in data mining applications (Dasu et al., 2003). This section further presents two models: one is used to prepare the data preprocessing, while the other employs techniques of multiple criteria mathematical programming (MCMP) for the data mining process.
Figure 3. A Vicious Circle Analysis of Data Cleaning.
Figure 4. The framework of Data Mining Consulting.
3.1. Preparing Data for Mining

By systematically analyzing the reasons for low data quality, a new method called data mining consulting has been established. It consists of three parts: principles of data mining, technology of software engineering, and rules of management. From the principles of data mining, the conditions that data mining needs and its standard rules are listed. The method then traces back through the software life cycle and takes actions to prevent dirty data from being created. Throughout this period, a series of management rules has to be used to reduce human mistakes. The aim is to improve data quality and make data mining projects efficient and easy to implement (Li, J. and Li, X.S., 2006). The data mining consulting solution framework is presented in Fig. 4.
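The framework's first activity — listing the data quality that mining needs and measuring the present data against it — can be sketched as a small gap audit. The completeness thresholds and field names below are illustrative assumptions, not values from the paper:

```python
# Illustrative sketch of the "data gap" audit in the consulting framework:
# compare what mining requires against what the information system holds.
required = {"customer_id": 1.0, "income": 0.9, "age": 0.9}  # min completeness

rows = [
    {"customer_id": 1, "income": 52000, "age": 34},
    {"customer_id": 2, "income": None,  "age": 41},
    {"customer_id": 3, "income": 61000, "age": None},
]

def completeness(rows, field):
    """Fraction of records in which the field is present (non-null)."""
    return sum(r.get(field) is not None for r in rows) / len(rows)

# The gap: fields whose present completeness falls short of the objective.
gap = {f: (completeness(rows, f), need)
       for f, need in required.items()
       if completeness(rows, f) < need}
print(gap)  # fields whose quality still blocks mining
```

Iterating this audit after each improvement measure is what shrinks the gap over successive consulting cycles, in the spirit of the spiral implementation described below the framework.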
In this framework, the data quality needed in data mining is listed first; then we collect the data set and identify the gap between the present data and the objective data from the viewpoint of data mining. Business experts and data mining engineers then find data quality problems by data warehouse analysis or data mining testing, and solve them with a series of methods, including business requirement analysis, storage and integration, management measures, etc., until the data meet the quality standard. By recycling the data mining experiments and taking improvement measures, the data gap is decreased and high-quality data are filtered from the poor ones. Once the conclusions of data mining benefit business decision-making, senior management will pay more attention to data accuracy and take effective measures that boost information system development, such as increasing investment, rectifying management, and emphasizing data analysis. With the above measures, one can augment the demand for data, integrate more data, deal with the relevant quality issues, and move to the next phase of data mining consulting and implementation. This spirally recycling implementation improves the transformation from un-minable data to minable data (Li, Shi and Li, 2006).

3.2. MCMP Data Mining Process

Traditionally, data mining has been implemented with a number of well-known mathematical tools, such as statistics, neural networks, decision trees and support vector machines. In recent years, the authors have applied multiple criteria mathematical programming (MCMP) to conduct the data mining process (Kou et al., 2003; Shi et al., 2001; Shi et al., 2002). MCMP is an extension of linear programming, which has been used in classification for more than twenty years (Freed and Glover, 1981).
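For orientation, a classical single-objective ancestor of MCMP from the Freed–Glover line of work minimizes the total deviation of misclassified records from a separating boundary; the notation below is ours, a sketch rather than the authors' exact formulation. For records $A_i$ in two groups $G$ and $B$, attribute weights $w$ and a boundary value $b$ are chosen by the linear program:

```latex
\begin{align*}
\min_{w,\,b,\,d} \quad & \sum_{i} d_i \\
\text{s.t.} \quad & A_i w \le b + d_i, \quad i \in G, \\
                  & A_i w \ge b - d_i, \quad i \in B, \\
                  & d_i \ge 0 \quad \text{for all } i.
\end{align*}
```

MCMP-style models then treat several such measures (for example, minimizing the overlap of misclassified records while maximizing the distance of correctly classified ones from the boundary) as simultaneous criteria rather than a single objective.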
Although these data mining methods have been used in a broad range of applications, they, due to the complicated nature of these techniques, seem to be a risky investment from the customers’ point of view. Since data mining demands certain skills to be operated, users may not handle it correctly. To help the users understand well with mathematics and technology in MCMP-based data mining, a knowledge management platform with the standardization of MCMP process referred to CRISP-DM is presented as follows (Li et al., 2006b). The CRISP-DM can be divided into 4 layers, The base layer includes mathematical principles of optimization; the technology layer includes algorithm and software development, the operation layer includes the use of MCMP software, and the application layer includes data selection, cleaning, mining, and deployment. To solve the user-difficulty problems, the key is to hide the technical details in the technology layer and the base layer from the users. A knowledge management platform with CRISP-DM is as shown in Fig. 5. This model sets up a bridge between the technology layer and application layer. Its working flow consists of five steps: •
• Step 1: While experts perform MCMP-based data mining, the software is invoked to record what they do and save it in a knowledge base. The experts can trace back through the records and revise the parameters if they are not satisfied with the mining results.
• Step 2: When a new user of MCMP runs into trouble in the data mining process, he or she can put questions to the platform and automatically get answers from the knowledge base.
Y. Shi and X. Li / Knowledge Management Platforms and Intelligent Knowledge
Figure 5. A Knowledge Management Platform for the Data Mining Process.
• Step 3: After the new users have solved their problems, they can write down their experience and save it back into the knowledge base.
• Step 4: Developers or experts browse the new users' questions to find out how the working sequence can be improved, so as to improve the software.
• Step 5: With the help of the experts' experience and the new users' feedback, the platform and the algorithms can be improved quickly.
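The five-step flow can be sketched as a minimal knowledge base that records expert runs, answers user questions by lookup, and accumulates user feedback for the developers; the class and method names below are invented for illustration.

```python
class MiningKnowledgeBase:
    """Minimal sketch of the platform's knowledge base (steps 1-5)."""
    def __init__(self):
        self.runs = []        # step 1: recorded expert sessions
        self.answers = {}     # question -> answer, built from experience
        self.feedback = []    # steps 3-4: user experience for developers

    def record_run(self, params, result):          # step 1
        self.runs.append({"params": params, "result": result})

    def ask(self, question):                       # step 2
        return self.answers.get(question, "no recorded answer yet")

    def save_experience(self, question, answer):   # step 3
        self.answers[question] = answer
        self.feedback.append(question)             # step 4: browsed by developers

kb = MiningKnowledgeBase()
kb.record_run({"penalty": 0.5}, "accuracy 0.91")
kb.save_experience("training diverges?", "lower the penalty weight")
reply = kb.ask("training diverges?")
```

Step 5 is the human loop: developers read `kb.feedback` and `kb.runs` and revise the platform and the algorithms accordingly.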
The knowledge management platform collects the experts' experience from their daily data mining work and accumulates knowledge for standardization. Standard methods and procedures make the entire data mining process easier for different types of users. As a result, the number of data mining users will grow and more knowledge will be discovered from data.
4. Management Platform for Knowledge from Data Mining

Data mining is supported by many data sources. Besides business information systems, the World Wide Web has become a new data source: billions of pages are publicly available, and their number is still growing exponentially. Web mining and text mining have been recognized as important branches of data mining; they can discover knowledge from huge amounts of web pages, including web documents, hyperlinks between documents, usage logs of web sites, etc. (Srivastava, Desikan and Kumar, 2003). A framework of knowledge management with web mining and text mining is outlined in Fig. 6. This model combines web mining, text mining and human experience knowledge. Data mining discovers a great deal of knowledge, which can take the form of rules, scoring formulas or models with parameters. Because such knowledge may be rough and impure, it cannot be used directly in applications. This primary knowledge can be transformed into useful knowledge by the Man-Computer Cooperative method (Dai, 2004). All useful knowledge from each part is saved in a database through a knowledge management platform.

Figure 6. Framework of the Different Data Mining Integrations.

Figure 7. Application Model of MCMP-based Data Mining for BI.

This section presents three models that integrate knowledge from the results of data mining. They all employ MCMP and can be used to support business intelligence (BI) in various ways.

4.1. Independent Enterprise Internal Application Model

MCMP-based data mining can be developed as software tools and used independently for knowledge acquisition. As shown in Fig. 7, once raw data (internet data sources, offline network data and business source data, mostly from MIS) are integrated and processed as input for data mining, a classification model can be obtained through several rounds of training and testing to present or visualize the attribute weights. If new data without labels are input, they can first be clustered and then classified into a useful labeled list. Both the resulting classifier and the classification results are saved in a knowledge base through the knowledge management platform. All business units benefit from them in the processes of planning, execution and monitoring. The benefit, in turn, generates more high-quality data for the next cycle of knowledge discovery.
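The flow of Fig. 7 for unlabeled data (cluster first, then classify, then save the classifier and the labeled list into the knowledge base) can be sketched as follows. The stand-in linear model, its weights and cutoff, and the data are invented for illustration, and plain 2-means clustering stands in for whatever clustering method an actual deployment would use.

```python
import numpy as np

w, cutoff = np.array([0.4, 0.6]), 1.0       # stand-in for a trained MCMP model

def label_new_data(X, iters=10):
    """Cluster unlabeled records (2-means), classify each cluster centroid
    with the trained model, and return the labeled list plus a KB entry."""
    centroids = np.array([X[0], X[-1]])      # deterministic 2-means init
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        centroids = np.array([X[assign == j].mean(axis=0) for j in range(2)])
    names = {j: ("class_B" if centroids[j] @ w >= cutoff else "class_A")
             for j in range(2)}              # classify each cluster centroid
    labels = [names[j] for j in assign]
    kb_entry = {"classifier": w.tolist(), "labeled_list": labels}
    return labels, kb_entry                  # kb_entry goes to the knowledge base

X_new = np.array([[0.2, 0.1], [0.3, 0.4], [3.1, 2.8], [2.9, 3.2]])
labels, kb_entry = label_new_data(X_new)
```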
Figure 8. Component-based Internal Application Model.
This model was recently implemented at a well-known Chinese web service corporation for email user churn analysis. The behaviours of the email users include receiving letters, sending letters, logging on, payment and complaints. An MCMP scoring system scored the customers based on their behaviour: those with high scores were taken as ordinary customers, while those with low scores were taken as churn customers. With a reasonable explanation of the model, we helped the enterprise improve its customer relationships and reduce customer churn (Nie et al., 2006).

4.2. Component-Based Internal Application Model

If the classification model proves efficient in the application of the above model, it can be developed as a business software component and integrated into business operation systems. Its working process is shown in Fig. 8. Here, the classification model in the business working process is combined with a data extraction component. As soon as the necessary data are created, they can be processed as input for the model. The model component then classifies the records and produces real-time alerts, control information, knowledge or real-time suggestions for real-time management tasks such as alerting, monitoring, decision making and other wise actions. Through such a knowledge management platform, all business units can work more wisely. This also makes just-in-time knowledge management possible.

4.3. Web-Based Application Service Model

An application service provider (ASP) is a firm that offers individuals or enterprises access over the Internet to applications and related services that would otherwise have to be located on their own personal computers and enterprise servers. The ASP is responsible for managing the application system, from building and maintenance to upgrades.
As the Internet revolution pushes society from the global economy environment into the e-Business era, the ASP mode may redirect the future development of data mining applications.
Figure 9. Web-based Application Service Model.
In this model, shown in Fig. 9, the MCMP-based data mining software can be distributed on the Internet and offered as an application service to users outside the enterprise. Through a web page, users can register, make payments, and log in with a user id and password. Users can then extract data from a business database, form the needed training and testing data sets, and finally obtain the best classifier and classification results, which can be saved on their local machines for further use according to their authority. This model makes data mining tools available to many users, such as researchers and practitioners, over a grid computing environment. It will be especially useful for middle-sized and small businesses that cannot afford expensive data mining software (Li et al., 2006a).
5. A Framework of Intelligent Knowledge

Recall from Fig. 1 that intelligent knowledge is produced at the intersection of human knowledge (Section 3) and useful knowledge based on data mining (Sections 4 and 5). This section discusses a framework for building intelligent knowledge.

5.1. Intelligent Knowledge

By intelligent knowledge (IK), we mean a new form of knowledge with a series of intelligent features such as memory, recognition, updating and reasoning. Its characteristics, compared with human knowledge, are shown in Table 1.
Table 1. Comparison of Primary Knowledge, Intelligent Knowledge and Human Knowledge

Features     | Human knowledge            | Primary knowledge                  | Intelligent Knowledge
Resource     | Human beings               | Data mining or other data analysis | Data mining or other data analysis with human auditing
Character    | Subjective and qualitative | Objective and quantitative         | Qualitative and quantitative
Memory       | Brain                      | CPU and RAM                        | Knowledge base
Recognition  | Social                     | Used passively                     | Serves users proactively
Updating     | Individually               | Manually                           | Automatically
Reasoning    | Neurons                    | Cannot reason                      | Reasoning in the KM system
IK consists of two parts: the knowledge entity and its meta-data. The knowledge entity is the knowledge itself, while the meta-data is information describing the entity, such as which data set it came from, when it was created, who has used it, and its score. A number of available methods can be used to express the structure of IK. For example, the matter-element method (Cai, 1999) is one carrier. The matter-element is the basis of Extension Theory, initiated by Prof. Wen Cai in 1976. Extension Theory is a discipline that studies the extensibility of events and the laws and methods of exploitation and innovation, in order to solve all kinds of contradiction problems in the real world with formalized models (Cai, Yang, et al., 2005). It establishes the matter-element, affair-element and relation-element to describe matter, affairs and relations, respectively. From the viewpoint of matter-element analysis in Extension Theory, IK can be described as follows:
\[
IK = \begin{bmatrix}
O_{IK} & \text{knowledge entity}, & v_1 \\
       & \text{data set},         & v_2 \\
       & \text{birthday},         & v_3 \\
       & \text{method type},      & v_4 \\
       & \text{auditing score},   & v_5 \\
       & \text{last used time},   & v_6 \\
       & \ldots,                  & v_7
\end{bmatrix}
= \begin{bmatrix}
O_{IK} & c_1, & v_1 \\
       & c_2, & v_2 \\
       & c_3, & v_3 \\
       & c_4, & v_4 \\
       & c_5, & v_5 \\
       & c_6, & v_6 \\
       & c_7, & v_7
\end{bmatrix} \qquad (1)
\]
Here IK represents an item of intelligent knowledge and O_IK is its object name. Knowledge entity, data set, birthday, method type, auditing score and last used time are its attributes, and v_i (i = 1, ..., 7) are their values. The knowledge entity can itself be a combined matter-element consisting of a series of attributes and their values, or even an affair-element or relation-element. Apart from the knowledge entity, the remaining attributes (data set, birthday, method type, auditing score, last used time, etc.) form the meta-data of IK. The extensibility of the matter-element points out possible ways to solve problems, so ideas and strategies can be described by matter-element transformations and their combinations (Cai, 1999). Once IK has been expressed as a matter-element, a series of particular extension techniques can be applied to it, such as the matter-element extension method, the matter-element transformation method and the optimal appraisal method based on Extension Theory. The investigation of alternative methods to express how IK can be generated is an on-going research project.
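As a data structure, the matter-element of Eq. (1) amounts to a record holding the knowledge entity plus meta-data attributes c2 to c7. The Python rendering below is our own illustration of that structure, not part of the extension-theory formalism; the example values are invented.

```python
from dataclasses import dataclass, field

@dataclass
class IntelligentKnowledge:
    """Matter-element of Eq. (1): object name O_IK, knowledge entity (c1),
    and meta-data attributes c2..c7 with their values v2..v7."""
    name: str                      # O_IK, the object name
    entity: object                 # c1: the knowledge itself (rule, model, ...)
    dataset: str = ""              # c2: data set it was mined from
    birthday: str = ""             # c3: when it was created
    method_type: str = ""          # c4: e.g. "MCMP classification"
    auditing_score: float = 0.0    # c5: expert's audit score
    last_used: str = ""            # c6: last used time
    history: list = field(default_factory=list)   # c7...: open-ended meta-data

ik = IntelligentKnowledge(
    name="churn_rule_01",
    entity={"rule": "complaints > 1 -> churn"},
    dataset="email_users_2006", birthday="2006-12-01",
    method_type="MCMP classification", auditing_score=0.8)
```

The `entity` field is deliberately untyped: as the text notes, it may itself be a combined matter-element, an affair-element or a relation-element.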
Figure 10. Intelligent Knowledge Management Platform.
5.2. Framework for an Intelligent Knowledge Management System

After Intelligent Knowledge (IK) is born, it needs a living environment. An intelligent knowledge management system (IKMS) can therefore be built as a platform on which IK spends its life. During its life cycle, IK can learn, grow, update, struggle, grow older and finally die; under the IKMS it matures into useful knowledge. The IKMS platform is shown in Fig. 10. The IKMS has many functions, including knowledge storage, retrieval, auditing, scoring, identification, deduction, updating and deletion. IK relies on and grows within the IKMS. The platform, similar to the human nervous system, has intelligent cells, neurons, and a nerve center made up of a series of sensors and other components. It is closely combined with the company's operation management and brings all business systems together into a uniform knowledge management system. When a piece of intelligent knowledge is retrieved and used, the system remembers when, where, by whom and for what it was used. After repeated use, it accumulates ever more meta-data and can therefore offer itself to the users who will need it next time. This is similar to just-in-time knowledge management (Conteh and Forgionne, 2003). Joined with operation systems such as planning and action measures, key responsibilities, achievement tracking and performance evaluation, the IKMS can not only collect human knowledge from information systems, but also deduce new knowledge (child knowledge) when human knowledge is combined with IK. One piece of IK works together with other IK to produce new useful knowledge through reasoning. As time goes by and useful knowledge becomes obsolete, the system knows which knowledge is older and processes knowledge conflicts automatically. Through the IKMS, IK can be delivered to the right person at the right time to standardize systematic management in an organization.
IK lives its whole life in the IKMS: it is evaluated, filtered and updated, creates new child knowledge, and finally dies. To fulfil these functions efficiently, the principles of data mining and knowledge management, and the relations between them, must be researched deeply alongside the development of software components. The research structure is shown in Fig. 11.
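The life-cycle behaviour described above (usage tracking on retrieval, and automatic retirement of knowledge that has aged past a validity horizon) can be sketched as follows; the retention policy, field names and data are illustrative assumptions on our part, not the authors' design.

```python
import datetime as dt

class IKMS:
    """Sketch of the platform: retrieval updates meta-data, and stale
    knowledge is retired automatically (the 'dying' of old IK)."""
    def __init__(self, max_age_days=365):
        self.items = {}                      # name -> IK record
        self.max_age = dt.timedelta(days=max_age_days)

    def store(self, name, entity, born):
        self.items[name] = {"entity": entity, "born": born,
                            "uses": 0, "last_used": None}

    def retrieve(self, name, who, when):     # remembers who used it and when
        rec = self.items[name]
        rec["uses"] += 1
        rec["last_used"] = (who, when)
        return rec["entity"]

    def retire_stale(self, today):           # age-based conflict handling
        dead = [n for n, r in self.items.items()
                if today - r["born"] > self.max_age]
        for n in dead:
            del self.items[n]
        return dead

ikms = IKMS(max_age_days=365)
ikms.store("rule_a", "score >= 2 -> ordinary", dt.date(2005, 1, 1))
ikms.store("rule_b", "complaints -> churn", dt.date(2006, 11, 1))
ikms.retrieve("rule_b", who="analyst", when=dt.date(2006, 12, 1))
retired = ikms.retire_stale(dt.date(2006, 12, 31))   # rule_a is over a year old
```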
Figure 11. Research Framework for Intelligent Knowledge.
Figure 12. Integrated Knowledge Management Platform for Business Wisdom.
The management platform for IK is the basic environment for the birth and life of IK, while the software components are the bridge from theoretical study to real-world applications. Once the components and the system have been developed, IK can be combined with operation systems to help every unit make the right decisions, moving gradually from business intelligence to business wisdom. In the business wisdom stage, human knowledge, primary knowledge and data mining are integrated into a unified knowledge management platform by a series of Man-Computer Cooperative processes. IK is produced and saved in an intelligent knowledge base, which is the source of business wisdom. Through the platform, managers can see a more comprehensive picture of their business, predict the future and enhance the company's operations. The structure of this platform is outlined in Fig. 12.
6. Conclusions

In this paper, we have analyzed the new forms of knowledge arising from the rapid development of data technology, such as data mining and the growth of the Internet. Without Man-Computer Cooperative methods, however, it is difficult to elevate data mining knowledge to wisdom. Recognizing that knowledge is the basis of wisdom, this paper has proposed a framework of management platforms for the knowledge of human beings, knowledge of data mining and knowledge from data mining. To utilize knowledge from data mining efficiently, we have discussed a knowledge classification that includes a new notion: Intelligent Knowledge. IK develops on a knowledge management platform in an Internet environment. Such a knowledge management system, supported by a Man-Computer Cooperative method, can effectively help users obtain useful knowledge. Meanwhile, IK can manage itself within the knowledge management platform so that users can find and utilize knowledge efficiently. Through IK, an organization may gradually reach a stage of business wisdom. We invite all interested scholars to join us in further research on this project: how to lead knowledge to wisdom, how to make the Man-Computer Cooperative method work more effectively, how to make intelligent knowledge deduce or reason new useful knowledge, how to judge the age of intelligent knowledge, and how to handle dead knowledge for future use. This research will greatly help the development of business intelligence through data mining technology.
Acknowledgements This research has been partially supported by grants from National Natural Science Foundation of China (#70621001, #70531040, #70501030, #70472074), National Natural Science Foundation of Beijing #9073020, 973 Project #2004CB720103, National Technology Support Program #2006BAF01A02, Ministry of Science and Technology, China, and BHP Billiton Co., Australia.
References

[1] M. Alavi and D.E. Leidner, Review: Knowledge management and knowledge management systems: Conceptual foundations and research issues, MIS Quarterly, Vol. 25, No. 1, (2001), 107–136.
[2] J. Biggam, Defining Knowledge: An Epistemological Foundation for Knowledge Management, Proceedings of the 34th Hawaii International Conference on Systems Sciences, Maui, Hawaii, USA: IEEE, (2001), 198–208.
[3] W. Cai, Extension theory and its application, Chinese Science Bulletin, Vol. 44, No. 17, (1999), 1538–1548.
[4] W. Cai, C. Yang, et al., A New Cross Discipline – Extenics, Science Foundation in China, Vol. 13, No. 1, (2005), 55–61.
[5] R.W. Dai, Man-Computer Cooperative Intelligent Science and Intelligent Technology, Engineering Science, Vol. 6, No. 5, (2004), 24–28.
[6] T. Dasu, T. Gregg, J. Vesonder and Wright, Data quality through knowledge engineering, Proceedings of the ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, D.C., (2003), 705–710.
[7] E.M. Pierce, A Progress Report from MIT Information Quality Conference, http://www.Iqconference.org, 2003.
[8] N. Freed and F. Glover, Simple but powerful goal programming models for discriminant problems, European Journal of Operational Research, No. 7, (1981), 44–60.
[9] H. Galhardas, D. Florescu, D. Shasha, et al., Declarative data cleaning: language, model and algorithms, Proceedings of the 27th International Conference on Very Large Data Bases, Roma: Morgan Kaufmann, (2001), 371–380.
[10] J. Han and M. Kamber, Data Mining: Concepts and Techniques (2nd ed.), Morgan Kaufmann, 2006.
[11] M.A. Hernandez and S.J. Stolfo, Real-world data is dirty: data cleansing and the merge/purge problem, Data Mining and Knowledge Discovery, Vol. 2, No. 1, (1998), 9–37.
[12] T. Johnson and T. Dasu, Data quality and data cleaning: an overview, Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data, ACM Press, San Diego, California, (2003), 681–681.
[13] G. Kou, X. Liu, Y. Peng, Y. Shi, et al., Multiple criteria linear programming approach to data mining: models, algorithm designs and software development, Optimization Methods and Software, Vol. 18, No. 4, (2003), 453–473.
[14] M.L. Lee, T.W. Ling, H.J. Lu, et al., Cleansing data for mining and warehousing, in: T. Bench-Capon, G. Soda and A.M. Tjoa (Eds.), Database and Expert Systems Applications, Florence: Springer, (1999), 751–760.
[15] J. Li and X. Li, A Data Mining Solution for Small & Medium Business, Proceedings of the Eighth West Lake International Conference on SMB, Hangzhou, P.R. China, (2006), 986–991.
[16] X. Li, Y. Shi and A.H. Li, Application Study on Enterprise Data Mining Solution Based on Extension Set, Journal of Harbin Institute of Technology, Vol. 38, No. 7, (2006), 1124–1128 (in Chinese).
[17] X. Li, Y. Liu, J. Li, et al. (2006a), A Knowledge Management Model for Middle and Small Enterprises, 2006 International Conference on Distributed Computing and Applications for Business, Engineering and Sciences (DCABES2006) proceedings, Hangzhou, China, Oct. 2006, 929–934.
[18] X. Li, Y. Shi, Y. Liu, J. Li and A.H. Li (2006b), A Knowledge Management Platform for Optimization-based Data Mining, Optimization-based Data Mining Techniques with Applications workshop at the Sixth IEEE International Conference on Data Mining, Hong Kong, China, Dec. 2006.
[19] Making Knowledge Work: http://www.trecc.org/newslink/0411knowledge.php.
[20] M. Zeleny, From Knowledge to Wisdom: On Being Informed and Knowledgeable, Becoming Wise and Ethical, International Journal of Information Technology & Decision Making, Vol. 5, No. 4, (2006), 1–12.
[21] Y. Nabie and G.F. Conteh, Intelligent decision making support through just-in-time knowledge management, Lecture Notes in Computer Science, Vol. 2774, (2003), 101–106.
[22] G.L. Nie, L.L. Zhang, X.S. Li and Y. Shi, The Analysis on the Customers Churn of Charge Email Based on Data Mining: Take One Internet Company for Example, Optimization-based Data Mining Techniques with Applications workshop at the Sixth IEEE International Conference on Data Mining, Hong Kong, China, Dec. 2006.
[23] I. Nonaka, A Dynamic Theory of Organizational Knowledge Creation, Organization Science, Vol. 5, No. 1, (1994), 14–37.
[24] D. Olson and Y. Shi, Introduction to Business Data Mining, McGraw-Hill, 2007.
[25] Y. Shi, Humancasting: A Fundamental Method to Overcome User Information Overload, Information, Vol. 3, (2000), 127–143.
[26] Y. Shi, M. Wise, M. Luo and Y. Lin, Data mining in credit card portfolio management: a multiple criteria decision making approach, in: M. Koksalan and S. Zionts (Eds.), Multiple Criteria Decision Making in the New Millennium, Springer, Berlin, (2001), 427–436.
[27] Y. Shi, Y. Peng, X. Xu, et al., Data mining via multiple criteria linear programming: applications in credit card portfolio management, International Journal of Information Technology and Decision Making, No. 1, (2002), 145–166.
[28] Y. Shi, Data mining, in: M. Zeleny (Ed.), IEBM Handbook of Information Technology in Business, International Thomson Publishing, England, 2002.
[29] J. Srivastava, P. Desikan and V. Kumar, Web Mining: Concepts, Applications and Research Directions, in: Data Mining: Next Generation Challenges and Future Directions, AAAI/MIT Press, Boston, MA, 2003.
[30] A.H. Tan and P.S. Yu, Text and Web Mining, Applied Intelligence, Vol. 18, No. 3, (2003), 239–241.
Advances in Multiple Criteria Decision Making and Human Systems Management Y. Shi et al. (Eds.) IOS Press, 2007 © 2007 The authors. All rights reserved.
Continuous Innovation Process and Knowledge Management Ján KOŠTURIAK and Róbert DEBNÁR
Abstract. In the global era of hypercompetition, telecommunications and accelerated information/knowledge sharing, the innovation process has to become a systemic property of a company and its organization. All individuals work in and are part of some key corporate processes; all of these processes are subject to both continuous and often discontinuous improvement. That means that continuous (quantitative) and discontinuous (qualitative) innovation drives must be embodied in each individual and embedded in the system of their daily interaction and work. An effective and company-wide innovation cycle is the prerequisite for maintaining strategic competitiveness in a fast-moving, turbulent era. The whole company, with all its employees, whether production or service oriented, must become an Innovation Factory.
Professor Milan Zeleny, Innovations 2005
1. Market Changes and New Business Concepts

After many years of streamlining business and manufacturing processes, many managers are now asking how to increase the competitiveness of their companies. Many cost reduction strategies have led only to temporary success: the old problems return and the improvement potential shrinks, like the yo-yo effect in a slimming programme. New buzzwords, medical cures, healers and medicines keep coming. BPR, BSC, Lean, Six Sigma, TOC and other new miraculous methods are applied, sometimes successfully, sometimes not. In recent years a new wave has arrived: innovation and knowledge management.

Henry Ford kicked off the trend when he stated that customers could order their cars in "any colour, as long as it is black". His focus was to filter variability out of the production process and increase efficiency. Product variety was limited, complexity was reduced and customers were able to buy only the cars he produced; for a while, they bought them. Now, automotive businesses around the world are being influenced by customer demands for both greater product variety and reduced delivery lead times. This poses a dilemma for the industry, because responsive delivery is usually based on standardisation, whereas product variety requires flexibility and innovation. Individualization of the markets and mass customization have also influenced many projects in the European automotive industry. Automotive companies are preparing radical changes in the whole supply and production network, from "stock push" and "mass production" thinking to a stockless "build-to-order" (BTO) production strategy. This will require the re-invention of the complete automotive value stream, from the material producers to the end consumers of the cars, through a cost-optimized system delivering what the customer really wants without delay.

J. Košturiak and R. Debnár / Continuous Innovation Process and Knowledge Management

Figure 1. Mass Customization – the engine of business development in the recent years.

Figure 2. Five Day Car Concept – Project ILIPT.

Within the framework of the "EU 5-Day Car Initiative", the Integrated Project "Intelligent Logistics for Innovative Product Technologies – ILIPT" focuses on the following:
1. Product configuration for build-to-order supply chains, addressing new product technologies with the corresponding tools and management methods.
2. New concepts in delivering flexible production networks, addressing collaboration across complete value streams and the interoperability of these processes.
3. Novel methods and tools to assess and validate this radical business model for the European automotive industry.
This stockless vehicle supply system, able to deliver a customer-ordered vehicle in 5 days, is based on a radically new concept involving a tremendous level of modularity, new joining methods and novel integration approaches. The concept aims at a groundbreaking renewal of current thinking, away from the traditional concept of supply chains toward high-added-value networks (Fig. 2).
Figure 3. The main business concepts – Lean Six Sigma and TOC.
Is the "5 Day Car Concept" really so important for the customers? How many customers cannot wait more than 5 days for their car? A typical customer sometimes needs a few weeks or months to make his final decision; but once he has definitely decided, he wants to have "his" product immediately. The producers able to satisfy him will win. And this is not only a question of customer satisfaction: some 300 billion euros are tied up in the supply chain network of the automotive industry. Flexibility and short delivery times also mean better cash flow and higher profitability for automotive companies. This example is not exceptional. Similar change processes are running in many other businesses under the slogan "give your customer what he wants – but faster than your competitors". The essential question is: what does the customer really want? What is the customer value? There are three fundamental business concepts focused on customer value:
1. Lean Management
2. Theory of Constraints
3. Six Sigma (Fig. 3)
Over the last decade, many companies have tried to copy Toyota's principles. They apply methods for eliminating waste from production and business processes, and they compare benchmark indicators such as the value added index or working hours per product. But the essence of Toyota's excellence is not captured in "common sense" methods like 5S, Kanban, value stream management or manufacturing cells. Toyota has been developing its system consistently for over 50 years. Toyota has developed a system that creates reusable knowledge, maintains it, and leverages its use in the future. No Toyota employee wrote a handbook of the Toyota Production System; that is the business of other management gurus. The values and principles of the Toyota Production System are developed in the minds and daily jobs of all the employees. All the knowledge gained throughout the design or production process, what works and what doesn't, is captured and consistently applied to all future projects. Toyota doesn't call its system "lean", but it is lean; Toyota doesn't speak about knowledge management, but it does it!

The lean concept, which originated at Toyota, is oriented towards identifying and eliminating waste from the whole process chain (Value Stream Management). In other words, the lean focus is the maximisation of added value in all production, logistical, administrative and development processes (Fig. 4).

Figure 4. Lean Concept.

TOC (Theory of Constraints) is based on the identification and elimination of the system's constraints, with the goal of ongoing throughput improvement. Throughput is defined as the rate at which the organisation generates money through sales; in other words, throughput is the added value in the process chain per time unit. The Six Sigma philosophy specifies value in the eyes of the customer (the voice of the customer) and identifies and eliminates variation from the value stream. Six Sigma, Lean and TOC continuously improve knowledge in pursuit of perfection and involve and empower the employees. The main problem of these important business concepts is that they have tools to give the customer exactly what he wants (without waste and quickly), but they lack a systematic approach for creating new value for him.

Many companies are oriented towards low cost strategies. But various cost attack programmes and transfers of production facilities to low cost countries have shown that this is not the right strategic solution. In recent years many West European and US manufacturing firms have moved their production plants to low cost countries. Over time, they recognised that they had lost some competitive advantages, because some departments were physically separated (e.g. product design, production engineering, production, logistics) and the communication and co-operation between them was limited. Cultural differences also reduced the effects of the low cost location. Even massive implementation of lean management, Six Sigma or other world class concepts does not always bring radical improvement. Company success lies not only in the optimisation of current processes (doing the right things right) but first of all in innovation (looking for the new, as fast as possible). The world of productivity will be replaced by the world of creativity; the world of perfect planning will be replaced by the world of experiments and the generation of new ideas and opportunities. Not perfect planning of change, but fast realisation of change, is the way to success.
2. Innovations and Customer Value The purpose of innovation is to add value. The purpose of good innovation is to eliminate tradeoffs (M. Zeleny, 2000).
Figure 5. Changing of markets and competitive factors.
“Innovation is back at the top of the corporate agenda. Never a fad, but always in or out of fashion, innovation gets rediscovered as a growth enabler every half-dozen years.”, wrote R. Moss Kanter in Harvard Business Review 11/2006. “A focus on cost-cutting and efficiency has helped many organizations weather the downturn, but this approach will ultimately render them obsolete. Only the constant pursuit of innovation can ensure long-term success.”, wrote Daniel Muzyka from University of British Columbia in Financial Times, 09.17.2004. The father´s world of the business has been changed radically in the recent years. The old world of compromises (e.g. quality OR price, customisation OR delivery time) has been replaced by the new world where the tradeoffs are not accepted. When you have two options – take both! This is the new rule of success on the market. “The basic economic resource is no longer capital, nor natural resources, nor labor. It is and will be knowledge.”, said Peter Drucker. M. Zeleny defines innovation as the change in the hardware, software, brainware or support network of a product, system or process that increases the value for the user or customer. From this definition it should become clear that not every invention (a discontinuous, qualitative change) is an innovation, and so not every improvement (a continuous, quantitative change) is an innovation. Innovation adds value – claims Zeleny. “To grow, companies need to break out of a vicious cycle of competitive benchmarking and imitation.”, say the authors of Blue Ocean bestseller – W. Chan Kim and R. Mauborgne. Customer value distinguishes the innovation from the simple change. But the innovation is not to be only a breakthrough technical solution. Generation of technical changes on the product or technological advatage in the producation process have not necessarily led to success. Many companies have a perfect product, produced by an excellent technology. 
They have only one limitation – customers don’t buy the products, because they see no reason to buy them. The companies did not find the customer value. Innovation must generate “something new” for the customer’s life – simplicity, risk elimination, convenience, better price, fun, image and emotions, style or environmental friendliness.
J. Košturiak and R. Debnár / Continuous Innovation Process and Knowledge Management
Table 1.

  Customer Benefits ↑ (higher)         Costs (constant)
  Customer Benefits (constant)         Costs ↓ (lower)
  Customer Benefits ↑ (higher)         Costs ↓ (lower)
  Customer Benefits ↑↑ (2× higher)     Costs ↑ (higher)
  Customer Benefits ↓ (lower)          Costs ↓↓ (2× lower)
Figure 6. Customer Value Improvement.
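Reading customer value as the ratio of benefits to costs, each row of Table 1 raises the ratio in a different way. A minimal sketch with illustrative numbers (the figures are invented for demonstration, not taken from the paper):

```python
# Customer value understood as the ratio of benefits to costs; each strategy in
# Table 1 improves the ratio in a different way (numbers are illustrative).
def value(benefit, cost):
    return benefit / cost

base = value(100, 50)  # baseline value ratio: 2.0
strategies = {
    "higher benefit, constant cost":    value(120, 50),
    "constant benefit, lower cost":     value(100, 40),
    "higher benefit, lower cost":       value(120, 40),
    "much higher benefit, higher cost": value(200, 60),
    "lower benefit, much lower cost":   value(80, 25),
}
# Every strategy in the table yields a better benefit/cost ratio than the baseline.
assert all(v > base for v in strategies.values())
```

Even the two rows that worsen one side (higher cost, or lower benefit) still increase value overall, which is the table's point.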
There are four basic areas for customer value creation.
Figure 7. Four Areas of Customer Value Creation.
The new customer value can be generated by:
− New value
− Different value
− Higher value

D. Mann defines two ways of thinking regarding innovations:

Table 2.

  Trade-Off Thinking              Breakthrough Thinking
  High Quality OR Low Cost        High Quality AND Low Cost
  Affordable OR Customized        Affordable AND Customized
  First Cost OR Life Cycle Cost   First Cost AND Life Cycle Cost
  Flexible OR Rigid               Flexible AND Rigid
  Big OR Small                    Big AND Small
  Adaptor OR Innovator            Adaptor AND Innovator
  A OR B                          A AND B
All systems contain contradictions – something gets worse as something else gets better (e.g. strength versus weight). The traditional approach usually accepts a compromise or trade-off, but this is often not necessary. Powerful, breakthrough solutions are the ones that do not accept the trade-offs. Such solutions focus actively on the contradictions and look for ways to eliminate the compromise.
Figure 8. Overcoming tradeoffs through contradictions (Linde).
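The distinction between trade-off and breakthrough thinking in Table 2 can be caricatured in a few lines of code; the attribute names, scores, and threshold below are invented for illustration:

```python
# Trade-off vs. breakthrough thinking (Table 2) as a toy model: a design scores
# on several requirements; a compromise trades one requirement against another,
# while a breakthrough solution must satisfy all requirements at once.
designs = {
    "cheap but low quality":  {"quality": 3, "affordability": 9},
    "premium but expensive":  {"quality": 9, "affordability": 3},
    "contradiction resolved": {"quality": 8, "affordability": 8},
}

def is_breakthrough(design, threshold=7):
    # "High Quality AND Low Cost": every requirement must clear the bar.
    return all(score >= threshold for score in design.values())

winners = [name for name, d in designs.items() if is_breakthrough(d)]
print(winners)
```

Only the design that eliminates the contradiction, rather than averaging over it, passes the AND test.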
The WOIS approach developed by H. Linde has been successfully used in breakthrough product, process and business innovations in many companies (e.g. BMW, Braun, Hilti, Viking). The main elements of the WOIS innovation methodology are:
1. Definition of the strategic orientation.
2. Definition of contradictions. Answers to the questions – What and Why?
3. Solution of contradictions (46 innovation principles, technical and physical contradictions, solution maps, laws of evolution, bionics). Answers to the question – How?
4. Concurrent innovations in product, processes, organisation, resources and marketing.
5. Implementation and evaluation.

The basic conditions and principles of successful innovation using WOIS are:
− The innovation project starts with deep analyses – market analysis, product trends, analysis of technological trends, process analysis, analysis of production and assembly trends, trends in sales and service systems, analysis of the product as a system and its environment, analysis of system functions, analysis of existing solutions (patents, competitive solutions, solutions from other areas, generation of solution maps, benchmarking), and analysis of system generations and evolution.
− An integrated, team-based design and development process – the marketing concept, product and process are designed by the same multifunctional team (marketing, design, process planning, production, logistics, controlling, customer).
− Use of the knowledge of system evolution and system generations – a strong orientation on past and future development trends. Not only new products or processes are created; knowledge and a strong learning effect are also generated through the innovation process.
− A culture of creativity, acceptance of failures, and space for experiments, prototypes and testing new ideas.
Figure 9. Contradiction based innovation strategy WOIS (Linde).
Figure 10. Innovation project – hospital bed.
Example Project

  Project:           Innovation of hospital bed
  Project duration:  6 months
  Project team:      designer, production engineer, external consultant, logistics expert, process engineer, service, marketing, customer
  Project inputs:    target price, target markets, product life cycle, production volume
  Project goals:     new product with higher customer value (new functions, better parameters, lower costs)
  Project steps:     Fig. 10

Different markets and market segments were analysed, and five important customer groups and their requirements were identified (interviews, analyses and observations in the hospitals) – Fig. 11. From the customer requirements the design contradictions were defined, and the evolution trends and new solution alternatives were generated (Figs 12–14).

Project Results

  Product sale increase:      +20%
  Number of parts:            –15%
  Production costs:           –30%
  Production time reduction:  –40%
  New functions:              +10%
Figure 11. Market and customer analyses.
Figure 12. Design Contradiction Matrix.
Figure 13. Evolution trend analysis.
Figure 14. Solution concepts.
3. Knowledge Management and Corporate Potentials

Innovation adds value through knowledge. Knowledge management is a set of processes, policies, and tools that links the knowledge of employees to new sources of value (products, services, processes) in order to create innovative solutions. Some stakeholders and managers focus only on results, not on the analysis, systematic measurement and improvement of corporate potentials. The biggest competitive advantage does not reside in manufacturing or information technologies, but in the ability to manage the company’s potentials in four areas:
1. Mental – corporate strategy
2. Physical – processes and resources
3. Emotional – people development and knowledge management
4. Spiritual – corporate culture

Figure 15. Two functions and four potentials in a company.
Each company has two basic functions:
1. Production and development of products and services – this is the prerequisite for earning money, making a profit and growing the company.
2. Self-reproduction – creating knowledge and developing people – this is the prerequisite for mastering function 1 in the long term.
The difference between an excellent and a good company is not in the machines, the software or the organisational structure. The difference is in the co-ware – cooperation, creation and dissemination of knowledge throughout the company (Fig. 16). Companies should be able to answer the following important questions regarding knowledge management:
1. How to attract and keep the best talents and individuals?
2. How to share, communicate and develop the best corporate practices in the organisation?
3. How to transfer knowledge between employees on projects and actions in the company?
4. How to increase and measure knowledge?
5. How to turn knowledge into innovation as fast as possible?
Figure 16. New competitive factors in an innovation age (Linde).
Table 3.

  Yesterday                                         | Tomorrow
  Corporate strategy: Productivity                  | Innovation
  Corporate processes: Standardisation              | Improvement
  Change management focus: Best practices, benchmarking, increase customer value | New practices – Blue Ocean, create new customer value
  Employees: Focus on the employee’s muscles (performance – physical intelligence) and brains (kaizen – mental intelligence) | Focus on the employee’s heart (self-motivation – emotional intelligence) and soul (morals and ethics – soul intelligence)
  Competitive factors: Hardware, software           | Brainware, co-ware
  Corporate culture: No-mistake-and-error culture   | Culture of trials and experiments
  Intercorporate relationships: Competition, fight  | Co-operation, partnership
  Improvement concepts: Lean Manufacturing, Six Sigma, TOC | Systematic Innovation, Lean Product Development
  Innovation focus: Product and process innovation  | Business and thinking innovation
  Management focus: Quality, productivity, flexibility | Innovation and knowledge management
  Management principles: Management by objectives, process and project management | Management by opportunities, company as a living organism
Conclusion

New paradigms are emerging at the beginning of the 21st century. Companies able to use these opportunities will have a better chance to survive.

References

[1] ILIPT: The 5 Day Car Initiative. EU Research Project, 2006.
[2] Kennedy, M.: Set Based Thinking. Achieving the Capabilities of Toyota’s Product Development System. Targeted Convergence Corporation (TCC), 05/19/06.
[3] Kim, W.C., Mauborgne, R.: “Value Innovation: The Strategic Logic of High Growth,” Harvard Business Review, Jan.–Feb. 1997, 103–112.
[4] Kim, W.C., Mauborgne, R.: “Creating New Market Space,” Harvard Business Review, Jan.–Feb. 1998, 83–93.
[5] Liker, J.K.: The Toyota Way. McGraw-Hill, 2004.
[6] Linde, H., Herr, G., Rehklau, A.: WOIS – Contradiction Oriented Innovation Strategy. WOIS Institut, Coburg, 2005.
[7] Mann, D.: Hands-On Systematic Innovation. Creax Press, 2003.
[8] Peters, T.: Re-Imagine. Dorling Kindersley, 2003.
[9] Zeleny, M.: The Innovation Factory: On the Relationship Between Management Systems, Knowledge Management and Production of Innovations. Innovations 2005, Zilina, 2005.
[10] Zeleny, M.: Human Systems Management: Essays on Knowledge, Management and Systems. World Scientific, 2005.
[11] Zeleny, M.: “Knowledge of Enterprise: Knowledge Management or Knowledge Technology?” in: Governing and Managing Knowledge in Asia, edited by T. Menkhoff, H.-D. Evers, and Y.W. Chang, World Scientific, 2005.
[12] Zeleny, M.: “Elimination of Tradeoffs in Modern Business and Economics,” in: New Frontiers of Decision Making for the Information Technology Era, edited by M. Zeleny and Y. Shi, World Scientific, 2000.
An Exploratory Study of the Effects of Socio-Technical Enablers on Knowledge Sharing Sue Young CHOI, Young Sik KANG and Heeseok LEE Graduate School of Management, Korea Advanced Institute of Science and Technology 207-43, Cheongryangri-dong, Dongdaemun-gu, Seoul, 130-082, Korea Correspondence to: Heeseok Lee, E-mail:
[email protected]

Abstract. Recently, the need for knowledge management has increased drastically as organizations must meet a high level of dynamic, complex business change and uncertainty. In particular, knowledge sharing has been recognized as a critical process through which organizational knowledge can be utilized. For successful knowledge sharing, however, companies need to capitalize on various enablers. In light of this, the objective of this paper is to provide a better understanding of how these enablers affect knowledge sharing intention and behavior. For this purpose, the paper proposes a theoretical framework to investigate these enablers from a socio-technical perspective. A field study involving 164 users reveals that social enablers such as trust and reward mechanisms are more important for facilitating knowledge sharing than technical support in isolation.

Keywords. Knowledge Management; Knowledge Sharing; The Socio-Technical Perspective
1. Introduction

Organizations have recognized that organizational knowledge can play a critical role in responding to the severe competition in today’s knowledge economy. Whether looking for breakthroughs or just trying to improve their processes, organizations will benefit from valuable knowledge assets when judiciously employed. A variety of studies have discussed the importance of knowledge as a primary resource for keeping competitive advantage [7,8,15,27,41]. To manage organizational knowledge, companies have adopted various knowledge management (KM) practices that encourage people’s participation [31]. They are also implementing KMSs (knowledge management systems) to utilize their knowledge. For example, several companies famous for successful knowledge management, such as Andersen Consulting, Buckman Laboratories, and 3M, have fostered knowledge-intensive teams such as communities of practice based on robust KMSs. These companies illustrate the importance of knowledge management by embracing best practice and expertise beyond the organizational unit to resolve business problems and seek new business opportunities [4,27,31]. As a number of organizations have conducted KM practices in real business settings, knowledge sharing has emerged as a subject of great interest to both KM academicians and practitioners. An organization’s ability to leverage its knowledge is highly dependent on its people, who actually share knowledge [25]; that is, a growing interest in
S.Y. Choi et al. / An Exploratory Study of the Effects of Socio-Technical Enablers
knowledge management has made it important to understand how individual members share their knowledge within their groups, across organizational units, and across hierarchical levels. The movement of knowledge across individuals and organizational boundaries is ultimately dependent on employees’ knowledge sharing behaviors [16]. In particular, sharing employees’ expertise and skills provides opportunities for mutual learning and contributes to an organization’s capability to innovate [5,39]. Knowledge sharing among individuals with different domains of expertise can create organizational knowledge beyond what any one individual owns. Individuals who share organizationally relevant information, ideas, suggestions, and expertise with one another are able to jointly create new knowledge [40], which helps convert individual knowledge into economic and competitive value for the organization [22]. Therefore, knowledge sharing at the individual level is a basic step in creating organizational knowledge. Researchers have sought to recommend strategies so that organizations can investigate different ways in which employees can share and leverage their knowledge [15]. Typically, many business leaders tend to believe that KM success depends on the KMS in isolation. However, the most important value comes when their decisions move the people and the organization they lead in the right direction. While much of the KM literature focused heavily on technical issues at the initial stage, the importance of human and social factors has been growing [4,2,21,22,26,37]. To the best of our knowledge, little research has considered both social and technical perspectives with an empirical validation. Therefore, the emphasis of this paper is on developing a conceptual foundation to explain knowledge sharing from the socio-technical perspective.
For this purpose, this paper proposes a model of knowledge sharing enablers and validates it. Correspondingly, the findings can help find ways to capitalize on these enablers to facilitate knowledge sharing.
2. Knowledge Sharing and Related Studies Knowledge sharing is a multi-dimensional activity and thus involves several contextual, cognitive, and communicative skills [19,27,43]. A growing body of research has addressed the enablers which facilitate the willingness to share knowledge from various perspectives. Specifically, researchers in IS and organizational fields have made efforts to theorize and link these enablers to knowledge sharing behaviors. In addition, several studies focus on knowledge sharing from the technical perspective [3,23]. Their emphasis is on providing guidelines for implementing knowledge management systems. In contrast, the studies from the social perspective attempt to investigate cultural or motivational factors. Some of them are summarized as shown in Table 1 and they are further explored as follows. Several studies have found that social factors such as trust, expertise, and rewards are imperative for explaining knowledge sharing behavior. Based on TRA (Theory of Reasoned Action), some of them posited that knowledge sharing intention has strong association with knowledge sharing behavior [8,33,36]. Specifically, strong trust increases good will among employees [1] and plays a critical role in reciprocal transactions based on the social capital theory [41].
Table 1. Summary of the studies on knowledge sharing

  Perspective | Research | Method | Key topic
  Social | Bartol & Srivastava (2002) [7] | Conceptual | The role of reward systems in knowledge sharing
  Social | Ryu et al. (2003) [33] | Empirical | Knowledge sharing based on the TRA model in an expert group (physicians in a hospital)
  Social | Abrams et al. (2003) [1] | Empirical | The role of interpersonal trust within the context of trust
  Social | Widén-Wulff & Ginman (2004) [43] | Conceptual | Antecedents of knowledge sharing based on the social capital dimension
  Social | Bock et al. (2005) [8] | Empirical | Knowledge sharing behaviors from a social-psychological perspective; emphasis on the negative influence of extrinsic reward
  Social | So & Bolloju (2005) [36] | Empirical | Knowledge sharing and reuse based on TRA within the context of IT service operations
  Social | Wasko & Faraj (2005) [41] | Empirical | Knowledge sharing in electronic networks based on social capital theory
  Technical | Hendricks & Vries (1999) [22] | Conceptual | The role of KMS based on two-factor theory
  Technical | Hendricks (1999) [23] | Conceptual | The trade-off of IT systems for knowledge sharing
  Technical | Alavi & Tiwana (2001) [2] | Conceptual | The role of KMS in integrating knowledge within the context of virtual teams
  Technical | Huysman & Wulf (2006) [24] | Conceptual | The role of IT in knowledge sharing based on social capital theory
  Technical | Sherif et al. (2006) [35] | Case study | The role of KMS in building social capital within the context of a global IT company
  Integrated | Pan & Scarbrough (1998) [31] | Empirical | An examination of knowledge sharing at Buckman Labs from a socio-technical perspective
  Integrated | Hall (2001) [21] | Empirical | Strategies for making input-friendly intranets for knowledge sharing
  Integrated | Ipe (2003) [25] | Conceptual | An integrative model of knowledge sharing combining three factors: type of knowledge, motivation to share, and opportunities to share
  Integrated | Yang & Chen (2007) [45] | Empirical | The impact of organizational knowledge capabilities on knowledge sharing behavior
Although some studies suggest that monetary value is indispensable to knowledge sharing, others have argued that tangible rewards in isolation are not sufficient to motivate knowledge sharing among individuals. Most studies propose that intrinsic reward may be more important than extrinsic reward for diffusing knowledge [7,43]. Furthermore, it has even been noted that extrinsic rewards occasionally have a negative effect
on knowledge sharing. For instance, Bock et al. [8] empirically examined this negative effect of monetary rewards on knowledge sharing intention. From the technical perspective, IT may be one of the important factors affecting successful knowledge sharing. Typically, KMS has been used for transferring explicit knowledge to other members of the organization regardless of location and time, and some studies have highlighted its positive effect. KMS can facilitate the efficient sharing of a firm’s intellectual resources. In particular, Huysman and Wulf [24] showed how IT can support knowledge sharing in communities based on the social capital dimension. However, IT may be a friend or a foe depending on the situation. IT tools such as intranets and knowledge bases, which are geared towards codifying knowledge, may not be effective enough [22,23]. Thus, several KMS studies have highlighted the role of social capital in organizations [35]. Hendricks and Vries [22] asserted a shift of focus from technical implementation in isolation to the specific challenges and problems within a knowledge-based organization. From the integrated perspective, researchers began to consider both social and technical factors. For example, Pan and Scarbrough [31] applied this integrated framework to a particular company. Hall [21] investigated knowledge sharing antecedents including intranet systems, critical mass, enabling conditions, and reward systems. Ipe [25] presented a conceptual model incorporating three primary factors: type of knowledge, motivation to share, and opportunities to share. In addition, Yang and Chen [45] examined the impact of organizational capabilities – cultural, structural, human, and technical – on knowledge sharing behavior. However, to the best of our knowledge, there is little empirical evidence for this socio-technical perspective.
3. Research Model

A variety of social and technical enablers facilitate knowledge sharing [7,25]. They interact complementarily to shape knowledge management efforts [10,30,31]. This interaction needs to consider sociability, usability, and the fit between social and technical factors [9]. Socio-technical studies usually focus on the continuous interactions between IT and people during the design, implementation and use of IT systems [24]. Specifically, they adopt a holistic approach that highlights the interplay of social and technical factors in the way people work. An organization should therefore consider both kinds of factors, and dynamic business processes require their synergy; this synergy has been found to result in better performance [14,15]. In a similar vein, this paper proposes the research model shown in Fig. 1. This integrated model helps investigate the various enablers behind knowledge sharing activities. Our model posits that knowledge enablers affect knowledge sharing intention and behavior. It is analyzed at the individual level: understanding knowledge sharing between individuals can lead to a better understanding of knowledge sharing in the organization as a whole.
Figure 1. Research model.
4. Hypothesis Development

Companies embrace organizational characteristics such as culture, structure, or people [27,31,43]. Among these social enablers, trust is difficult to acquire and complex to imitate [17]. Trust is defined as a set of expectations shared by all those in an exchange; it is a multi-faceted concept which can be conceptualized across three dimensions: integrity, benevolence, and competence [29]. In particular, trust has been an important factor for knowledge sharing in the organizational and IS literature [3,21,25]. When employees believe that their relationships are high in trust, they become interested in knowledge exchange and social interaction [28]. Hence, we hypothesize:

H1: Trust is positively associated with knowledge sharing intention.

A reward system can motivate employees to concentrate their efforts on achieving common organizational goals by providing benefits. Rewards range from extrinsic rewards such as bonuses to intrinsic rewards such as praise and public recognition. They can encourage employees to participate in communities of practice and donate their own expertise [41]. Significant changes may be required in the incentive system to encourage individuals to share their knowledge [7,21]. The relative importance of extrinsic and intrinsic rewards has been controversial: even though most previous studies have highlighted the importance of extrinsic rewards, some researchers have suggested that intrinsic rewards are more critical to knowledge sharing [7,8,38]. Extrinsic rewards seem to be relatively easier to acquire than intrinsic rewards [7,41]. To be clear, both extrinsic and intrinsic rewards would enhance knowledge sharing intention. Therefore, we hypothesize:

H2: Intrinsic reward is positively associated with knowledge sharing intention.
H3: Extrinsic reward is positively associated with knowledge sharing intention.
Knowledge needs to be communicated; new knowledge may result from connecting employees through electronic communication tools [24,32]. A KMS is an example of a technical knowledge enabler. It is an IT-based system developed to support
the organizational processes of knowledge creation, storage/retrieval, transfer, and application [2]. In addition, a KMS differs from other information systems because it stores and handles knowledge rather than information and thus requires functions such as knowledge repositories, video conferencing, and search engines. For example, a knowledge repository system plays an important role in sharing explicit knowledge and the outcomes of knowledge transfer [5,35]. Moreover, a KMS can also support complex knowledge sharing activities by enhancing communication among people, and it can encourage employees to share their implicit knowledge in organizations [35]. It may thus foster knowledge sharing in informal interactions within or across teams or work units. For example, such interactions can happen within electronic communities of practice, which are voluntary forums of employees focusing on a topic of interest [35,42]. To encourage knowledge sharing beyond organizational boundaries, a KMS should provide appropriate functions of excellent quality [3,21]. KMS quality is an enhanced construct that originates from system quality in the IS field [44]. Accordingly, we hypothesize:

H4: KMS quality is positively related to knowledge sharing intention.

Willingness to share is related to knowledge sharing behavior in a contingent way [43]. Several studies provide empirical validation that knowledge sharing behavior is mediated by a sharing attitude (for example [33,43]). Even though an employee may be willing to share, he or she may not actually share knowledge. To distinguish these related but distinct concepts, several studies have suggested knowledge sharing models based on TRA [8], and a number of them have reported a strong and significant causal link between behavioral intention and targeted behavior [33,34]. Accordingly, we advance the last hypothesis:

H5: Knowledge sharing intention is positively associated with knowledge sharing behavior.
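The hypothesized structure (H1–H4: enablers → intention; H5: intention → behavior) can be illustrated with a toy estimation on simulated data. The weights, sample size, noise levels, and the OLS shortcut below are all assumptions of this sketch, not the paper's PLS procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # large sample for a stable illustration (the study itself used n = 164)

# Hypothetical "true" path weights for H1-H4 (four enablers -> intention),
# loosely echoing the reported ordering: trust strongest, KMS quality weakest.
enablers = rng.normal(size=(n, 4))       # trust, intrinsic, extrinsic, KMS quality
true_w = np.array([0.4, 0.3, 0.2, 0.05])
intention = enablers @ true_w + 0.5 * rng.normal(size=n)
behavior = 0.7 * intention + 0.5 * rng.normal(size=n)  # H5 path

# OLS estimates of the two structural equations (PLS proper would also model
# the measurement part; this sketch only shows the path logic).
w_hat, *_ = np.linalg.lstsq(enablers, intention, rcond=None)
b_hat, *_ = np.linalg.lstsq(intention[:, None], behavior, rcond=None)
print(w_hat.round(2), b_hat.round(2))
```

With enough data the estimated weights recover the assumed paths, which is the logic the PLS analysis in Section 6 applies to the survey responses.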
5. Research Methodology

The data for this study were collected from firms through the Knowledge Management Research Centre (KMRC) at KAIST (Korea Advanced Institute of Science and Technology). Typically, the corporate members of KMRC have been involved in knowledge management practices for several years. These participating companies have their own KMSs, and they manage communities of practice. This study conducted a survey with two manufacturing companies among these corporate members. Questionnaires were administered to 200 employees in the two companies, and 176 responses were returned (a response rate of 88 percent). Incomplete questionnaires were discarded, leaving an analysis sample of 164 usable responses (82 percent). A high response rate was possible through the support of the companies’ knowledge management teams, which distributed the questionnaires and sent e-mails to encourage participation. Table 2 shows the demographic profile of the respondents.

5.1. Measurement

The survey items were obtained from pre-examined measurements in previous studies. We interviewed team leaders or managers to verify the validity of these items. The
Table 2. Profile of respondents

  Measure          Items       Frequency   Percent (%)
  Work Experience  0–3 yr      63          38.4
                   3–6 yr      52          31.7
                   6–9 yr      18          11.0
                   9–12 yr     7           4.3
                   10– yr      24          14.6
                   Missing     –           –
  Gender           Male        125         76.2
                   Female      22          13.4
                   Missing     17          10.4
  Position         Employee    113         68.9
                   Chief       11          6.7
                   Manager     14          8.5
                   Director    3           1.8
                   Others      23          14.0
Table 3. Operational definition

  Construct | Operational Definition | Reference
  Social Enablers – Trust | Maintaining reciprocal faith in each other in terms of intention and behavior | Abrams et al. [1]; Lee & Choi [27]; Mayer et al. [29]
  Social Enablers – Intrinsic Reward | Reward that cannot be measured by monetary value, such as pride and public recognition | Bartol et al. [7]; Bock et al. [8]; Ipe [25]
  Social Enablers – Extrinsic Reward | Reward that can be measured by monetary value, such as incentives and bonuses | Bartol et al. [7]; Bock et al. [8]
  Technical Enabler – KMS Quality | Perceived quality of the knowledge management system, such as availability, ease of use, stability, and response speed | Alavi & Leidner [4]; Alavi & Tiwana [2]; Wu & Wang [44]
  Knowledge Sharing – Intention | The intention to provide task information, know-how, and feedback regarding a product or procedure | Bock et al. [8]; Ryu et al. [29,33]
  Knowledge Sharing – Behavior | Behavior of providing task information, know-how, and feedback regarding a product or procedure | Bock et al. [8]; Cummings [18]; Ryu et al. [33]
final questionnaire consists of the items for the knowledge enablers and for knowledge sharing intention and behavior, each measured on a seven-point Likert scale. The operational definitions and sources for the constructs are given in Table 3. For further details on the questionnaire, readers can refer to the Appendix.
Table 4. Results of confirmatory factor analysis

  Measure                            Number of Items   Composite Reliability   Average Variance Extracted
  Social Enablers
    Trust (TR)                       4                 0.942                   0.764
    Intrinsic Reward (IR)            4                 0.933                   0.777
    Extrinsic Reward (ER)            4                 0.948                   0.819
  Technical Enabler
    KMS Quality (KSQ)                4                 0.885                   0.658
  Knowledge Sharing
    Knowledge Sharing Intention (KSI) 4                0.924                   0.751
    Knowledge Sharing Behavior (KSB)  5                0.904                   0.703
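The composite reliability and AVE figures reported in Table 4 follow the standard formulas over standardized item loadings. A minimal sketch with hypothetical loadings (the paper reports only the resulting statistics, not the loadings themselves):

```python
import numpy as np

# Standardized loadings for one construct's four items (hypothetical values).
lam = np.array([0.85, 0.88, 0.90, 0.87])
err = 1 - lam**2  # error variance of each standardized item

# Composite reliability: (sum of loadings)^2 / ((sum of loadings)^2 + sum of errors)
cr = lam.sum()**2 / (lam.sum()**2 + err.sum())
# Average variance extracted: mean of squared loadings
ave = (lam**2).mean()
print(round(cr, 3), round(ave, 3))
```

For these illustrative loadings the formulas give CR ≈ 0.929 and AVE ≈ 0.766, in the same range as the Table 4 constructs.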
6. Analysis Result

In order to validate our model, this paper employed the partial least squares (PLS) technique, which is widely used in the IS field. PLS can model latent constructs with either formative or reflective indicators. It places minimal demands on sample size compared with other structural equation modelling techniques [12], can handle research models with formative constructs and relatively small samples, and does not require multivariate normality of the underlying data. PLS models the structural and measurement paths simultaneously [13]. The recommended sample size in PLS is at least 10 times the number of independent variables. There are two steps in the process of theory testing: (i) developing valid measures of the theoretical constructs and (ii) testing the relationships between them. Specifically, PLS Graph Version 2.91 was used in our analysis. The test of the measurement model includes the estimation of internal consistency and of the convergent and discriminant validity of the instrument items. Convergent and discriminant validity are assessed by construct validity testing. In particular, the construct validity of the proposed model is commonly estimated by assessing item reliability and average variance extracted (AVE) [6,20]. AVE measures the amount of variance that a latent variable captures from its indicators relative to the amount due to measurement error. All composite reliabilities exceed the 0.70 cut-off value, and the AVE exceeds 0.70 for every construct except KMS quality (0.658), which is still above the recommended 0.50 threshold. Table 4 shows the details of the validity test results and convergent validity. Table 5 confirms the discriminant validity: the square root of the average variance extracted for each construct is greater than its correlations with the other constructs.
The results of the inter-construct correlations show that each construct shares more variance with its own measures than with the other measures. The PLS results are depicted in Fig. 2. With an adequate measurement model, the proposed hypotheses were tested. Interesting results emerge from our socio-technical perspective. The findings support our model except for the link between KMS quality and knowledge sharing intention. KM processes are found to be heavily influenced by the social factors; clearly, trust has an important influence. It is also found that intrinsic reward is more important than extrinsic reward. In contrast, KMS quality is found to be insignificant. Moreover, the strong association between knowledge sharing intention and behavior is confirmed.
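The discriminant validity check described above can be reproduced directly from the numbers reported in Tables 4 and 5:

```python
import math

# AVE values from Table 4, and the diagonal of Table 5 (square roots of AVE).
ave = {"TR": 0.764, "IR": 0.777, "ER": 0.819,
       "KSQ": 0.658, "KSI": 0.751, "KSB": 0.703}
diag = {"TR": 0.874, "IR": 0.881, "ER": 0.905,
        "KSQ": 0.811, "KSI": 0.867, "KSB": 0.838}
# Largest off-diagonal correlation involving each construct (from Table 5).
max_corr = {"TR": 0.538, "IR": 0.625, "ER": 0.625,
            "KSQ": 0.396, "KSI": 0.775, "KSB": 0.775}

for c in ave:
    root = round(math.sqrt(ave[c]), 3)
    assert root == diag[c]      # the Table 5 diagonal reproduces sqrt(AVE)
    assert root > max_corr[c]   # Fornell-Larcker criterion: sqrt(AVE) > correlations
print("discriminant validity confirmed")
```

Every diagonal entry exceeds the largest correlation in its row and column, so the criterion holds for all six constructs.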
S.Y. Choi et al. / An Exploratory Study of the Effects of Socio-Technical Enablers
Table 5. Correlations between constructs

Variable   TR      IR      ER      KSQ     KSI     KSB
TR         0.874
IR         0.314   0.881
ER         0.332   0.625   0.905
KSQ        0.278   0.396   0.347   0.811
KSI        0.538   0.567   0.502   0.319   0.867
KSB        0.480   0.588   0.530   0.352   0.775   0.838

* The numbers on the diagonal are square roots of the average variance extracted.
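The discriminant validity criterion applied here can be illustrated with the values from Table 5; this is a minimal sketch of the Fornell-Larcker check, not the authors' analysis code.

```python
# Fornell-Larcker check on the Table 5 matrix: for each construct, the
# square root of its AVE (the diagonal entry) must exceed every
# correlation in which that construct is involved.

sqrt_ave = {"TR": 0.874, "IR": 0.881, "ER": 0.905,
            "KSQ": 0.811, "KSI": 0.867, "KSB": 0.838}

corr = {  # lower triangle of Table 5 (inter-construct correlations)
    ("IR", "TR"): 0.314,
    ("ER", "TR"): 0.332, ("ER", "IR"): 0.625,
    ("KSQ", "TR"): 0.278, ("KSQ", "IR"): 0.396, ("KSQ", "ER"): 0.347,
    ("KSI", "TR"): 0.538, ("KSI", "IR"): 0.567, ("KSI", "ER"): 0.502,
    ("KSI", "KSQ"): 0.319,
    ("KSB", "TR"): 0.480, ("KSB", "IR"): 0.588, ("KSB", "ER"): 0.530,
    ("KSB", "KSQ"): 0.352, ("KSB", "KSI"): 0.775,
}

def discriminant_valid(construct):
    """True if sqrt(AVE) exceeds all correlations involving the construct."""
    involved = [r for pair, r in corr.items() if construct in pair]
    return all(sqrt_ave[construct] > r for r in involved)

for c in sqrt_ave:
    print(c, discriminant_valid(c))  # every construct passes on these values
```

The tightest margin is KSB, whose sqrt(AVE) of 0.838 still exceeds its 0.775 correlation with KSI, which is why Table 5 supports discriminant validity overall.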
Figure 2. Analysis results.
7. Discussion For a better understanding of individual knowledge sharing, this paper examined both social and technical enablers from the socio-technical perspective. The primary purpose is to investigate how these knowledge enablers are associated with knowledge sharing. The following points are highlighted by our investigation. First, our results confirm that both intrinsic and extrinsic rewards can facilitate knowledge sharing. Interestingly, the association between intrinsic reward and knowledge sharing is found to be stronger than that of extrinsic reward. Recently, many companies have begun to provide extrinsic rewards (for example, gifts, points, or mileage) to employees in order to encourage knowledge sharing. In contrast with the work of Bock et al. [8], our study's
result confirmed the positive effect of extrinsic reward. However, although extrinsic reward may be effective in the initial stage of accumulating external knowledge, its effect may weaken over time [11]. Our result would thus favour a shift from extrinsic to intrinsic rewards as knowledge management practices mature. Second, our result confirms the strong effect of trust. If an employee believes in other members' expertise and skills, this increases the intention to share individual knowledge. Trust is likely to help employees value the benevolence of colleagues in determining the extent to which knowledge is donated or disseminated [1]. This benevolence can often lead to best practices or business opportunities. Third, most recent knowledge management studies imply that IT can facilitate knowledge sharing (e.g., [15,20]). Despite this tendency to emphasize the role of IT, a growing number of studies are also starting to stress the importance of a holistic view, which recognizes the interplay between social and technical factors [14,22,46]. Put simply, a KMS by itself may not be sufficient. Similarly, Lee and Choi [27] noted the relatively weak positive relationship between knowledge creation and IT support. Our finding of an insignificant path from KMS quality to knowledge sharing intention confirms this rather intriguing point. Herzberg's theory may further illuminate this situation [23]. Our intention is not to validate Herzberg's theory as a motivational driver, however, but to use it as an intellectual basis for explaining the role of the KMS. According to this theory, some factors tend to be related to job satisfaction (motivators), while others are associated with job dissatisfaction (hygiene factors). The theory suggests that the presence of hygiene factors is necessary, but not sufficient, for work satisfaction.
When looking for reasons why people want to share knowledge, one almost automatically turns to the motivation factors, not the hygiene factors. Here, the KMS may be regarded as a hygiene factor [23]. That is, a KMS is one of the necessities for sharing knowledge, but it is not likely to be the key driver of knowledge sharing on its own. Our analysis therefore suggests that emphasizing KMS quality at the expense of trust or the reward system is not a route to better knowledge sharing. Organizations might need to remove impediments to knowledge sharing by providing hygiene factors such as a KMS, while remaining aware that these factors are not sufficient to foster employees' intention to share knowledge. Even though the absence of an excellent KMS may frustrate knowledge sharing, the KMS by itself is not likely to enhance knowledge sharing behavior [22]. Indeed, we have to go beyond our tendency to take the KMS for granted.
8. Conclusion The primary objective of this paper was to decipher the knowledge sharing mechanism from the socio-technical perspective. Practically, our findings would help cultivate knowledge management activities in an organization. If companies want to succeed in implementing knowledge management practices, they should consider both social and technical enablers. For example, a system-centric approach is likely to put too much emphasis on the explicit knowledge held in a knowledge repository system, while a human-centric approach is likely to miss the chance to capture tacit knowledge because of the difficulties of social interaction. Relying on either alone is like squeezing a balloon in one place only to find that it expands elsewhere; our study's result instead hints that a balanced combination of the two approaches leads to better KM strategies. Furthermore, our result tells us that social enablers are likely to be more critical for knowledge sharing than the KMS. For example, trust is key in an individual's decision to share personal knowledge with others. Nevertheless, KMSs are basic components for sharing knowledge: without information technologies, it is not easy to share and communicate knowledge with other members across time and location. Several limitations of this study must be recognized. First, the current results are based on individual characteristics; a more comprehensive model might consider further antecedents such as organizational culture or structure. Second, since we adopted cross-sectional data, we might have missed the time lag between knowledge activities and subsequent performance.
References
[1] L.C. Abrams, R. Cross, E. Lesser, and D.Z. Levin, Nurturing interpersonal trust in knowledge-sharing networks. Academy of Management Executive 17 (4) (2003) 64–77.
[2] M. Alavi and D.E. Leidner, Review: knowledge management and knowledge management systems: conceptual foundations and research issues. MIS Quarterly 25 (1) (2001) 107–136.
[3] M. Alavi and A. Tiwana, Knowledge integration in virtual teams: the potential role of KMS. Journal of the American Society for Information Science and Technology 53 (12) (2002) 1029–1037.
[4] M. Alavi, T.R. Kayworth, and D.E. Leidner, An empirical examination of the influence of organizational culture on knowledge management practices. Journal of Management Information Systems 22 (3) (2006) 191–224.
[5] L. Argote and P. Ingram, Knowledge transfer: a basis for competitive advantage in firms. Organizational Behavior and Human Decision Processes 82 (1) (2000) 150–169.
[6] R. Bagozzi, Y. Yi, and L.W. Phillips, Assessing construct validity in organizational research. Administrative Science Quarterly 36 (1991) 421–458.
[7] K.M. Bartol and A. Srivastava, Encouraging knowledge sharing: the role of organizational reward systems. Journal of Leadership and Organization Studies 9 (1) (2002) 64–76.
[8] G.-W. Bock, R.W. Zmud, Y.-G. Kim, and J.-N. Lee, Behavioral intention formation in knowledge sharing: examining the roles of extrinsic motivators, social-psychological forces, and organizational climate. MIS Quarterly 29 (1) (2005) 87–112.
[9] R.P. Bostrom and J.S. Heinen, MIS problems and failures: a socio-technical perspective, part I: the causes. MIS Quarterly 1 (3) (1977) 17–31.
[10] R.P. Bostrom and J.S. Heinen, MIS problems and failures: a socio-technical perspective, part II: the application of socio-technical theory. MIS Quarterly 1 (3) (1977) 11–28.
[11] M.-Y. Chen and A.-P. Chen, Knowledge management performance evaluation: a decade review from 1995 to 2004. Journal of Information Science 32 (1) (2006) 17–38.
[12] W.W. Chin, Issues and opinion on structural equation modeling. MIS Quarterly 22 (1) (1998) vii–xii.
[13] W.W. Chin, B.L. Marcolin, and P.R. Newsted, A partial least squares latent variable modeling approach for measuring interaction effects: results from a Monte Carlo simulation study and an electronic-mail emotion/adoption study. Information Systems Research 14 (2) (2003) 189–217.
[14] B. Choi and H. Lee, Knowledge management strategy and its links to knowledge creation process. Expert Systems with Applications 23 (3) (2002) 173–187.
[15] B. Choi and H. Lee, An empirical investigation of KM styles and their effect on corporate performance. Information & Management 40 (2003) 403–417.
[16] T.-C. Chou, P.-L. Chang, C.-T. Tsai, and Y.-P. Cheng, Internal learning climate, knowledge management process and perceived knowledge management satisfaction. Journal of Information Science 31 (4) (2005) 283–296.
[17] S.-H. Chuang, A resource-based perspective on knowledge management capability and competitive advantage: an empirical study. Expert Systems with Applications 27 (3) (2004) 459–465.
[18] J.N. Cummings, Work groups, structural diversity, and knowledge sharing in a global organization. Management Science 50 (3) (2004) 352–364.
[19] A.H. Gold, A. Malhotra, and A.H. Segars, Knowledge management: an organizational capabilities perspective. Journal of Management Information Systems 18 (1) (2001) 185–214.
[20] J.F. Hair, R.E. Anderson, and R.L. Tatham, Multivariate Data Analysis (5th ed.) (Englewood Cliffs, NJ: Prentice Hall, 1998).
[21] H. Hall, Input-friendliness: motivating knowledge sharing across intranets. Journal of Information Science 27 (3) (2001) 139–146.
[22] P.H.J. Hendriks and D.J. Vriens, Knowledge-based systems and knowledge management: friends or foes? Information & Management 35 (1999) 113–125.
[23] P. Hendriks, Why share knowledge? The influence of ICT on the motivation for knowledge sharing. Knowledge and Process Management 6 (2) (1999) 91–100.
[24] M. Huysman and V. Wulf, IT to support knowledge sharing in communities, towards a social capital analysis. Journal of Information Technology 21 (1) (2006) 40–51.
[25] M. Ipe, Knowledge sharing in organizations: a conceptual framework. Human Resource Development Review 2 (4) (2003) 337–359.
[26] D.-G. Ko, L.J. Kirsch, and W.R. King, Antecedents of knowledge transfer from consultants to clients in enterprise system implementations. MIS Quarterly 29 (1) (2005) 59–85.
[27] H. Lee and B. Choi, Knowledge management enablers, processes, and organizational performance: an integrative view and empirical examination. Journal of Management Information Systems 20 (1) (2003) 179–228.
[28] D.Z. Levin and R. Cross, The strength of weak ties you can trust: the mediating role of trust in effective knowledge transfer. Management Science 50 (11) (2004) 1477–1490.
[29] R.C. Mayer, J.H. Davis, and F.D. Schoorman, An integrative model of organizational trust. Academy of Management Review 20 (3) (1995) 707–734.
[30] Y.M. Mei, S.T. Lee, and S. Al-Hawamdeh, Formulating a communication strategy for effective knowledge sharing. Journal of Information Science 30 (1) (2004) 12–22.
[31] S.L. Pan and H. Scarbrough, A socio-technical view of knowledge sharing at Buckman Laboratories. Journal of Knowledge Management 2 (1) (1998) 55–66.
[32] C.P. Ruppel and S.J. Harrington, Sharing knowledge through intranets: a study of organizational culture and intranet implementation. IEEE Transactions on Professional Communication 44 (1) (2001) 37–52.
[33] S. Ryu, S.H. Ho, and I. Han, Knowledge sharing behavior of physicians in hospitals. Expert Systems with Applications 25 (1) (2003) 113–122.
[34] B.H. Sheppard, J. Hartwick, and P.R. Warshaw, The theory of reasoned action: a meta-analysis of past research with recommendations for modifications and future research. Journal of Consumer Research 15 (3) (1988) 325–343.
[35] K. Sherif, J. Hoffman, and B. Thomas, Can technology build organizational social capital? The case of a global IT consulting firm. Information & Management 43 (2006) 795–804.
[36] J.C.F. So and N. Bolloju, Explaining the intention to share and reuse knowledge in the context of IT service operations. Journal of Knowledge Management 9 (6) (2005).
[37] G. Szulanski, Exploring stickiness: impediments to the transfer of best practices within the firm. Strategic Management Journal 17 (1996) 27–43.
[38] M.C. Thomas-Hunt, T.Y. Ogden, and M.A. Neale, Who's really sharing? Effects of social and expert status on knowledge exchange within groups. Management Science 49 (4) (2003) 464–477.
[39] W. Tsai, Knowledge transfer in intraorganizational networks: effects of network position and absorptive capacity on business unit innovation and performance. Academy of Management Journal 44 (5) (2001) 996–1004.
[40] R.E. Vries, B. Hooff, and J.A. Ridder, Explaining knowledge sharing: the role of team communication styles, job satisfaction, and performance beliefs. Communication Research 33 (2) (2006) 115–135.
[41] M.M. Wasko and S. Faraj, Why should I share? Examining knowledge contribution to electronic networks of practice. MIS Quarterly 29 (1) (2005) 1–23.
[42] M.M. Wasko and S. Faraj, "It is what one does": why people participate and help others in electronic communities of practice. Journal of Strategic Information Systems 9 (2000) 155–173.
[43] G. Widén-Wulff and M. Ginman, Explaining knowledge sharing in organizations through the dimensions of social capital. Journal of Information Science 30 (5) (2004) 448–458.
[44] J.-H. Wu and Y.-M. Wang, Measuring KMS success: a respecification of the DeLone and McLean's model. Information & Management 43 (2006) 728–739.
[45] C. Yang and L.-C. Chen, Can organizational knowledge capabilities affect knowledge sharing behavior? Journal of Information Science 33 (1) (2007) 95–109.
[46] P. Zhang and G.M. von Dran, Satisfiers and dissatisfiers: a two-factor model for website design and evaluation. Journal of the American Society for Information Science and Technology 51 (14) (2000) 1252–1268.
S.Y. Choi et al. / An Exploratory Study of the Effects of Socio-Technical Enablers
315
Appendix: Survey Measurement Items

Trust
TR1 I believe that other members are honest and reliable.
TR2 I believe that other members treat each other reciprocally.
TR3 I believe that other members will act in their best interest.
TR4 I believe that other members are knowledgeable and competent in their area.

Intrinsic Reward
IR1 People honour my job when I teach or share my own skills.
IR2 The more I share my own knowledge, the more my reputation would be enhanced.
IR3 When I share my knowledge, I get more chances to show my skills to other colleagues.
IR4 When I share my knowledge, people recognize me as an expert in our team.

Extrinsic Reward
ER1 I receive appropriate monetary value when I transfer my know-how to other colleagues.
ER2 I receive points or mileage whenever I upload my documents into the system.
ER3 When I share my knowledge, I get more chances for promotion.
ER4 My company provides gifts or mileage when people put their knowledge into the organization.

KMS Quality
KSQ1 The KMS is available whenever needed.
KSQ2 The KMS is easy for anyone to use.
KSQ3 The KMS is stable, without interruptions.
KSQ4 The KMS provides a rapid response rate.

Knowledge Sharing Intention
KSI1 I will try to share knowledge.
KSI2 I want to share my own knowledge with more people.
KSI3 Members are willing to explain their know-how, experience or skills.
KSI4 I want to share knowledge with my team members.

Knowledge Sharing Behavior
KSB1 I actually shared know-how with others.
KSB2 I actually shared project documents with others.
KSB3 I actually shared task knowledge with others.
KSB4 I actually shared education results with others.
KSB5 I actually shared operation information with others.
Urban System and Strategic Planning: Towards a Wisdom Shaped Management Luigi FUSCO GIRARD Full Professor of Economics and Environmental Evaluations, Professor of Urban Economics at the Faculty of Architecture and Professor of "Integrated Environmental Evaluations" at the Faculty of Engineering, University of Naples Federico II. Tel: +39-081-2838761, Fax: +39-081-2538649, E-mail:
[email protected] 1. Introduction The future of the twenty-first century will be built within cities. It depends on the choices cities make and on their development strategies. Every city in the world, however different from the others, has to confront a series of common problems (cf. § 4). Undoubtedly, throughout their history cities have always been characterized by a strong ambivalence, perhaps a reflection of human contradictions: happy/unhappy cities, attractive-welcoming/excluding-marginalizing cities; cities aiming to realize the concrete utopia and places where human dignity is negated… Today this ambivalence/bipolarity is ever sharper: cities really appear suspended between evolution and involution. How can cities be managed in a general context of accelerated change, avoiding involutive processes? How can ongoing transformations be made compatible with a city's DNA and at the same time capable of improving the quality of life for everybody, including the most marginalised people? How can a sufficient amount of resources be devoted to urban welfare? To confront these new challenges, cities should equip themselves with new tools and adopt new approaches. We need new approaches and strategies to orient the change of the city in a more "human" direction: one that does not divide but integrates, that valorizes every component in a systemic perspective, and in which every person gains actual access to jobs, housing, services, etc. Among the new management instruments, strategic plans are particularly relevant. They help to build a future that is more desirable overall at the economic, social and environmental levels. But what is the real priority of social and environmental objectives relative to economic ones? How might this priority be modified? To what extent can an economic approach help?
In general, the more we are able to coordinate economic, ecological and social choices, by integrating short/medium/long term strategies and coordinating private/public subjects and civil society, the more likely these strategic plans are to succeed.
L. Fusco Girard / Urban System and Strategic Planning: Towards a Wisdom Shaped Management 317
The common characteristic of "strategic plans" is the construction of a shared long-term "vision" through public discussion and arguing, based on the sound knowledge from which good reasons derive. It expresses the building of the collective interest of the city, that is, of the "common good", starting from its peripheries, historical centres, de-industrialised areas, etc., where the poorest people usually live. This vision is more robust the denser and more vital the social/civil network of the city. From this vision arises the possibility of making a "pact" among different private/public subjects and civil society, with new networks of cooperation/collaboration/coordination, through partnerships and alliances, in order to realize pilot projects. Strategic plans give strong emphasis to the spatial dimension, in order to increase the symbolic and cultural power of attraction of cities, drawing in new activities, investments and specialised work. Every city is endowing itself with a spatial structure that can valorize the character of places, the public spaces, and their particular specificity. The project of architecture and restoration is becoming a very important instrument for economic development, because it builds the capacity to attract and valorize places, their identity and diversity, producing new values which combine the old and the new and regenerate sense/significance. The particular identity of a city depends on its "places". The thesis we propose here is that the "places" of a city have to be reproduced and multiplied as the central elements/nodes of a new management strategy. It is increasingly necessary that this spatial strategy, in reproducing "places", also be re-centred on an energy strategy and on a cultural strategy.
In other terms, the search for greater energy efficiency, together with the use of renewable energy resources and the recycling of natural and water resources, becomes the fulcrum of a new planning and development strategy. It aims at the minimization and optimization of complex flows of resources and energy, in order to improve urban metabolism. This, in turn, should find its foundation in widespread consensus around a cultural plan. Cities are not made only of bricks, concrete and asphalt, but first of all of lived life, that is, of ways of living, traditions, humanity, immaterial and cultural capital. Certainly the future of the city depends on its relationship with the ecosystem, on the quantity of economic capital the city is able to attract, and on the diffusion of equal opportunity for everybody. It depends on technologies, which will increasingly give shape and organization to the city, starting from ICT and energy technologies. But it depends above all on how we manage the interactive relations between tools/technologies and values/symbols/culture. The last part of this paper describes some good practices of management/government in twelve cities (Lyon, Oporto, Freiburg, Swansea, Wien, Bologna, Ferrara, Grosseto, Praha, Santiago de Compostela, Dublin, Vilnius) in the field of knowledge and culture, identifying some common elements and differences (Fusco Girard, You, 2006). In that way we can learn a series of lessons.
2. The City as an Organism and the "Human" City 2.1. A Double Perspective We face a double perspective: the co-evolutive city, more human, able to reflect humanity itself in the relational dimension, or exactly its negation. The "human" city cannot be the city where a crowd of solitudes concentrates. That is exactly the antithesis of the city project. We see human beings more and more lonely, living side by side with other human beings lonelier still. Every element that acts as a glue, unifying personal experiences and putting them in reciprocal relationship, falls away. Nor can the "human" city be one where an increasing percentage of the population lives in slums and in ever more degraded peripheries. Unfortunately, that determines not the future, but the decline of the city. In most present urban centres, and in the peripheries themselves, we observe a progressive withdrawal of common spaces: not only in merely physical terms but also in terms of values, in cultural and ethical terms. Public spaces are contracting. Though new public spaces are built, they are addressed mainly to a series of economic activities which do, of course, make individuals stay together, but only through strongly consumption-driven schemes. This staying together within consumption is not enough. It is necessary, but not sufficient, to give life and vitality to the city. The squares of European cities are extraordinary for the balance they have been able to reach between private and public spaces. Those places express cultural and symbolic values: they express the capacity to combine subjective concerns with social values. They are the spaces of horizontal communication, of participation, where the capacity to be together comes true. We should rebuild these public spaces also in cultural-ethical terms, using every possible instrument.
The challenge of human development is connected to the reconstruction of these spaces, also symbolic, able to configure themselves as places of aggregation and humanity, above all within the peripheries of dis-aggregation. 2.2. The Human City The "human" city is first of all the city where the capacity for coordination among the actions of different subjects is built and improved. The image of the "human" city is that of a city able to reduce the increasing differences/disparities (between included and excluded people, between people living in rich areas and people living in degraded peripheries, between people with access to different networks and people excluded from them, between employed and unemployed people): to rebuild the glue "keeping together" different subjects. The city promoting human development is a city where the person – and not the ecosystem or the business – is at the centre of the relational/community dimension, with his or her inalienable rights (to health, a high-quality environment, employment, culture). It promotes integration starting from its neighbourhoods, continuously producing and reproducing networks constituted by several micro-communities.
A city which behaves like a human body has its own "heart" (one centre or more), which unifies several activities/functions (production, residence, consumption, leisure, etc.), a mind (the business area, with various directional functions), and arteries (various material and immaterial network infrastructures). It behaves as a system, because it respects certain common organizational rules, which give the city its particular "face". Once upon a time, in the heart of the city there were the sacred spaces, together with the forum and the market, that is, the places of the divine presence and of the profane, of integration between change and permanence, between the visible and the invisible. Nowadays, the city pays attention above all to the production of directly and indirectly profitable spaces. In the city of global competition, the priority interest is the realization of new technological/scientific/industrial/tertiary hubs, that is, of "parts" producing economic and financial wealth, and not of the "heart". Actually, the city is a system linking the ecological/natural system, the social system, the physical system and the economic one in a bundle of interdependencies of extreme complexity: it is a complex system, functioning on the basis of a "plural" rationality and characterized also by contradictions. Its unstable balance has to be continuously rebuilt. The city as a living organism, where every component communicates with the other components, is articulated in a network model. Its communicative capacity is the foundation of its coordinative capacity. Coordination needs, and in turn promotes, new knowledge, and therefore new communication and the production of relational links, insofar as there is respect for specific rules of behaviour and interaction (recognized as a common good). This coordinative capacity is the foundation of the city's resilience towards change, that is, its capacity to regenerate, rebuilding a new role within a generally changed context.
The city, forced to reinvent its role amid change, tries somehow to maintain its "position" in the ranking of cities, despite the intensity of the external forces which characterize the present process of globalization, thanks to continuous innovation, which is first of all due to new knowledge.
3. Which New Strategies of Management/Government for the City? 3.1. Competition and City Marketing The deep changes in progress, more and more accelerated, open new opportunities for the city, but also new threats. Processes of ordinary management are no longer sufficient. A strategic approach is needed, from which a new capacity to coordinate a plurality of subjects – that is, a new management – has to come. The city is the place where the economic wealth of a region/country is produced: it is the engine of economic development. This has suggested adopting city management/government strategies of a "business" kind in order to improve overall efficiency and productivity, that is, to compete better in the global competition. Every city is in competition with the others. In order to win (or not to lose) in this competition, it needs to improve organizational efficiency/efficacy and also the capacity to activate strategic alliances with other cities.
This means that cities are adopting management strategies increasingly derived from the firm sector: city marketing, territorial marketing, branding, etc. allow a city to improve its image and thus its comparative advantage. The government and management of the city have been much inspired by business, in the light of the similarities between the great enterprise and the city. Both produce goods and services, generate employment, satisfy needs and, above all, operate within a context of competition. But it is not possible simply to transfer company models to the urban field, because urban rationality is not only the economic rationality aimed at maximizing profit. Knowing and improving the strategic positioning and the city brand is necessary in order to attract external investments, but it is by no means sufficient. Global capital is very mobile, and if it finds better conditions elsewhere it relocates immediately. It is therefore necessary to integrate city marketing with endogenous development strategies, which play out over the medium-long term. Besides, the objectives to be achieved in the city are much more numerous, heterogeneous and multidimensional: think of the reduction of poverty, the promotion of social justice, of equal opportunities, etc. In short, the city is not a firm, even if, like the firm, it must be able to evaluate continuously the achieved outcomes against the system of predetermined objectives with an interactive approach, which tests the chosen solution time after time and promises to improve it. It has to maximize the benefits produced and minimize the benefits lost. 3.2. The Management/Government of the Enterprise and of the City The management/government of the enterprise and of the city as complex systems means taking uncertainties into consideration, fixing general frameworks, and understanding which forces are at play and what their evolutive dynamic is. Both the city and the business are organized on the basis of a network model.
In both, management is based on continuously updated and reproduced flows of knowledge. The goods and services produced in turn embody a growing quantity/quality of knowledge. A city's useful life is much longer than the life of an economic enterprise: it spans centuries or millennia. Its management/government must be attentive to the short term, but above all to the long term, because the dynamic of cities is secular. Moreover, the city is a social and ecological system, characterized by extreme complexity, where it is necessary to produce more material wealth without at the same time producing immaterial (cultural, social and environmental) poverty. The foundation of immaterial wealth lies in the coordinative capacity of a city's inhabitants and at the same time in their creativity (Florida, 2002, 2005; Frankey and Verhagen, 2005; Hall, 1998). The re-creation of values and rules, of sense and meanings, comes out of creativity. Sustainability, creativity and coordination are strictly intertwined. The management/government of the city, like that of a firm, is based on the communication of information and knowledge, which promotes the coordination of the actions of different subjects and therefore their organization (Zeleny, 2005). This systemic organization must be continuously reproduced through creative management/government combining the actions of different subjects. These actions may be of cooperation or of competition (Zeleny, 2005).
The idea of governance underlines the search for coordination among the different nodes of the network. In particular, management refers to consensual coordination among different actors, in order to multiply possibilities: in short, the core of urban governance lies in the capacity for intentional and effective coordination of the actions of different subjects towards recognized objectives of common relevance, as in the firm (Zeleny, 2005). Urban governance consists of the set of relationships among institutions, social organizations and businesses, aimed at building choices of collective interest and implementing them. Its aim is to manage collective interests not in a centralized/hierarchical way but in a decentralized and thus consensual/negotiated/participatory way, also through feedback processes, adjustments, interactions, etc. Good governance associates public institutions, civil society and private operators in decision-making. Through governance it is possible to allow the actors to achieve their objectives in the light of their organizational capacity and their mission, but in a context of more general conveniences and predetermined rules of the game. And, above all, the game "opens" to the participation of civil society. Both city and business management need high creativity, understood as the capacity to devise original solutions able to combine conflicting opposites (interests, objectives, goals). Creativity identifies innovative solutions, which are positive on the whole and allow benefits to the greatest possible number of subjects. 3.3. The City, Today: Towards a New Management Creativity is a basic element of city management and government, which expresses itself first of all in reducing the conflict between development and the natural/social environment, decoupling economic growth from the production of environmental and social damage. Today this "separation" is absolutely indispensable in order to build a sustainable future for the city...
A wise management/government starts from recognizing that the conservation of climate stability and social cohesion are the main objectives of every development plan/project. Creativity within city management then becomes the capacity to satisfy the needs of the many and not only of the few, that is, to contribute to the common good. Wise management is also linked to the ethical dimension: for instance, it makes it possible to preserve the environment for future generations without penalizing the satisfaction of the needs of the present generation, re-producing and re-configuring the network’s nodes and relationships. The strategic management of the city aims at the common good, at realizing the good life for everybody, balancing interests and values and coordinating particular interests with the general interest. This strategic management takes into consideration the cultural pluralism which characterizes the city much more than the enterprise. This represents the difference between city and business management.
Figure 1. Coordination of actions for urban strategic plan.
4. The Strategic Plan

We need new strategies of urban management/government, creatively articulated around the following three challenges, which are common to cities all over the world:
a) to improve economic competitiveness, identifying a new role for the city and for its different parts within the changed regional/international context;
b) to improve the cohesive potential of the city, through accessible and high-quality public spaces able to stimulate integration, as well as access to decent housing, to services, to work, etc.;
c) to improve the ecological/environmental conditions, closing the cycles of urban metabolism, in order to contribute to climate stability and to decouple economic development from environmental pollution.
We need to build coordination between urban socio-economic dynamics and ecological ones. Nowadays, to build coordination increasingly means to build development, that is, the future. The strategic plan is the instrument by which the city plans the construction of its future development, articulating the “vision” and implementing it over time through specific “integrated projects”. These involve a plurality of actors (public, private, civil society) in new cooperative networks within a medium- to long-term perspective (Fig. 1). The strategic plan demands a great intentional capacity for coordinating actions. The strategic plan finds its foundation in the elements of permanence of the city, which determine its identity. We call these elements of permanence the “soul” of the city. It is represented by its “places”. Places embody the “collective knowledge” accumulated through the centuries.
Throughout history these “places” have often been able, thanks to their vitality, to determine a series of concrete impacts on the city, offering directions of growth and orienting the planning order itself: they configured themselves as “central localities”, sources of specific actions/choices which in turn incentivized/disincentivized behaviours and attracted or rejected activities. They are the spaces of the city where life and the encounter with others unfold, where relationships and social relations grow and intertwine: the squares as symbolic places of openness to the other, of personal and not virtual relationship; they are the symbol of relationality, which is the essential dimension of humanity. Places make the city more “human”. The very image of a city and its identity are the reflection of the image of its places. Discomfort, crisis and degradation arise from the loss of places. Non-places become spaces of unsafety, uncertainty and fear, no longer of openness to new possibilities. Indeed, the image of the city goes beyond its visible-perceptive representation. The particular “atmosphere” of a city, its “character”, depends on its humanity: on the associated living within it, on its traditions and ways of life, on its values. The soul of the city is represented above all by the quantity/quality of the systemic relations linking these “places” to people, determining a certain lifestyle, a certain way of thinking and of relating to others: the everyday life made of face-to-face interchanges, etc. When these relationships become rarefied, we can no longer speak of the soul of the city. An effective management aims at multiplying “places”, making them the starting point of a new development.
We need intentionally to fix the urban collective memory and make it the inspiration for new strategies of growth of both material and immaterial wealth, combining the conservation and reproduction of “places” with ever more accelerated transformations, and linking the city closely to its extra-urban territory (to the natural ecosystem), in order to coordinate the rhythms of the economy with the much longer rhythms of ecology.
5. Strategic Plan and Building of the “Vision”

Competitiveness, social cohesion and environmental/territorial quality are the three meta-objectives of every strategic plan. They give shape to the “vision” of the desirable future city, articulating themselves into strategic objectives and operational objectives in the light of specific existing conditions. Different visions are characterized by a different “weight” assigned to the different objectives (Fig. 2).

5.1. Economic Competitiveness

All cities are facing a downsizing of the traditional industrial economic base and are reinventing a new role in an ever more open regional/national/international context. They try to re-position themselves in the light of their own characteristics with respect to the network of competing cities, in order to compete better, valorizing local products and their identity, with incentives for tourism and for the localization of activities, etc.
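The idea that different visions assign different “weights” to the same meta-objectives can be illustrated by a simple weighted-sum comparison, a standard device in multiple criteria decision making. The sketch below is an illustration only: the vision names, weights and scores are invented, not taken from the chapter.

```python
# Minimal weighted-sum sketch: each "vision" weights the three meta-objectives
# of the strategic plan differently. All numbers are illustrative assumptions.

OBJECTIVES = ["competitiveness", "social_cohesion", "environmental_quality"]

def vision_score(weights, scores):
    """Weighted sum of objective scores; weights are assumed to sum to 1."""
    return sum(weights[o] * scores[o] for o in OBJECTIVES)

# Hypothetical performance scores (0-1) of one candidate plan on each objective.
plan = {"competitiveness": 0.8, "social_cohesion": 0.5,
        "environmental_quality": 0.6}

# Two hypothetical visions with different weight profiles.
growth_led = {"competitiveness": 0.6, "social_cohesion": 0.2,
              "environmental_quality": 0.2}
solidarity_led = {"competitiveness": 0.2, "social_cohesion": 0.5,
                  "environmental_quality": 0.3}

print(round(vision_score(growth_led, plan), 2))      # → 0.7
print(round(vision_score(solidarity_led, plan), 2))  # → 0.59
```

The same plan thus ranks differently under different visions, which is exactly the point of Fig. 2: the vision is not given by the objectives alone, but by the weights placed upon them.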
Figure 2. Meta-objectives of strategic plan.
The city, once the place of manufacturing industry, is increasingly transforming into a place of immaterial production: about 70% of the present urban economy is represented by services to persons, to business and to the public administration. Industrial production is more and more characterized by a strong content of ideas, knowledge and research. Ideas, creativity and the capacity to apply new knowledge, thanks to an infrastructure constituted by flows of information, become the new “engine” of economic development. The city has always been the place of production/exchange of knowledge, but today this process has accelerated greatly. The urban economy is increasingly based on the knowledge economy, by which new value is created (Hall, Pfeiffer, 2000). Knowledge production becomes the most important factor of development, in traditional as well as advanced sectors, in services as well as in the production of goods. A city becomes more competitive the more knowledge content it is able to produce and to incorporate within its productive processes and products. Inside cities, centres of excellence for innovation, creativity, research and learning multiply. A further characteristic of this urban knowledge economy is its articulation into inter-urban and extra-urban networks (Hall, Pain, 2006), which determines an increasingly polycentric spatial order. Every node tends to specialize strongly in a certain sector in order to make the city more competitive towards some cities and collaborative with others. Examples of specialization are found in emergent sectors: logistics, aerospace, communications, information, biotechnology, pharmaceutical chemistry, nanotechnologies, tourism, financial services, innovative energy technologies, transport/mobility.
The ICT sector deserves particular attention because it makes the city more accessible and therefore more “attractive”, and so contributes greatly to the localization of new activities (Van der Berg, Van Winden, 2002). Information and communication technologies (ICT) open new opportunities to different subjects (public, private and third sector), improving the quality of life and access to resources. They represent the most important driver of development of the twenty-first century; they have made economic globalization itself possible (Van der Berg et al., 2005). They find application in improving access to services, work and infrastructures for people, enterprises and tourists, as well as in the fruition of cultural/environmental heritage by visitors (e-tourism).
The digital city builds very dense networks among university research centres, businesses and high-technology activities in the digital media sectors, so as to configure itself as an attractor for external investments. It invests in infrastructures for advanced broadband communications and in ICT, in order to promote an “environment” favourable to the localization of new digital enterprises, as the foundation of the new knowledge/creativity economy. The greater this specialization, the more intense the connections and relationships of interdependence established among the nodes. Coordination then becomes an essential process to stimulate competition and the various synergies, and therefore to produce positive outcomes. In short, the knowledge economy stimulates a network articulation based on the growing specialization of every single node, each connected to all the others in a continuous flow of information and knowledge. These too are produced inside the city, within its universities, its research laboratories, its specialized activities. The knowledge economy is a typically “urban” economy. Cities are both the starting point and the arrival point of these flows; inside cities, information and knowledge are exchanged and new knowledge is produced. Training, schools, universities and research institutions assume a central importance in multiplying the opportunities at the disposal of different subjects, and are increasingly connected, also with enterprises, within new networks of interdependence aimed at acquiring/producing and accumulating new knowledge and at sharing it among the greatest possible number of subjects.

5.2. Social Cohesion

The loss of social capital (Coleman, 1990) is perhaps one of the most worrying aspects of the contemporary city. On the one hand, the city consumes social capital; on the other, the city needs more and more social capital as a basic element of its own development.
The challenge of re-building community can be synthesized as the challenge of reproducing social capital at a speed at least equivalent to that of its consumption. A city which produces economic wealth without redistributing it tends to create internally ever more intense conflicts between an elite of privileged people and a majority of excluded/marginalized people, with an increase of poverty. There is a risk of social disintegration. But within urban policies, what real priority is given to the objective of fighting poverty? How can that priority be raised on the political agenda? The precariousness of work, above all among young people, women and immigrants, is an element reducing social cohesion, together with the increase of poverty. But cultural causes also emerge. With the increasing fragmentation of collective interests and of public spaces, and with the crisis of citizenship reflecting the crisis of the notion of general interest and the triumph of particular interests, the feeling of belonging to a collectivity fades away. This represents the most insidious form of poverty. How is it possible to reproduce community relational values? How can a community of communities be built when inter-personal relationships are reduced to minimal levels of utilitarian exchange? With the triumph of an individualism which refuses to consider the general interest and the common good, or at any rate subordinates it to the particular interest, the very sense of collective interest is lost, and with it the capacity to mobilize around a project. The capacity to promote endogenous development is also being lost. A cohesive
society attracts activities from outside and in turn stimulates new business capacities. The improvement of personal services (of a sanitary, social and cultural kind), like the availability of housing, is particularly relevant in building social capital. Participation in associative networks plays an important role in stimulating the production of social/civil capital. But it is necessary to elaborate new instruments of governance able to promote public spirit and participation in general interests. In this sense we may cite the participatory budget (which stresses how participating “pays”) and the eco-budget, as well as the strategic plan and Local Agenda 21 (Fusco Girard, You, 2006). In order to improve social cohesion in a pluralistic cultural context, it is necessary to identify common elements and shared values able to stimulate a good life. It is also essential to network all the existing niches of social capital, in order to create critical mass. Social capital is not absent in the city, but it is hidden and must be made explicit, finding it within the movements for fair and equitable commerce, microcredit, non-monetary exchange, time banks, voluntary organizations, cooperative enterprises. These niches have to be involved in specific urban projects, to reproduce community and social links. They represent the real richness of the city, perhaps more important than the material one.

5.3. Environmental Quality

There is a continuous erosion of the territory that provides the city’s ecological support, due to the absence of an effective government of urban transformations. Urbanization is producing a growing sprawl and multiplying slums: cities are becoming shapeless agglomerations, with negative impacts of a cumulative kind and relevant damage to landscape/aesthetic/environmental values. The objective of preserving environmental quality is enlarging into the objective of conserving climate stability.
The recent IPCC Report (2007) on climate change and its consequences has shown how extremely urgent the problem of biosphere protection is, because the moment of irreversible change in climate conditions is nearer than people might think. Climate change is caused by emissions which contribute to the greenhouse effect, a consequence of conventional energy sources (oil, coal, etc.). Now, the places where energy consumption is highest are precisely the cities. They make a strong contribution to climate destabilization. They are the engines of economic development, but also the places where pollution is highest. Since all this depends on energy, it can be said that the foundation of the new strategies of urban development is represented by energy strategies. It is then necessary to re-organize our cities and their systems of mobility, residence and production with investments in energy efficiency, recourse to renewable sources, to water recycling, to natural cooling, to eco-friendly materials, decoupling the production of economic wealth from pollution and the emission of climate-altering gases. Many cities have already adopted “solar strategies” (Fusco Girard, Nijkamp, 2004) for their development, beginning to decouple economic growth from environmental pollution. So-called “good practices” are multiplying and show the central role of the architectural project.
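The decoupling of economic growth from pollution invoked here can be given a simple quantitative form, along the lines of the OECD-style decoupling factor (one minus the ratio of end-of-period to start-of-period emission intensity). The figures below are invented for illustration and are not data from the chapter.

```python
# Sketch of an OECD-style decoupling factor for a city or region.
# All input figures are illustrative assumptions.

def decoupling_factor(emissions_start, emissions_end, gdp_start, gdp_end):
    """1 - (emission intensity at end of period) / (intensity at start).
    A value > 0 indicates relative decoupling (emissions per unit of GDP fell);
    absolute decoupling additionally requires emissions to fall in absolute
    terms while GDP grows."""
    intensity_start = emissions_start / gdp_start
    intensity_end = emissions_end / gdp_end
    return 1 - intensity_end / intensity_start

# Invented example: GDP grows 20% while emissions fall 5%.
df = decoupling_factor(emissions_start=100, emissions_end=95,
                       gdp_start=100, gdp_end=120)
print(round(df, 3))  # → 0.208, i.e. emission intensity fell by about 21%
```

A city pursuing the “separation” the text describes would aim to keep this factor positive year after year, with emissions falling in absolute terms.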
Architecture can activate the transition towards an increasingly de-carbonized urban economy, which combines solar energy with wind, geothermal and micro-hydroelectric energy, with biomass and also hydrogen. The “solar city”, which tends towards a progressive de-carbonization of the urban economy, depends strictly on the architectural project and on transport networks, and is therefore based on an energy strategy and on strategies for water and material recycling. The increase in costs is strongly offset by the consequent benefits. Architecture can reduce energy consumption, stimulate an increasing use of the various renewable energy sources and of eco-friendly materials, promote the re-use of water resources and establish a link, a coordination, with “places”. It is an instrument for the decentralization of energy production. Bio-architecture also stimulates energy self-sufficiency: facades, roofs, etc. become elements able to contribute to the energy needs of the building and, in some cases, to feed the surplus produced into the grid. New technologies multiply formal alternatives and become the starting point for a new design of space, for new forms which re-configure it in an original way. Renewable energy plays a fundamental role in decoupling economic development from pollution and the greenhouse effect. Its use avoids damage to the environment, reduces costs and also allows profits to be obtained. In 1994 the Aalborg Charter had already anticipated clearly that “renewable energy sources represent the only sustainable alternative” (Par. 1.10). This means that renewable energies are the future, and that every project building the future is based on renewable sources. Moreover, the same Aalborg Charter clearly underlined responsibility towards the climate at the planetary level, promoting a reduction of the emissions generated by fossil fuels through an “adequate comprehension of the alternatives” and knowledge of the urban environment “as an energy system”.
Subsequently, in 2004, the Aalborg Commitments dedicated a whole point to the necessity for the city to “reduce the consumption of primary energy and increase the levels of renewable and clean energies”. They also promoted the improvement of air and water quality, with more efficient uses. Climate change determines a variation in the services provided by ecosystems (self-regulation of the water cycle and of the chemical composition of the air and the seas, recycling of organic materials, etc.). These represent the “free” work of nature which “sustains” human activities. The destabilization/reduction of such functions reduces the carrying capacity of the territory and thus causes damage in terms of greater direct, indirect and induced costs. We need better policies on land use, transport, industry, etc., in order to reduce the impacts on climate stability.
6. The Centrality of the Spatial Dimension Within the Strategic Plan

6.1. Space Quality as Attraction Capacity

We now want to underline a central aspect of the urban strategic plan: the increasing importance of the physical-spatial dimension as the entrance point for new development dynamics. In order to improve their competitive strategies in an economy increasingly enlarged to a supranational/global context, cities have been forced to valorize all the “local” elements which make the difference: the characteristics of “places”, of the cultural/landscape/historical/architectural heritage which characterizes them; particular professional skills; strongly specialized productive activities. Through the strategic plan, cities are investing in urban maintenance, refurbishment and restoration of “places”, as well as in new architecture, in order to become more able to attract. They are increasing the quality of space and improving the functional integration among residence, work, leisure, mobility, and social and cultural services. New investments in the spatial structure produce various increases of value, combining beauty, creativity, knowledge, social milieu and economy. In this way an “economy of places” is promoted, able to integrate with the “economy of flows” (the new economy, the knowledge economy linked to exports, to internationalization, etc.). Within this process an increasing importance is recognized to the aesthetic dimension. The beauty of a territory/site attracts and multiplies economic values and also allows goods/services to be exported: it becomes a fundamental “strength factor”, a catalyst of economic development (Greffe, 2005). In this context, architecture and restoration therefore assume a particular relevance for their capacity to increase the values of places, their identity, their diversity: their capacity to give a sense and a role to every portion of the urban territory/space, and so to stimulate economic growth and development. Architecture and restoration become key elements of the “urban economy of creativity” (Greffe, 2005; Florida, 2005) in the era of global competition, because they contribute to increasing the “values” of place, regenerating the value chain, and at the same time they promote new connections and interactions between diversity and unity in an ever new combination.

6.2.
The New Spatial Strategy: Restoration as a Tool of City Management

The intervention of cities is concentrating more and more on certain particular spaces, namely historical centres and brownfield areas, where it manages to determine an overall increase in the quality/beauty of the physical/spatial scenery. This improvement of quality/beauty determines a greater capacity to attract activities and therefore greater economic wealth. Through tourism, the beauty of space is transformed into economic wealth. Therefore, investing in the regeneration of the physical/spatial scenery, improving its quality/beauty and contributing to re-building urban “places” as “spaces of humanity”, that is as “spaces of proximity” and of centrality, means increasing the productivity of every kind of capital. Beauty is also a power of attraction within urban space which encourages the localization of new activities not connected to tourism, increases the intensity of pre-existing ones and also stimulates new demand for the localization of specialized knowledge. But increasing the quality/beauty of the physical/urban space also means producing positive impacts on human and social capital. There is an interdependence between the characteristics of the physical-spatial scenery and the perception of well-being: the improvement of the former produces positive impacts on the latter.
In particular, the quality/beauty of a physical space determines impacts on human capital in terms of behaviour, state of health, sense of identity, openness to inter-personal encounters, and greater productivity. Moreover, the quality/beauty of physical space determines positive impacts on social capital in terms of a feeling of belonging and community; willingness to participate in civil and social life; social care, respect and responsibility; greater social cohesion and willingness to cooperate; trust in institutions. It often happens that the loss of quality/beauty of urban space generates phenomena of vandalism, unsafety, illegality, deviant behaviour and conflict, also towards public institutions: that is, erosion of social capital. Museums and restoration schools, together with highly specialized research activities and artistic activities, open new development perspectives, linked not only to tourism but also to local development. Restored ancient buildings can become innovation incubators within the new global world. But it is necessary to open these innovation incubators also to neighbourhoods and their citizens, to schools and to the local population. The beauty economy, the creativity economy and the civil economy should be closely integrated with one another. Art/creativity, business and community are to be considered as elements and nodes interconnected in new networks which regenerate a new urban economy: from the restored heritage used for museum, theatrical, artistic or musical functions, to the localization of research institutes, creative laboratories and innovation incubators with sophisticated technologies, to recreational, hospitality and commercial services, to training activities involving neighbourhoods, schools and associations, within a self-feeding circuit which creates new economic/employment niches in a virtuous circle, with continuous feedback.

6.3.
The Centrality of the Spatial Strategy: Architecture as a Tool of City Management

Quality architecture becomes effective in the development strategy of the city insofar as it reproduces its places. Places are “public spaces” which should emit a sort of “radioactivity” within the urban system. Here, a cultural/spiritual energy should be reproduced in order to realize an authentically sustainable human development of the city. Places have always been incubators of community, relational and also economic values. Quality architecture, even while articulating the project from several points of view, is able to make it recognizable as unitary. On the one hand, just like an “organism” living and interacting in a dynamic context, the architectural building reflects the accelerated change of our times, characterized by a certain dynamism/fluidity in its shape. But at the same time, insofar as it opposes this change, it becomes a “strong” sign of the urban landscape, which “resists” over the years (and even the centuries). From this tension between permanence and dynamism a new quality of space emerges, without falling into a mannerist monumentalism. Real architecture is neither an astonishing monument to be admired nor a form “shouted” from within the neighbourhood; it is space that becomes lived, a place of life and of community aggregation, a balance between economic use values, exchange values and intrinsic values. Its evocative and symbolic content stimulates common belonging, identity and horizontal communication, becoming also the place or the house where people meet and participate not as spectators (as at the theatre) but as co-protagonists: where community is built.
The beauty of the architectural building helps the city to become everybody’s home. Beauty, reflecting harmony, proportion and balance among several different dimensions and opposing chaos and disorder, is the entrance point that “opens” onto other dimensions. In particular, it opens onto an emotional relationship linking persons and inhabitants to their city, re-establishing a link which has often been lost. Where there is no beauty, there is environmental, social and economic degradation; inter-personal relationships are less solidary and more conflictual. The works of real architecture then become the new symbols which characterize the urban system, configuring themselves as places of aggregation of economic activities and of community. They will be able to develop better a series of external effects, that is, impacts on the surrounding context, in terms of real humanization. All this becomes of fundamental importance especially in urban peripheries, where almost by definition the systemic relations are evanescent or downright non-existent. The quality/beauty of physical space contributes to making the city richer in places and more capable of economic development and of welcome: the city of everybody, stimulating participation and citizenship. Architecture then becomes an important instrument for the reproduction of economic wealth and social capital, because it contributes to strengthening the interdependence links of the social system and builds denser civil networks of cooperation and communication.
7. Strategic Plan and Energy Strategy

The foundation of the new urban management lies in its energy strategy. Our future will be more or less sustainable depending on the decisions taken about our energy technologies. Nowadays we have to preserve climate stability as a fundamental objective of every plan/project (IPCC, 2007). We cannot realize economic catalysts that enrich the present generation while damaging future ones. Economic growth can represent an advantage for present beneficiaries (direct, indirect and induced activities), but it represents a cost for future generations in terms of (direct, indirect and induced) pollution effects and impacts on climate change. The preservation of the beauty of the cultural landscape and conventional energies are in conflict in the medium-long term. Therefore the strategic plan finds its basis in the way energy is produced and consumed: that is, in the energy plan. Energy is not “given” to the city in advance, but has to be “produced”. The combination of the conservation of cultural heritage and new architecture with new energy systems represents the entrance point for activating a new development strategy for the city. Indeed, the energy problem is structurally interdependent with the use of land and space, with urban/spatial planning, and with transport, housing, infrastructure and industry policies (Fusco Girard and Nijkamp, 2004). The strategic plan can find implementation only through “solar city” strategies.
The “solar city” is characterized by a new and original mix of all the renewable energies and by maximum energy efficiency (Fusco Girard and Nijkamp, 2004). Beauty and new renewable energy sources are congruent and can produce economic development, “linking together” the improvement of cultural/environmental heritage, innovations, spin-off activities, new research and development activities and therefore new employment. The productive sector of renewable energies and of energy efficiency is characterized by a very promising potential for expansion, especially in relation to small and medium enterprises, and can therefore improve local development. The capacity to generate new employment in a high-technology sector derives from the increasingly widespread application of renewable energies within the residential, transport and industrial sectors, as well as within the agricultural and service sectors. New professional and research activities are opening up. The strategic plan therefore has to find in the energy dimension its starting point for activating a process of change within the economic policy of the city, within industrial and employment policies. We want to underline here how the energy strategy becomes the barycentre of the new strategies of urban development, because it faces the three critical issues mentioned above. Renewable energies are our future if we want to build a better world. The strategic plan, barycentred on the energy issue, stimulates new economic, industrial and employment policies. It combines with a more sustainable use of the flow of natural and water resources and with the improvement of urban metabolism, through the recycling of materials and waste. The reduction of energy consumption is often achieved through an increase in the use of materials. The new urban management translates itself into the stimulus of architectures, infrastructures and land uses able to minimize the consumption of energy and natural resources.
8. Strategic Plan and Knowledge Strategy

Every project of change and every tool of the strategic plan is destined to remain ineffective if it is not really shared by the inhabitants. They are the pioneers of the transition. Substantially, we come back to the knowledge issue and, more generally, to the cultural question. Knowledge and, more generally, culture are the “key energy” for urban development. The future of the city is not determined only by planning or by economic or environmental choices. The future of the city depends on the way of thinking and living of every inhabitant, on his aspirations, work and life and on the lifestyle he adopts: on the priorities guiding him in his choices. These are deeply shaped by the flow of information and knowledge and by the existing infrastructure. From them, a certain lifestyle and a certain demand derive. For instance, the provision of a GIS highlighting the level of energy consumption of every house and the corresponding savings, due to energy-efficiency actions, micro-generators and different lifestyles, stimulates individual responsibility and an emotional involvement: in short, it forces the person to think about energy, which is no longer “automatically given”.
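The per-house energy indicator that such a GIS might display can be sketched very simply: compare each dwelling’s baseline consumption with its current consumption and report the savings. This is a minimal illustration; the field names and figures are invented, not drawn from any real system described in the chapter.

```python
# Hypothetical sketch of a per-house energy-savings indicator for a GIS layer.
# All field names and consumption figures are invented for illustration.

houses = [
    {"id": "H1", "baseline_kwh": 4200, "current_kwh": 3100},  # after retrofit
    {"id": "H2", "baseline_kwh": 3800, "current_kwh": 3900},  # consumption rose
]

def savings_report(house):
    """Absolute and percentage savings relative to the baseline year."""
    saved = house["baseline_kwh"] - house["current_kwh"]
    pct = 100 * saved / house["baseline_kwh"]
    return {"id": house["id"], "saved_kwh": saved, "saved_pct": round(pct, 1)}

for h in houses:
    print(savings_report(h))
```

Attached to each building footprint in a map layer, an indicator like this is precisely what the text suggests would make energy visible and stimulate individual responsibility.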
332 L. Fusco Girard / Urban System and Strategic Planning: Towards a Wisdom Shaped Management
The general thesis we want to develop is that a more desirable urban future requires the activation not only of technical/administrative/management processes, but above all of cultural ones. The foundation of sustainability in the city is in its culture: in the knowledge, creativity and values of its inhabitants. Culture is the real power orienting change and able to drive the transformations of the city. Cities investing in knowledge and culture are the most flexible and resilient, capable of self-organization. Cities that manage to improve their cultural strategy can better enter the global economy. Culture produces trust, the principle on which economic and public/political activities are based; it fights resignation and separation, and it builds integration. Culture produces hope and a future: there is no future at all if every individual thinks only about his own particular interests, without any attention to interests going beyond the subjective one. A cultural project should support every policy of urban development (infrastructures, regeneration of historical centres, economic and environmental policies, etc.), activating first of all a dense horizontal communicative process among all the knots of the urban cultural network, towards a cooperative polycentric organization. In order to do that, it should connect in an efficient network the different activities linked to the fruition of knowledge, the communication of knowledge and innovation in knowledge. The urban strategic project for culture should make it possible to fix and transmit the collective memory and the cultural tradition of the city, which nowadays risk dissolving under the pressure of the culture of the instant, that is, of pan-economicism. The availability of inhabitants to participate in the construction of collective choices depends on the quality and quantity of information and knowledge.
The capacity to take care of public spaces, starting from green and cultural assets, depends on the available information and knowledge; more generally, whether inhabitants take up the three challenges above depends on them as well. The strategic cultural project must also stimulate the reintegration of social capital (Fusco Girard and You, 2006). The production of such social capital can be incentivized directly and indirectly by networking associationism, voluntary organizations, people committed to fair and equal trade, micro-credit, the shared economy, and non-monetary exchange systems based on reciprocity (such as LETS, RES, SEL, “time banks”, etc.), which do not pass through the competitive capitalistic organizational model. They constitute the system of civil economy, which produces relational values (Zamagni, 1999). All the experiences of urban development indicate that change happened only when associations/movements characterized by strong civil/social/spiritual capital networked with each other. Neither the private operator nor the local government is interested in the above. Only particular institutions, such as the University and the world of education, have a real interest in, and the capacity for, activating this process, together with the network of associations and civil society. School and University represent the starting point of this process, as they integrate practical knowledge (know-how) and critical knowledge (know-why). Only through critical knowledge is it possible to choose the most suitable priorities within a framework of such accelerated changes, and the very reasons for such priorities.
The Local Agenda 21 for Culture process (Barcelona, 2004) makes this strategic project for culture start from the bottom, on the basis of the participation of citizens and different institutions. First of all it is production and transmission of knowledge. The outcome of this process is the cultural project of the city, as the new “public space” of dialogue and evaluation, through which it is possible to build the city’s specific answers to the three great challenges of our time, making identity, social responsibility and citizenship grow. In the end it becomes the real catalyst for new processes of economic development and regeneration.
9. Some Best Practices of Urban Management/Governance
What has been set out so far finds practical verification in a series of best practices realized in recent years by some European cities that were the object of a specific study (Fusco Girard and You, 2006). These best practices represent the starting point for new strategies of urban management and governance. a)
Local Agenda 21 has been one of the governance instruments common to many experiences (Dublin, Lyon, Freiburg, Wien, Vilnius, Oporto, Praha, Bologna, Ferrara, Grosseto) for building from the bottom up a “vision” of sustainable development and implementing its strategic plan. Through Local Agenda 21 they have tried to coordinate the rhythms of the urban ecosystem with the rhythms of the urban economy.
Maybe the most striking case is represented by the city of Swansea, the world capital of copper production until the last century. It produced as much ecological poverty as economic wealth, because of the consequent environmental pollution. The result was a “landscape” and a territory recognized as one of the most degraded zones in the United Kingdom. The strategic plan, built on the reconstruction of the territory as an ecological support and therefore on the reconstruction of the natural landscape, has shown that coordination between the ecological system and the economic system is convenient. b) Through the strategic plan the cities above have improved their competitive capacity within the national/international market, investing in the economy of culture/knowledge in order to improve their “position”. Through the strategic plan cities have expressed their projectual capacity, combining art and technology, memory and innovation, competitiveness and community. The themes of strategic plans differ: cities of the new economy (Vilnius); green/ecological cities (Swansea and Grosseto); cities of logistics and international exchange (such as Lyon); cities of culture (such as Santiago and Ferrara); cities of innovation and excellence (Dublin and Freiburg); cities as communities of communities, or as systems of neighbourhoods (Bologna). c)
Behind every best practice realized by the cities there is a strong creative capacity, through which the distance between what is considered desirable and the status quo has been reduced. Creativity has multiplied the outcomes produced, creating new wealth: therefore it is “convenient”.
Creativity is strictly connected to the very notion of sustainability, as respect for a limit becomes a stimulus for a new project, a process of re-invention which replaces the conventional, the routine, etc. Besides conditions of scarcity, conditions of crisis and conflict are also often able to generate new ideas and new alternatives. The strategic plan has succeeded where it has promoted the “creativity” of the city. This does not depend only on a city’s centres of technological/scientific research or artistic production, but on the widespread culture of its inhabitants, on their way of thinking and acting. A critical, creative and long-term way of thinking makes it possible to promote the general interest and the common good, and to protect future generations and marginal subjects. The creative city is not only the city of an elite of artists and scientists; it belongs to people with ever higher-quality training, connecting in ever new networks. d) Creativity expresses the capacity to combine conflicting opposites (values, objectives, criteria, interests) in an original way, with positive solutions for everybody, with an approach based on “and … and” rather than “or … or”; that is, doing without trade-offs (Zeleny, 1988, 2005). These experiences show that the conflict lies not so much in reality as in our projectual incapacity: it is rather in our mind (Zeleny, 1982). For instance, the experience of Freiburg, but also of Ferrara, shows that the preservation of the natural environment through the use of renewable energy sources is not only an expression of solidarity towards future generations, but also a convenient process from the economic point of view. In the same way, recourse to micro-credit is not only an expression of attention/solidarity towards the conditions of marginal subjects; it is also convenient, because it produces social capital and economic profits.
In particular, the experience of Lyon and Bologna stresses how the production of civil networks through participation is also convenient from the economic point of view, because it multiplies social connections and guarantees systemic behaviours. Actually the intentional coordination of the actions of different subjects promotes cooperative capacity among the actors, collaboration and co-responsibility, on which local development depends: coordination is economically convenient. The experience of Wien, Santiago de Compostela, Ferrara, Oporto, Dublin, Vilnius, etc. also shows how investing in knowledge, and more generally in culture, pays off economically because it activates new activities and new capitals. All the cities are investing in the improvement of their cultural environment: in the valorization of their local specificities (culture, history, etc.). They transform their advantage in the cultural field into an economic advantage. The investment in culture has gone first of all into the cultural/architectural heritage, through the restoration of historical centres and new architecture of high quality (Ginger and Fred by Gehry in Praha, the Exupéry Station by Calatrava and the new amphitheatre by Renzo Piano in Lyon, the cultural centre by Álvaro Siza in Santiago, etc.). It has had a relevant impact on cultural tourism, which has registered relevant increases in many cities (from Praha to Wien and Dublin). The investment in culture has also regarded the visual/theatre/cinematographic arts, multimedia activities, software production, leisure, specialized research, etc. Many cities are investing in culture for the regeneration of their degraded peripheral areas.
e)
Another element common to many good practices (from Wien to Freiburg, from Ferrara to Lyon, to Santiago de Compostela and Dublin) is attention to the energy knot, that is, to maintaining climate stability over time as a priority objective through measures of energy efficiency and the use of renewable sources.
The city of Freiburg in particular has elaborated the strategy of the “solar city”, which has rapidly spread in the international context, with significant impacts on production and employment. For instance, the city of Oxford has defined itself as “historical city and solar city”. Within the Temple Bar regeneration, Dublin has realized the Green Building, with a strict integration between art and renewable energy sources. The city of Daegu (in Korea) has adopted a strategic plan based on energy innovation, on economic development linked in particular to the new activities induced by new technologies, and on the development of a widespread culture able to stimulate a new demand. f)
Behind every good practice there is a long-term (or medium/long-term) vision of the future, as the fruit of community participation. This vision is not a technical construction but a cultural one. It has also involved an emotional tension, generating enthusiasm and trust. It has been not only participation in the project, but an adhesion born from considering the project not as an external element, but as linked to one’s places and values.
This comprehensive vision built upon values has incentivized inter-personal communication and facilitated it, despite plurality, complexity and many turbulences. g) A further element common to the different experiences is represented by the strategy of governance adopted at the local level through best practices. They represent facts of high quality and excellence. The challenge is to multiply best practices, making them become an element of continuity/ordinariness across time and space. That means spreading the knowledge of best practice as widely as possible (the contrary of what happens in the private sector, where they often remain secret) in order to improve government processes: good policies are born from the critical knowledge of good practices. h) A fundamental element of the strategic plan for its implementation is the capacity for intentional and effective coordination of the actions of the different actors involved, which depends on communicative capacity. It builds greater social/cooperative cohesion and, together, greater competitiveness. General conditions for coordination appear to be connected to the following three circumstances:
• trust among subjects;
• critical knowledge;
• a temporal horizon of the medium-long term.
Actually, trust is the fundamental principle for the functioning of the market, of public institutions and of democracy. If trust is missing, people are not available for coordination with other subjects: there is closure and a danger of particularism; guarantees are created, and then still others. Trust in institutions depends above all on their efficiency/efficacy in satisfying needs.
Critical knowledge is linked to the capacity to use fora as places in which to begin reflecting and deciding not only how/when/where to coordinate actions, but first of all “why”. It is linked to the autonomous interpretation of single participants, to a hermeneutic process of discernment on the basis of an adequate knowledge of reality, and to the capacity to evaluate the costs and benefits of coordination (both in the short and in the medium-long term). It is linked also to the articulation of objectives, that is their hierarchization, in the light of priorities between non-negotiable objectives and instrumental objectives. Every choice regarding the city (land use, use of space, infrastructures, etc.) has a temporal horizon of many decades (and often centuries). The long-period horizon is necessary “to balance” the sacrifices asked of different subjects (lost benefits) with the benefits they acquire. This horizon defuses the conflict of interests and in its turn allows a better critical interpretation of knowledge. There is a relation between the crisis of the city and the crisis of time. Under the push of new technologies there is an emphasis on short time (the “real time”). Under the tyranny of the instant and of emergency, everything is reduced to the here and now, without the capacity to foresee the impacts of choices in the long term, and with a general de-responsibilization towards the achievement of the general interest. People behave as entrepreneurs, applying the economic calculation of costs/profits. By extending the temporal perspective, people become entrepreneurs of humanity, catalysts of relationality and reconciliation. The strategic plan (and also the action plan of Local Agenda 21) is an instrument used (and to be used more and more) for the extension of the temporal horizon: it serves first of all to produce a long-period culture, that is, precisely the culture of sustainable development. In the end, sustainability is nothing other than the “memory of the future”.
The strategic plan (and the action plan of Local Agenda 21) has succeeded where it has managed to convince people that time is dual: there is a near time and a far time. i)
In conclusion, the strategic plan has succeeded where it has managed to face the cultural knot.
The management of the city is not a technical/management/administrative fact, but above all a cultural one. The foundation of the strategic plan is represented by values. It helps to reason by values and to reproduce/recreate values. It is successful if it manages to communicate expert knowledge (which is an elite heritage) to the whole city, transforming it into common knowledge, into current culture, into knowledge which is not only useful but also civil. In that way the strategic plan has rebuilt the public space in cultural terms.
10. Evaluation in the Strategic Plan
The effective coordination of several actors around strategic projects, towards which resources are to be converged, becomes a central element in order to multiply the opportunities offered by the city. Behind the capacity for coordination there exists an evaluation of reciprocal conveniences by the single subjects involved. The activity of foreseeing, evaluating and monitoring the impacts arising from the different actions of various subjects represents a fundamental element for the implementation of the strategic plan and for improving the programmed action.
The integrated socio-economic-environmental evaluation helps to identify priorities within the urban strategic plan, to improve the quality of the action/project, and also to build new partnerships, agreements and pacts among different actors. Through multicriteria evaluation methods it is also possible to better understand all the characteristics of an action/project (Cochrane and Zeleny, 1973; Zeleny, 1982; Nijkamp and Voogd, 1990) and to derive priority rankings among the alternative projects through which the strategic plan is realized, or among different trajectories of action characterized by different levels of uncertainty and complexity. Every city, and in particular Praha, Wien and Lyon, has collected a series of information, data and knowledge in order to support the decisional and planning process, as well as the participative process and the monitoring step. The local information system has been made available thanks to the new information technologies. From it, new indicators can be derived with which to constantly monitor the achieved outcomes. Many of these indicators are of a qualitative (or quanti-qualitative) kind and refer to the perception of well-being by inhabitants (Cummins, 1999).
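To illustrate the kind of priority ranking such methods produce, the following sketch ranks alternative projects by a simple weighted sum. The project names, criterion scores and weights are invented for illustration and do not come from the study cited above; actual multicriteria methods (e.g. those of Zeleny or of Nijkamp and Voogd) are considerably more sophisticated.

```python
# Hypothetical weighted-sum multicriteria ranking of alternative urban
# projects. Scores (0-10 per criterion) and weights are invented for
# illustration; they are not taken from the chapter or the cited study.

criteria = ["economic", "social", "environmental"]
weights = {"economic": 0.3, "social": 0.3, "environmental": 0.4}

projects = {
    "solar retrofit":       {"economic": 6, "social": 7, "environmental": 9},
    "heritage restoration": {"economic": 7, "social": 8, "environmental": 5},
    "new ring road":        {"economic": 8, "social": 4, "environmental": 2},
}

def score(name):
    """Weighted sum of one project's criterion scores."""
    return sum(weights[c] * projects[name][c] for c in criteria)

# Rank the projects from best to worst overall score.
ranking = sorted(projects, key=score, reverse=True)
for name in ranking:
    print(f"{name}: {score(name):.2f}")
```

A weighted sum is only the simplest aggregation rule; outranking and compromise-programming methods handle incommensurable criteria better, but the logic of deriving a priority ranking from plural criteria is the same.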
11. Conclusions: Towards an Urban Management Shaped by Wisdom
It is necessary “to go beyond” mere efficiency in the management of the city: beyond specialist technicality, beyond the partial (or sectoral) interpretation of sustainable development. It is necessary “to go beyond” the mere production of material wealth, towards a wealth made also of relationships, reciprocities, and cooperative/coordinative capacity. It is necessary to improve the quality of urban space through excellent architecture, but that is not sufficient if we do not invest in people, in their critical, creative and relational capacity. Wise management, required also by the EU’s ESDP (1999), combines in an original and systematic way the production of man-made and natural capital with human and social capital, in order to implement concrete strategies of sustainable development. It is open to outcomes, adaptive and iterative; open to visioning and to concrete experiments/projects. The “wise” management of the city is characterized by a long-period temporal horizon, by the capacity to coordinate a growing plurality of subjects (also in relation to the pressure due to migratory processes), and by a particular emphasis on redistributive objectives: increasing poverty can put in crisis the very organization of the city. What matters is modifying the structural processes which produce marginality, and not only addressing their effects. “Wise” management multiplies the networks among institutions (producing/transmitting knowledge), enterprises and public subjects, stimulating new partnerships and investments in the material and immaterial infrastructure among the knots of the urban network. “Wise” management expresses itself also in the promotion of new social/civil networks among the different niches of social capital already existing in the city, in order to improve coordination and cooperation.
Coordination and cooperation are values upon which economic affairs are built, self-generating wealth without waiting for transfers of public resources: in that way an authentic endogenous development is promoted.
“Wise” management coordinates and closes the cycles of urban metabolism, with the recycling of materials and water resources. It reverses the priorities of economy and politics, because it is concerned first of all with guaranteeing that the free services of the ecosystem can continue to be delivered. In other terms, the value chain recognizes its own foundation in the intrinsic values of ecosystems, from which economic use values and market values come. That means that the planning, infrastructural or architectural project is based on the energy needed for its management over its useful lifetime. That is, it is based on a local energy strategy which utilizes all the energy sources (and not only conventional ones). In that way it activates the transition towards the post-oil urban economy. Architecture is assuming a growing importance within the management of the city, because it promotes an attractive image which makes the difference. Ancient architecture, localized in the city’s knots (that is, in its places), expresses a millenary knowledge in the use of local resources and materials, in the reuse of water, in natural cooling, and in orientation towards solar irradiation and the specific climate. From this link between fluxes of materials and energy derives a symbiosis between architecture and ecology, able to guarantee the regenerability of the resources used, maximum durability and well-being, besides attracting economic activities. “Wise” management is based on critical capacity. It depends on the flux of knowledge at one’s disposal, but also on interpretative capacity. It depends on the capacity to use this knowledge in the best way in choosing not only “how, when, where” to act, but also “what” to do: that is, to choose the aims towards which to orient actions and above all to justify their “why”. Only if we are able to answer the question “why” can suitable hierarchies of priority be built. “Wise” management is based on creativity, which expresses itself in the best practices.
It is founded on the experiences of best practice. From best practices an empirical confirmation emerges: investing in the “common good” is convenient also economically; reducing social injustice, preserving the ecosystem as an expression of solidarity towards future generations, and coordinating the actions of many subjects towards shared goals are convenient, and not only in the long term. Best practices show that it is possible to communicate meta-economic values through the empirical demonstration that they are economically convenient, thanks to the creativity of intervention projects (Van Gigch, 1991). In that way, the scarce priority that politics gives, in management/government, to values such as the fight against poverty, the preservation of the environment and the achievement of the common good can be overturned, and a new hierarchy of objectives is justified, on the basis of which to allocate resources. To take these values/objectives into consideration in a non-marginal way means to promote a wiser management/government of the city, preventing economic interests from prevailing by definition over meta-economic values, and finding a balance between them. The knowledge of good practices then becomes a central element not only in the government/management of the city by institutions, but also among people. For instance, good practices make it possible to overturn or modify short-term priorities with new long-term priorities, activating wiser behaviours by everybody. “Wise” management recognizes the priority of investment in the cultural project. Culture is the key energy for urban development. The future of the city is not determined only by planning, economic or environmental choices: the future of a city
depends on the way of thinking and living of its inhabitants, on their aspirations, their work, their lifestyle, on the priorities guiding their choices. Knowledge and culture are the real force orienting change and able to guide the long-term transformation of the city. Cities which are investing in knowledge and culture are the most flexible and resilient, because they are more capable of self-organization. The strategic project for culture should support and in its turn orient every policy of urban development, activating first of all a horizontal communicative process among all the knots of the urban network, towards a cooperative polycentric organization. The urban strategic plan for culture becomes, in the end, the real catalyst of new processes of development and economic regeneration, as it promotes a way of thinking in the long term, in a critical and creative way. “Wise” management combines culture and technology. The city urgently needs new technologies to improve economic production, efficiency and accessibility to its services, to increase the livability and safety of its inhabitants, and to reduce dependence on exhaustible energy sources. But the city also needs values, meanings and symbols able to give sense to the choices of the here and now. The future of the city will be more and more shaped by the relation between these last elements and technologies.
References
Cochrane J., Zeleny M. (1973), Multiple Criteria Decision Making, …South Carolina Press, Columbia.
Coleman J. (1990), Foundations of Social Theory, Harvard University Press, Cambridge.
Cummins R. (1999), “A Psychometric Evaluation of the Comprehensive Quality of Life Scale”, in Yuan L. et al. (eds.), Urban Quality of Life, National University of Singapore, Singapore.
ESDP (1999), European Spatial Development Perspective, Bruxelles.
Florida R. (2002), The Rise of the Creative Class, Basic Books, New York.
Florida R. (2005), The Flight of the Creative Class, HarperBusiness, New York.
Fusco Girard L., Nijkamp P. (2004), Energia, bellezza, partecipazione: La sfida della sostenibilità. Valutazioni integrate tra conservazione e sviluppo, FrancoAngeli, Milano.
Fusco Girard L. (1989), Conservazione e sviluppo, FrancoAngeli, Milano.
Fusco Girard L. (ed.) (2003), The Human Sustainable City, Ashgate, London.
Fusco Girard L., Forte B. (1999), Sviluppo umano e città sostenibile, FrancoAngeli, Milano.
Fusco Girard L., Nijkamp P. (1997), Le valutazioni per lo sviluppo sostenibile, FrancoAngeli, Milano.
Greffe X. (2003), La valorisation économique du patrimoine, La Documentation Française, Paris.
Greffe X. (2005), Culture and Local Development, OECD, Paris.
Hall P. (1998), Cities in Civilization, Weidenfeld and Nicolson, London.
Hall P., Pain K. (2006), The Polycentric Metropolis: Learning from Mega-City Regions in Europe, Earthscan, London.
Hall P., Pfeiffer U. (2000), Urban 21: A Global Agenda for 21st Century Cities, E & FN Spon, London.
IPCC (2007), Fourth Assessment Report on Climate Change, Paris.
Nijkamp P. (1996), “Spatial Sustainability”, in Regional Sciences, vol. 75, n. 1.
Pearce D. (1995), Capturing Global Environmental Value, Earthscan, London.
UNEP (2003), Switched On: Renewable Energy Opportunities in the Tourism Industry.
Van den Berg L., van Winden W. (2002), ICT as Catalyst of Sustainable Urban Development, Ashgate, Aldershot.
Van den Berg L., Pol P., van Winden W., Woets P. (2005), European Cities in the Knowledge Economy, Ashgate, Aldershot.
Van Gigch J. (1991), System Design Modeling and Metamodeling, Plenum Press, New York.
Zamagni S. (2003), “Sustainable Development”, in Fusco Girard L. (ed.), The Human Sustainable City, Ashgate, London.
Zeleny M. (1982), Multiple Criteria Decision Making, McGraw-Hill, New York.
Zeleny M. (1988), “Beyond Capitalism and Socialism: Human Manifesto”, Human Systems Management, vol. 7, n. 3.
Zeleny M. (1998), “Multiple Criteria Decision Making: Eight Concepts of Optimality”, Human Systems Management, vol. 17, n. 2.
Zeleny M. (2005), Human Systems Management, World Scientific Publishing, Singapore.
Advances in Multiple Criteria Decision Making and Human Systems Management Y. Shi et al. (Eds.) IOS Press, 2007 © 2007 The authors. All rights reserved.
Digest® Wisdom: Collaborate for Win-Win Human Systems Nicholas C. GEORGANTZAS Fordham University Business Schools, 113 West 60th Street, Suite 617-D New York, N.Y. 10023-7484, USA, E-mail:
[email protected] Abstract. Model analysis in system dynamics (SD) entails articulating exactly how the structure of circular, feedback relations among variables in a system determines its performance through time. This article combines human games with SD to show the use and benefits of model analysis with the pathway participation metric (PPM) implemented in the Digest® software. Four SD game models depict Markovian paradoxical games with two players or groups that choose between collegiality and discord tactics as the means of conflict resolution in business and civil litigation. Paradoxical self-referential games are non-constant sum (one player’s loss is not automatically the other’s payoff) conflicts, where the two players or groups compete with dynamic (i.e., time varying) probabilities of collaboration. Their game is paradoxical because both parties can either win or lose simultaneously. It becomes self-referential when the payoff or ‘tempting’ parameters, and the prior discord and loss coefficients depend explicitly on the participants’ collaboration probabilities. Large subsets of initial discord tactics converge on a fixed-point attractor to sustain collaboration equilibria. Games end once the point attractor has absorbed all dynamics, leaving the system in a stable, negative feedback state. Keywords. Civil litigation, collaborative law, collegiality, conflict, Deming, games, system dynamics
Client-driven, the entire system dynamics (SD) modeling process aims at helping managers articulate exactly how the structure of circular feedback relations among variables in a system they manage determines its performance through time (Forrester and Senge 1980). In the endless hunt for superior organizational performance, which only ‘systemic leverage’ endows (Georgantzas and Ritchie-Dunham 2003), SD brings its basic tenet: the structure of feedback loop relations in a system gives rise to its dynamics (Meadows 1989, Sterman 2000, p. 16). A coherent problem-solving method, SD can attain its spectacular Darwinian sweep (Atkinson 2004) as long as it formally links system structure and performance. To help academics and practitioners see exactly what part of system structure affects performance through time, i.e., detect shifting loop polarity and dominance (Richardson 1995), SD researchers use tools from discrete mathematics and graph theory first to simplify and then to automate model analysis (Gonçalves, Lerpattarapong and Hines 2000, Kampmann 1996, Mojtahedzadeh 1996, Mojtahedzadeh et al. 2004, Oliva 2004, Oliva and Mojtahedzadeh 2004). Mostly, they build on Nathan Forrester’s (1983) idea to link loop strength to system eigenvalues.
342
N.C. Georgantzas / Digest Wisdom: Collaborate for Win-Win Human Systems
Cast as a methodological application, this article shows the use and benefits of model analysis with Mojtahedzadeh’s (1996) pathway participation metric (PPM) implemented in his Digest® software (Mojtahedzadeh, Andersen and Richardson 2004). Shown here is part of a modeling project that combined human games with SD to answer specific client concerns about the dynamic consequences of choosing between collegiality and discord tactics as the means of conflict resolution in business and civil litigation. The in-house counsel of a large insurance firm, the client company objects to its lawyers spending so much time in disputes during the pre-trial phase of the civil litigation process. Extracted from that project, four SD game models depict Markovian paradoxical games with two players or groups. Paradoxical self-referential games are non-constant sum (one player’s loss is not automatically the other’s payoff) conflicts, where the two players or groups compete with dynamic (i.e., time varying) probabilities of collaboration. Their game is paradoxical because both parties can either win or lose simultaneously. It becomes self-referential when the payoff or ‘tempting’ parameters, and the prior discord and loss coefficients depend endogenously on the participants’ collaboration probabilities. Past research (Nicolis 1986, Nicolis, Bountis and Togias 2001, Rapoport 1966, Swingle 1970) has found two-contestant paradoxical self-referential game models with exogenous parameters to be conservative, possessing two centers around which games oscillate forever. When the payoff, prior discord and loss parameters vary endogenously, however, then the dynamics becomes dissipative, possessing a single fixed-point attractor of moderate equal gains. Large subsets of initial discord tactics converge on this attractor to attain constant probabilities of collaboration.
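The Nicolis-type equations themselves are not reproduced in this excerpt. As a rough, self-contained illustration of what a dissipative two-player dynamic with a single fixed-point attractor looks like (all coefficients below are hypothetical and do not correspond to the Digest® models), consider:

```python
# Illustrative dissipative two-player collaboration dynamic. This is NOT the
# Digest/Nicolis model: here each player's target collaboration probability
# mixes a baseline willingness with a damped response (< 1) to the other's
# collegiality, so the map contracts and widely different initial tactics
# all converge on one fixed-point attractor.

BASE, RESPONSE, SPEED = 0.3, 0.5, 0.2   # hypothetical coefficients

def step(x, y):
    """One adjustment step for the two collaboration probabilities."""
    x += SPEED * ((BASE + RESPONSE * y) - x)
    y += SPEED * ((BASE + RESPONSE * x) - y)
    return x, y

trajectories = []
for x0, y0 in [(0.0, 1.0), (0.9, 0.1), (0.5, 0.5)]:
    x, y = x0, y0
    for _ in range(300):
        x, y = step(x, y)
    trajectories.append((round(x, 3), round(y, 3)))

# All three initial tactics are absorbed by the same point attractor,
# x* = y* = BASE / (1 - RESPONSE) = 0.6.
print(trajectories)
```

Because the response coefficient is below one, the game ends in a stable, negative-feedback state once the point attractor has absorbed all dynamics, which is the qualitative behavior the article attributes to the endogenous-parameter variants.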
According to Winch (1997), SD researchers often shy away from games, particularly ones not quantified, because of these models’ limited accounting for feedback in competitive systems that depend on potential payoffs and fixed costs of social transformation and change. Another concern is that managers detest models which reduce their real-life analyses and deliberations to abstract mathematics. Indeed, conventional economic logic suggests that payoff, prior discord and losses are causally prior to collaboration probabilities. But this article’s simulation results show that causality runs in both directions. Its causal loop worldview makes SD uniquely suited to computing win-win scenarios in business and civil litigation, tying pieces together into an account of a society that is far more generative and empowering than alternatives based on conventional economics (Atkinson 2004, Castoriadis 1994). It might be the “somewhat critical manner” (Forrester 2003, p. 331) in which system dynamicists approach economics and operational research (OR) that keeps them isolated from these fields. Yet, Sterman’s principle #4 for the successful use of SD states: System dynamics does not stand alone. Use other tools and methods as appropriate. Most modeling projects are part of a larger effort... Modeling works best as a complement to other tools, not as a substitute (Sterman 2000, p. 80). And, according to Repenning: Other communities in the social sciences maintain different worldviews, but our worldview is more consistent with those in psychology, sociology and anthropology. We should not abandon our efforts to become better connected with our colleagues in economics and operations research (Repenning 2003, pp. 319–320). Following the background section below, the article shows an SD interpretation of four paradoxical self-referential games. Causal tracing and partial loop analyses embellish model description as it moves from exogenous payoff, prior discord and loss parameters to endogenous, time-varying coefficients to symmetric and asymmetric impartiality with respect to each player’s attention to the other’s collegiality or lack of it. The results section follows the same progression as model description does but, using Digest®, it also looks at the four model variants in terms of their shifting loop polarity and prominence. The article does not merely translate the work of Nicolis (1986) and Nicolis et al. (2001) into SD models to replicate their results, but dares to ask how and why these models produce the results they do. With the help of Digest® the article ventures beyond dynamic and operational thinking, seeks insight from system structure and thereby accelerates circular causality thinking (Richmond 1993). Digest® helps detect exactly how loop polarity and prominence determine system performance. If both players collaborate free from undue bias and preconceived notions, equally ignoring each other’s collegiality or lack of it, then they move closer to the maximum payoff and both parties collaborate with probability 1 (one). In the asymmetric model, however, where one of the players takes the other’s collegiality less into account, it is the most impartial player or group that profits the most!
Background

Almost everyone in business can benefit from improved negotiation skills. And management by cooperation might best express Deming’s thinking in the late 1980s and early 1990s. Deming (2000) talks about ‘win-win’ situations in a new climate, diametrically opposed to the ‘I win-you lose’ attitude of the competitive rivalry ethic. According to the OQPF Roundtable (2000), rivalry produces win-lose results, whereas collegiality and collaboration among individuals and organizational units can produce win-win, all-gain results for the benefit of all concerned. Win-win results require not only commitment to collegiality and teamwork, but also the removal of reward systems that promote local optimization and system damage. Indeed, managing change is tough in human systems that exhibit autopoiesis and self-sustainability (Zeleny 1997, 2005). Organizational process improvements show a start and fizzle pattern so frequently that organization researchers use it as a reference mode to develop dynamic hypotheses about change (Morrison 2001). It shows the tendency for changes to run out of energy and to lose momentum almost as soon as they begin. Change efforts that foster learning and build understanding of processes and skills for collaborating effectively in turn lead to benefits in subsequent collaborative efforts (Morrison 2002). In the context of civil litigation, conflict is the heart and soul of the law. To help manage conflict, rules of law have been prescribed to govern human interaction. In essence, the function of the law is to establish: rules and procedures that constrain the power of all parties, hold parties accountable for their actions, and prohibit the accumulation of autocratic or oligarchic power. It provides a variety of means for the non-violent resolution of disputes between private individuals, groups, or between those actors and the government (Crocker and Hampson 1996, p. 586).
If the role of law is to resolve conflict, then how is it that the system in which the law functions is itself the producer of more conflict? On a closer look, the civil judicial
system is inherently adversarial. The performance of two adverse parties fighting for their desired outcomes defines its organizational makeup. Erroneously, policies that govern the judicial process are premised on constant-sum war games (Nicolis 1986, pp. 218–222), where one’s gain is another’s loss and this becomes the guiding force in resolving legal disputes. When litigants choose to file a lawsuit, to take discovery, to file motions, to decline settlement offers and to appeal, the players engage in duels where one’s death is another’s victory. Engaging in a duel makes the opponents undertake risk. So, understanding litigants’ proclivities for risk is essential to understanding their behavior, the nature of litigation and the likely impact of changes in the civil justice system (Rachlinski 1996). Chief Justice Warren E. Burger notably observed: “our litigation system is too costly, too painful, too destructive, too inefficient for a civilized people” (cf. Arnold 2000). Collegiality, a value bestowed upon the civilized world, is not exactly what characterizes litigation. Instead, litigation, particularly the pre-trial phase of the process, provides the battlefield for a combat to be won, not a dispute to be resolved. To collaborate for a mutually desirable outcome is anathema for litigation proponents, who consider collaboration a concession of weakness. Fisher, Ury and Patton (1991) offer practical guidelines for business executives and lawyers dealing with each other, with superiors and staff, with customers, partners, suppliers and with government regulators. Behind this ‘principled negotiation’ approach is the belief that when each side comes to understand the interests of the other, they can jointly create new options that are mutually advantageous, resulting in a wise settlement. This belief guides the work on litigation economics that originated with a trio of articles from the early 1970s (Gould 1973, Landes 1971, Posner 1973).
Subsequently, Shavell (1982) and then Priest and Klein (1984) made significant contributions to the field. Notably, Priest and Klein (1984) predict, for example, that suits failing to settle before trial have a 50 percent chance of ending in a plaintiff’s verdict. Shavell (1982) shows the economic divergence of private and social goods in litigation. Cooter and Rubinfeld (1989) provide an excellent review of this literature. The same belief also drives lawyer Stuart G. Webb of Minneapolis, Minnesota, the founder of collaborative law (Bushfield 2002). In 1990, after 15 years of practicing ‘traditional’ family law, Webb decided to end the frustration and stress that he and his clients were experiencing (McArdle 2004). Today, collaborative law is becoming the norm in civil litigation, where interdisciplinary problem solving prevents adversarial tactics. Based on being proactive, litigants seek first to understand and then to be understood. With its cooperative mode of negotiation and its use of neutral experts to resolve conflict, the field is gaining momentum and attention. Perhaps the most profound development in the legal profession, it adds yet another avenue to alternative dispute resolution, and shares a commitment to achieving settlement without formal litigation. The benefits of engaging in collegial dispute resolution have motivated two-contestant game research since the Trucking Game of Deutsch and Krauss (1960). Swingle (1970) surveys many situations where such conflicts emerge, with players who need not look at games for moral principles; they already have their own. It is possible to frame the paradoxical, self-referential games that unfold in business and civil litigation using discrete-time Markov chains. Courcoubetis and Yannakakis (1988) and Hansson and Jonsson (1994), for example, investigate discrete-time Markov chain
models. Large classes of stochastic systems operate, however, in continuous time, so continuous-time Markov chains form a useful extension in a generalized framework for decision and control (Ross 1983). Psycho-physiological research and evidence (Gershon et al. 1977, Wolf and Berle 1976) as well as trends in psychosomatic medicine (Hill 1976) encouraged Nicolis (1986) to study hierarchical systems in human communication (Watzlawick, Beavin and Jackson 1967). Given that all human cognitive levels use the same hardware, namely groups of collaborating neurons and tissues, Nicolis focused on collaborative communication regimes that weigh homeostatic probabilities as they strive to maximize a pre-selected ‘figure of merit’ or payoff. To quantize the state space at each hierarchical level, Nicolis replaced the stochastic nonlinear differential equations that correspond to continuous state descriptions with discrete-time Markov chains. The transition matrices that characterize these chains fully describe the transitions among all possible system states at each hierarchical level (Nicolis 1986, pp. 184 and 377–378). Following Rapoport (1966), and with the help of Itô stochastic differential equations, Wiener-Lévy processes and the Fokker-Planck-Kolmogorov equation, Nicolis (1986) and Nicolis et al. (2001) propose a logic for specifying the properties of such systems. Building on the work of Nicolis (1986) and Nicolis et al. (2001), this article in turn shows a generic structure for modeling paradoxical self-referential Markov games in continuous time with SD. Mojtahedzadeh’s Digest® software plays a crucial role in the analysis of the article’s four SD game models. The pathway participation metric inside Digest® detects and displays prominent causal paths and loop structures by computing each selected variable’s dynamics from its slope and curvature, i.e., its first and second time derivatives.
Without computer simulation, even experienced modelers find it hard to test their intuition about the connection between circular causality and SD (Oliva 2004, Mojtahedzadeh et al. 2004). Using Digest® is, however, a necessary but insufficient condition for creating insightful system stories. The canon? Insightful system stories demand integrating insight from dynamic, operational and feedback loop thinking (Mojtahedzadeh et al. 2004, Richmond 1993). Linked to eigenvalue and dominant loop research, Mojtahedzadeh’s (1996) pathway participation metric is most promising in formally linking performance to system structure. Mojtahedzadeh et al. (2004) give an extensive overview of PPM that shows its conceptual underpinnings and mathematical definition, exactly how it relates to system eigenvalues and concrete examples to illustrate its merits. Very briefly, PPM sees a model’s individual causal links or paths among variables as the basic building blocks of structure. PPM can identify dominant loops, but does not start with them as its basic building blocks. Using a recursive heuristic approach, PPM detects compact structures of chief causal paths and loops that contribute the most to the performance of a selected variable through time. Mojtahedzadeh et al. (2004, pp. 7–11) also present Digest®, the software that enables the painless use of the PPM algorithm for model analysis. Helpfully, the software’s use and outputs consume about five pages of their article. So, briefly again, Digest® detects the causal paths that contribute the most to generating the dynamics a selected variable shows. It first slices a selected variable’s time path or trajectory into discrete phases, corresponding to seven behavior patterns through time (legend, Fig. 9). Once the selected variable’s time trajectory is cut into phases, PPM decides which pathway is most prominent in generating that variable’s performance within each phase.
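The slope-and-curvature idea behind this phase slicing can be sketched in a few lines of Python. This is emphatically not Digest®’s algorithm, only a reduced illustration: the function names, the `eps` threshold and the five labels below are my own assumptions, approximating the growth and decline patterns in the Fig. 9 legend (linear patterns are folded into the nearest label).

```python
# Simplified sketch of slicing a trajectory into behavior phases from its
# slope and curvature, estimated by finite differences. Labels, names and
# thresholds are illustrative assumptions, not Digest®'s seven-pattern legend.

def classify_phases(series, dt, eps=1e-4):
    """Label each interior point of a time series by its slope/curvature signs."""
    labels = []
    for i in range(1, len(series) - 1):
        slope = (series[i + 1] - series[i - 1]) / (2 * dt)          # central difference
        curvature = (series[i + 1] - 2 * series[i] + series[i - 1]) / dt**2
        if abs(slope) < eps:
            labels.append("equilibrium")
        elif slope > 0:
            # Growing: convex reads as reinforcing (+), concave as balancing (-).
            labels.append("reinforcing growth" if curvature > 0 else "balancing growth")
        else:
            # Declining: concave reads as reinforcing (+), convex as balancing (-).
            labels.append("reinforcing decline" if curvature < 0 else "balancing decline")
    return labels

def phases(labels):
    """Collapse per-point labels into contiguous phases, as Digest® displays them."""
    out = []
    for lab in labels:
        if not out or out[-1] != lab:
            out.append(lab)
    return out

# Exponential growth classifies as one long reinforcing-growth phase.
labels = classify_phases([1.05 ** i for i in range(50)], dt=1.0)
```

A saturating series such as `1 - 0.9**i` lands in a single balancing-growth phase instead, since its slope is positive but its curvature negative.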
Figure 1. The i = 1, 2 contestant (a) 2 × 2 game matrix, (b) Markov chain, (c) conditional propensities for collegiality Ci and discord Di, and (d) transition probabilities Pji, as the game shifts from state Sj to Si, j = 1, 2, 3, 4 (adapted from Nicolis 1986 and Nicolis et al. 2001).
As causal paths combine to form loops, combinations of such circular paths shape the most influential or prominent loops within each phase. Mojtahedzadeh et al. (2004) conclude with research directions vis-à-vis combining multiple loops, which drive performance within a single performance phase, for added insight into the dynamic trajectory of a single variable and merging multi-variable analyses into a coherent articulation of exactly how system structure drives overall system performance. They are also concerned whether model analysis imparts useful insights to clients’ real-life performance challenges and whether both academics and practitioners will understand the pathway participation metric enough to have faith in Digest®. In response to these concerns, Mojtahedzadeh is testing PPM with a multitude of classic SD models, such as, for example, Alfeld and Graham’s (1976) urban dynamics model (cf. Mojtahedzadeh et al. 2004). Similarly, Oliva and Mojtahedzadeh (2004) use Digest® to show that the shortest independent loop set (SILS), which Oliva (2004) structurally derived via an algorithm for model partition and automatic calibration, does contain the most influential or prominent causal paths that Digest® detects. This article contributes to this line of work.

1. Model Description with Causal Tracing and Partial Causal Loop Analysis

Extending Rapoport’s (1966) work on human behavior and decision making through cybernetic-mathematical analysis, Nicolis (1986) deduced the dynamics of communication system hierarchies from the game between two players involved in alternating plays with set rules. Figure 1 shows the two-contestant (i = 1, 2) 2 × 2 payoff matrix,
Figure 2. Stock and flow diagrams of the (a) collegiality probabilities Xi of players i = 1, 2 and (b) state Sj occupancy probabilities uj, j = 1, 2, 3, 4, in the payoff functions Gi, i = 1, 2.
Markov chain diagram, conditional probabilities or ‘propensities’ for collegiality and discord, Ci and Di, respectively, by players i = 1, 2. The game states Sj, j = 1, 2, 3, 4, shift from state Sj to Si according to the transition probabilities Pji of Fig. 1. The lower-left and upper-right triangles of the matrix squares (Fig. 1a) show the payoff (+ sign) and loss (– sign) coefficients, which determine the conditional probabilities or ‘propensities’ for collegiality and discord, Ci and Di, respectively, for players i = 1, 2. The ß1 and ß2 parameters are tempting factors. They reflect the expected payoff of the first and second player, respectively, when they employ discord tactics and thereby move into states S2 and S3 of Fig. 1a. The same parameters correspond to losses –ßi when one player chooses collegiality and the other chooses discord. Both factors are unity at state S1 but become losses of magnitude –kißi at state S4, where both players choose discord tactics, with loss coefficients ki > 1, i = 1, 2. In his treatment of hierarchical systems, Nicolis (1986) derived both the conditional and the transition probabilities of Fig. 1c and 1d. Given that the two players were at state Sj (j = 1, 2, 3, 4) at a previous step, in such Markovian paradoxical games Nicolis (1986) and Nicolis et al. (2001) define the conditional probability Xi that the ith player (i = 1, 2) collaborates using the equations of Fig. 1c. Figure 1d shows the transition probabilities Pji as the game shifts from state Sj to state Si. If both players had collaborated in a collegial fashion at the previous step, i.e., their game were at state S1, then Xi denotes their collaboration probabilities. If the last step was, however, in any state Sj other than S1, then the di parameters depict these probabilities. Figure 2a shows the stock and flow diagram of the collaboration probabilities Xi model sector, with the exogenous payoff ßi, prior discord di and loss ki parameters, i = 1, 2. 
Figure 2b shows the sector of the state Sj occupancy probabilities uj, j = 1, 2, 3, 4, in the payoff functions Gi, i = 1, 2. And Table 1 shows the equations of both model sectors. The collaboration probability Xi stocks (Eqs 1 and 2) are the deterministic, unitless outcomes of the rate equations 3 and 4 of Table 1a. Nicolis (1986) and Nicolis et al. (2001) derive both the flows (Eqs 3 and 4) and the auxiliary parameters (Eqs 5 through 10 of Table 1a and Fig. 2a) from the players’ payoff functions Gi, i = 1, 2 (Fig. 2b and Eqs 18 and 19, Table 1b). The Markovian kinetics does not evolve under fixed propensities, such as the exogenous, constant, prior discord parameters di, i = 1, 2
Table 1. Collaboration, state occupancy probability, and payoff or loss equations for players i = 1, 2

(a) Collaboration probability Xi equations, with exogenous parameters ßi, di and ki, i = 1, 2

Stocks or Level Variables
X1(t) = X1(t – dt) + (∆x1) * dt {unitless}   (1)
INIT X1 = 0 {unitless}   (1.1)
X2(t) = X2(t – dt) + (∆x2) * dt {unitless}   (2)
INIT X2 = 1 – X1 {unitless}   (2.1)

Flows or Rate Variables
∆x1 = (a1 * X2^2 + b1 * X2 + c1) * (X1 * X2 – d1 * d2 – 1)^(–2) / t {unit = 1 / day}   (3)
∆x2 = (a2 * X1^2 + b2 * X1 + c2) * (X1 * X2 – d1 * d2 – 1)^(–2) / t {unit = 1 / day}   (4)

Auxiliary Parameters or Converters
a1 = d1 * d2 * ß1 * (k1 + 1) {unitless}   (5)
a2 = d1 * d2 * ß2 * (k2 + 1) {unitless}   (6)
b1 = – k1 * ß1 * d1 * d2 * (1 + d1 + d2) + d1 * d2 * ß1 * (d1 – d2) + d1 * d2 {unitless}   (7)
b2 = – k2 * ß2 * d1 * d2 * (1 + d1 + d2) + d1 * d2 * ß2 * (d2 – d1) + d1 * d2 {unitless}   (8)
c1 = (d1^2) * (d2^2) * ß1 * (k1 – 1) + d1 * d2 * ß1 * (k1 – 1) {unitless}   (9)
c2 = (d1^2) * (d2^2) * ß2 * (k2 – 1) + d1 * d2 * ß2 * (k2 – 1) {unitless}   (10)

Exogenous Auxiliary Constants
ß1 = 4 {unitless}   (11)
ß2 = 4 {unitless}   (12)
d1 = 0.5 {unitless}   (13)
d2 = 0.5 {unitless}   (14)
k1 = 1.2 {unitless}   (15)
k2 = 1.2 {unitless}   (16)
t = 6 {days}   (17)

(b) Payoff or loss function Gi, i = 1, 2, and state Sj occupancy probability uj, j = 1, 2, 3, 4, equations

Auxiliary Parameters or Converters
G1 = u1 – k1 * ß1 * u4 + ß1 * (u3 – u2) {unitless}   (18)
G2 = u1 – k2 * ß2 * u4 + ß2 * (u2 – u3) {unitless}   (19)
∑ = X1 * X2 – d1 * d2 – 1 {unitless}   (20)
u1 = – d1 * d2 / ∑ {unitless}   (21)
u2 = (– d1 * d2 * X1 + d1 * X1 * X2 + d1 * d2 – d1) / ∑ {unitless}   (22)
u3 = (– d1 * d2 * X2 + d2 * X1 * X2 + d1 * d2 – d2) / ∑ {unitless}   (23)
u4 = ((1 – d1 – d2) * X1 * X2 + d1 * d2 * (X1 + X2 – 2) + d1 + d2 – 1) / ∑ {unitless}   (24)
(Eqs 13 and 14, Table 1a). Learning takes place as the system of differential equations 3 and 4 (Table 1a) governs the time evolution of the players’ propensities for collegiality, while litigants and collaborative law proponents play their human games in multiple iterations through time.
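To make the Markovian kinetics concrete, the Table 1 equations can be integrated directly. The sketch below is my own minimal Euler implementation of Eqs 3–10 and 20–24, not the iThink® model itself; the function names, step size and starting tactics are illustrative assumptions. With the exogenous values of Eqs 11–17 and initial tactics X1 = 0.8, X2 = 0.2, as in the results section, both collaboration probabilities decline at first.

```python
# Euler integration of the exogenous-parameter game (Table 1); a sketch only.

def auxiliaries(beta, k, d1, d2, sign):
    """Eqs 5-10: a, b, c for one player; sign is +1 for player 1, -1 for player 2."""
    a = d1 * d2 * beta * (k + 1)
    b = (-k * beta * d1 * d2 * (1 + d1 + d2)
         + d1 * d2 * beta * sign * (d1 - d2) + d1 * d2)
    c = (d1**2 * d2**2 + d1 * d2) * beta * (k - 1)
    return a, b, c

def occupancy(x1, x2, d1, d2):
    """Eqs 20-24: state S_j occupancy probabilities u_j; they sum to 1."""
    s = x1 * x2 - d1 * d2 - 1                      # Eq. 20, always negative
    u1 = -d1 * d2 / s
    u2 = (-d1 * d2 * x1 + d1 * x1 * x2 + d1 * d2 - d1) / s
    u3 = (-d1 * d2 * x2 + d2 * x1 * x2 + d1 * d2 - d2) / s
    u4 = ((1 - d1 - d2) * x1 * x2 + d1 * d2 * (x1 + x2 - 2) + d1 + d2 - 1) / s
    return u1, u2, u3, u4

def simulate(x1=0.8, x2=0.2, beta1=4.0, beta2=4.0, d1=0.5, d2=0.5,
             k1=1.2, k2=1.2, tau=6.0, dt=0.01, horizon=40.0):
    """Integrate Eqs 3-4 with Euler's method; returns the X1, X2 trajectories."""
    a1, b1, c1 = auxiliaries(beta1, k1, d1, d2, +1)
    a2, b2, c2 = auxiliaries(beta2, k2, d1, d2, -1)
    xs1, xs2 = [x1], [x2]
    for _ in range(int(horizon / dt)):
        sigma = x1 * x2 - d1 * d2 - 1
        dx1 = (a1 * x2**2 + b1 * x2 + c1) / (sigma**2 * tau)   # Eq. 3
        dx2 = (a2 * x1**2 + b2 * x1 + c2) / (sigma**2 * tau)   # Eq. 4
        x1, x2 = x1 + dx1 * dt, x2 + dx2 * dt
        xs1.append(x1)
        xs2.append(x2)
    return xs1, xs2

x1s, x2s = simulate()   # forty simulated days
```

The payoffs of Eqs 18–19 then follow directly from the occupancies; since the u_j sum to one, they behave as a proper probability distribution over the four game states.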
Figure 3. Causal tracing and partial loop analysis with exogenous parameters.
SD knowledge ecology begins with differentiating stocks from flows and with how stocks and other variables and parameters determine the flows. Identifying the integration points facilitates understanding one source of dynamic behavior in the system. The stock and flow diagram of Fig. 2 shows accumulations and flows essential in generating the dynamic behavior of players in the pre-trial phase of the civil litigation process. It also tells, with the help of the Table 1 equations, what drives the flows in the system. Stock and flow diagrams like the one of Fig. 2 help accelerate what Richmond (1993) calls operational thinking. But stock and flow diagrams do not automatically unearth which balancing or negative (–) causal loops and which reinforcing or positive (+) loops govern the system. Causal loop or influence diagrams (CLDs or IDs) are the tools that convey information about circular causality. With dynamic thinking implicitly present (Richmond 1993), the causal tracing and partial loop analysis of Fig. 3 can begin to accelerate feedback loop thinking by exploring exactly how the system’s causal structure causes its behavior, as players learn by playing their game iteratively. The causal tracing of X1 (X2 is symmetric), causal diagram and partial loop analysis of Fig. 3 show how situations that call for collegiality in business and civil litigation might initially look simple, as long as the payoff, prior discord and loss parameters are exogenous. Three causal loops govern the time evolution of the players’ propensities for collaboration: two balancing or negative (–) loops (#1 and #2, Fig. 3), and one positive (+) or reinforcing loop (#3, Fig. 3).
Figure 4. Revised Xi model sector, now with endogenous parameters ßi, di and ki, i = 1, 2.

Table 2. Revised Xi model sector equations, with endogenous parameters ßi, di and ki, i = 1, 2

Endogenous Auxiliary Parameters or Converters
ß1 = ß0 / (X1 + X2 + 1) {unitless}   (25)
ß2 = ß0 / (X1 + X2 + 1) {unitless}   (26)
d1 = (X1 + X2) / 2 {unitless; 0 ≤ d1 ≤ 1}   (27)
d2 = (X1 + X2) / 2 {unitless; 0 ≤ d2 ≤ 1}   (28)
k1 = k0 + (X1 + X2) / 2 {unitless}   (29)
k2 = k0 + (X1 + X2) / 2 {unitless}   (30)

Exogenous Auxiliary Constants
ß0 = 4 {unitless}   (31)
k0 = 1.2 {unitless; 1 ≤ k0 ≤ 2}   (32)
1.1. Endogenous Parameter Model with Complete Self-Reference

The more realistic model of paradoxical self-referential games in business and civil litigation calls for the ßi, di and ki, i = 1, 2 parameters of Fig. 2 and Table 1 to depend on the Xi stocks. Figure 4 and Table 2 show the revised collaboration probability Xi model sector and sector equations, respectively, with the now endogenous parameters ßi, di and ki, i = 1, 2. The ghosted X1 and X2 stocks (lower middle, Fig. 4) now enter Eqs 25 through 30 of Table 2, which replace the exogenous constant parameters of Fig. 2a and Table 1a (Eqs 11 through 16, Table 1a).
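The self-reference of Eqs 25–30 can be added to the earlier sketch by recomputing the six parameters from the stocks at every integration step. Again an illustrative Euler sketch under my own naming, not the published model run; starting both players on the X1 = X2 diagonal keeps the game fully symmetric, so the trajectory climbs the diagonal and settles at a constant, equal collaboration probability rather than oscillating.

```python
# Endogenous-parameter (complete self-reference) variant: Eqs 25-30 replace
# the constants of Eqs 11-16, so beta_i, d_i and k_i track the X_i stocks.

def simulate_endogenous(x1=0.25, x2=0.25, beta0=4.0, k0=1.2,
                        tau=6.0, dt=0.01, horizon=200.0):
    """Euler run of Eqs 3-4 with the parameters of Eqs 25-30 updated each step."""
    xs1, xs2 = [x1], [x2]
    for _ in range(int(horizon / dt)):
        s = x1 + x2
        beta = beta0 / (s + 1)        # Eqs 25-26 (equal for both players)
        d = s / 2                     # Eqs 27-28
        k = k0 + s / 2                # Eqs 29-30
        # Eqs 5-10 with d1 = d2 = d: the (d1 - d2) term vanishes.
        a = d * d * beta * (k + 1)
        b = -k * beta * d * d * (1 + 2 * d) + d * d
        c = (d**4 + d**2) * beta * (k - 1)
        sigma = x1 * x2 - d * d - 1
        dx1 = (a * x2**2 + b * x2 + c) / (sigma**2 * tau)
        dx2 = (a * x1**2 + b * x1 + c) / (sigma**2 * tau)
        x1, x2 = x1 + dx1 * dt, x2 + dx2 * dt
        xs1.append(x1)
        xs2.append(x2)
    return xs1, xs2

xs1, xs2 = simulate_endogenous()
```

In this sketch the run drifts up the diagonal toward a fixed point of moderate equal gains, consistent with the dissipative behavior reported in the results section.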
Figure 5a. Causal tracing about X1, with endogenous parameters ßi, di and ki, i = 1, 2.
The Xi stocks are still the probabilities of collegiality and collaboration given that both players collaborate. And the di, i = 1, 2 prior discord coefficients still hold prior lack of collegiality probability values when at least one player had previously chosen discord. If the Xi stocks increased (decreased), then the di values would increase (decrease) too. The revised di equations (Eqs 27 and 28) of Table 2 tell that both players equally notice each other’s collegiality or lack of it. Similarly, the equations of the now endogenous payoff ßi and loss ki, i = 1, 2 parameters of Fig. 4 and Table 2 (Eqs 25, 26, 29 and 30) show that players will choose collegiality and collaboration when less payoff and more losses result from choosing discord. Self-reference favors collegiality: decreasing ß0 (Eq. 31) decreases payoff and increasing k0 (Eq. 32) increases the losses from choosing discord tactics. Figure 5a shows the causal tracing about the X1 stock (X2 is symmetric), and Fig. 5b the causal diagram and partial loop analysis with the now endogenous parameters ßi, di and ki, i = 1, 2. Comparing Fig. 5 with Fig. 3 shows how drastically both the ‘reachability’ of the X1 stock and the model’s dynamic complexity have increased owing to the endogeneity of these six parameters.
Figure 5b. Partial loop analysis, with endogenous parameters ßi, di and ki, i = 1, 2.
Both iThink® (Richmond et al. 2006) and Vensim® (Eberlein 2002) show that parameter endogeneity causes the number of loops that determine the players’ propensities for collegiality and collaboration to increase from three (Fig. 3) to 208 (Fig. 5b) loops per player. To illustrate the increase in system complexity through a couple of examples, Fig. 5b shows the last two loops in which the X1 stock plays a part. Both loops are of length seven, but causal loop #207 is a balancing or negative loop and loop #208 a positive or reinforcing one.

1.2. Symmetric Impartiality Model

The exogenous impartiality or ‘indifference’ parameter p (0 ≤ p ≤ 1) of Fig. 6a and Eq. 37 (Table 3) allows assessing the situation where both players become symmetrically unbiased toward each other’s collegiality or propensity to collaborate and initial discord tactics. As p decreases, each player becomes equally, i.e., symmetrically, more indifferent toward the other, free to collaborate collegially, without undue bias and preconceived notions.
Figure 6. Causal structure modifications of Fig. 4 for (a) symmetric and (b) asymmetric impartiality.

Table 3. Effects of the exogenous parameter p (0 ≤ p ≤ 1) on the endogenous ßi and di, i = 1, 2 parameters in the symmetric impartiality model (Fig. 6a)

Endogenous Auxiliary Parameters or Converters
ß1 = ß0 / (X1 + p * X2 + 1) {unitless}   (33)
ß2 = ß0 / (p * X1 + X2 + 1) {unitless}   (34)
d1 = (X1 + p * X2) / 2 {unitless}   (35)
d2 = (p * X1 + X2) / 2 {unitless}   (36)

Exogenous Auxiliary Constant
p = 0.9 {unitless; 0 ≤ p ≤ 1}   (37)
Table 4. Effects of the endogenous parameter q (0 ≤ q < p ≤ 1) on the endogenous ß2 and d2 parameters in the asymmetric impartiality model (Fig. 6b)

Endogenous Auxiliary Parameters or Converters
ß2 = ß0 / (q * X1 + X2 + 1) {unitless}   (38)
d2 = (q * X1 + X2) / 2 {unitless}   (39)
q = 0.95 * p {unitless; 0 ≤ q < p ≤ 1}   (40)
Table 3 shows exactly how this exogenous p (0 ≤ p ≤ 1) affects the endogenous parameters ßi and di, i = 1, 2 (Eqs 33 through 36) in the model of symmetric impartiality (Fig. 6a). Clearly, when p = 1, the model reverts to the endogenous-parameter one (Fig. 4 and Table 2). But it might prove interesting to look further into this impartiality phenomenon asymmetrically.

1.3. Asymmetric Impartiality Model

So far, the game model variants of Fig. 2a, Fig. 4 and Fig. 6a have shown symmetric structures, where the game dynamics would have been invariant if the two players were to swap positions. But the endogenous parameter q (0 ≤ q < p ≤ 1) of Fig. 6b and Eq. 40 (Table 4) now takes the place of p in Eqs 38 and 39 of Table 4, which replace Eqs 34 and 36 of Table 3, respectively.
This way q helps assess what would happen if one player were to account less for the other’s collegiality or propensity to collaborate. With q in place, the model permits one to treat the effects the two players might have on one another independently. In order to assess how sensitive ‘where the game ends’ is to such effects, for example, one might set the endogenous parameter q = 0.95 p (Eq. 40, Table 4), and then let the exogenous parameter p vary in Eq. 37 (Table 3). Doing so moves the game’s single fixed-point attractor off its symmetric impartiality location, so the more impartial player or group profits the most!
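The parameter wiring of the two impartiality variants is compact enough to express and check directly. In the sketch below, `impartial_params` is my own hypothetical helper implementing Eqs 33–36 (symmetric) and, when `q` is supplied, Eqs 38–39 (asymmetric); the reduction stated above, that p = 1 recovers the endogenous-parameter model of Eqs 25–28, falls out immediately.

```python
# Impartiality parameter wiring (Eqs 33-40, Tables 3-4); an illustrative sketch.

def impartial_params(x1, x2, beta0=4.0, p=0.9, q=None):
    """Return (beta1, beta2, d1, d2); pass q (< p) for the asymmetric model."""
    if q is None:
        q = p                              # symmetric impartiality (Table 3)
    beta1 = beta0 / (x1 + p * x2 + 1)      # Eq. 33
    beta2 = beta0 / (q * x1 + x2 + 1)      # Eqs 34 / 38
    d1 = (x1 + p * x2) / 2                 # Eq. 35
    d2 = (q * x1 + x2) / 2                 # Eqs 36 / 39
    return beta1, beta2, d1, d2

# With p = 1 the symmetric model reverts to the endogenous-parameter
# equations (Eqs 25-28): both players weigh each other's collegiality fully.
assert impartial_params(0.4, 0.6, p=1.0) == (2.0, 2.0, 0.5, 0.5)

# With q = 0.95 * p (Eq. 40), player 2 weighs player 1's collegiality less,
# so the two players' parameters are no longer mirror images of each other.
b1, b2, d1, d2 = impartial_params(0.5, 0.5, p=0.9, q=0.95 * 0.9)
```

Even at identical collaboration probabilities, the asymmetric wiring gives player 2 a lower prior discord coefficient and a higher payoff parameter, which is the mechanism behind the claim that the more impartial player profits the most.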
2. Simulation Results with Prominent Structure and Polarity Analysis

Dynamic thinking is implicit in feedback loop thinking (Richmond 1993). Each circular causal loop structure has, however, its own performance implications, so the overall dynamics of model structure remains unclear until computer simulation comes to the rescue. In addition to helping one draw stock and flow diagrams and count multitudes of feedback loops on the glass of computer screens, iThink® (Richmond et al. 2006) and Vensim® (Eberlein 2002) make dynamic thinking explicit through repeated simulation runs. Figures 7 and 8 show the comparative, i.e., multi-line, phase plots of the Xi probability and Gi payoff spaces, respectively, with exogenous parameters ßi = 4, 5, 6, di = 0.5 and ki = 1.2, 1.4, 1.6, i = 1, 2. The Xi phase plots show two elliptic centers, K1 and K2 (marked on the upper right-hand plot of Fig. 7), above and below the X1 = X2 diagonal, around which centers ‘locked-in’ games oscillate perpetually. Also marked on the upper right-hand plot of the Gi payoff phase space (Fig. 8), the two hyperbolic saddles of the stable U1 and unstable U2 manifolds separate the K1 and K2 regions from the rest of the phase space. The Gi functions estimate instantaneous payoff, so each player’s collegiality depends on the rate at which payoff changes as a result of collaboration. The oscillating K1 and K2 periodic attractors show how games with exogenous parameters can result in perpetual, never-ending conflicts. Outside the oscillating K1 and K2 regimes, however, player tactics that begin at the S4 state of discord and lack of collegiality can lead to the S1 state of full collaboration (top panel, Fig. 1). To be useful, model analysis must create insight via coherent, dynamic explanations of how influential pieces of system structure give rise to performance through time. Turning to the time domain output of Digest®, the time-series graph on the top-left panel of Fig. 9 shows the shifting prominent structure and polarity phases of X1 and X2, respectively, for the exogenous, constant parameter model. Each phase of each collaboration probability Xi, i = 1, 2 is a distinct phase of the simulation time. Within each phase, Digest® computes both the slope and the curvature of each Xi stock from its first and second time derivatives, respectively. According to the legend of Fig. 9, the clear, unshaded phases of the thumbnail icons on the top right show balancing (–) growth or decline dynamics. The shaded phases show reinforcing (+) growth or decline. The behavior phases of the X1 and X2 collaboration probability stocks alternate periodically through time, diametrically opposed, in perfect syzygy with each other.
Figure 7. Phase plots of the Xi probability space, with exogenous parameters ßi, di and ki, i = 1, 2.
The behavior phases of X1 and X2 differ from their shifting prominent structure and polarity phases (top left, Fig. 9). The two lower panels of Fig. 9 (above the legend) show which causal pathways or structures contribute the most to generating the observed dynamics. Corresponding to the first phase of X1, for example, is the balancing (–) causal pathway or structure #1, the most prominent loop in generating the initial decline of X1. According to Digest®, structure #1 is the most prominent pathway not only in generating the shifting prominent structure phases 1, 3 and 5 of X1, but also in generating phases 1, 3 and 5 of X2. This prominent structure is the outer causal loop #3 of Fig. 3. In principle, it is a reinforcing loop or positive pathway. Its polarity changes, however, as it takes the role of the most prominent causal structure in generating the dynamics of both X1 and X2.
Figure 8. Phase plots of the Gi payoff and loss space, with exogenous parameters βi, di and ki, i = 1, 2.
With X1 = 0.8 (left axis, Fig. 9) and X2 = 0.2 (right axis), at time t = 0 (zero) days, both stocks decrease in tandem until t = 4.5 days. But the X1 stock follows a balancing decline and the X2 stock shows reinforcing decline dynamics. After t = 4.5 days and until t = 5.75 days, both stocks are in the second phase of Fig. 9. There, the balancing prominent causal pathway or structure #2 takes over, contributing the most to generating the dynamics of X1 and X2. Interestingly, according to Digest® again, this same causal loop structure #2 changes its polarity from negative to positive when it regains prominence in the fourth phase of Fig. 9. After t = 36 days, the same repeated pattern of shifting prominent structure and polarity phases causes the X1 and X2 stocks to oscillate forever.
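The iThink® and Vensim® runs behind Figs. 7–9 numerically integrate the two Xi stocks through time. A generic Euler driver sketches those mechanics; the rate functions below are hypothetical placeholders, not the article's Eqs 3 and 4:

```python
def euler_two_stocks(rate1, rate2, x1, x2, dt=0.25, t_end=40.0):
    """Euler-integrate two coupled stocks, clamping each to [0, 1]
    since the Xi are collaboration probabilities."""
    trajectory = [(0.0, x1, x2)]
    t = 0.0
    while t < t_end - 1e-9:
        dx1 = rate1(x1, x2) * dt
        dx2 = rate2(x1, x2) * dt
        x1 = min(1.0, max(0.0, x1 + dx1))
        x2 = min(1.0, max(0.0, x2 + dx2))
        t += dt
        trajectory.append((t, x1, x2))
    return trajectory

# Hypothetical rates (NOT the article's equations): each player
# drifts toward the other's collaboration probability.
traj = euler_two_stocks(lambda a, b: 0.3 * (b - a),
                        lambda a, b: 0.3 * (a - b), 0.8, 0.2)
```

With the article's actual rate equations substituted for the placeholder lambdas, the same driver would trace the oscillating or dissipative trajectories of Figs. 7 and 10.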
Figure 9. Shifting prominent structure and polarity phases of X1 and X2, with exogenous parameters.
2.1. Endogenous Parameter (Complete Self-Reference) Model Results

The more complex and realistic model of paradoxical self-referential games in business and civil litigation calls for βi, di and ki, i = 1, 2 to depend on the Xi stocks (Fig. 4 and 5, and Table 2). In this complete self-reference model, each player considers the other’s collegiality or discord completely. Owing to the dependence of the six parameters on Xi, Fig. 10 shows that the resulting performance is very different from Fig. 7 and Fig. 8, now showing dissipative dynamics with moderate equal gains. On the left panel of Fig. 10, practically all initial collegiality and discord tactics lead players to the fixed-point attractor K, a stable node with a rather large basin of attraction. K is on the X1 = X2 diagonal. As K gets closer to (1, 1), the two players can enjoy equal and higher payoff. It is worth noting that the two hyperbolic saddle points of the U1 and U2 manifolds on Fig. 7 are still present on the left panel of Fig. 10, now outside the unit square. On the right panel of Fig. 10, the G1 and G2 payoff functions increase monotonically at first. Near the (0, 0) point, both Gis are negative, representing losses. As the players move K closer to (1, 1), however, the Gi values become positive, yielding a higher payoff, equal for the two players on the G1 = G2 diagonal. But if their fixed-point attractor moves above or below the diagonal, it is respectively either the first or the second player who enjoys the highest payoff (see the asymmetric impartiality model results section below).

Figure 10. Phase plots of the Xi and Gi spaces, with endogenous parameters βi, di and ki, i = 1, 2.

A “surprising result,” Nicolis et al. (2001, p. 322) exclaim when they see that almost all initial collegiality and discord tactics lead players to the fixed-point attractor K (Fig. 10). This response is typical even among seasoned researchers who rely on dynamic and operational thinking, but do not seek insight from system structure to accelerate their circular causality thinking (Richmond 1993). In the analysis of the endogenous parameter model results, Mojtahedzadeh’s Digest® allows exploring how the system structure of circular causal relations might determine the players’ actions as they learn by playing their game iteratively.

Back to the time domain (Fig. 11). The behavior phases of X1 and X2 (top-right thumbnail icons) differ widely in the endogenous parameter model (Fig. 4 and Table 2). The X1 stock changes phases three times before it reaches its fourth, equilibrium phase. Conversely, the X2 stock changes only once before it enters its second and final equilibrium phase. With X1 = 0.7 (left axis, Fig. 11) and X2 = 0.3 (right axis) at time t = 0 days, both stocks now increase in tandem. The first player’s collegiality probability X1 follows reinforcing growth until t = 2.5 days and then switches into balancing growth dynamics (see legend, Fig. 9). The second player responds with an also rising collaboration probability X2, which also shows reinforcing growth. And s/he too switches into balancing growth dynamics, but not until t = 9.5 days.
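The claim that practically all initial tactics fall inside K's basin of attraction can be probed by sweeping a grid of initial conditions and checking where each trajectory settles. The sketch below uses toy linear dynamics with a built-in stable node at K = (0.7, 0.7), purely for illustration; the article's model defines its own K through Eqs 3 and 4:

```python
def settle(x1, x2, k=0.7, coupling=0.4, pull=0.2, dt=0.1, steps=2000):
    """Toy stable dynamics (NOT the article's equations): each stock is
    pulled toward the other and toward the fixed point K = (k, k)."""
    for _ in range(steps):
        dx1 = coupling * (x2 - x1) + pull * (k - x1)
        dx2 = coupling * (x1 - x2) + pull * (k - x2)
        x1 += dx1 * dt
        x2 += dx2 * dt
    return x1, x2

# Sweep a 5 x 5 grid of initial collegiality and discord tactics;
# every trajectory should settle near K = (0.7, 0.7).
endpoints = [settle(i / 4, j / 4) for i in range(5) for j in range(5)]
```

Repeating such a sweep with the real rate equations would map the basin of attraction shown on the left panel of Fig. 10.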
Both stocks show balancing dynamics in their final, equilibrium behavior phase, phase 4 for player one and phase 2 for player two, respectively. But the X1 stock follows balancing decline, whereas the X2 stock shows balancing growth dynamics. The time-series graph on the top left of Fig. 11 shows the shifting prominent structure phases of X1 and X2, respectively, for the model with endogenous parameters βi, di and ki, i = 1, 2 (Fig. 4 and 5b, and Table 2). The two lower panels of Fig. 11 show which causal pathways contribute the most to the dynamics of the endogenous parameter model. Corresponding to the first phase of the X1 stock are the two reinforcing (+) prominent causal pathways or structures #3 and #4. These two loops are most prominent in generating the initial reinforcing growth of X1. The same causal pathways are also most prominent in generating the shifting phases 1 and 5 of X2. Although dominated by these two reinforcing causal structures in its fifth phase, the X2 stock persistently shows balancing growth dynamics.

Figure 11. Shifting Xi prominent structure phases with endogenous parameters βi, di and ki, i = 1, 2.
The first three phases of X1 and X2 are identical in terms of their timing, up to time t = 9.5 days. Thereafter, the fourth phase of X1 ends at t = 26.75 days, while the fourth phase of X2 lasts until t = 27.25 days. Then the phase times shift again so that the X1 and X2 stocks end their fifth phase concurrently at t = 29.25 days, which makes their sixth, final equilibrium phases coincide through time. A small block arrow in the middle of the large time-series graph of Fig. 11 shows where the X1 and X2 phases break off. Without strong evidence to the contrary, this phase break off might well be what causes the paradoxical, completely self-referential game model with endogenous parameters to end up in a fixed-point attractor. Although small, this shifting phase break off might be sufficient to disrupt the periodic attractor of the exogenous parameter model and, thereby, to help the players reach the fixed equilibrium point K on Fig. 10. This implies extra leverage in encouraging collegiality and collaboration in business and civil litigation. To encourage collegiality, Nicolis et al. (2001, p. 325) suggest decreasing the tempting payoff and/or increasing losses due to discord, by making the exogenous β0 and k0 parameters (Eqs 31 and 32, Table 2) smaller and larger, respectively. But it might also be possible for litigants to reach the stable equilibrium of a win-win victory if they find a way to break off the time-wise identical phases of a perpetual, never-ending conflict (Fig. 9).

2.2. Symmetric Impartiality Model Results

A third way to encourage collegiality is to help players become increasingly and symmetrically unbiased toward each other’s collegiality or propensity to collaborate and initial discord tactics. The impartiality or indifference parameter p (0 ≤ p ≤ 1) of Fig. 6a and Eq. 37 (Table 3) allows making it so. As p decreases on Fig. 12, each player becomes symmetrically more impartial toward the other, free to collaborate without undue bias and preconceived notions. Under symmetric impartiality (Fig. 6a and Table 3), the fixed-point attractor K then moves up the X1 = X2 diagonal toward (1, 1). Likewise, the players’ equal (symmetric) payoff moves up too (Fig. 13), as they equally discount each other’s propensities. Figure 14 shows exactly how the declining p affects both players’ collegiality probabilities Xi, and the percentage and cumulative percentage changes in their symmetric payoff function Gi, i = 1, 2. Again, at p = 1, performance reverts to that of the endogenous parameter model (Fig. 10), but including it here makes it easy to compare results. Specifically, as p declines from 1 to 0.7 on the left panel of Fig. 14, the two players’ fixed-point collegiality attractor moves from K = (0.6844, 0.6844) to K = (0.9951, 0.9951), well into the S1 state of full collaboration (Fig. 1). This is a 40 percent cumulative gain in the players’ propensity to collaborate. Similarly, as p decreases along the same interval on the right panel of Fig. 14, the players’ symmetric payoff functions move from G1 = G2 = 0.15 to G1 = G2 = 0.99. Owing to each player’s collegiality and equal impartiality toward the other’s propensity to collaborate, this Gi shift represents an amazing win-win scenario of a 263 percent cumulative gain in the players’ equal payoff. But how does this astounding improvement come about? What causes it? Why?
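The "cumulative percentage" figures reported with Fig. 14 aggregate the percentage change recorded at each step of the declining p. One plausible reading of that bookkeeping is to sum the per-step percentage changes, sketched below with illustrative attractor values, not the article's data:

```python
def percentage_changes(values):
    """Per-step percentage change between consecutive values."""
    return [100.0 * (b - a) / a for a, b in zip(values, values[1:])]

def cumulative_gain(values):
    """Cumulative gain as the sum of per-step percentage changes."""
    return sum(percentage_changes(values))

# Illustrative (hypothetical) K-coordinates as p declines in steps:
k_values = [0.50, 0.60, 0.72]
```

Under this convention, two 20-percent steps sum to a 40-percent cumulative gain even though the end-to-end ratio gain is 44 percent; the text does not spell out which convention Fig. 14 uses.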
Figure 12. Phase plots of the Xi space, i = 1, 2 for the symmetric impartiality model (β0 = 4, k0 = 1.2).
Figure 13. Phase plots of the Gi space, i = 1, 2 for the symmetric impartiality model (β0 = 4, k0 = 1.2).
Figure 14. Effects of p on Xi, Gi, i = 1, 2 and percentage gains from the symmetric impartiality model.
In the time domain of Fig. 15, the behavior phases of X1 and X2 do not look much different from the thumbnail icons on the top right of Fig. 11. Initially, both stocks show reinforcing growth, followed by balancing growth dynamics, which X2 sustains until the fixed-point attractor K has absorbed all dynamics, leaving both collegiality stocks in a stable, negative feedback state. The shifting prominent structure phases of Fig. 15 tell, however, an entirely different story. Time-wise, only the first prominent structure phase of Fig. 15 is identical for the two players’ collegiality stocks. After time t = 4 days, the third phase of X1 and the second phase of X2 meet again at t = 12.5 days. But while the fourth phase of X1 ends at t = 25.5 days, the third phase of X2 lasts until t = 31.75 days. Corresponding to their first, common shifting phase is the reinforcing (+) structure #3, a causal pathway most prominent in generating the initial reinforcing growth of both X1 and X2. After t = 4 days, reinforcing (+) loop #7 becomes prominent in generating the performance of both stocks, but the prominent causal path via which it affects X1 changes after t = 4.5 days. Hence, X1 enters a balancing growth era after t = 4.5 days, while X2 continues to ascend on its reinforcing growth trajectory until t = 12.5 days. Subsequently, after t = 12.5 days, the X1 stock embarks on its fourth prominent structure phase, while X2 is just beginning its third phase. Another small block arrow, now in the middle of the large time-series graph of Fig. 15, again shows where the X1 and X2 phases break off. Without strong evidence to the contrary, once more, this phase break off might well be what causes this paradoxical, completely self-referential game model with endogenous parameters and symmetric impartiality to end up in a fixed-point attractor.
Although still small, the shifting phase break off might be sufficient to move the system widely through its state space, thereby helping the players reach the fixed-point attractor K of Fig. 12. According to cybernetics, the science of signal processing and automatic control, and thermodynamics, the more widely a system moves through its state space, the faster it ends up in an attractor (Prigogine and Stengers 1984, Zhabotinsky 1973).
Figure 15. Shifting prominent structure phases of X1 and X2 with symmetric impartiality.
A “surprising result,” Nicolis et al. (2001, p. 325) exclaim again when they see that players caught in this paradoxical, completely self-referential game with endogenous parameters and symmetric impartiality can eventually end at the S1 state of full collaboration (Fig. 1), as K moves up the diagonal with decreasing p (Fig. 12). Once more, this reaction is typical among modelers deprived of the accelerated circular causality thinking (Richmond 1993) that Mojtahedzadeh’s (1996) pathway participation metric offers in his Digest® software.

2.3. Asymmetric Impartiality Model Results

The results so far have been from game model variants with symmetric structures (Fig. 2a, Fig. 4 and Fig. 6a). The dynamics would be invariant if the two players were to swap positions. But the endogenous parameter q (0 ≤ q < p ≤ 1) of Fig. 6b and Eq. 40 (Table 4) now helps assess what would happen if one player were to account less for the other’s collegiality. Figure 16 shows the phase plots of the Xi space and Fig. 17 the phase plots of the Gi space, i = 1, 2, respectively, from the asymmetric impartiality model.
Figure 16. Phase plots of the Xi space, i = 1, 2 for the asymmetric impartiality model (β0 = 4, k0 = 1.2).
The X1X2 phase plot on the lower-right panel of Fig. 16 (where p = 1 and q = 0.95) clearly shows the game’s asymmetry. Both p and q decrease on Fig. 16, following a ‘Z’ pattern in reverse. As they do, under this asymmetric impartiality model (Fig. 6b and Table 4), the fixed-point attractor K climbs up toward (1, 1) yet consistently stays below the X1 = X2 diagonal of the unit square. Likewise, as p and q decrease on Fig. 17, the players’ unequal (asymmetric) payoff moves up too. But the payoff of the second player is consistently higher than the payoff of the first, as the second player accounts less for the other’s collegiality or lack of it.

Figure 17. Phase plots of the Gi space, i = 1, 2 for the asymmetric impartiality model (β0 = 4, k0 = 1.2).

Figure 18 shows exactly how the declining p and q affect each player’s collegiality probability Xi, and the percentage and cumulative percentage changes in their asymmetric payoff functions Gi, i = 1, 2. Owing to the game’s asymmetry, at p = 1, the dynamics now do not revert to the endogenous parameter model results of Fig. 10. Specifically, as p declines from 1 to 0.7 and q from 0.95 to 0.665, on the top panel of Fig. 18, the fixed-point attractor K moves from K = (0.7767, 0.5955) to K = (1.000, 0.9490), well into the S1 state of full collaboration (Fig. 1). This is only a 27 percent cumulative gain in the first player’s propensity to collaborate, but a 51 percent cumulative gain in the second player’s collegiality. Similarly, on the lower panel of Fig. 18, the two players’ asymmetric payoff functions move from G1 = 0.15 to G1 = 1.07 for the first player and from G2 = 0.52 to G2 = 1.24 for the second player. Owing to both players’ collegiality but unequal impartiality toward each other’s propensity to collaborate, the cumulative shift is 281 percent in G1 and 108 percent in G2, an amazing win-win scenario compared to the perpetual, never-ending conflicts of the exogenous parameter model.

Figure 18. Effects of p on Xi, Gi, i = 1, 2 and percentage gains from the asymmetric impartiality model.

The immediate implication is that impartiality pays. Namely, following one’s own tendencies toward collegiality and collaboration pays more than paying attention to an opponent’s propensity for collegiality or the lack of it. Had the first player played more impartially toward the second, then the first player’s payoff would have been higher than the second player’s. But how does the asymmetric structure of the last impartiality model cause these results? How can asymmetric impartiality ensure that paradoxical self-referential games with endogenous payoff, prior discord and loss parameters can end at the S1 state of full collaboration?

In the time domain of Fig. 19, one last time, the behavior phases of X1 and X2 look very similar to the thumbnail icons on the top right of Fig. 11 and Fig. 15. The first, reinforcing growth phase of X1 does last longer, however, as one moves from Fig. 11 to Fig. 15 to Fig. 19. As a result, in the endogenous parameter models, both players’ collaboration probabilities increase as they move from complete self-reference (p = 1) to symmetric impartiality (p = 0.8) to asymmetric impartiality (p = 0.8, q = 0.76). Balancing growth follows the two stocks’ initial reinforcing growth dynamics. All three game models end when the fixed-point attractor K has absorbed all dynamics, leaving the system in a stable, negative feedback state. Time-wise again, only the first prominent structure phase of Fig. 19 is identical for the two stocks. After time t = 3.75 days, the third phase of X1 and the second phase of X2 meet again at t = 12.25 days. The fourth phase of X1 now ends at t = 25.25 days, but the third phase of X2 lasts until t = 29.75 days. Once more, corresponding to their first, common shifting phase is reinforcing (+) pathway #3, causing the initial growth of both X1 and X2. After t = 3.75 days, reinforcing (+) loop #7 again becomes prominent in generating the behavior of both stocks, but the prominent causal path via which it affects X1 changes after t = 6 days. So X1 enters a balancing growth era after t = 6 days, but X2 continues to ascend on its reinforcing growth trajectory until t = 12.5 days. After t = 12.5 days, X1 embarks on its fourth prominent structure phase, while the second player’s stock X2 is just beginning its third phase.
Figure 19. Shifting prominent structure phases of X1 and X2 with asymmetric impartiality.
Between t = 3.75 and t = 6 days, the prominent structure phase break off on the top left of Fig. 19 is so wide that it is hard to miss (no block arrow needed). Without strong evidence to the contrary, again this phase break off might well be what gives the asymmetric impartiality game model ample help to end up in the fixed-point attractor K. And as on Fig. 15, on Fig. 19 too, the second prominent structure phase break off is even wider than the first. After time t = 12.25 days, the balancing (–) prominent structure #9 tapers off both stocks’ growth. This causal pathway stays dominant for X1 until t = 25.25 days, and even longer for X2, until t = 29.75 days. It is between t = 25.25 and t = 29.75 days that the second phase break off occurs, precisely when balancing (–) structure #5 becomes most prominent in taming both stocks, thereby enabling the fixed-point attractor K to absorb all dynamics.
3. Discussion and Conclusion

Nothing is random in life. Even if it pervades all business processes and systems, “randomness is a measure of our ignorance,” says Sterman (2000, p. 127). Georgantzas and Orsini (2003) concur and Hayes (2001, p. 300) further elucidates: “Fretting about a dearth of randomness seems like worrying that humanity might use up its last reserves of ignorance.” Purely deterministic, this article’s SD game models help explore non-constant sum, paradoxical self-referential games between two players, who enter a conflict situation trying to maximize their respective potential payoffs. While playing rationally according to set rules (von Neumann and Morgenstern 1944), both players can concurrently win or lose in such human games, depending on whether they choose collaboration or opt for discord tactics. Four SD game models help explore the dynamic repercussions of these two means of conflict resolution in business and civil litigation. The models and associated computer simulation results might apply even in situations where organizational control parameters and collegiality probabilities depend predominantly on intrinsic motivation rather than on extrinsic rewards.

Early formulations of such games used exogenous constant parameters for the payoff βi and loss ki coefficients linked to discord, and a fixed prior discord parameter di (Nicolis 1986). Later, however, Nicolis et al. (2001) made their models more realistic by treating these parameters endogenously, so that ki and di increase together as the Xis increase and the βis decrease in inverse proportion to increasing collaboration (i = 1, 2). Nicolis et al. see these parameters as the “‘environment’ surrounding the contestants… influencing [their tactics], but [it] is in turn plastically modified by them. ‘Natural selection’ is not a one-way process; it is a feedback loop between the environment and the organisms involved” (Nicolis et al. 2001, p. 330). Although Nicolis et al. err slightly in counting the number of loops involved, by 207 loops to be exact, the results of this SD interpretation of their models support their enlightening results.

If collegiality equally affects both players’ tendency to collaborate, for example, the game ends at fixed collaboration probabilities with moderate payoffs for both instead of ending at a state of full collaboration with maximum payoffs. But the more collegial both players are and the more they disregard each other’s tendency to collaborate, the more their initial conflict is likely to end in full collaboration. In the long term, those who wisely choose to collaborate regardless of the others’ attitudes can see their payoff increase drastically (lower panel, Fig. 18). These results support both the collaborative law proponents and Deming’s (2000) new climate and win-win predicament.

But if Deming and his predicament can find their place in the traditional civil-litigation battlefield and combats, then why do business, government and other nonprofit organizations still resist them? Deming’s win-win proposition does not advocate socialism, typically an attempt to redistribute wealth through taxation and social programs. Deming is not talking about redistribution of wealth but, rather, about principles and methods that can increase the wealth of all concerned in human systems. Poise and impartiality pay because the less attention a player pays to the other’s collegiality, the more that player gains as the initial conflict approaches a fixed state of mutual collegiality and collaboration. Even if two players reach a stalemate with fixed discord tactics and limited payoffs, they can still make progress and see their payoffs increase if each player follows a collegiality policy independently of the other’s willingness to reciprocate or not.
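Counting a model's feedback loops, as the text does when it notes the 207-loop discrepancy and, later, the jump from three to 19 loops per Xi, amounts to enumerating the simple cycles of the model's causal digraph. A small depth-first sketch over a toy three-variable graph (not the article's model) shows the counting:

```python
def simple_cycles(adjacency):
    """Enumerate the simple cycles of a digraph, counting each cycle
    exactly once by rooting the search at its smallest node."""
    cycles = []

    def dfs(start, node, path, visited):
        for succ in adjacency.get(node, []):
            if succ == start:
                cycles.append(path[:])  # closed a loop back to the root
            elif succ not in visited and succ > start:
                visited.add(succ)
                path.append(succ)
                dfs(start, succ, path, visited)
                path.pop()
                visited.remove(succ)

    for root in sorted(adjacency):
        dfs(root, root, [root], {root})
    return cycles

# Toy causal digraph: A -> B, B -> A, B -> C, C -> A has two loops.
loops = simple_cycles({"A": ["B"], "B": ["A", "C"], "C": ["A"]})
```

This brute-force search suits only small graphs; enumerating the thousands of loops in realistic SD models calls for Johnson-style algorithms or the partition heuristics Oliva (2004) describes.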
One way to promote collegiality and collaboration in business and civil litigation systems is to decrease the tempting payoff or to increase loss due to discord, by making the exogenous β0 and k0 parameters (Eqs 31 and 32, Table 2) smaller and larger, respectively. Yet a third possibility would be for at least one of the litigants simply to break off the identical time-wise phases of a perpetual, never-ending conflict (Fig. 9), shocking the system into a phase break off to reach the stable equilibrium of a win-win triumph. Naturally, doing so takes trust, with all those loops between it, the economic and political fabric of a semi-autonomous society, and the semi-autonomous people who make up that society (Castoriadis 1994, Good 2000). It is Mojtahedzadeh’s Digest®, with its analysis of shifting prominent structure and polarity phases, that has helped reveal this third possibility as a means to promote collegiality in business, civil litigation and, perhaps, even in today’s multitude of international conflicts. Digest® offers a novel mode for understanding and explaining what would normally require dominant loop (Richardson 1995) and eigenvalue analyses (Forrester 1983). Qualitatively, using Mojtahedzadeh’s (1996) pathway participation metric implemented in Digest® feels more akin to simulation than to eigenvalue analysis. Armed with PPM, however, Digest® delivers results equivalent in rigor to eigenvalue analysis, but with the finesse of computer simulation. Indeed, as Oliva (2004, p. 331) points out, SD is particularly keen on understanding system performance, “not structure per se”, in light of SD’s core tenet that system structure causes performance. Undeniably, while looking for systemic leverage in strategy making (Georgantzas and Ritchie-Dunham 2003), modelers do play with structural changes for superior performance in business and civil litigation.
Having model analysis tools such as Digest® helps articulate structural complexity and thereby enables both effective and efficient strategy designs. To abolish the shifting loop polarity phases of Fig. 9, one could easily decompose the equations of motion on Table 1a (Eqs 3 and 4). But doing so would increase the number of feedback loops of the exogenous parameter model (Fig. 2a and Fig. 3) from three to 19 loops per Xi, i = 1, 2 and lead to a new causal structure, unlike the one Nicolis (1986) and Nicolis et al. (2001) put forth.

It would be both premature and unproductive to draw broad generalizations based on this article’s limited findings. Although it is tempting to apply its results broadly, some conflicts may justify them while others contradict them. Conflicts in human systems often involve multiple players and stakeholder groups, rendering them far too complex to explain with this article’s tiny SD game models. If valuable, the results merely suggest possible future research in modeling paradoxical games between two or among multiple players. One possible extension is, for example, to treat the already endogenous di parameters of prior discord as state variables, like the Xis. That might entail adding at least two more stocks and flows to each model to render the phase space multidimensional and to add new exciting features to the new models, such as the possibility of chaotic attractors (Nicolis et al. 2001).

Meanwhile, unaware that only full collaboration can maximize payoff, business managers and civil litigation lawyers alike continue to choose between collegiality and discord tactics hoping to see their payoff rise. In response, following Mr. Webb’s daring lead, collaborative law is becoming the practice of law using a multidisciplinary approach to problem solving that deems adversarial techniques and tactics unnecessary (Bushfield 2002). All parties to a dispute and their attorneys agree to resolve the dispute without going to court.
The process is characterized by a strong commitment to collegiality founded on an atmosphere of honesty, cooperation, integrity and professionalism geared toward the future well-being of all concerned. Where positions differ, participants create proposals that meet the fundamental needs of both parties, and they will compromise if they must to settle all issues in a collegial win-win manner.
References

[1] Alfeld L.E. and Graham A. 1976. Introduction to Urban Dynamics. MIT Press: Cambridge MA. Reprinted by Productivity Press: Portland OR and currently available from Pegasus Communications: Waltham MA.
[2] Arnold T. 2000. Collaborative dispute resolution: an idea whose time has come? American Law Institute (ALI) – American Bar Association (ABA) Continuing Legal Education, ALI-ABA Course of Study, SF16 (Oct.): 379.
[3] Ashford A.C. 2001. Unexpected behaviors in higher-order positive feedback loops (D-4455-2). In Road Maps 7: A Guide to Learning System Dynamics. System Dynamics in Education Project, MIT: Cambridge MA.
[4] Atkinson G. 2004. Common ground for institutional economics and system dynamics modeling. System Dynamics Review 20(4): 275–286.
[5] Bushfield N. 2002. History and development of collaborative law. Available online (02/02/05): www.iahl.org/articles/04_History_and_Development.htm.
[6] Castoriadis C. 1994. The Imaginary Institution of Society. MIT Press: Cambridge MA.
[7] Cooter R.D. and Rubinfeld D.L. 1989. Economic analysis of legal disputes and their resolution. Journal of Economic Literature 27: 1067.
[8] Courcoubetis C. and Yannakakis M. 1988. Verifying temporal properties of finite state probabilistic programs. In Proceedings of the IEEE Conference on Decision and Control, pp. 338–345.
[9] Crocker C. and Hampson F.O. (Eds.). 1996. Managing Global Chaos: Sources of the Response to International Conflict. US Institute of Peace Press: Washington DC.
[10] Deming W.E. 2000. The New Economics for Industry, Government, Education (2e). MIT Press: Cambridge MA. Originally published in 1993 by MIT’s Center for Advanced Engineering Study (CAES): Cambridge MA.
[11] Deutsch M. and Krauss R.M. 1960. The effect of threat on interpersonal bargaining. Journal of Abnormal and Social Psychology 61: 181–189.
[12] Eberlein R.L. 2002. Vensim® PLE Software (5.2a). Ventana Systems, Inc.: Harvard MA.
[13] Fisher R., Ury W. and Patton B. 1991. Getting to Yes: Negotiating Agreement Without Giving In (2nd Edition). Penguin Group: New York NY.
[14] Forrester J.W. 2003. Dynamic models of economic systems and industrial organizations. System Dynamics Review 19(4): 331–345.
[15] Forrester J.W. and Senge P.M. 1980. Tests for building confidence in system dynamics models. In A.A. Legasto Jr., J.W. Forrester and J.M. Lyneis (Eds.), TIMS Studies in the Management Sciences, Vol. 14: System Dynamics. North-Holland: New York NY, pp. 209–228.
[16] Forrester N. 1983. Eigenvalue analysis of dominant feedback loops. In Plenary Session Papers, Proceedings of the 1st International System Dynamics Society Conference, Paris, France: 178–202.
[17] Georgantzas N.C. and Orsini J.N. 2003. Tampering dynamics. In Proceedings of the 21st International System Dynamics Society Conference, 20–24 July, New York NY.
[18] Georgantzas N.C. and Ritchie-Dunham J.L. 2003. Designing high-leverage strategies and tactics. Human Systems Management 22(1): 217–227.
[19] Gershon E.S., Belmaker R.H., Kety S.S. and Rosenbaum M. 1977. The Impact of Biology in Modern Psychiatry. Plenum: New York NY.
[20] Gonçalves P., Lerpattarapong C. and Hines J.H. 2000. Implementing formal model analysis. In Proceedings of the 18th International System Dynamics Society Conference, August 6–10, Bergen, Norway.
[21] Good D. 2000. Individuals, interpersonal relations and trust. In Gambetta D. (Ed.), Trust: Making and Breaking Cooperative Relations. Department of Sociology, University of Oxford: Oxford UK, pp. 31–48.
[22] Gould J.P. 1973. The economics of legal conflicts. Journal of Legal Studies 2: 279.
[23] Hansson H. and Jonsson B. 1994. A logic for reasoning about time and reliability. Formal Aspects of Computing 6: 512–535.
[24] Hayes B. 2001. Randomness as a resource. American Scientist 89(4): 300–304.
[25] Hill O.W. 1976. Modern Trends in Psychosomatic Medicine (Vol. 3). Butterworth: London UK.
[26] Kampmann C.E. 1996. Feedback loops gains and system behavior. In Proceedings of the 12th International System Dynamics Society Conference, July 21–25, Cambridge MA.
[27] Landes W.M. 1971. An economic analysis of the courts. Journal of Law & Economics 14: 61.
[28] McArdle E. 2004. From ballistic to holistic. The Boston Globe (Jan. 11).
[29] Meadows D.H. 1989. System dynamics meets the press. System Dynamics Review 5(1): 68–80.
[30] Mojtahedzadeh M.T. 1996. A Path Taken: Computer-Assisted Heuristics for Understanding Dynamic Systems. Ph.D. Dissertation. Rockefeller College of Public Affairs and Policy, SUNY: Albany NY.
[31] Mojtahedzadeh M.T., Andersen D. and Richardson G.P. 2004. Using Digest® to implement the pathway participation method for detecting influential system structure. System Dynamics Review 20(1): 1–20.
[32] Morrison J.B. 2001. Limits to the pace of learning in participatory process improvement. In Proceedings of the 19th International System Dynamics Society Conference, Atlanta GA.
[33] Morrison J.B. 2002. The right shock to initiate change: a sensemaking perspective. In Best Paper Proceedings of the Academy of Management, Denver CO.
[34] Nicolis J.S. 1986. Dynamics of Hierarchical Systems: An Evolutionary Approach. Springer-Verlag: Berlin, Germany.
[35] Nicolis J.S., Bountis T. and Togias K. 2001. The dynamics of self-referential paradoxical games. Dynamical Systems 16(4): 319–332.
[36] von Neumann J. and Morgenstern O. 1944. The Theory of Games and Economic Behavior. Princeton University Press: Princeton NJ.
[37] Oliva R. 2004. Model structure analysis through graph theory: partition heuristics and feedback structure decomposition. System Dynamics Review 20(4): 313–336.
[38] Oliva R. and Mojtahedzadeh M.T. 2004. Keep it simple: a dominance assessment of short feedback loops. In Proceedings of the 22nd International System Dynamics Society Conference, July 25–29, Keble College, Oxford University, Oxford UK.
[39] OQPF Roundtable. 2000. Deming’s Point Seven: Adopt and Institute Leadership – A Commentary on Deming’s Fourteen Points for Management. The Ohio Quality and Productivity Forum (OQPF), PO Box 17754, Covington KY 41017-0754.
[40] Posner R.A. 1973. An economic approach to legal procedure and judicial administration. Journal of Legal Studies 2: 399.
[41] Priest G.L. and Klein B. 1984. The selection of disputes for litigation. Journal of Legal Studies 13: 1.
[42] Prigogine I. and Stengers I. 1984. Order Out of Chaos. Bantam Books: New York NY.
[43] Rachlinski J.J. 1996. Gains, losses and the psychology of litigation. Southern California Law Review (Nov.).
[44] Rapoport A. 1966. Two-Person Game Theory: The Essential Ideas. University of Michigan Press: Ann Arbor MI.
[45] Repenning N.P. 2003. Selling system dynamics to (other) social scientists. System Dynamics Review 19(4): 303–327.
[46] Richardson G.P. 1995. Loop polarity, loop prominence, and the concept of dominant polarity. System Dynamics Review 11(1): 67–88.
[47] Richmond B. 1993. Systems thinking: critical thinking skills for the 1990s and beyond. System Dynamics Review 9(2): 113–133.
[48] Richmond B. et al. 2006. iThink® Software (9.1). iSee Systems™: Lebanon NH.
[49] Ross S. 1983. Stochastic Processes. Wiley: New York NY.
[50] Shavell S. 1982. The social versus the private incentive to bring suit in a costly legal system. Journal of Legal Studies 11: 333.
[51] Sterman J.D. 2000. Business Dynamics: Systems Thinking and Modeling for a Complex World. Irwin McGraw-Hill: Boston MA.
[52] Swingle P.G. (Ed.). 1970. The Structure of Conflict. Academic Press: New York NY.
[53] Watzlawick P., Beavin J.H. and Jackson D.D. 1967. Pragmatics of Human Communication. Norton: New York NY.
[54] Winch G.W. 1997. The dynamics of process technology adoption and the implications for upgrade decisions. Technology Analysis and Strategic Management 9(3): 317–328.
[55] Wolf S. and Berle B.B. 1976. The Biology of the Schizophrenic Process. Plenum: New York NY.
[56] Zeleny M. 2005. Human Systems Management: Integrating Knowledge, Management and Systems. World Scientific: Hackensack NJ.
[57] Zeleny M. 1997. Autopoiesis and self-sustainability in economic systems. Human Systems Management 16(4): 251–262.
[58] Zhabotinsky A.M. 1973. Autowave processes in a distributed chemical system. Journal of Theoretical Biology 40: 45–61.
372
Advances in Multiple Criteria Decision Making and Human Systems Management Y. Shi et al. (Eds.) IOS Press, 2007 © 2007 The authors. All rights reserved.
Selected Publications of Milan Zeleny

Publications before 1973, although already voluminous, are mostly in Czech and not well known; after 1968 they were forbidden and could not be cited in the country of origin. In spite of this prohibition, Milan Zeleny was listed as the most cited Czech economist by 2005. Among works deserving note as indicators of his future interests are the following:

• “Analýza složitých procesů metodou kritické cesty” (Analysis of Complex Projects by the Critical Path Method), Ekonomicko-matematická laboratoř při EÚ ČSAV, Výzkumná publikace č. 4, Praha, 1964, p. 130.
• “Metody analýzy sítí (CPM, PERT)” (Network Analysis Techniques), Ekonomicko-matematický obzor, 1 (1965) 3, pp. 225–262.
• “Vícerozměrný model složitých technicko-ekonomických procesů (VRMstep)” (The Multidimensional Model of Complex Technological Projects), Pozemní stavby, 13 (1965), pp. 351–4. (With P. Bezděk).
• “Vědecké řízení složitých soustav” (Scientific Management of Complex Systems), Věda a život, č. 1–2 (1965), pp. 705–709.
• “Metody optimálního vyvažování výrobních linek” (Optimal Balancing of Production Lines), Podniková organizace, 20 (1966) 19, pp. 456–458.
• Američtí specialisté o kritické cestě (American Specialists on the Critical Path), Ekonomicko-matematická laboratoř při EÚ ČSAV, Informační publikace č. 25, Praha, 1966, p. 88.
• “Analýza sítě technikou dynamického programování” (Network Analysis by the Dynamic Programming Technique), Ekonomicko-matematický obzor, 3 (1967) 1, pp. 63–74.
• “Markovský přístup k řešení problémů analýzy sítě” (A Markovian Approach to Network Analysis Methods – MANAM), Ekonomicko-matematický obzor, 3 (1967) 2, pp. 214–259.
1973
Multiple Criteria Decision Making, University of South Carolina Press, Columbia, S.C., 1973, p. 816. (Editor with J.L. Cochrane).
“Compromise Programming,” in: Multiple Criteria Decision Making, edited by M. Zeleny and J.L. Cochrane, University of South Carolina Press, Columbia, S.C., 1973, pp. 262–301. Also: “A Priori and A Posteriori Goals in Macroeconomic Policy Making,” pp. 373–391. (With J.L. Cochrane). “A Selected Bibliography of Works Related to the Multiple Criteria Decision Making,” pp. 779–796.
1974
Linear Multiobjective Programming, Springer-Verlag, New York, 1974, p. 220.
“The Techniques of Linear Multiobjective Programming,” Revue Française d’Automatique, d’Informatique et de Recherche Operationelle, 8 (1974) V-3, pp. 51–71. (With P.L. Yu).
“A Concept of Compromise Solutions and the Method of the Displaced Ideal,” Computers and Operations Research, 1 (1974) 4, pp. 479–496.

1975
“The Set of All Nondominated Solutions in Linear Cases and a Multicriteria Simplex Method,” Journal of Mathematical Analysis and Applications, 49 (1975) 2, pp. 430–468. (With P.L. Yu).
“Managers Without Management Science?” Interfaces, 5 (1975) 4, pp. 35–42.
“New Vistas of Management Science,” Computers and Operations Research, 2 (1975) 2, pp. 121–125.

1976
“On the Inadequacy of the Regression Paradigm Used in the Study of Human Judgment,” Theory and Decision, 7 (1976) 1/2, pp. 57–65.
Multiple Criteria Decision Making: Kyoto 1975, Springer-Verlag, New York, 1976, p. 340. (Editor).
“The Theory of the Displaced Ideal,” in: Multiple Criteria Decision Making: Kyoto 1975, Springer-Verlag, New York, 1976, pp. 287–317. Also: “MCDM Bibliography – 1975,” pp. 318–340. “Multicriteria Simplex Method: A FORTRAN Routine,” pp. 318–340.
“Games with Multiple Payoffs,” International Journal of Game Theory, 4 (1976) 4, pp. 179–191.
“Simulation of Self-Renewing Systems,” in: Evolution and Consciousness: Human Systems in Transition, edited by E. Jantsch and C.H. Waddington, Addison-Wesley, Reading, MA, 1976, pp. 150–165. (With N.A. Pierre).
Book review: A. Kaufmann, Introduction to the Theory of Fuzzy Subsets, Volume 1, Interfaces, 6 (1976) 4, pp. 113–115.
“The Attribute-Dynamic Attitude Model (ADAM),” Management Science, 25 (1976) 1, pp. 12–26.
“Linear Multiparametric Programming by Multicriteria Simplex Method,” Management Science, 23 (1976) 2, pp. 159–170. (With P.L. Yu).
“Conflict Dissolution,” General Systems Yearbook, XXI, 1976, pp. 131–136.

1977
“Intuition and Probability,” The Wharton Magazine, 1 (1977) 4, pp. 63–68.
Multiple Criteria Decision Making, TIMS Studies in the Management Sciences, Vol. 6, North-Holland Publishing Co., Amsterdam, 1977, p. 270. (Editor with M.K. Starr).
“MCDM – State and Future of the Arts,” in: Multiple Criteria Decision Making, TIMS Studies in the Management Sciences, Vol. 6, North-Holland Publishing Co., Amsterdam, 1977, pp. 5–30. (With M.K. Starr). Also: “Adaptive Displacement of Preferences in Decision Making,” pp. 147–158.
“Self-Organization of Living Systems: A Formal Model of Autopoiesis,” Int. J. General Systems, 4 (1977) 1, pp. 13–28.
Columbia Journal of World Business, Focus: Decision Making, XII (1977) 3, p. 136. (Editor with M.K. Starr).
“Decision Making: An Overview,” Columbia Journal of World Business, XII (1977) 3, pp. 5–8. (With M.K. Starr and R.L. Denosowicz).
“Membership Functions and Their Assessment,” in: Current Topics in Cybernetics and Systems, edited by J. Rose, Springer-Verlag, Berlin, 1978, pp. 391–392.

1978
“APL-AUTOPOIESIS: Experiments in Self-Organization of Complexity,” in: Progress in Cybernetics and Systems Research, vol. III, edited by R. Trappl et al., Hemisphere Publishing Corp., Washington, D.C., 1978, pp. 65–84.
“Multidimensional Measure of Risk: Prospect Ranking Vector (PRV),” in: Multiple Criteria Problem Solving, edited by S. Zionts, Springer-Verlag, New York, 1978, pp. 529–548.

1979
Book reviews: B. Trentowski, Stosunek Filozofii do Cybernetyki czyli sztuki rzadzenia narodem; A.A. Bogdanov, Tektologia: vseobschaia organizatsionaia nauka; J.Ch. Smuts, Holism and Evolution; S. Leduc, The Mechanism of Life, Int. J. General Systems, 5 (1979) 1, pp. 63–71.
“Cybernetics and General Systems – A Unitary Science?” Kybernetes, 8 (1979) 1, pp. 17–23.
“The Self-Service Society: A New Scenario of the Future,” Planning Review, 7 (1979) 3, pp. 3–7, 37–38.
“Intuition, Its Failures and Merits,” in: Surviving Failures, edited by B. Persson, Humanities Press, Atlantic Highlands, N.J., 1979, pp. 172–183.
Uncertain Prospects Ranking and Portfolio Analysis Under the Conditions of Partial Information, Mathematical Systems in Economics 44, Oelschlager, Gunn & Hain Publishers, Cambridge, MA, 1979/1980. (With G. Colson).
“The Last Mohicans of OR: Or, It Might Be in the ‘Genes’,” Interfaces, 9 (1979) 5, pp. 135–141.

1980
Computers and Operations Research, Special Issue on Mathematical Programming with Multiple Objectives, 7 (1980) 1/2. (Editor).
“Descriptive Decision Making and Its Applications,” in: Applications of Management Science, Vol. 1, edited by R.L. Schultz, JAI Press, Greenwich, Conn., 1981, pp. 327–388.
“Multiple Objectives in Mathematical Programming: Letting the Man In,” in: Computers and Operations Research, Special Issue on Mathematical Programming with Multiple Objectives, 7 (1980) 1/2, pp. 1–4. Also: “Multicriterion Concept of Risk Under Incomplete Information,” pp. 125–143. (With G. Colson).
“Ellipsoid Algorithms in Mathematical Programming,” Human Systems Management, 1 (1980) 2, pp. 173–178.
Autopoiesis, Dissipative Structures, and Spontaneous Social Orders, AAAS Selected Symposium 55, Westview Press, Boulder, CO, 1980. (Editor).
“Autopoiesis: A Paradigm Lost?” in: Autopoiesis, Dissipative Structures, and Spontaneous Social Orders, AAAS Selected Symposium 55, edited by M. Zeleny, Westview Press, Boulder, CO, 1980, pp. 3–43.
“Towards a Self-Service Society,” Human Systems Management, 1 (1980) 1, pp. 1–3.
Book review: L.C. Thurow, The Zero-Sum Society, Human Systems Management, 1 (1980) 3, pp. 276–277.
“Strategic Management within Human Systems Management,” Human Systems Management, 1 (1980) 2, pp. 179–180.
Book review: P. Nijkamp and A. van Delft, Multi-criteria Analysis and Regional Decision-making, Journal of the American Statistical Association, 75 (1980) 372.
“Multiple Scenarios of Reindustrialization,” Human Systems Management, 1 (1980) 4, pp. 281–282.

1981
“Satisficing, Optimization and Risk in Portfolio Selection,” in: Readings in Strategy for Corporate Investment, edited by F.G.J. Derkinderen and R.L. Crum, Pitman Publishing, Boston, 1981, pp. 200–219.
“Socio-Economic Foundations of a Self-Service Society,” in: Progress in Cybernetics and Systems Research, vol. 10, Hemisphere Publishing, Washington, D.C., 1982, pp. 127–132.
Autopoiesis: A Theory of Living Organization, Elsevier North Holland, New York, NY, 1981. (Editor).
“What Is Autopoiesis?” in: Autopoiesis: A Theory of Living Organization, edited by M. Zeleny, Elsevier North Holland, New York, NY, 1981, pp. 4–17.
“Autogenesis: On the Self-Organization of Life,” in: Autopoiesis: A Theory of Living Organization, edited by M. Zeleny, Elsevier North Holland, New York, NY, 1981, pp. 91–115.
“On the Squandering of Resources and Profits via Linear Programming,” Interfaces, 11 (1981) 5, pp. 101–107.
“The Pros and Cons of Goal Programming,” Computers and Operations Research, 8 (1981) 4, pp. 357–359.
“Self-Service Trends in the Society,” in: Applied Systems and Cybernetics, Vol. 3, edited by G.E. Lasker, Pergamon Press, Elmsford, N.Y., 1981, pp. 1405–1411.
“Fuzzy Sets: Precision and Relevancy,” in: Applied Systems and Cybernetics, Vol. 6, edited by G.E. Lasker, Pergamon Press, Elmsford, N.Y., 1981, pp. 2719–2721.
Cybernetics Forum, Special Issue Devoted to Autopoiesis, 10 (1981) 2/3, Summer/Fall 1981. (Editor).
“Autopoiesis Today,” in: Cybernetics Forum, Special Issue Devoted to Autopoiesis, edited by M. Zeleny, 10 (1981) 2/3, Summer/Fall 1981, pp. 3–6. Also: “Self-Organization of Living Systems: A Formal Model of Autopoiesis,” pp. 24–38.
“Self-Service Aspects of Health Maintenance: Assessment of Current Trends,” Human Systems Management, 2 (1981) 4, pp. 259–267. (With M. Kochen).
Book review: I. Kristol and N. Glaser (eds.), The Crisis in Economic Theory, Special Issue of “Public Interest,” Human Systems Management, 2 (1981) 3, pp. 228–230.
“A Case Study in Multiobjective Design: De Novo Programming,” in: Multiple Criteria Analysis: Operational Methods, edited by P. Nijkamp and J. Spronk, Gower Publishing, Hampshire, 1981, pp. 37–52.

1982
Multiple Criteria Decision Making, McGraw-Hill, New York, 1982.
Multiple Criteria Decision Making: Selected Case Studies, McGraw-Hill, New York, 1982. (Editor with C. Carlsson and A. Törn).
“High Technology Management,” Human Systems Management, 3 (1982) 2, pp. 57–59.
“New Vistas in Management Science,” in: Cases and Readings in Management Science, edited by E.F. Turban and P. Loomba, Business Publications, Plano, Texas, 1982, pp. 319–325.

1983
“Holistic Aspects of Biological and Social Organizations: Can They Be Studied?” in: Environment and Population: Problems of Adaptation, edited by John B. Calhoun, Praeger Publishers, New York, 1983, pp. 150–153.
“Qualitative versus Quantitative Modeling in Decision Making,” Human Systems Management, 4 (1983) 1, pp. 39–42.
Book review: W. Lowen, Dichotomies of Mind, Human Systems Management, 4 (1983) 1, pp. 52–54.
“The Social Progress of Nations,” Human Systems Management, 4 (1983) 1, pp. 1–2.

1984
“On the (Ir)Relevancy of Fuzzy Sets Theories,” Human Systems Management, 4 (1984) 4, pp. 301–306.
MCDM – Past Decade and Future Trends, A Source Book of Multiple Criteria Decision Making, JAI Press, Greenwich, Conn., 1984. (Editor).
“Introduction: Ten Years of MCDM,” in: MCDM – Past Decade and Future Trends, A Source Book of Multiple Criteria Decision Making, edited by M. Zeleny, JAI Press, Greenwich, Conn., 1984, pp. ix–xiii. Also: “Multicriterion Design of High-Productivity Systems,” pp. 169–187.
Book review: Mark Davidson, Uncommon Sense, Human Systems Management, 5 (1984) 1, pp. 87–88.
1985
“Multicriterion Design of High-Productivity Systems: Extensions and Applications,” in: Decision Making with Multiple Objectives (Proceedings, Cleveland, Ohio, 1984), edited by Y.Y. Haimes and V. Chankong, Lecture Notes in Economics and Mathematical Systems, No. 242, Springer-Verlag, New York, 1985, pp. 308–321.
“Multiple Criteria Decision Making (MCDM),” in: Encyclopedia of Statistical Sciences, vol. 5, John Wiley & Sons, New York, 1985, pp. 693–696.
“Spontaneous Social Orders,” in: The Science and Praxis of Complexity, The United Nations University, Tokyo, 1985, pp. 312–328.
“La gestione a tecnologia superiore e la gestione della tecnologia superiore” (High Technology Management and the Management of High Technology), in: La sfida della complessità, edited by G. Bocchi and M. Ceruti, Feltrinelli, Milano, 1985, pp. 401–413.
“Spontaneous Social Orders,” Int. J. General Systems, 11 (1985) 2, pp. 117–131.
Book review: “Marxism-Leninism and Systems Approach”; Integration of Science and the Systems Approach, edited by Z. Javurek, A.D. Ursul and J. Zeman, Int. J. General Systems, 11 (1985) 2, pp. 176–180.

1986
“Les ordres sociaux spontanés” (Spontaneous Social Orders), in: Science et pratique de la complexité, Actes du colloque de Montpellier, Mai 1984, IDATE/UNU, La Documentation Française, Paris, 1986, pp. 357–378.
“An External Reconstruction Approach (ERA) to Linear Programming,” Computers and Operations Research, 13 (1986) 1, pp. 95–100.
Book review: “Destiny and Control in Human Systems,” by Charles Musés, Human Systems Management, 6 (1986) 2, pp. 95–100.
“Management of Human Systems & Human Management of Systems,” Erhvervsøkonomisk Tidsskrift, April 1986, pp. 107–116.
“High Technology Management,” Human Systems Management, 6 (1986) 2, pp. 109–120.
“The Roots of Modern Management: Bat’a-System,” Human Systems Management, 6 (1986) 1, pp. 4–7.
“At the End of the Division of Labor,” Human Systems Management, 6 (1986) 2, pp. 97–99.
“Optimal System Design with Multiple Criteria: De Novo Programming Approach,” Engineering Costs and Production Economics, 10 (1986), pp. 89–94.
“Management of Human Systems & Human Management of Systems,” in: Trends and Megatrends in the Theory of Management, edited by E. Johnsen, Bratt International, Lund, 1986, pp. 35–44.
“Arthur Koestler (1905–1983),” Human Systems Management, 4 (1983/84) 1, pp. 48–49.
“Erich Jantsch (1929–1980),” Human Systems Management, 2 (1981) 2, pp. 119–120.
“The Law of Requisite Variety: Is It Applicable to Human Systems?” Human Systems Management, 6 (1986) 4, pp. 269–271.
“On Human Systems Management: An Emerging Paradigm,” Human Systems Management, 6 (1986) 2, pp. 181–184.

1987
“Multicriteria Decision Making,” in: Systems & Control Encyclopedia, Pergamon Press, Elmsford, N.Y., 1987, pp. 3116–3121.
“Autopoiesis,” in: Systems & Control Encyclopedia, Pergamon Press, Elmsford, N.Y., 1987, pp. 393–400.
“Simulation Models of Autopoiesis: Variable Structure,” in: Systems & Control Encyclopedia, Pergamon Press, Elmsford, N.Y., 1987, pp. 4374–4377.
“Optimal System Design: Towards New Interpretation of Shadow Prices in Linear Programming,” Computers and Operations Research, 14 (1987) 4, pp. 265–271. (With M. Hessel).
“The Roots of Modern Management: Bat’a-System” (in Japanese, transl. Y. Kondo), Standardization and Quality Control, 40 (1987) 1, pp. 50–53.
“Is Japan Reluctant To Go International?” Human Systems Management, 7 (1987) 2, pp. 85–86.
“Management Support Systems: Towards Integrated Knowledge Management,” Human Systems Management, 7 (1987) 1, pp. 59–70.
“Cybernetyka,” Int. J. General Systems, 13 (1987) 3, pp. 289–294.
“Systems Approach to Multiple Criteria Decision Making: Metaoptimum,” in: Toward Interactive and Intelligent Decision Support Systems, edited by Y. Sawaragi, K. Inoue and H. Nakayama, Springer-Verlag, New York, 1987, pp. 28–37.
1988
“Three-Men Talk on Bat’a-System” (in Japanese), Standardization and Quality Control, 41 (1988) 1, pp. 15–24.
“La grande inversione: Corso e ricorso dei modi di vita umani” (The Grand Reversal: Corso and Ricorso of Human Ways of Life), in: Physis: abitare la terra, edited by M. Ceruti and E. Laszlo, Feltrinelli, Milano, 1988, pp. 413–441.
“Bat’a System of Management: Managerial Excellence Found,” Human Systems Management, 7 (1988) 3, pp. 213–219.
Book review: The Tree of Knowledge: The Biological Roots of Human Understanding, by H.R. Maturana and F.J. Varela, Human Systems Management, 7 (1988) 4, pp. 379–380.
“Tectology,” Int. J. General Systems, 14 (1988) 4, pp. 331–343.
“Beyond Capitalism and Socialism: Human Manifesto,” Human Systems Management, 7 (1988) 3, pp. 185–188.
“Osmotic Growths: A Challenge to Systems Science,” Int. J. General Systems, 14 (1988) 1, pp. 1–17. (With G.J. Klir and K.D. Hufford).
“Integrated Process Management: A Management Technology for the New Competitive Era,” in: Global Competitiveness: Getting the U.S. Back on Track, edited by M.K. Starr, W.W. Norton & Co., New York, 1988, pp. 121–158. (With M. Hessel and M. Mooney).
“What Is Integrated Process Management?” Human Systems Management, 7 (1988) 3, pp. 265–267.
“On Management-Paradigm Transition,” Editorial, Human Systems Management, 7 (1988) 4, pp. 279–281.
Interview on Artificial Life, in: “Child of a Lesser God” (E. Regis and T. Dworetzky), Omni, 11 (1988) 1, pp. 92–170.
“Parallelism, Integration, Autocoordination and Ambiguity in Human Support Systems,” in: Fuzzy Logic in Knowledge-Based Systems, Decision and Control, edited by M.M. Gupta and T. Yamakawa, North-Holland, New York, 1988, pp. 107–122.

1989
“Precipitation Membranes, Osmotic Growths, and Synthetic Biology,” in: Artificial Life, edited by C.G. Langton, Santa Fe Institute Studies in the Sciences of Complexity, vol. VI, Addison-Wesley, Reading, MA, 1989, pp. 125–139. (With G.J. Klir and K.D. Hufford).
“Integrated Process Management: A Management Technology for the New Competitive Era,” Part 1 (in Japanese, transl. Y. Kondo), Standardization and Quality Control, 42 (1989) 10, pp. 61–68. Part 2, 42 (1989) 11, pp. 78–85. (With M. Hessel and M. Mooney).
“Knowledge as a New Form of Capital, Part 1: Division and Reintegration of Knowledge,” Human Systems Management, 8 (1989) 1, pp. 45–58. “Part 2: Knowledge-Based Management Systems,” 8 (1989) 2, pp. 129–143.
“Quality Management Systems: Subject to Continuous Improvement?” Human Systems Management, 8 (1989) 1, pp. 1–3.
“The Role of Fuzziness in the Construction of Knowledge,” in: The Interface Between Artificial Intelligence and Operations Research in Fuzzy Environment, edited by J.-L. Verdegay and M. Delgado, Interdisciplinary Systems Research Series no. 95, Verlag TÜV Rheinland, 1989, pp. 233–252.
Book reviews: “Today and Tomorrow,” by H. Ford; “Toyota Production System,” by T. Ohno; “Tough Words for American Industry,” by H. Karatsu, Human Systems Management, 8 (1989) 2, pp. 175–178.
“Leon Festinger (1920–1989),” Human Systems Management, 8 (1989) 2, pp. 97–98.
“Manfred Kochen (1928–1989),” Human Systems Management, 8 (1989) 2, pp. 95–96.
“Osaka Lectures on IPM” (in Japanese, transl. Y. Kondo), Standardization and Quality Control, 42 (1989) 12, pp. 75–82.
“Integration Trends in the 90s,” Human Systems Management, 8 (1989) 2, pp. 91–93.
Special review: “On Systems Writings of A.A. Malinovskii,” Int. J. General Systems, 15 (1989) 3, pp. 265–269.
Book review: “Patterns, Thinking, and Cognition,” by H. Margolis, Human Systems Management, 8 (1989) 3, pp. 248–249.
Book review: “The New Realities,” by P.F. Drucker, Human Systems Management, 8 (1989) 3, pp. 243–245.
Book review: “Sociocracy,” by G. Endenburg, Human Systems Management, 8 (1989) 3, pp. 245–248.
“Stable Patterns from Decision-Producing Networks: New Interfaces of DSS and MCDM,” MCDM WorldScan, 3 (1989) 2–3, pp. 6–7.
“Cognitive Equilibrium: A New Paradigm of Decision Making?” Human Systems Management, 8 (1989) 3, pp. 185–188.
“The Grand Reversal: On the Corso and Ricorso of Human Way of Life,” World Futures, 27 (1989), pp. 131–151.

1990
“Paul A. Weiss (1898–1989),” Human Systems Management, 9 (1990) 1, pp. 3–4.
“Why Is There No Theory of Perestroika?” Human Systems Management, 9 (1990) 1, pp. 1–2.
“Moving from the Age of Specialization to the Era of Integration,” Human Systems Management, 9 (1990) 3, pp. 153–171. (With R. Cornet and J.A.F. Stoner).
“Synthetic Biology and Osmotic Growths,” in: Systems & Control Encyclopedia, Supplementary Volume 1, Pergamon Press, Elmsford, N.Y., 1990, pp. 573–578.
“Amoeba: The New Generation of Self-Managing Human Systems,” Human Systems Management, 9 (1990) 2, pp. 57–59.
“Multicriteria Decision Making,” in: Systems & Control Encyclopedia, Supplementary Volume 1, Pergamon Press, Elmsford, N.Y., 1990, pp. 431–437.
“Management Wisdom of the West,” Human Systems Management, 9 (1990) 2, pp. 119–125.
“Knowledge As Capital/Capital As Knowledge,” Human Systems Management, 9 (1990) 3, pp. 129–130.
“Management Wisdom of the West,” Part 1 (in Japanese), Standardization and Quality Control, 43 (1990) 11, pp. 41–48. Part 2, 43 (1990) 12, pp. 43–48.
“Trentowski’s Cybernetyka,” in: Systems & Control Encyclopedia, Supplementary Volume 1, Pergamon Press, Elmsford, N.Y., 1990, pp. 587–589.
“Simulation Models of Autopoiesis: Variable Structure,” in: Systems & Control Encyclopedia, Supplementary Volume 1, Pergamon Press, Elmsford, N.Y., 1990, pp. 543–547.
“Optimizing Given Systems vs. Designing Optimal Systems: The De Novo Programming Approach,” Int. J. General Systems, 17 (1990) 4, pp. 295–307.
“De Novo Programming,” Ekonomicko-matematický obzor, 26 (1990) 4, pp. 406–413.
Book review: “The Eternal Venture Spirit,” by K. Tateisi, Human Systems Management, 9 (1990) 2, pp. 127–128.

1991
“All Autopoietic Systems Must Be Social Systems,” Journal of Social and Biological Structures, 14 (1991) 3, pp. 311–332. (With K.D. Hufford).
“Gestalt System of Holistic Graphics: New Management Support View of MCDM,” Computers and Operations Research, 18 (1991) 2, pp. 233–239. (With E. Kasanen and R. Östermark).
“Management Challenges in the 1990s,” in: Managing Toward the Millennium, edited by J.E. Hennessy and S. Robins, Fordham University Press, New York, 1991, pp. 3–65. (With R. Cornet and J.A.F. Stoner).
“Spontaneous Social Orders,” in: A Science of Goal Formulation: American and Soviet Discussions of Cybernetics and Systems Theory, edited by S.A. Umpleby and V.N. Sadovsky, Hemisphere Publishing Corp., Washington, D.C., 1991, pp. 133–150.
“Knowledge As Capital: Integrated Quality Management,” Prometheus, 9 (1991) 1, pp. 93–101.
“Transition To Free Markets: The Dilemma of Being and Becoming,” Human Systems Management, 10 (1991) 1, pp. 1–5.
“Are Biological Systems Social Systems?” Human Systems Management, 10 (1991) 2, pp. 79–81.
“Privatization,” Human Systems Management, 10 (1991) 3, pp. 161–163.
“Cognitive Equilibrium: A Knowledge-Based Theory of Fuzziness and Fuzzy Sets,” Int. J. General Systems, 19 (1991) 4, pp. 359–381.
“Fuzzifying the ‘Precise’ Is More Relevant Than Modeling the Fuzzy ‘Crisply’ (Rejoinder by M. Zeleny),” Int. J. General Systems, 19 (1991) 4, pp. 435–440.
“Cognitive Equilibrium,” Ekonomicko-matematický obzor, 27 (1991) 1, pp. 53–61.
“Measuring Criteria: Weights of Importance,” Human Systems Management, 10 (1991) 4, pp. 237–238.

1992
Foreword to Knowledge in Action: The Bata System of Management (first English translation of T. Bata’s “Uvahy a projevy”), IOS Press, Amsterdam, 1992, pp. v–vii.
“An Essay Into a Philosophy of MCDM: A Way of Thinking or Another Algorithm?” Invited essay, Computers and Operations Research, 19 (1992) 7, pp. 563–566.
“The Application of Autopoiesis in Systems Analysis: Are Autopoietic Systems Also Social Systems?” Int. J. General Systems, 21 (1992) 2, pp. 145–160. (With K.D. Hufford).
“The Ordering of the Unknown by Causing It to Order Itself,” Int. J. General Systems, 21 (1992) 2, pp. 239–253. (With K.D. Hufford).
“Reforms in Czechoslovakia: Tradition or Cosmopolitanism?” in: Management Reform in Eastern and Central Europe: Use of Pre-Communist Cultures, edited by M. Maruyama, Dartmouth Publishing Company (Dover), 1992, pp. 45–64.
“Structural Recession in the U.S.A.,” Human Systems Management, 11 (1992) 1, pp. 1–4.
“Beauty, Quality and Harmony,” Human Systems Management, 11 (1992) 3, pp. 115–118.
“Governments and Free Markets: Comparative or Strategic Advantage?” Editorial, Human Systems Management, 11 (1992) 4, pp. 173–176.

1993
“Alla ricerca di un equilibrio cognitivo: bellezza, qualità e armonia” (In Search of Cognitive Equilibrium: Beauty, Quality and Harmony), in: Estimo ed economia ambientale: le nuove frontiere nel campo della valutazione, edited by L. Fusco Girard, FrancoAngeli, Milano, 1993, pp. 113–131.
“Kenneth Boulding (1910–1993),” Human Systems Management, 12 (1993) 2, pp. 159–161.
“Working at Home,” Human Systems Management, 12 (1993) 2, pp. 81–83.
“Economics, Business and Culture,” Human Systems Management, 12 (1993) 3, pp. 171–174.
“Eastern Europe: Quo Vadis?” Human Systems Management, 12 (1993) 4, pp. 259–264.

1994
Book review: “Management & Employee Buy-Outs as a Technique of Privatization,” by David P. Ellerman (ed.), Human Systems Management, 13 (1994) 1, pp. 79–81.
“W. Edwards Deming (1900–1993),” Human Systems Management, 13 (1994) 1, pp. 75–78.
“Foreign Policy: A Human Systems View,” Human Systems Management, 13 (1994) 1, pp. 1–4.
“Fuzziness, Knowledge, and Optimization: New Optimality Concepts,” in: Fuzzy Optimization: Recent Advances, edited by M. Delgado, J. Kacprzyk, J.-L. Verdegay and M.A. Vila, Physica-Verlag, Heidelberg, 1994, pp. 3–20.
“In Search of Cognitive Equilibrium: Beauty, Quality and Harmony,” Multi-Criteria Decision Analysis, 3 (1994), pp. 48.1–48.11.
“Nicholas Georgescu-Roegen (1906–1994),” Human Systems Management, 13 (1994) 4, pp. 309–311.
“Towards Trade-Offs-Free Management,” Human Systems Management, 13 (1994) 4, pp. 241–243.

1995
“The Ideal-Degradation Procedure: Searching for Vector Equilibria,” in: Advances in Multicriteria Analysis, edited by P.M. Pardalos, Y. Siskos and C. Zopounidis, Kluwer, 1995, pp. 117–127.
“Reengineering,” Human Systems Management, 14 (1995) 2, pp. 105–108.
“Trade-Offs-Free Management via De Novo Programming,” International Journal of Operations and Quantitative Management, 1 (1995) 1, pp. 3–13.
“Ecosocieties: Societal Aspects of Biological Self-Production,” Soziale Systeme, 1 (1995) 2, pp. 179–202.
“Global Management Paradigm,” Human Systems Management, 14 (1995) 3, pp. 191–194.
“Human and Social Capital: Prerequisites for Sustained Prosperity,” Human Systems Management, 14 (1995) 4, pp. 279–282.

1996
“Work and Leisure,” in: International Encyclopedia of Business & Management, Routledge, London, 1996, pp. 5082–8. Also: “Multiple Criteria Decision Making,” pp. 978–90. “Critical Path Analysis (CPA),” pp. 904–9. “Optimality and Optimization,” pp. 3767–80. “Bata-System of Management,” pp. 351–4.
“On Social Nature of Autopoietic Systems,” in: Evolution, Order and Complexity, edited by E.L. Khalil and K.E. Boulding, Routledge, London, 1996, pp. 122–145.
“Rethinking Optimality: Eight Concepts,” Human Systems Management, 15 (1996) 1, pp. 1–4.
“Customer-Specific Value Chain: Beyond Mass Customization?” Human Systems Management, 15 (1996) 2, pp. 93–97.
“Asset Optimization and Multi-Resource Planning,” Human Systems Management, 15 (1996) 3, pp. 153–155.
“Comparative Management Systems: Trade-Offs-Free Concept,” in: Dynamics of Japanese Organizations, edited by F.-J. Richter, Routledge, London, 1996, pp. 167–177.
“Knowledge As Coordination of Action,” Human Systems Management, 15 (1996) 4, pp. 211–213.
“Tradeoffs-Free Management,” in: The Art and Science of Decision-Making, edited by P. Walden et al., Åbo University Press, Åbo, 1996, pp. 276–283.

1997
“Eight Concepts of Optimality,” in: Multicriteria Analysis, edited by J. Clímaco, Springer-Verlag, Berlin, 1997, pp. 191–200.
“Towards the Tradeoffs-Free Optimality in MCDM,” in: Multicriteria Analysis, edited by J. Clímaco, Springer-Verlag, Berlin, 1997, pp. 596–601.
“From Maximization to Optimization: MCDM and the Eight Models of Optimality,” in: Essays in Decision Making, edited by M.H. Karwan, J. Spronk and J. Wallenius, Springer-Verlag, 1997, pp. 107–119.
“Ecosocietà: aspetti sociali dell’auto-produzione biologica” (Ecosocieties: Societal Aspects of Biological Self-Production), in: Teorie Evolutive e Transformazioni Economiche, edited by E. Benedetti, M. Mistri and S. Solari, CEDAM, Padova, 1997, pp. 121–142.
“The Decline of Forecasting?” Human Systems Management, 16 (1997) 1, pp. 1–3.
“The Fall of Strategic Planning,” Human Systems Management, 16 (1997) 2, pp. 77–79.
“Work and Leisure,” in: IEBM Handbook on Human Resources Management, Thomson, London, 1997, pp. 333–339. Also: “Bata-System of Management,” pp. 359–362.
“Autopoiesis and Self-Sustainability in Economic Systems,” Human Systems Management, 16 (1997) 4, pp. 251–262.
“Insider Ownership and LBO Performance,” Human Systems Management, 16 (1997) 4, pp. 243–245.
“Bata, Thomas (1876–1932),” in: IEBM Handbook of Management Thinking, Thomson, London, 1997, pp. 49–52.

1998
“National and Corporate Asset Optimization: From Macro- to Micro-Reengineering,” in: Economic Transformation & Integration: Problems, Arguments, Proposals, edited by R. Kulikowski, Z. Nahorski and J. Owsinski, Systems Research Institute, Warsaw, 1998, pp. 103–118.
“Multiple Criteria Decision Making: Eight Concepts of Optimality,” Human Systems Management, 17 (1998) 2, pp. 97–107.
“Telework, Telecommuting and Telebusiness,” Human Systems Management, 17 (1998) 4, pp. 223–225.
1999
“Beyond the Network Organization: Self-Sustainable Web Enterprises,” in: Business Networks in Asia, edited by F.-J. Richter, Quorum Books, Westport, CT, 1999, pp. 269–285.
“Global Management Paradigm,” Fordham Business Review, 1 (1999) 1, pp. 91–101.
“What is IT/S? Information Technology in Business,” Human Systems Management, 18 (1999) 1, pp. 1–4.
“Industrial Districts of Italy: Local-Network Economies in a Global-Market Web,” Human Systems Management, 18 (1999) 2, pp. 65–68.
“Strategy for Macro- and Micro-Reengineering in Knowledge-based Economies,” in: The Socio-Economic Transformation: Getting Closer to What? edited by Z. Nahorski, J. Owsinski, and T. Szapiro, Macmillan, London, 1999, pp. 113–125.

2000
“New Economy of Networks,” Human Systems Management, 19 (2000) 1, pp. 1–5.
“Global E-MBA for the New Economy,” Human Systems Management, 19 (2000) 2, pp. 85–88.
IEBM Handbook of Information Technology in Business, Editor, Thomson, London, 2000, p. 870.
“Introduction: What Is IT/S?” in: IEBM Handbook of Information Technology in Business, edited by M. Zeleny, Thomson, London, 2000, pp. xv–xvii. Also: “High Technology Management,” pp. 56–62. “Global Management Paradigm,” pp. 48–55. “Mass Customization,” pp. 200–207. “Autopoiesis (Self-Production),” pp. 283–290. “Business Process Reengineering (BPR),” pp. 14–22. “Knowledge vs. Information,” pp. 162–168. “Integrated Process Management,” pp. 110–118. “Self-Service Society,” pp. 240–248. “Telepresence,” pp. 821–827. “Kinetic Enterprise & Forecasting,” pp. 134–141. “New Economy,” pp. 208–217. “Tradeoffs Management,” pp. 450–458. “Critical Path Analysis,” pp. 308–314. “Decision Making, Multiple Criteria,” pp. 315–329. “Optimality and Optimization,” pp. 392–409.
New Frontiers of Decision Making for the Information Technology Era, Editor with Y. Shi, World Scientific Publishers, 2000, p.
“Elimination of Tradeoffs in Modern Business and Economics,” in: New Frontiers of Decision Making for the Information Technology Era, edited by M. Zeleny and Y. Shi, World Scientific Publishers, 2000, pp.
“New Economy and the Cluetrain Manifesto,” Human Systems Management, 19 (2000) 4, pp. 151–156.
2001
IEBM Handbook of Information Technology in Business, Editor, Paperback edition, Thomson, London, 2001, p. 870.
“Knowledge and Self-Production Processes in Social Systems,” UNESCO Encyclopedia.
“Bat’a, Tomás (1876–1932),” Biographical Dictionary of Management.
“Human Systems Management at 20,” Human Systems Management, 20 (2001) 1, pp. 1–2.
“Herbert A. Simon (1916–2001),” Human Systems Management, 20 (2001) 1, pp. 3–4.
“Claude E. Shannon (1916–2001),” Human Systems Management, 20 (2001) 1, pp. 5–6.
“Autopoiesis (Self-production) in SME Networks,” Human Systems Management, 20 (2001) 3, pp. 201–207.

2002
“Knowledge of Enterprise: Knowledge Management or Knowledge Technology?” International Journal of Information Technology & Decision Making, 1 (2002) 2, pp. 181–207.

2004
“Knowledge-Information Circulation through the Enterprise: Forward to the Roots of Knowledge Management,” in: Data Mining and Knowledge Management, edited by Y. Shi, W. Xu, and Z. Chen, Springer-Verlag, Berlin-Heidelberg, 2004, pp. 22–33.

2005
Cesty k úspěchu (Roads to Success: On the lasting values of Bata management system), Čintámani, Brno, 2005.
“The Evolution of Optimality: De Novo Programming,” in: Evolutionary Multi-Criterion Optimization, edited by C.A. Coello Coello et al., Springer-Verlag, Berlin-Heidelberg, 2005, pp. 1–13.
“Knowledge of Enterprise: Knowledge Management or Knowledge Technology?” in: Governing and Managing Knowledge in Asia, edited by T. Menkhoff, H.-D. Evers, and Y.W. Chay, World Scientific, 2005, pp. 23–57.
Human Systems Management: Integrating Knowledge, Management and Systems, World Scientific, 2005.
2006
“Knowledge-Information Autopoietic Cycle: Towards Wisdom Systems,” International Journal of Management and Decision Making, 7 (2006) 1, pp. 3–18.
“The Mobile Society: Effects of Global Sourcing and Network Organisation,” International Journal of Mobile Learning and Organisation, 1 (2006) 1, pp. 30–40.
“Innovation Factory: Production of Value-Added Quality and Innovation,” Economics and Management, 9 (2006) 4, pp. 58–65.
“Entering the Era of Networks: Global Supply and Demand Outsourcing Networks and Alliances,” in: Quantitative Methoden der Logistik und des Supply Chain Management, edited by M. Jacquemin, R. Pibornik, and E. Sucky, Verlag Dr. Kovač, Hamburg, 2006, pp. 85–97.
“The Innovation Factory: Management Systems, Knowledge Management and Production of Innovations,” in: Expanding the Limits of the Possible, edited by P. Walden, R. Fullér, and J. Carlsson, Åbo, November 2006, pp. 163–175.

2007
“Knowledge Management and the Strategies of Global Business Education: From Knowledge to Wisdom,” in: The Socio-Economic Transformation: Getting Closer to What? edited by Z. Nahorski, J.W. Owsiński and T. Szapiro, Palgrave Macmillan, Houndmills, 2007, Ch. 7, pp. 101–116.
“From Knowledge to Wisdom: On Being Informed and Knowledgeable, Becoming Wise and Ethical,” International Journal of Information Technology & Decision Making, 2007 (to appear).
“Strategy and Strategic Action in the Global Era: Overcoming the Knowing-Doing Gap,” International Journal of Technology Management, 2007 (to appear).
The BioCycle of Business: Managing Corporation as a Living Organism, 2007 (to appear).
“Knowledge Management and Strategic Planning: A Human Systems Perspective,” in: Knowledge and Values in Strategic Spatial Planning for Small and Medium Sized Cities, edited by L. Fusco-Girard, Springer-Verlag, 2007 (to appear).
Advances in Multiple Criteria Decision Making and Human Systems Management Y. Shi et al. (Eds.) IOS Press, 2007 © 2007 The authors. All rights reserved.
Biosketches of Contributing Authors

Chapter 1. MULTI-OBJECTIVE PREFERENCES AND CONFLICTING OBJECTIVES: THE CASE OF EUROPEAN MONETARY INTEGRATION

Maurizio Mistri

Maurizio Mistri (born in Ferrara, Italy, on November 24, 1941) is Associate Professor of International Economics and Lecturer in Information Economics at the Faculty of Political Sciences of the University of Padua. He has written many articles and books in the field of International Economics, with particular attention to European economic integration and the dynamics of international migrations. He is currently devoting his attention to the analysis of the relationships between the cognitive sciences and economic decisions, and to the role of information and the way in which it is handled in market structuring processes. He was one of the first scholars to deal with the development and structuring of industrial districts in terms of information theory. Another of his fields of scientific interest is the economic analysis of economic institutions, with particular attention to international economic institutions. His approach is that of cognitivist institutionalism. His recent works include: “Consumer Learning, Connectionism, and Hayek’s Theoretical Legacy” (2002), “Behavioral Rules in Industrial Districts: Loyalty, Trust, and Reputation” (2003), “The Emergence of Cooperation and the Case of the Italian Industrial District as a Socio-economic Habitat” (2003), “Procedural Rationality and Institutions: The Production of Norms by Means of Norms” (2003), “Addiction and the Dynamic Inconsistency of Consumption Plans” (2004), and “Il Distretto Industriale Marshalliano tra Cognizione e Istituzioni” (2006). Maurizio Mistri is currently chairman of the degree course on International Economics at the University of Padua, and a member of the Academic Senate of the same University.

Chapter 2. MULTICRITERIA ROUTING MODELS IN TELECOMMUNICATION NETWORKS – OVERVIEW AND A CASE STUDY

João C.N.
Clímaco, José M.F. Craveirinha and Marta M.B. Pascoal

João Carlos Namorado Clímaco is Full Professor at the Faculty of Economics of the University of Coimbra and President of the Scientific Committee of INESC – Coimbra. He obtained the Master of Science Degree in Control Systems at the Imperial College of Science and Technology, University of London (1978); the Diploma of Membership of the Imperial College of Science and Technology (1978); the Ph.D. in Optimization and Systems Theory, Electrical Engineering Department, University of Coimbra (1982); and the title of “Agregação” at the University
of Coimbra (1989). He was, in the past, Vice-President of ALIO – the Latin-Ibero-American OR Association, Vice-President of the Portuguese OR Society, and a Member of the International Executive Committee of the International Society on Multiple Criteria Decision Making. He is currently a member of the IFIP WG 8.3 on Decision Support Systems. He serves on the Editorial Boards of the following scientific journals: Journal of Group Decision and Negotiation, Investigação Operacional (Journal of the Portuguese OR Society), and ENGEVISTA (a Brazilian journal). He is also a member of the Coimbra University Assembly and of the Editorial Board of the University of Coimbra Press. He is author or co-author of more than 120 peer-refereed papers in scientific journals (about 90) and specialized books (about 30). His current major research interests are multiple criteria decision aiding, multiobjective combinatorial problems, and the management and planning of telecommunication networks.

José Manuel Fernandes Craveirinha is currently Full Professor in Telecommunications at the Department of Electrical Engineering Science of the Faculty of Sciences and Technology of the University of Coimbra, Portugal. He received the undergraduate Diploma in Electrical Engineering Science (E.E.S.) – Telecommunications & Electronics at Instituto Superior Técnico (Lisbon Technical University) in 1975, the M.Sc. (1981) and Ph.D. (7/1984) in E.E.S. at the University of Essex (UK), and the title of “Agregação” (equivalent to Doctor of Science) in E.E.S. – Telecommunications at the University of Coimbra (7/1996). He has coordinated a research group in Teletraffic Theory & Network Planning at the INESC-Coimbra R&D institute since 1986 and was director of this institute in 1994–99. His main scientific areas of research have been stochastic modelling of teletraffic, reliability analysis and the planning of telecommunication networks.
His main present interests are routing models for the Internet and multiple criteria routing methods for multiservice networks, namely Internet/MPLS and WDM optical networks.

Marta Margarida Braz Pascoal is Assistant Professor at the Mathematics Department of the Faculty of Science and Technology of the University of Coimbra. She obtained the undergraduate diploma in Mathematics – specialization in Computer Science – at the University of Coimbra (1995), the Master of Science Degree in Applied Mathematics at the University of Coimbra (1998), and the Ph.D. in Mathematics – specialization in Applied Mathematics – at the University of Coimbra (2005). Her current major research interests are ranking solutions of combinatorial problems and multiobjective combinatorial problems.
Chapter 3. POST-MERGER HIGH TECHNOLOGY R&D HUMAN RESOURCES OPTIMIZATION THROUGH THE DE NOVO PERSPECTIVE

Chi-Yo Huang and Gwo-Hshiung Tzeng

Chi-Yo Huang received his B.S. degree in Electrical Engineering in 1990 from National Cheng-Kung University, Taiwan, his M.S. degree in Computer Engineering in 1993 from Syracuse University, New York, and his Ph.D. degree in Management of Technology in 2006 from National Chiao-Tung University, Taiwan. He worked in the IC industry for more than 11 years as both an IC design engineer and a marketing manager, being responsible for PC chipset design, marketing strategy and product definition, technology transfer and new business development. After working in the IC industry, he joined academia in 2007, holding a position as an Assistant Professor in the Department of Industrial Education, National Taiwan Normal University, Taiwan. His recent research interests include competitive strategies of high-technology firms, innovation policy analysis, technological forecasting of high-tech products, multiple criteria decision making and e-commerce.

Gwo-Hshiung Tzeng was born in Taiwan in 1943. He received the Bachelor’s degree in business management from the Tatung Institute of Technology in 1967, the Master’s degree in urban planning from Chung Hsing University in 1971, and the Ph.D. degree in management science from Osaka University, Osaka, Japan, in 1977.
He was an Associate Professor at Chiao Tung University, Taiwan, from 1977 to 1981, a Research Associate at Argonne National Laboratory from July 1981 to January 1982, a Visiting Professor in the Department of Civil Engineering at the University of Maryland, College Park, from August 1989 to August 1990, and a Visiting Professor in the Department of Engineering and Economic Systems, Energy Modeling Forum, at Stanford University from August 1997 to August 1998. He has been a Professor at Chiao Tung University since 1981 and a Distinguished Chair Professor there since 2002, and was President of Kainan University from 2004 to 2005. His current research interests include statistics, multivariate analysis, networks, routing and scheduling, multiple criteria decision making, fuzzy theory, and hierarchical structure analysis, applied to technology management, energy, the environment, transportation systems, transportation investment, logistics, location, urban planning, tourism, electronic commerce, global supply chains, and related fields. He has received the National Distinguished Chair Professorship and Award (the highest honor offered) from the Ministry of Education of Taiwan, the Distinguished Research Award three times, and the Distinguished Research Fellowship (the highest honor offered) twice from the National Science Council of Taiwan. He has been an IEEE Fellow since September 30, 2002, and received the Pinnacle of Achievement Award in 2005. He organized a Taiwan affiliate chapter of the International Association for Energy Economics in 1984, and was Chairman of the Tenth International Conference on Multiple Criteria Decision Making, July 19–24, 1992, in Taipei.
He is a member of IEEE, IAEE, ISMCDM, World Transport, the Operations Research Society of Japan, the Society of Instrument and Control Engineers of Japan, the City Planning Institute of Japan, the Behaviormetric Society of Japan, and the Japan Society for Fuzzy Theory and Systems, and participates in many societies in Taiwan.
Chapter 4. AN EXAMPLE OF DE NOVO PROGRAMMING

David L. Olson and Antonie Stam

David L. Olson is the James & H.K. Stuart Professor in MIS and Othmer Professor at the University of Nebraska. He has published over 90 refereed journal articles, primarily on the topic of multiple objective decision-making. He teaches in the management information systems, management science, and operations management areas. He has authored the books Decision Aids for Selection Problems, Introduction to Information Systems Project Management, and Managerial Issues of Enterprise Resource Planning Systems, and co-authored the books Decision Support Models and Expert Systems; Introduction to Management Science; Introduction to Simulation and Risk Analysis; Business Statistics: Quality Information for Decision Analysis; Statistics, Decision Analysis, and Decision Modeling; Multiple Criteria Analysis in Strategic Siting Problems; and Introduction to Business Data Mining. He is a Fellow of the Decision Sciences Institute.

Antonie Stam is the Leggett & Platt Distinguished Professor of Management Information Systems in the Management Department at the University of Missouri. Prior to joining the University of Missouri in the Fall of 2000, he was a Professor in the Department of Management Information Systems at the University of Georgia. He holds a Ph.D. in Management Science from the University of Kansas. Professor Stam has served in Visiting Professor and Research Scientist roles in Belgium, Austria, Finland, France and South Africa, and has consulted with companies and organizations in the US, China and Finland. He is a member of the Association for Information Systems, the American Statistical Association, INFORMS and the Decision Sciences Institute. His primary research interests include information systems, decision support systems, applied artificial intelligence, multicriteria decision making and applied statistics.
He has published in journals such as Management Science, Decision Sciences, Journal of the American Statistical Association, Operations Research, Public Opinion Quarterly, Multivariate Behavioral Research, International Journal of Production Research, Journal of Marketing Research, SIAM Journal on Matrix Analysis and Applications, and others.

Chapter 5. MULTI-VALUE DECISION-MAKING AND GAMES: The Perspective of Generalized Game Theory on Social and Psychological Complexity, Contradiction, and Equilibrium

Tom R. Burns and Ewa Roszkowska

Tom R. Burns is Professor Emeritus at the Department of Sociology, University of Uppsala, Uppsala, Sweden. Among his engagements, he has been a Jean Monnet Visiting Professor at the European University Institute, Florence, Italy (2002); a Visiting Scholar at Stanford University (Spring 2002 and Springs 2004–2007); a Fellow at the Swedish Collegium for Advanced Study in the Social Sciences (Spring 1992; Autumn 1998); and a Fellow at the European University Institute (Spring 1998). Burns has published more than 10 books and numerous articles in the
areas of administration and management, governance and politics, the sociology of technology and environment, and the analysis of markets and market regulation. He has also published extensively on social theory and methodology, with a focus on new institutional theory, generalized game theory, and socio-cultural evolutionary theory. Among his books are Man, Decisions, Society (1985), The Shaping of Socio-economic Systems (1986), Creative Democracy (1988), Societal Decision-making: Democratic Challenges to State Technocracy (1992), Municipal Entrepreneurship and Energy Policy: A Five Nation Study of Politics, Innovation, and Social Change (1994), Transitions to Alternative Energy Systems: Entrepreneurs, New Technologies, and Social Change (1984), and The Shaping of Social Organization: Social Rule System Theory and Its Applications.

Ewa Roszkowska received her Ph.D. in Mathematics at the University of Warsaw in 1995. She teaches at the Faculty of Economics of Białystok University and at the Bialystok School of Economics. Her main interest concerns applications of mathematics, especially game theory, in economics. She has published many papers on negotiations, generalizations of game theory, consumer behavior and non-linear dynamic analysis in economic models. She is an author or co-author of more than 40 research articles or chapters in books. She participated in the Research Group “Procedural Approaches to Conflict Resolution” at the University of Bielefeld (ZiF), Germany, in 2001–2002. In 2002 she was also appointed a visiting professor at the same university.

Chapter 6. COMPARING ECONOMIC DEVELOPMENT AND SOCIAL WELFARE IN THE OECD COUNTRIES: A Multicriteria Analysis Approach

Evangelos Grigoroudis, Michael Neophytou and Constantin Zopounidis

Evangelos Grigoroudis is Assistant Professor at the Technical University of Crete, Department of Production Engineering and Management. He received his diploma in Production and Management Engineering and the M.Sc. and Ph.D.
degrees in Decision Sciences and Operations Research from the Technical University of Crete. He acts as a reviewer for scientific journals and books, and is the author of a book on the measurement of service quality and of a large number of research reports and papers in scientific journals and conference proceedings on multiple criteria analysis, consumer behaviour, and customer satisfaction. His research interests include operational research, multicriteria decision analysis, management and control of quality, and decision support systems.

Michael Neophytou has a diploma in Production Engineering and Management (Technical University of Crete). He is the production supervisor at the feed factory of Mills of Crete and is currently attending a master’s course in management science at the Department of Production Engineering and Management at the Technical University of Crete. His research interests include multicriteria decision analysis, total quality management, strategic planning and customer satisfaction analysis.
Constantin Zopounidis is Professor of financial management and operations research at the Department of Production Engineering and Management, Technical University of Crete, Greece. His research interests include multiple criteria decision making, financial engineering and financial risk management. He has published over 300 refereed papers in such journals as Decision Sciences, European Journal of Operational Research, Decision Support Systems, The Journal of the Operational Research Society, Expert Systems with Applications, Global Finance Journal, International Journal of Intelligent Systems in Accounting, Finance and Management, and Computational Economics. He has edited or co-edited more than 35 books on financial management and multicriteria decision aid, and is Editor-in-Chief of two journals: (1) Operational Research: An International Journal and (2) The Journal of Financial Decision Making.

Chapter 7. THE ENLIGHTENMENT, POPPER AND EINSTEIN

Nicholas Maxwell

Nicholas Maxwell has devoted much of his working life to arguing that we need to bring about a revolution in academia so that it seeks and promotes wisdom and does not just acquire knowledge. He has published five books on this theme: What’s Wrong With Science? (1976), From Knowledge to Wisdom (Blackwell, 1984), The Comprehensibility of the Universe (Oxford University Press, 1998), The Human World in the Physical Universe (Rowman and Littlefield, 2001) and Is Science Neurotic? (Imperial College Press, 2004): see www.nick-maxwell.demon.co.uk. He has also published many papers on such diverse subjects as scientific method, the rationality of science, the philosophy of the natural and social sciences, the humanities, quantum theory, causation, the mind-body problem, aesthetics, and moral philosophy. For nearly thirty years he taught philosophy of science at University College London, where he is now Emeritus Reader and Honorary Senior Research Fellow.

Chapter 8.
VALUE FOCUSED MANAGEMENT (VFM): Capitalizing on the Potential of Managerial Value Drivers

Boaz Ronen, Zvi Lieber and Nitza Geri

Boaz Ronen is a Professor of Technology Management and Value Creation at Tel Aviv University, Faculty of Management. He holds a B.Sc. in Electronics Engineering, and an M.Sc. and a Ph.D. in Business Administration. Prior to his academic career he worked for over 10 years in the hi-tech industry. He has consulted for numerous corporations, healthcare organizations and government agencies worldwide, and has been a visiting professor at leading business schools. Prof. Ronen has published over 100 papers in leading academic and professional journals, and has co-authored four books.
Zvi Lieber is a financial and investment consultant. He is an actuary and an economist, and holds an MBA and a Ph.D. in business administration from the University of Chicago. Prior to his early retirement, Dr. Lieber was a faculty member at Tel Aviv University, The Leon Recanati Graduate School of Business Administration, for about 25 years. He has lectured in accounting, costing, finance and corporate value creation. Zvi Lieber is a member of several boards of directors and public committees.

Nitza Geri is a lecturer at the Department of Management and Economics at The Open University of Israel. She holds a B.A. in Accounting and Economics, and an M.Sc. and a Ph.D. in Technology and Information Systems Management from Tel-Aviv University. She is a CPA (Isr.), and prior to her academic career she had over 12 years of business experience. Her main research interests are strategic information systems, electronic commerce, economics of information goods and managerial accounting.

Chapter 9. ZELENY’S HUMAN SYSTEMS MANAGEMENT AND THE ADVANCEMENT OF HUMANE IDEALS

Alan E. Singer

Alan E. Singer is a Reader at the Department of Management, University of Canterbury, Christchurch, New Zealand. He was the John L. Aram Professor of Business Ethics at Gonzaga University, Spokane, Washington, USA, in 2004–5. He has often visited the CBA at the University of Hawaii at Manoa, as an Erskine fellow and on sabbatical. He has over 125 publications (see full c.v. on the Canterbury website). Many of Alan’s papers involve the interplay of qualitative/quantitative, judgemental/analytic and economic/ethical factors in areas such as investment and disinvestment, business-government relations, and social and environmental issues. They have explored the linkages between ethics, management strategies and entrepreneurship.
Recent papers have focused on the idea of a strategy~ethics dualism and have used this to prescribe augmentations to business strategies in the areas of healthcare, poverty and intellectual property. He has served on several journal editorial boards. Alan is the worldwide book review editor for Human Systems Management and cordially invites email submissions of book review articles that are consistent with the journal’s aims and objectives.
Chapter 10. OUTSOURCING RISKS ANALYSIS: A PRINCIPAL-AGENT THEORY-BASED PERSPECTIVE

Hua Li and Haifeng Zhang
Hua Li is Professor of Management Science and Engineering in the School of Economics and Management at Xidian University. He holds an M.S. in mathematics from Shaanxi Normal University and a Ph.D. in mechanical manufacturing from Xidian University. Dr. Hua Li has been awarded the 3rd Prize for the Advancement of Science and Technology by the Education Department of China and the 2nd Prize for the Advancement of Science and Technology by the Government of Shaanxi Province. He has authored over 60 journal articles and two books, ranging from biomathematics and nonlinear dynamic analysis to decision-making, supply chain management, industrial engineering and information technology outsourcing. He completed his postdoctoral report, Study on Development Schemas of the BPO Industry in Xi’an, under his postdoctoral supervisor, Professor Yingluo Wang (2006). He is currently working on two scientific research projects in the field of service outsourcing, supported by the governments of Shaanxi Province and the city of Xi’an, respectively.
Haifeng Zhang is a second-year graduate student in the School of Economics and Management at Xidian University (Xi’an), supervised by Hua Li. His research focuses on risk analysis and control of outsourcing, supervisory and incentive mechanisms for outsourcing, and supply chain management. He is preparing his dissertation in the field of information technology outsourcing. His master’s major is Management Science and Engineering, and he received his bachelor’s degree in Industrial Engineering. In 2006, he took part in the “Study on BPO Development Strategy for Xi’an Software Industry”, a soft science research project in Shaanxi Province, in which he was in charge of the investigation and analysis of the current situation of the Xi’an BPO industry. He was also awarded an Infineon (Xi’an) scholarship in 2006.
Chapter 11. MOBILE TECHNOLOGY: Expanding the Limits of the Possible in Everyday Life Routines

Christer Carlsson and Pirkko Walden
Christer Carlsson is Director of the Institute for Advanced Management Systems Research and a professor of management science at Åbo Akademi University in Åbo, Finland. He is a Fellow of the International Fuzzy Systems Association, an Honorary Member of the Austrian Society for Cybernetics and an Honorary Chairman of the Finnish Operations Research Society. He is in the Steering Group of the European Centre for Soft Computing in Oviedo, Spain, and in the Steering Group of the BISC program at UC Berkeley. Professor Carlsson received his DSc (BA) from Åbo Akademi University in 1977, and has lectured extensively at various universities in Europe, the U.S., Asia and Australia. He has organised and managed several research programs in industry in his specific research areas: knowledge based systems, decision support systems and soft computing, and has also carried out theoretical research in multiple criteria optimisation and decision making, fuzzy sets and fuzzy logic, and cybernetics and systems research.
Recent research programs with extensive industrial cooperation include Smarter (reducing fragmentation of working time with modern information technology), SmartBulls, SoftLogs (eliminating demand fluctuations in the supply chain with fuzzy logic), Waeno (improving the productivity of capital in giga-investments using fuzzy real options), MetalIT (knowledge management and foresight in the metal industry), OptionsPort (optimal R&D portfolios where R&D projects are fuzzy real options), Imagine21 (foresight of new telecom services using agent technology), Chimer (mobile platforms for sharing the cultural heritage among European school children), AssessGrid (risk assessment and management for grid computing) and Enabling Technologies for Mobile Services (mobile technology based products and services with enabling technologies; a national Finnish research program with an international partner network in France, Germany, Austria, the UK, Hong Kong, Singapore and the USA). He is on the editorial boards of several journals, including Electronic Commerce Research and Applications, Fuzzy Sets and Systems, ITOR, Cybernetics and Systems, Scandinavian Journal of Management, Belgian Journal of Operational Research, Intelligent Systems in Accounting, Finance and Business, and Group Decision and Negotiation. He is the author of 4 books, an editor or co-editor of 5 special issues of international journals and 12 books, and has published more than 240 papers. His most recent monographs are Fuzzy Reasoning in Decision Making and Optimization (with Robert Fullér), Studies in Fuzziness and Soft Computing Series, Springer-Verlag, Berlin/Heidelberg, 2002, and Fuzzy Logic in Management (with Mario Fedrizzi and Robert Fullér), Kluwer, Dordrecht, 2003.
Pirkko Walden, Deputy Director of the Institute for Advanced Management Systems Research and Leader of the TUCS Mobile Commerce Laboratory, is a professor of marketing and information systems at Åbo Akademi University. She is an Area Editor of the Journal of Decision Systems and serves as a reviewer for several international journals and international conferences. She has published 2 monographs, 3 edited books and more than 100 papers, of which more than 50 are peer-reviewed articles.

Chapter 12. INFORMED INTENT AS PURPOSEFUL COORDINATION OF ACTION

Malin Brännback

Malin Brännback is Professor of International Business at Åbo Akademi University, Department of Business Studies, and Docent at the Swedish School of Economics and Business Administration and the Turku School of Economics and Business Administration in Finland. She has held a variety of teaching and research positions in such fields as strategic management, international marketing, and decision-making processes. She also holds a degree in pharmacy. She has published widely on entrepreneurship, strategic management, biotechnology, and other topics in articles, monographs, and conference presentations.

Chapter 13. COMPETENCE SET ANALYSIS AND EFFECTIVE PROBLEM SOLVING

Po-Lung Yu and Yen-Chu Chen

Po-Lung Yu, Distinguished Professor of National Chiao-Tung University, Taiwan, and previously (1977–2004) Carl A. Scupin Distinguished Professor of the University of Kansas, was raised in Taiwan and further educated and trained in the USA. He earned a B.A. in International Business (1963) from National Taiwan University, and a Ph.D. in Operations Research and Industrial Engineering (1969) from Johns Hopkins University. Dr. Yu taught at the University of Rochester (1969–73, where he was proud to be the dissertation advisor of M. Zeleny) and the University of Texas at Austin (1973–77).
He has won awards for outstanding research and teaching, including the Edgeworth-Pareto Award from the International Society on Multiple Criteria Decision Making (1992). He has published 16 books and more than 150 professional articles in the areas of habitual domains/human software, competence set analysis, multicriteria decision making, optimal control, differential games, mathematical programming, optimization theory and their applications. He served as the Area Editor of Operations Research—Decision
Making (1994–98) and has served as an Associate Editor of the Journal of Optimization Theory and Applications since 1977, in addition to serving as an Advisory Editor of a number of journals. Professor Yu, recognized internationally as a remarkable thinker, scholar, teacher and advisor, has given many keynote addresses around the world, both academic and public. His audiences on habitual domains, sometimes numbering in the thousands, have included professors, students, corporate executives, ministers, military generals, monks, nuns, housewives, and prison inmates.

Yen-Chu Chen is a Ph.D. student at the Institute of Information Management, National Chiao Tung University. She earned a B.A. in Management Information Systems from the National Chengchi University in 1991 and an M.B.A. from the George Washington University in 1993. Prior to pursuing doctoral studies, she worked as a systems analyst in the Information Center of Fuhwa Bank, Taiwan. She has been a lecturer in the Department of Information Management at Hsiuping Institute of Technology since 2000. Her research interests include habitual domains and competence set analysis. She is currently working with Professor P. L. Yu on her graduate research.

Chapter 14. INFORMATION AND KNOWLEDGE STRATEGIES: TOWARDS A REGIONAL EDUCATION HUB AND HIGHLY INTELLIGENT NATION
Thow Yick Liang

Liang Thow Yick is an Associate Professor in Technology and Strategy at the Lee Kong Chian School of Business, Singapore Management University. Among his previous appointments, he was the Head of Technology at the Singapore Management University and an Associate Professor at the Faculty of Business Administration, National University of Singapore.
He has published in a wide range of journals, including Information Processing and Management, Human Systems Management, Information and Management, International Journal of Human Resources Development and Management, Information Technology and Behavior, Journal of Mathematical Physics, Physical Review D, and Nuovo Cimento. He has also been invited to contribute articles to the Encyclopedia of Computer Science and Technology and the Encyclopedia of Library and Information Science. Currently, Professor Liang's main research domain encompasses new leadership and management philosophy and practices for intelligent human organizations and their interacting agents, with a special focus on the evolution and coevolution dynamics of complex adaptive systems. He has conceptualized the Intelligent Organization Theory and an Intelligence Strategy that encompasses the 3C-OK framework and the Intelligent Person Model. He has also published a book entitled Organizing Around Intelligence (World Scientific, 2004).
Chapter 15. NEEDED: PRAGMATISM IN KM
Zhichang Zhu

Zhichang Zhu's formal education stopped when he was sixteen, due to China's 'Cultural Revolution'. Without a first degree, he obtained an M.Sc. in Information Management (1990) and a Ph.D. in Management Systems and Sciences (1995), sponsored by British scholarships. Zhichang has been a communist Red Guard, farm labourer, shop assistant, lorry driver, corporate manager, assistant to the dean of a business school, software engineer, systems analyst, business consultant, senior lecturer and visiting professor, in China, Japan, Germany, Singapore, Sri Lanka and England. His current research focuses on strategy and knowledge management from a comparative institutional perspective. Zhichang has published in Human Relations, Journal of the Operational Research Society and Organisation Studies. Address: University of Hull Business School, Hull, HU6 7RX, UK. Email:
[email protected].
Chapter 16. KNOWLEDGE MANAGEMENT PLATFORMS AND INTELLIGENT KNOWLEDGE BEYOND DATA MINING
Yong Shi and Xingsen Li

Professor Yong Shi is currently the Director of the Chinese Academy of Sciences Research Center on Data Technology & Knowledge Economy and Assistant President of the Graduate University of the Chinese Academy of Sciences. He has been the Charles W. and Margre H. Durham Distinguished Professor of Information Technology at the College of Information Science and Technology, Peter Kiewit Institute, University of Nebraska, USA, since 1999. Dr. Shi's research interests include business intelligence, multiple criteria decision making, data mining, information overload, and telecommunication management. He has published seven books, more than 100 papers in various journals, and numerous conference proceedings papers. He is the Editor-in-Chief of the International Journal of Information Technology and Decision Making (SCI), an Area Editor of the International Journal of Operations and Quantitative Management, and a member of the editorial boards of a number of academic journals, including the International Journal of Data Mining and Business Intelligence. Dr. Shi has received many distinguished awards, including the Outstanding Young Scientist Award, National Natural Science Foundation of China, 2001; Member of Overseas Assessor for the Chinese Academy of Sciences, May 2000; and Speaker of the Distinguished Visitors Program (DVP) for 1997–2000, IEEE Computer Society. He has consulted or worked on business projects for a number of international companies in data mining and knowledge management.
Xingsen Li is a doctoral student in management science and engineering at the School of Management, Graduate University of the Chinese Academy of Sciences. He has been the vice president of the youth academic department of the Extension Engineering Specialized Committee, China Association of Artificial Intelligence. His research interests include business intelligence, data mining, knowledge management, enterprise operation management and human system management. His career path is unusual: he received a bachelor's degree in mechanics from Zhejiang University in 1991 and, after working in industry for six years, earned a master's degree in computer-aided design at the China University of Mining and Technology (Beijing), then worked at the Institute of Software, Chinese Academy of Sciences. During that five-year period he served as a software programmer, systems analyst, project manager, enterprise management consultant and vice general manager; these varied positions gave him a wealth of practical experience. He analyzes problems systemically from the perspectives of enterprise management, information technology and knowledge management, drawing on Extension Theory and traditional Chinese wisdom. He has published 26 papers in various Chinese journals and international conference proceedings.

Chapter 17. CONTINUOUS INNOVATION PROCESS AND KNOWLEDGE MANAGEMENT
Ján Košturiak and Róbert Debnár

Professor Ján Košturiak (1961) is Managing Director of Fraunhofer IPA Slovakia, Professor of Industrial Engineering at the University of Zilina (Slovakia) and the Technical University of Kosice (Slovakia), Honorary Professor at the University of Applied Sciences FH Ulm (Germany), and Professor at ATH Bielsko-Biala (Poland). He is a co-founder of IPA Slovakia and the Academy of Productivity and Innovations.
Associate Professor Róbert Debnár (1968) is Deputy Director and co-founder of Fraunhofer IPA Slovakia and Associate Professor at the Technical University of Kosice, Faculty of Manufacturing Technologies. Together they have co-authored the books Factory 2001, Just in Time, Simulation of Manufacturing Systems, Lean and Innovative Enterprise, Fractal Company, Design of Manufacturing Systems, and CA Technologies, as well as many papers in Slovakia, the Czech Republic, the USA, the UK, Poland, Germany, Austria and Switzerland. Ján Košturiak and Róbert Debnár have been involved in many research and industrial projects, e.g. ILIPT, Impex, Dycomans, BMW, DaimlerChrysler, Siemens VDO, Nemak Rautenbach, Volkswagen, and Skoda Auto.
Chapter 18. AN EXPLORATORY STUDY OF THE EFFECTS OF SOCIO-TECHNICAL ENABLERS ON KNOWLEDGE SHARING
Sue Young Choi, Young Sik Kang and Heeseok Lee

Sue Young Choi is a Doctoral Candidate in MIS at the Graduate School of Management, Korea Advanced Institute of Science and Technology, Korea. She received her M.S. from the Korea Advanced Institute of Science and Technology and her B.S. from Yonsei University. Her research focuses on knowledge management, electronic commerce, and organizational memory. Her recent publications appear in Computers & Industrial Engineering.

Young Sik Kang is a Doctoral Candidate in MIS at the Graduate School of Management, Korea Advanced Institute of Science and Technology, Korea. He received his M.S. in Industrial Engineering from Pohang University of Science and Technology and his B.S. from Korea University. His research interests include electronic commerce, continuous usage of online services, online social network services, and knowledge management.
Heeseok Lee is a Professor at the Graduate School of Management, Korea Advanced Institute of Science and Technology, Korea. He has been a Director of the KAIST Executive Program since 1998. He received his Ph.D. in MIS from the University of Arizona, an M.S. from the Korea Advanced Institute of Science and Technology, and a B.S. from Seoul National University. He was previously on the faculty of the University of Nebraska at Omaha. His research interests include business sustainability and IT strategy. His recent publications appear in Journal of Management Information Systems, Information and Management, Journal of Organizational Computing and Electronic Commerce, and Information Systems.

Chapter 19. URBAN SYSTEM AND STRATEGIC PLANNING: TOWARDS A WISDOM SHAPED MANAGEMENT
Luigi Fusco Girard

Luigi Fusco Girard is full professor of Economics and Environmental Evaluations, professor of Urban Economics at the Faculty of Architecture, and professor of Integrated Environmental Evaluations at the Faculty of Engineering, University of Naples Federico II. He is President of the Ph.D. School of Architecture and coordinator of the Ph.D. Programme in Evaluation Methods for the Integrated Conservation of Architectural, Urban and Environmental Heritage at the University of Naples Federico II. Some of his recent publications are: The Human Sustainable City.
Challenges and Perspectives from the Habitat Agenda, L. Fusco Girard, B. Forte, M. Cerreta, P. De Toro, F. Forte (eds.), Ashgate, Aldershot, 2003; Energia, Bellezza, Partecipazione, L. Fusco Girard, P. Nijkamp (eds.), Angeli, Milano, 2004; and Città attrattori di Speranza, L. Fusco Girard, N. You (eds.), Angeli, Milano, 2006.

Chapter 20. Digest® WISDOM: COLLABORATE FOR WIN-WIN HUMAN SYSTEMS
Nicholas C. Georgantzas

Nicholas Constantine Georgantzas is Professor of Management Systems and Director of the System Dynamics Consultancy, Fordham University Business Schools. An Associate Editor of System Dynamics Review, he is also a senior consultant to management and a legal adviser, specializing in simulation modeling for learning in strategy, production and business process (re)design. Author of Scenario-Driven Planning (Greenwood Press, 1995), he has published over 80 articles in refereed scholarly journals, conference proceedings and edited books. His publications include articles on systems thinking, knowledge technology and strategy design, focusing on the necessary theory and tools for learning in and about the dynamic systems in which we all live.
Author Index

Brännback, M.  218
Burns, T.R.  75
Carlsson, C.  202
Chen, Y.-C.  229
Choi, S.Y.  303
Clímaco, J.C.N.  17
Craveirinha, J.M.F.  17
Debnár, R.  289
Fusco Girard, L.  316
Georgantzas, N.C.  341
Geri, N.  149
Grigoroudis, E.  108
Huang, C.-Y.  47
Kang, Y.S.  303
Košturiak, J.  289
Lee, H.  303
Li, H.  195
Li, X.  272
Liang, T.Y.  253
Lieber, Z.  149
Maxwell, N.  131
Mistri, M.  3
Neophytou, M.  108
Olson, D.L.  v, 65
Pascoal, M.M.B.  17
Ronen, B.  149
Roszkowska, E.  75
Shi, Y.  v, 272
Singer, A.E.  176
Stam, A.  v, 65
Tzeng, G.-H.  47
Walden, P.  202
Yu, P.-L.  229
Zhang, H.  195
Zhu, Z.  269
Zopounidis, C.  108