Dedication

This book is dedicated to Stephanie J. Greisler (posthumously) and Katherine N. Stupak.

With special thanks to the U.S. team of Brenda Adams, York College of Pennsylvania, and Julie Spadaro, project editor at Taylor & Francis; and Doug Barry and Paula Lane at Alden Prepress in the U.K.
Preface

One day the father of a very wealthy family took his son on a trip to the country with the firm purpose of having his son develop an understanding of how poor people live. He left his son at a farm to spend time with what would be considered a very poor family. The father retrieved his son a few days later. On the return trip home the father asked his son, "How was your time on the farm?"

"It was great, Dad."

"Did you see how poor people live?" the father asked.

"Oh yeah," said the son.

"So, tell me, what did you learn?" asked the father.

The son answered:

† I saw that we have one dog that we keep inside; they have four dogs that run through the fields.
† We have a pool that reaches to the middle of our garden; they have a creek that has no end.
† We have imported lanterns in our garden; they have the stars at night.
† Our patio reaches to the front yard; they have the whole horizon.
† We have a small piece of land to live on; they have fields that go beyond our sight.
† We buy our food; they grow theirs.
† We have walls around our property to protect us; they have friends to protect them.
† We have servants who serve us; they serve each other.
The boy's father was speechless. Then his son added, "Thanks, Dad, for showing me how poor we are."
—Author Unknown
Isn't perspective an enlightening and sometimes a paradoxical thing?

Acceptance of technology without understanding, adoption without analysis, and use without questioning evince behavior not too far removed from cultures where magic and the occult are a part of the fabric of everyday life. In that we are societies increasingly awash in technology, having knowledge of the vocabulary, facts, and concepts surrounding technology is important; however, wisdom, discernment, and dialogue regarding the applications, implications, and ramifications of technology are vital for citizenries increasingly experiencing the byproducts (intended and unintended) of technology. The disparity between the deep penetration of advanced technology throughout the general population and the fundamental lack of understanding of the principles and collateral effects emanating from these technological resources portends a potentially volatile polity. This, in tandem with burgeoning socio-technical systems, raises important public policy questions. Foremost among them is the matter of technological determinism. The specter of science discovering, technology directing, and man conforming must be avoided. B.F. Skinner aptly states, "The real problem is not whether machines think but whether men do."

The nine chapters of this book that follow the introduction are configured to challenge the reader to consider technology from several perspectives: markets and the public sector (a macro-perspective) and organizations, groups, and individual consumers (a micro-perspective). By engaging this book, the reader will be better able to face the challenges posed by the concluding article "Organizational Passages" (Stupak and Martin). May our praxis abilities be buoyed, may the quality of our thought be richer, and may our inquiry into contemporary technological issues be enhanced through this text.

Once upon a time we were just plain people. But that was before we began having relationships with mechanical systems. Get involved with a machine and sooner or later you are reduced to a factor.
—Ellen Goodman, "The Human Factor," The Washington Post, January 1987
Peace.
Editors

David Greisler, DPA ([email protected]) is Assistant Professor of Business at York College of Pennsylvania. Teaching operations management and business strategy in both the undergraduate and graduate programs, Dr. Greisler also does focused consulting in both service and manufacturing settings, emphasizing process improvement, organizational assessment, strategic planning, and executive coaching. Prior to joining York College in the fall of 2002, he spent twenty-two years in the healthcare industry. Fifteen of those years were with WellSpan Health, an integrated healthcare delivery system serving south-central Pennsylvania. In his last five years with WellSpan he served as Chief Operating Officer of WellSpan Medical Group, a 235-physician multispecialty group practice with 51 office locations in York and Adams counties. Dr. Greisler holds an undergraduate degree from Johns Hopkins University, a Master in Health Service Administration from George Washington University, and a Master and a Doctorate in Public Administration from the University of Southern California. In 1997 he was appointed Senior Academic Fellow at Mount Vernon College in Washington, D.C. The primary author of more than 50 professional publications, he has presented his writings at both national and international conferences. Dr. Greisler's life is made complete through his faith in Christ and the joy of raising his six-year-old son Luke.

Ronald J. Stupak, PhD ([email protected]) is a recognized authority on organizations undergoing major change. He was a tenured Senior Professor of organizational behavior in the School of Public Administration at the University of Southern California, an executive, a line manager, and a public servant. In 1994, while at USC, he was appointed Distinguished Scholar in Residence at the National Center for State Courts in Williamsburg, Virginia. Earlier in his career, as a federal executive, he helped to establish the Federal Executive Institute (FEI) in Charlottesville, Virginia. In 1996 he received the Warren E. Burger Award for his outstanding contributions to court management and judicial leadership. He has also received outstanding teaching awards at Miami University, USC, and the FEI. His reputation as both theorist and practitioner is illustrated by the following: he has served on the editorial boards of the Public Administration Review, the Journal of Urban Affairs, the Justice System Journal, and The Public Manager, and as Editor-in-Chief of The Federal Management Journal from 1987 to 1990. Currently, he is Co-Editor-in-Chief of the Journal of Power and Ethics and on the editorial boards of the Journal of Management History and Public Administration and Management. He has written over 175 books and articles on a wide range of issues, including domestic and foreign policy, public administration, politics, organizational excellence, executive development, and strategic planning. In addition, Stupak has served as a consultant for hundreds of organizations, including the Federal Bureau of Prisons, Hewlett-Packard, the Federal Emergency Management Agency, the Central Intelligence Agency, Johnson & Johnson, the Supreme Courts of Wisconsin and New Jersey, the U.S. Marine Corps, the York Health System (York, Pennsylvania), and the Anne Arundel Health System (Annapolis, Maryland). Clearly, he has been active at the cutting edge of public/private partnerships and the changing workforce, as evidenced in his numerous consultations and publications in the areas of workforce improvement, employee productivity, leadership performance, and customer satisfaction.
Stupak received his BA (summa cum laude) from Moravian College and his MA and PhD from The Ohio State University; in 1998 he was awarded an Honorary Degree of Doctor of Laws from Moravian College. In the past decade, he has focused on private consulting, becoming actively and critically involved in several new areas of interest: leadership development, transitions in family businesses, the organizational impact of changes in health care, and executive coaching.
Table of Contents

Chapter 1   Introduction ............................................................ 1
Chapter 2   The Culture of Technology: Savants, Context, and Perspectives ...... 13
Chapter 3   Public Sector Perspectives on Technology ........................... 35
Chapter 4   Technology Transfer: Innovations, Concerns, and Barriers .......... 219
Chapter 5   Ethical Issues and Concerns ....................................... 275
Chapter 6   Managing Change and Measuring Performance ......................... 401
Chapter 7   Technology's Contribution to Quality .............................. 529
Chapter 8   National Security Issues .......................................... 573
Chapter 9   Negotiating Technology Issues ..................................... 665
Chapter 10  Technology and the Professions .................................... 771
Index ......................................................................... 845
1
Introduction
CONTENTS
Science Versus Technology ................................................ 2
What Is Technology? ...................................................... 2
Who Is in Charge: Man or Machine? ........................................ 4
Understanding Technology ................................................. 5
Ethical Considerations ................................................... 6
Technology: Buy It Once—Own It Forever ................................... 7
Cheap Speed .............................................................. 8
Electronics Technology: The Great Enabler ................................ 8
Adaptation, Forecasting, and Opportunity ................................. 9
Trading Technology for Political Goals .................................. 10
The Next Revolution: Nanotechnology and Biotechnology ................... 11
Managing Change: Therein Lies the Rub ................................... 12
People are the quintessential element in all technology. Once we recognize the inescapable human nexus of all technology, our attitude toward the reliability problem is fundamentally changed.
—Garrett Hardin, Skeptic, July–August 1976
It is not enough that you should understand about applied science in order that your work may increase man's blessings. Concern for man himself and his fate must always form the chief interest of all technical endeavors, concern for the great unsolved problems of organization of labor and the distribution of goods, in order that the creations of our mind shall be a blessing and not a curse to mankind. Never forget this in the midst of your diagrams and equations.
—Albert Einstein, in an address at Cal Tech, 1931
In isolation, technology is a value-free commodity. Yet it is also the principal tool that enables humans to determine their destiny. Technology has evolved beyond a mere set of tools to assist the human race to build, travel, communicate, cure itself, or annihilate itself many times over. Technological tools have progressed to the point where the accumulation, storage, and manipulation of massive data may soon cross the threshold into knowledge. Knowledge, in turn, bestows unparalleled power on those capable of effectively wielding it. The ability to create technology is a singular gift entrusted only to humans. This gift manifests itself in the creation of machines and creative techniques that amplify mental and physical capabilities. Technology multiplies the ability to do work, accomplish more complex goals, explore and understand the physical universe, and overcome the challenges, obstacles, and threats of the
day. Indeed, this application of practical creative genius offers the potential for individual self-fulfillment, empowerment, and happiness. However, technological marvels and ubiquitous tools of convenience bring a disquieting overdependence and vulnerability to disruption, criminality, abuse, and mismanagement, as well as physical and spiritual isolation.

Herein lies the true challenge—to develop management skills commensurate with the physical capabilities afforded by the continuous evolution of technology. Can we marshal the necessary foresight and management acumen to derive the maximum benefit from these new tools? Or will their latent potential to improve the human condition remain unrealized? While it is certain that we will continue to shape and apply technology to our daily needs, it is equally certain that we will also fall short of extracting technology's full potential. From a management perspective, complacency, arrogance, and simple laziness will continue to be human obstacles to be overcome or mitigated. The perpetual lag in the development of analytical methods, auditable decision-making processes, and legal protections will continue to inhibit the ability, for example, to take maximum advantage of the information-processing capabilities that are continually advancing. Striving to perfect the human side of the equation in the management and implementation of technological tools will continue for the foreseeable future. While our impressive array of tools allows us to tackle larger and more complex problems, it also enables us to make bigger, more far-reaching, and more dangerous mistakes.

The application and management of technology mold a nation's social, political, economic, educational, medical, and military interests. For many years, analysts have examined specific technologies—basic and applied, civil and military—in assessing their impact upon weapons systems and commercial systems alike. Many studies have purported to reveal the negative consequences of technological growth upon the environment. Most such studies share a morbid Malthusian tone of hopeless foreboding in which mankind will suffer from mass starvation, catastrophic climate change, and all manner of unavoidable victimization. Although these analyses provide entertaining reading and even occasional insight into the global technology outlook, they do not accurately elucidate the future of global technological growth, capture its inherent uncertainties, or assess its impact on the four key elements of national power—society, politics, the military, and the economy. This handbook addresses the management, implementation, and integration of technology across a wide variety of disciplines while highlighting the lessons learned along the way.
SCIENCE VERSUS TECHNOLOGY

Technology is a difficult term to pin down into a universally accepted definition. So much of what is interpreted as "technology" represents impressionistic categories peculiar to broad fields of human activity. However, before delving into the diverse definitions of "technology," it is important to address the larger issue of the differences between "science" and "technology." Much confusion reigns concerning boundaries within the intertwining relationship between science and technology. These two fields coexist in the space known as "applied science." This coexistence, however, is highly complex, and the area is very gray and indistinct. Rather than simply stating that technology picks up where science ends, or relying on one of hundreds of other imprecise adages, it is more useful to draw effective distinctions between the two by highlighting their most fundamental differences (Table 1.1).
WHAT IS TECHNOLOGY?

Now that we have arrayed some of the differences between science and technology, we can proceed to explore the diversity of views regarding what technology is. As revealed in Table 1.2, no single or unified definition of "technology" exists across the professions. Table 1.2 offers a representative sample of useful definitions created by reputable organizations. It is up to readers to choose the most useful construct for their field of activity.
TABLE 1.1 Science Versus Technology

Science
Goal: the pursuit of knowledge and understanding for its own sake.
Corresponding scientific processes:
† Discovery (controlled by experimentation)
† Analysis, generalization, and creation of theories
† Reductionism, involving the isolation and definition of distinct concepts
† Making virtually value-free statements
† The search for and theorizing about cause (e.g., gravity, electromagnetism)
† Pursuit of accuracy in modeling
† Drawing correct conclusions based on good theories and accurate data
† Experimental and logical skills
† Using predictions that turn out to be incorrect to falsify or improve the theories or data on which they were based

Technology
Goal: the creation of artifacts and systems to meet people's needs.
Key technological processes:
† Design, invention, and production
† Analysis and synthesis of design
† Holism, involving the integration of many competing demands, theories, data, and ideas
† Activities always value-laden
† The search for and theorizing about new processes (e.g., control, information)
† Pursuit of sufficient accuracy in modeling to achieve success
† Making good decisions based on incomplete data and approximate models
† Design, construction, testing, planning, quality assurance, problem-solving, decision-making, interpersonal, and communication skills
† Trying to ensure, by subsequent action, that even poor decisions turn out to be successful
TABLE 1.2 Selected Definitions of "Technology"

† The application of scientific advances to benefit humanity. (www.sln.fi.edu/franklin/glossary.html)
† Application of knowledge to develop tools, materials, techniques, and systems to help people meet and fulfill their needs. (www.user.mc.net/~kwentz/eduspeak.html)
† In education, a branch of knowledge based on the development and implementation of computers, software, and other technical tools, and the assessment and evaluation of students' educational outcomes resulting from their use of technology tools. (www.ncrel.org/sdrs/areas/misc/glossary.htm)
† 1. Human innovation in action that involves the generation of knowledge and processes to develop systems that solve problems and extend human capabilities. 2. The innovation, change, or modification of the natural environment to satisfy perceived human needs and wants. (www.iteawww.org/TAA/Glossary.htm)
† The application of science to the arts. The advances in theoretical knowledge, tools, and equipment that drive industry. (www.bloomington.in.us/hoosiernet/CALL/telecommunity_94/glossary.html)
† The application of science and engineering to the development of machines and procedures in order to enhance the human condition or to improve human efficiency in some respect. (www.dsea.com/glossary/html/glossary2.html)
† The practical application of science to commerce or industry. (www.cogsci.princeton.edu/cgi-bin/webwn)
† Electronic media (such as video, computers, compact discs, lasers, audio tape, satellite equipment) used as tools to create, learn, explain, document, analyze, or present artistic work or information. (www.ncpublicschools.org/curriculum/ArtsEd/vglossar.htm)

(Continued)
TABLE 1.2 (Continued)

† In the context of export control: technical data, technical information, technical knowledge, or technical assistance. Any specific information and know-how (whether in tangible form, such as models, prototypes, drawings, sketches, diagrams, blueprints, manuals, software, or in intangible form, such as training or technical services) that is required for the development, production, or use of a good, but not the good itself. (www.llnl.gov/expcon/gloss.html and www.exportcontrols.org/glossary.html)
† Literally, "the study of methods"; equally, the study of skills. Often erroneously described as "applied science" (and thus assumed to be dependent on science for its theories), technology in practice develops empirically, frequently resolving tasks and dealing with exceptions and paradoxes via methods that are known to work without knowing, scientifically, just how they work. In this sense, much of science is better understood as "codified technology," the summation of skills in practice: it can be worked consistently, but we still cannot reduce it to a consistent system of order. (www.tomgraves.com.au/index.php)
† The methods of application of an art or science as opposed to mere knowledge of the science or art itself. (www.scientology.org/wis/wisger/gloss.htm)
† The production of goods and services that mankind considers useful. Technology is not the same as science, though in today's society the two are closely linked. Many of our products—our computers, our plastics, our medicines—are direct products of our knowledge of the behavior of atoms and molecules. However, it is not necessary to understand the science in order to make use of technology. Long before the chemistry of steel was understood, mankind knew how to make a better sword. (www.sasked.gov.sk.ca/docs/chemistry/mission2mars/contents/glossary/t.htm)
† The practical application of knowledge, especially in a particular area such as engineering. A capability given by the practical application of knowledge. A manner of accomplishing a task, especially using technical processes, methods, or knowledge. The specialized aspects of a particular field of endeavor. (www.projectauditors.com/Dictionary/T.html)
† The practical application of knowledge, especially in a particular area. (www.i-c-s-inc.com/glossary.htm)
† Applied science, i.e., natural science and the scientific method applied to solving practical problems. It usually considers at least the potential for commercial exploitation. (www.beta-rubicon.com/Definitions.htm)
† The practice, description, and terminology of any or all of the applied sciences that have practical value and/or industrial use. (www.unistates.com/rmt/explained/glossary/rmtglossaryt.html)

WHO IS IN CHARGE: MAN OR MACHINE?
“Science discovers, Technology makes, Man conforms.”—This sentiment was captured as the motto of the 1933 Chicago Century of Progress Exposition and expresses an important but overstated fear that the technology mankind creates will rule his destiny. This concept, broadly known as technological determinism, argues that society is forced to adjust to its machinery rather than make its machinery conform to human purposes. Some scholars, such as Jacques Ellul and Lewis Mumford, have even called for resistance to autonomous technology, for the restoration of human agency. While a popular theme in science fiction, the day of mankind’s subordination to machinery shows no signs of arriving.
Those who sound the determinist call often argue that military technology has twisted the relationship between man and machine to the point where society has reshaped itself expressly to facilitate the creation of technological marvels of warfare. Some have asserted that during the Cold War the U.S. conceded commercial markets to the Germans and Japanese because high levels of Pentagon spending siphoned off "large numbers of engineers, physicists, mathematicians and other scientists from export industries into defense related fields."

An important counterargument holds that technology is not autonomous or deterministic at all but socially constructed, driven by social, economic, and political forces that are human in origin and thus subject to human control. The semiconductor and related computer industries certainly support this view. Once driven primarily by the voracious needs of the defense sector for ever more rapid and sophisticated computational and design power, the industry is now steered by its much larger civilian side, with the commercial sector driving demand and research agendas. The defense sector has largely been relegated to the role of consumer and follower of industry capabilities.
UNDERSTANDING TECHNOLOGY

Is technology an irresistible force? Is it a force for positive change? Must it be accepted wholesale, or is it subject to a natural selection process where humans consciously or unconsciously filter technology by regulating and controlling the aperture through which it must pass before finding acceptance in daily human life? And once it passes through these gates and finds acceptance, is it actually understood by the general population? Does the population possess a vocabulary that indicates more than a passing familiarity with these new technologies? Or are these applications accepted as mere "tools" with little additional significance?

It is important for a citizenry awash in technology and the byproducts of science to have some knowledge of their basic facts, concepts, and vocabulary. Certainly, those who possess such knowledge have an easier time following news reports and participating in public discourse on science and technology. Curiously, the disparity between the deep penetration of advanced technology throughout the general population and the fundamental lack of understanding of the principles underlying the tools the public wields points to the presence of a rather large sub-class that can be termed "technical idiot savants." Technical idiot savants comprise technology users; technicians without depth; "six-month experts" with grandiose titles such as "Certified Network Engineer" or "Certified Web Designer" created by certificate-granting proprietary programs; help-desk workers who are just one step ahead of the clients they serve; and "the rest of us"—the end-users who are able to use canned programs to do their jobs but have no inkling of how they operate or what to do when something goes wrong. The compartmentalization of technical knowledge and skills is a key characteristic of the rise to prominence of technical idiot savants within our society.

Much has been written over the past several decades on the "inevitability" that the technological revolution will transform international, interpersonal, and business relations. Contemporary claims to that effect are just the latest to argue that technological change has profound effects on human and societal relations. In fact, the overwhelming theme in the social and scientific literature supports these presumptions. But are the effects of technological change as far-reaching as the literature suggests, and do they penetrate very deeply into the general culture, its organizations, or the psyche of its citizens? And where does the educational system enter into the process of understanding the role of technology in the physical world and day-to-day existence?

Intuitively, one could reasonably conclude that those living in a period of rapid change would have an intimate understanding of those forces. After all, historians and archaeologists often portray ancient cultures in such an idealized manner. A presumption often underlying their studies is that all
elements of the group, tribe, or civilization were aware of, or at least affected by, the technology of the time. While it is a virtual certainty that living conditions and patterns of activity were influenced by the contemporary state of technological development, it is by no means certain that all members of those societies understood the technology available to them or its underlying scientific basis. In all likelihood, they simply used technology as a tool without giving it much thought. Acceptance without understanding, adoption without analysis, and use without questioning evince behavior not too far removed from cultures where magic and the occult are part of the fabric of everyday life.

The assumption that a ubiquitous system impenetrable to casual understanding will be accepted as fact relates to the twenty-first-century willingness to follow technology's lead with little thought as to where it will go or what really goes on in that black box on the desk. It is this lack of transparency—an impenetrable mystery to most—and a shift away from the mechanical replication of man's labors to the mimicking of his thought process that creates a climate for ready acceptance in spite of a generalized bewilderment over how the technology works. Such a reality forces the next question: Is it important to understand a technology so broadly and readily accepted and applied throughout the population? The answer is an unequivocal yes.
ETHICAL CONSIDERATIONS

Inseparable from the importance of technical literacy and a basic understanding of the tools of the day are questions of responsibility on the part of both government and technology holders to adequately and accurately inform the public of the intricacies and consequences of technological advancements upon social policy. The current debate over human embryonic stem cell research marks a contemporary case in point. The moral dilemmas and issues posed by the recently evolved technological ability of scientists and technicians to effectively clone human stem cells in the hope of developing new therapeutic approaches to several fatal or debilitating health conditions are quite profound. Stakeholders in this debate fall into the four general camps described in Table 1.3.

Unfortunately, this debate is characterized by hyperbole, misinformation, and overstated claims by individuals and groups with much to gain or lose by its outcome. The intrinsic societal weakness pointed out by this increasingly politicized debate is the lack of an impartial arbiter capable of providing an accurate, believable, and understandable overview of this technology-enabled avenue of biomedical research.
TABLE 1.3 Stem Cell Debate Stakeholders

Victims groups: desperately seeking miracle cures.
Researchers and the pharmaceutical industry: seeking new therapeutic approaches and potentially large profits.
Moderate decision-makers: attempting to balance moral questions against uncharted scientific ground and overstated potential.
Religious groups: expressing concerns over sanctioning euthanasia-like programs.
TECHNOLOGY: BUY IT ONCE—OWN IT FOREVER

The technology investment choices facing business and government often entail long-term consequences. Life-cycle costs vary widely depending upon the application, scale of investment, and mission criticality of the technology chosen. The most extreme example of being wedded to an early technology choice is the military. As depicted in Figure 1.2, the long life cycles of military products are routinely measured in years for sensors and countermeasures, decades for small arms, scores and half-centuries for ships, and in the case of the B-52, a century for aircraft.
[FIGURE 1.1 Boundaries between engineering and science. Published by the Stevens Institute of Technology, it is a useful visualization of how science, applied science, and engineering interact: basic science (theoretical and experimental science developing and verifying fundamental principles), applied science (engineering principles, analytical methods, and technologies), exploratory development, product development, and product engineering.]
[FIGURE 1.2 Longevity of military aircraft. The chart tracks the F-15 (51+ years), KC-135 (79+ years), C-130 (86+ years), and B-52 (94+ years) from original design and procurement through projected retirement dates ranging from 2021+ to 2040+.]
Such long-term commitments to a particular technological design are usually not anticipated at a program's inception. It was never imagined by the designers of the B-52 in 1946, for example, that their creation would still be in active service in 2004 or that its life would be extended until at least 2040. Compared with the life cycle of large military systems, electronic subsystems and consumer or office products experience rapid change and turnover.

CHEAP SPEED

Despite warnings from some that Moore's law may be reaching its theoretical limits, so far silicon chip technology has continued to advance at a furious pace, with processing power increasing exponentially even as the cost per transistor continues to shrink (see Figure 1.3).
ELECTRONICS TECHNOLOGY: THE GREAT ENABLER

No single technology experiences more rapid, steady, and predictable patterns of change than microelectronics. Intel co-founder Gordon Moore predicted in 1965 that the semiconductor industry would be able to double the number of transistors on a single microprocessor every 18–24 months, resulting in a rapid turnover in generations of microcircuits. As the number of transistors doubles, so does processing speed, which in turn increases the power of computer systems in which the transistors are embedded (see Figure 1.3).

[FIGURE 1.3 Average transistor price and transistors per chip, 1970–2000. The average price per transistor fell from roughly $1.00 toward $0.0000001, while transistors per chip climbed from roughly one to 100,000,000. Source: Intel.]
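The compounding behind Moore's prediction is easy to state but easy to underestimate: a count that doubles every period T grows as N(t) = N0 · 2^(t/T), while per-transistor price falls by the same factor. A minimal sketch, assuming an idealized 24-month doubling period and illustrative 1971 starting values (the roughly 2,300-transistor Intel 4004 and a notional $1 per transistor; both are reference points chosen here, not figures from this chapter), reproduces the general shape of Figure 1.3:

```python
# Illustrative compounding under Moore's law (hypothetical starting values).
# N(t) = N0 * 2**(years / doubling_period); price per transistor falls inversely.

N0 = 2_300            # transistors on the Intel 4004 (1971), a common reference point
price0 = 1.0          # notional price per transistor in dollars, 1971 (assumed)
doubling_years = 2.0  # idealized doubling period (Moore cited 18-24 months)

for year in range(1971, 2011, 10):
    doublings = (year - 1971) / doubling_years
    transistors = N0 * 2 ** doublings        # count doubles each period
    price = price0 / 2 ** doublings          # price halves each period
    print(f"{year}: ~{transistors:,.0f} transistors/chip, ~${price:.7f}/transistor")
```

Run over three decades, the sketch lands within an order of magnitude of the chart's endpoints, which is about all an idealized fixed doubling period can promise.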
To date, Moore's prediction has held with uncanny accuracy. Many analysts have predicted the end of Moore's law, variously arguing that chipmakers have reached the technical limit regarding miniaturization of circuitry or that the physical capacity of silicon as a microchip substrate will soon reach its limit. However, continuous innovations in microlithography and enhanced scientific understanding of materials properties have repeatedly proved critics wrong.

While a boon to systems developers and purveyors of consumer electronics, the rapid upward spiral of information-processing capability presents a serious problem for weapons developers and others building systems expected to last decades or longer. In addition to long development and production lead times, planners must also account for system sustainability and multiple generations of technological advances over the typical decades-plus life of a major system. This means that as today's microprocessors rapidly become obsolete—having been succeeded by more advanced offspring—spare or replacement parts become increasingly difficult to acquire. As the mantle of microelectronics leadership and risk-taking has migrated from the public to the private sector, military procurement requirements are no longer the locomotive pushing technological development. In fact, the military has largely been relegated to the role of follower—one of innumerable implementers. As one customer among many, the military finds itself increasingly scavenging for spare parts to keep its equipment functioning properly. It confronts the problem of dependency upon an incompatible culture: an industry that finds it unprofitable to maintain a repair-and-replacement infrastructure for legacy technology versus a military infrastructure that is largely composed of legacy technology. This dynamic forces government into the curious position of spawning small suppliers—including manufacturers—of obsolete technology in order to keep its legacy systems functional.

How long Moore's law will continue to be applicable is a hotly debated topic within industry. Many experts believe that the physical limit of silicon-based technologies is close at hand. The rapid obsolescence of microelectronics will remain a challenge as industry and academia alike seek to develop successors to silicon-based devices. Incremental improvements are envisioned with the perfection of new substrate compounds such as gallium arsenide or ferritin (a protein found in both plants and animals, representing a biotechnology approach to microprocessor advancement). However, entirely new technologies like quantum computing may have a revolutionary impact upon the fundamental character of the industry. In any event, the promise of new materials and processing techniques makes it likely that Moore's law will still apply into the foreseeable future, and the parallel challenge of absorbing new technologies while managing obsolescence will remain as well.
ADAPTATION, FORECASTING, AND OPPORTUNITY

The U.S. has successfully adapted to both the agricultural revolution and the industrial revolution. It is now, along with most of the world, adapting to the ongoing information revolution. A parallel revolution on the verge of beginning is the nanotechnology revolution, which will enable the super-miniaturization of many commercial and military products with very low power requirements.

The complexity of variables and their interrelationships confounds most attempts to accurately predict the end-state of technological innovation and change. The following factors need to be considered in forecasting macro or societal changes: the effects of war, availability of capital, rate of change, government regulation or incentives, deliberate scientific research, competition, locus of production, dependency relationships, access to raw materials and markets, religion, accident, environment, innovative use of one idea in another field, and stumbling onto a solution while looking for another. Recognizing that the future holds many unknowns is crucial, as is the assignment of coefficients of influence, or weighting factors, to each of the identified variables, including the unknowns.
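The assignment of coefficients of influence can be made concrete with a toy weighted-scoring model. The factor names below are drawn from the list above, but the weights and scores are entirely hypothetical, chosen only to illustrate the mechanics, including an explicit weight reserved for the unknowns:

```python
# Toy weighted-scoring model for a technology forecast.
# Factors are drawn from the chapter's list; weights and scores are hypothetical.

factors = {
    # factor: (weight, estimated influence score on a 1-10 scale)
    "availability of capital":           (0.25, 7),
    "government regulation/incentives":  (0.20, 4),
    "competition":                       (0.20, 8),
    "access to raw materials/markets":   (0.15, 6),
    "rate of change":                    (0.10, 9),
    "unknowns (residual)":               (0.10, 5),  # explicit weight for the unknowns
}

# Coefficients of influence must sum to 1 so the composite stays on the 1-10 scale.
assert abs(sum(w for w, _ in factors.values()) - 1.0) < 1e-9

composite = sum(weight * score for weight, score in factors.values())
print(f"Composite influence score: {composite:.2f} / 10")  # 6.45 with these values
```

The design choice worth noting is the reserved "unknowns" weight: rather than pretending the identified variables exhaust the future, the model caps how much confidence the named factors can carry.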
The rate of technological change has always been heavily influenced by communication, to wit: a great surge of innovation occurred in the European Middle Ages after the reestablishment of communications between villages following the Dark Ages. In the sixteenth century, printed books gave scientists and engineers the opportunity to share their ideas with each other on a grander scale, producing the next wave of inventions. The Industrial Revolution was the first massive and sustained expression of mechanical and scientific innovation. Scientific and technical disciplines became mainstream training choices for large numbers of citizens with access to higher education. The late 1800s saw the beginning of an explosion in technical training, with the result that more than 90 percent of all scientists and engineers who ever lived are alive now. Industry took its place alongside the clergy and the military as an avenue for social and economic advancement for the less privileged members of society. Stendhal's Le Rouge et le Noir would now be expanded to include le gris, the gray world of industry.

The 1980s saw the further expansion of upward mobility through technology. The computer revolution, in particular the introduction of personal computers and local area networks into business and governmental settings, provided the opportunity for true egalitarian bootstrapping to occur. This period saw individuals trapped in administrative and clerical positions realize that managing the office computer network and showing flexibility and adaptability in the face of new technology provided the opportunity to showcase their talents to management. By stepping into the breach and learning how to fully utilize and manage the office automation tools of the day, these enterprising spirits, who were often ethnic and racial minorities, became invaluable to the success of enterprises large and small. Co-workers and superiors alike quickly became dependent upon them, and the world's most successful upward mobility program simply happened.

The rise of the Internet as a new form of communication created a second generation of social and economic opportunity that repeated the success of the office automation era. This time, however, a support structure replete with certificate- and degree-granting programs was in place to provide professional certification to a new and technologically adept vanguard. Along the way, the concept of literacy was redefined without anybody leading or even noticing it. The parlance of networking, X.25 protocols, and UNIX replaced Shakespeare, Whitman, and Hawthorne as the benchmarks for information technology-based careers. For millions of less-advantaged individuals, a white-collar technical career path suddenly appeared, and its economic and social rewards were large indeed.

A third generation of opportunity is only now taking shape. It will be a culture-shaping blend of disparate technological capabilities that will enable the instant and massive exchange of information on anything, with anybody, anywhere. The resultant synergy and technological change promise to be staggering and will have effects and consequences that are even less predictable than before.
TRADING TECHNOLOGY FOR POLITICAL GOALS

In 1997, the U.S. rejected the export to Russia of Convex and IBM supercomputers. The U.S. government's veto of the transfer prompted Victor Mikhaylov, director of Russia's Minatom (the counterpart to the U.S. Department of Energy), to state publicly that Russia had been promised access to U.S. supercomputer technology by Vice President Gore in exchange for Russian accession to the Comprehensive Test Ban Treaty. Vladislav Petrov, head of Minatom's Information Department, stated that the Clinton administration promised Russia the computers during the test ban treaty negotiations to allow Russia to engage in virtual testing of warhead designs. Indeed, Mikhaylov also told reporters that other Silicon Graphics and IBM supercomputers that were illegally shipped to Russia would be used to simulate nuclear explosions.

Why is this important? Virtual testing, modeling, and simulation are essential to clandestinely maintaining or advancing nuclear weapons technology. As the planet shows no sign of nearing the point where nuclear weapons are banned, it is reasonable to assume that current or aspiring nuclear weapons states
will vigorously attempt to acquire high-performance computers to advance their nuclear programs with a degree of covertness hitherto impossible to achieve. The development of supercomputers has been underwritten and driven relentlessly by the weapons program because of the high cost of physical testing and the severity of the test environment. "The technical limitations are enormous: extreme temperatures (10 million degrees) and material velocities (4 million miles per hour), short time scales (millionths of a second) and complicated physical processes make direct measurement impossible. Computers provide the necessary tools to simulate these processes."

On a prima facie level, most would instinctively argue that eliminating explosive nuclear chain reactions from the planet is highly desirable and would help make the world a safer place. However, the reverse may actually be the case; i.e., the elimination of physical tests and their migration to cyberspace may make the world a more dangerous place. Can such a counterintuitive proposition be true? Consider the trillions of dollars' worth of detection, monitoring, and early-warning infrastructure designed to identify and measure foreign nuclear weapons programs that would be rendered useless by virtual testing.
THE NEXT REVOLUTION: NANOTECHNOLOGY AND BIOTECHNOLOGY

The parallel development of the intertwined disciplines of nanotechnology and biotechnology holds the promise of spawning numerous concentric revolutions in disparate fields simultaneously. While the extent of their individual or collective impact generates much speculation and hype, once they reach fruition existing systems of communication, health care, transportation, inventory control, education, navigation, computing, environmental monitoring, and war fighting will experience both incremental and radical change.

Nanotechnology has applications in numerous areas, including robotics, sensors, communications, information storage, materials, catalysis, and medicine. It possesses three hallmark characteristics that provide "revolutionary" potential: unprecedented miniaturization, ultra-low power requirements, and low cost. In combination, these characteristics hold the promise to enable altogether new capabilities as well as the miniaturization of existing technologies. While the key challenge is to apply nano-scale technology to larger-scale systems, once that is achieved, commercialization will be rapid and the technology will become globally available.

Biotechnology will bring about advances in virtually every aspect of life. Futurists are predicting that by 2025 biotechnology will enable markedly increased life spans, genetic engineering before birth, disease prevention and cure, and new biological warfare agents. Greater productivity in agriculture will likely continue because of biotechnology advances as well. This science will be fueled by the further unlocking and publication of the secrets of human, plant, and animal genomes, and the creation of innovative techniques to enable widespread applications.

Biotechnology promises to help overcome a long-standing barrier to the next stage of the "green revolution" and the expansion of crop productivity: the lack of potable water resources in arid regions. Advances are bringing closer the day we will bio-engineer and develop hardier plants at the molecular level that can be irrigated with seawater. When that day arrives, more deserts, which today account for most available land, could become fertile; crop-bearing fields and food production will become cheaper; world hunger will be greatly abated; and adversary aggressiveness for land or water resource acquisition will abate commensurately. In addition, biotechnology will afford the engineering of foods with enhanced nutritional value and taste. Plants and animals will be used more frequently to produce pharmaceuticals and needed chemicals. Foods with vaccines will help in the protection of people and animals against disease, and the delivery of medical support will be expedited through edible vaccines and pharmaceuticals derived from plants and animals.
MANAGING CHANGE: THEREIN LIES THE RUB

New technologies in the marketplace, while offering great promise in so many areas of human existence, are by no means an easy fit or a panacea. Countless trade-offs will present themselves in the future. Economic dislocations, job loss, and social tensions will undoubtedly arise as new choices and previously unknown options present themselves to investors, policy-makers, and consumers alike.

Many of the new technologies will be a double-edged sword. Genetic modification of agricultural products, for instance, will yield crops that are more disease and drought resistant. But this very technology will continue to give rise to well-founded fears that such "supercrops" will bring with them new health and economic dangers. Consumers have already voiced concerns about the long-term stability of genetically modified crops and the effects of ingesting genetically modified foods. Genetically modified seeds could migrate or displace and contaminate traditional strains, create new seed supplier monopolies, and subsequently administer the coup de grâce to small or family farmers. New communication possibilities will also enable ubiquitous surveillance measures that will impinge upon traditional notions of privacy and civil liberties. The same knowledge of the human genome that will unlock new therapeutic approaches to improving the human condition will also facilitate the development of bio-weapons that can target specific populations, such as ethnic or racial groups that possess unique and identifiable genetic markers. Such tools of "ethnocide" will represent a choice previously unavailable to those who possess them.

The technologies that hold the possibility of uniting society in productive and rewarding ways at the same time facilitate an ever-increasing level of isolation and estrangement. Tele-entertainment, tele-education, tele-marketing, tele-shopping, tele-commerce, etc., at once link countless individuals with common interests while driving them into isolation by replacing physical contact and interaction with an antiseptic virtual alternative. The benefits of virtual convenience are in some measure offset by individual cocoons that carry an as-yet-unknown social and psychological cost. The Internet as a public meeting place unites similar interest groups for such purposes as hobbies, religion, politics, and, of course, terrorism. On a more prosaic level, telecommuting has partially lived up to its promise of increasing overall productivity, but it has also spawned a cottage-industry mentality where increasing numbers of employees are encouraged to work at home part time on a piecework basis in exchange for an hourly wage and few if any traditional employee benefits. While employers are able to significantly reduce their overhead costs by maintaining an army of ghost employees, they are creating and maintaining an increasingly vulnerable, isolated, and unrepresented segment of the workforce that can limit the benefits and economic well-being of full-time, on-site personnel.

The challenge facing us all is not the development of new tools, techniques, or technologies—mankind has no shortage of innovative capacity. It is the effective integration and management of these gifts into particular work and policy settings that will be the most difficult goal to achieve.
This handbook is a tour d'horizon of a broad variety of industries, compiling selected lessons learned in order to share practitioners' experiences and perhaps assist the reader in solving contemporary problems.
2
The Culture of Technology: Savants, Context, and Perspectives
CONTENTS
Chapter Highlights ............................................................ 13
Technological Culture: Inventing the Future by Creating a New Human Context .. 15
Metaphors as Trojan Horse: Persuasion about Technology ....................... 16
References .................................................................... 21
How to Manage Geeks ........................................................... 21
  You've Got to Have Your Own Geeks .......................................... 21
  Get to Know Your Geek Community ............................................ 22
  Learn What Your Geeks Are Looking For ...................................... 22
  Create New Ways to Promote Your Geeks ...................................... 22
  Either Geeks Are Part of the Solution—or They're the Problem .............. 23
  The Best Judges of Geeks Are Other Geeks ................................... 23
  Look for the Natural Leaders Among Your Geeks .............................. 23
  Be Prepared for When the Geeks Hit the Fan ................................. 24
  Too Many Geeks Spoil the Soup .............................................. 24
The Coming Collapse of the Age of Technology ................................. 25
  Can the Machine Stop? ...................................................... 25
  The Forces of Internal Breakdown ........................................... 27
  The End of Global Management ............................................... 32
  Creating a Shadow System ................................................... 32
Technology makes the world a new place.
—Shoshana Zuboff
CHAPTER HIGHLIGHTS

† Technological culture creates a new human context as it drives history, links inextricably with the power of science, and impacts the perspectives of journalists, historians, and public intellectuals.
† To describe and analyze the values that drive the technological imperatives: Are the undergirding norms of technology value neutral? Are changed metaphors a sign of cultural transformations?
† The fast-forward pace of technology raises critical questions about the nature of technology in terms of elitism, educational philosophies, and humankind's ability to control and direct its powerful thrusts.
Technological culture is a double-edged sword. On the one hand, it can open possibilities, create opportunities, and multiply options, while on the other hand, it can invade individuals' privacy, present pressures for conformity in terms of lifestyles, encourage bureaucratic patterns, and overemphasize organizational rationality in the artificial environment of the postmodern age. To analyze the broader parameters of the technological imperative, this chapter features discussions of the creation of innovative paradigms, new boundaries, diversity frameworks, and operational breakthroughs emanating from technology. At the same time, this chapter contains questions about the speed, determinism, violence, and intrusions of technology into the personal, organizational, and social environments as we move forward.

Moving forward portends the incorporation of highly skilled technical staff within the management structure of an organization. This is often problematic. Intuitively, one would think that successful technologists embedded within an organization would be ideal candidates for internal career development by migrating them into the management side of the house. Unfortunately, technical high performers may not always possess the intellectual or temperamental flexibility required to succeed with this transition. While flexibility, adaptability, and multitasking may be characteristic traits of successful managers, the most distinctive attributes of successful technological wizards are often an extraordinary degree of determination, single-mindedness, and an ability to maintain an unusually sharp focus on a particular problem for an extended period of time.

Not all those we identify as technical savants are high performers, even within their area of expertise. Often, technical competence is measured relative to the skill levels of nontechnical or managerial personnel. It is not uncommon for those perceived as technical savants to be, in reality, advanced technology users; technicians without depth; "6-month experts" created by certificate-granting proprietary programs with grandiose titles such as "Certified Network Engineer" or "Certified Web Designer"; help-desk workers who are just one step ahead of the clients they serve; and the "rest of us"—the end-users—who are able to use canned programs to do their jobs but have no inkling of how they operate or what to do when something goes wrong. These are the technical idiot savants, clearly a distinct and separate class from true technical savants.

The core strength of relentless determination of true technical savants may be their undoing when forced into a management role. Managers often have multiple problems to solve and are given many people to supervise. Accordingly, the ability to delegate responsibility is often essential to successful management outcomes. Delegation of authority and responsibility is particularly important to solving multiple problems simultaneously. However, technical wizards often have a natural inclination to approach problem solving in a linear or sequential manner with direct personal involvement. When given a problem, nothing will stand in their way until the problem is solved. But such creative single-mindedness also carries with it the baggage of paralysis, in that the problem-solving task at hand will be pursued to the exclusion of all else and little else will get done.
In essence, to make sense of the nature of technology and its “players,” we must master ourselves so that we can master the technological dynamics before they accelerate us beyond our values. The way forward may be, paradoxically, not to look ahead but to slow down so that we can look around.
Science and technology multiply around us. To an increasing extent they dictate the languages we speak and think. Either we use those languages or we remain mute.
—James Graham Ballard
TECHNOLOGICAL CULTURE: INVENTING THE FUTURE BY CREATING A NEW HUMAN CONTEXT*

* Ronald J. Stupak wrote this provocative piece while he was the Distinguished Scholar-in-Residence at the National Center for State Courts in 1994.

Technological culture, with its perceived accompanying conformity in terms of lifestyles, bureaucratic patterns, and organizational rationality, bluntly demonstrates to many persons that the interstices of freedom are closing rapidly in the artificial environment of post-modern society. The assimilating capacities of the transnational computerized world are seen as mechanistic, restrictive, sterile, and predictable. Conformity and stability seem to be closing the frontiers of uniqueness, individuality, cultural diversity, judicial choice, and personal style. In effect, the pressures from the computerized technological juggernaut have shaken humankind's confidence in its ability to control the pace, direction, speed, and purpose of technology. Surely, for many, the astronauts and the computer wonks are not worshiped as heroes; rather, they are seen as a reflection of all of us becoming totally encapsulated and conditioned into a rigid, interconnected, artificial environment.

And yet, the eternal albatross around one's neck is the continual recognition that one is responsible for shaping, controlling, and creating humankind's future. Therefore, at this critical juncture of technological acceleration, organizational reengineering, and global interdependencies, we must project and create dynamic transformations in our thinking about, leadership of, and actions in the technological age. In essence, we must shape the future, rather than allowing ourselves to be anchored only in our historical past. Clearly, some revolutionary/radical thought on technology is called for to prepare us to shape the environment of the 21st century. For example:

1. It appears that too many of the dimensions of technology are being approached from the wrong end of the spectrum. The emphasis on inputs, whether in terms of economic resources, case management, historical perspectives, philosophical insights, institutional imperatives, or current events, seems to be overshadowing the vital need to place more time and effort on creating the output of new objectives, measurable goals, "added value," and high-performing systems.

2. Technology is a phenomenon that demands its own cultural necessities—therefore a projection in terms of what technology demands of individuals becomes essential, without all the "looking backward" to outdated philosophical ideas, bankrupt economic systems, rigid institutionalized perspectives, micro-management styles, and archaic political structures. Humans must accept the proposition that they are a "part of nature" and that what they create becomes an extension of themselves and ultimately an extension of nature. In essence, technology and computers are not artificial, not sterile, not unloving. Rather, the technological computer world is the new natural environment. We must discover new ideas, new perspectives, new leadership styles, new empowerment techniques, new organizational arrangements, new human interactive processes, dynamic visions, and viable social and political structures which will enable us to shape the new nature of the technological world.

3. Yesterday is, in many ways, ancient history; we must sever the "albatross" of the past from our necks so that we can invent and create value systems which will allow us to reap the benefits of pleasure, performance, and productivity that technology can provide.
And finally, since change itself is changing in terms of speed, scope, and synergy, we must leap forward and develop processes that keep us ahead of the technological power curve.
4. Love, death, birth, rights, health, justice, and nature must be so radically redefined that it is imperative that we race ahead of our time to explore the philosophical demands and cultural realities of the future, or else we will find ourselves corrupting the magical abundance that technology promises.

Don't corrupt it; don't fear it; don't try to avoid it; learn to live within it. No, don't even learn to live within it—become a visionary and learn to live beyond it! The creation of innovative paradigms, new boundaries, diversity frameworks, management processes, and operational breakthroughs is the responsibility of future-oriented leaders, especially in the courts, as we move fast-forward into the 21st century.
METAPHORS AS TROJAN HORSE: PERSUASION ABOUT TECHNOLOGY*

* Gary Thompson (Ph.D., Rice University, 1979, in American literature) has been a faculty member in the English Department at Saginaw Valley State University, University Center, Michigan, since 1979. He was Fulbright Professor at the Uniwersytet Marii Curie-Sklodowska, Lublin, Poland, from 1982 to 1984 and at Uniwersytet Gdanski, Gdansk, Poland, from 1987 to 1988. He is the author of the textbook Rhetoric Through Media, published by Allyn & Bacon, 1997. His home page can be found at http://www.svsu.edu/wglt

Perhaps the oldest form of technology—far older than computers or the printing press or firearms—is rhetoric. As a kind of thought experiment, I want to propose that we think about one specific rhetorical device, metaphor, as technology for shaping our subjectivities. In our discussions about newer electronic technologies, we should not overlook the existing ones which are so familiar as to be largely invisible in their workings.

It's a little cute to propose a paper with a self-descriptive title (i.e., to have a metaphor [simile] in the title of a paper about metaphors), so I'll try to compensate by a straightforward statement of my contentions. As the paper has developed, I find that it has focused on the general, and that it's less about technology and education specifically than about how contemporary developed culture persuades its subjects to accept and use new technologies. The application to education is one of the places I've cut corners.

Briefly, the argument is this: any new phenomenon in culture, such as the Internet, has to be introduced through existing narratives—which are the Trojan Horse of my title. Probably these narratives are plural because there are competing interests involved in the technology's introduction (those who stand to profit from it vs. those with a vested interest in the status quo, for example). In both cases, metaphors are important signs of the narratives we subjects are invited to use as interpretive guides to the new phenomenon; metaphors are not simply decorative or playful, but constitutive. In the particular phenomenon under discussion, the Internet, we have a selection of three of what might be called governing metaphors: those loosely associated with danger, those associated with safety, and those associated with a tool which may be used in positive or negative ways.

What I plan to do here is bring some of these metaphors up for discussion and indicate what I see as some of their implications for technology and education. I do not see any sort of impermeable wall between the educational system in contemporary culture and other areas (in particular, media). Rather, the educational system deals with subjects who have been largely created by their uses for media (and the uses media have for them), and we ignore this creation by the culture at much risk to our own pedagogical and professional purposes.

Governing metaphor #1: the Internet is a dangerous place. This we know because it's called the electronic frontier, where thousands of people are attacked daily in flame wars. Open the wrong e-mail message and your hard drive will be infected by viruses. It may crash. There may be bugs in your program. Once you open an account, you are likely to receive tons and tons of Spam, a repulsive potted meat product evidently much in abundance on the information superhighway (and we all know the dangers of highways). Cyberspace is populated by geeks and nerds, not normal people like you and me; you, or what's worse, your children, may be enticed by cybersluts offering cybersex; your children will be diverted from spending time on normal pursuits (like watching
Beavis and Butthead or slasher movies or listening to Korn) by the temptation to descend into the virtual MUD, where they will MOO like virtual animals. The net can ensnare you; better unplug your computer and read a book.

But wait! We can relax. The Internet offers us nothing to fear (governing metaphor #2). This we know because it's an information superhighway. It's America On-Line! Compu-Serve! It's a space to be navigated by means of search engines with names like Yahoo (here a cry of celebration, not the insult derived from Swift), LookSmart, Altavista. It's a gateway into the future, where we can surf our way into the new millennium. And it's not just reading words and pictures on a screen, but interacting with them, so that we become part of the text. In this utopia, text in fact will change into hypertext—and that, sez virtual Martha Stewart, is a good thing. Readers decide where to click and what to experience—no more dictatorial authors. Microsoft will carry us away—they ask us in their advertising, Where do you want to go today?—and it seems that we can go any place on the net. You can seek out people and chat sites where others share your interests. You'll find diversity—in these chat rooms, you'll meet people of all ages and geographies and beliefs. And it's a democratic medium: instead of having to buy a printing press or broadcasting equipment, a few dollars a month gets you your own web page.

Safety? Danger? A cold dash of realism may tell us that both metaphors are deceptive. The computer is just a cold machine (governing metaphor #3). A Tom Tomorrow cartoon from a few years ago presented three panels of metaphors—"I'm surfing the Internet" (a man surfing over waves of numbers), "I'm navigating the information superhighway" (driving down a superhighway of bits and bytes), "I'm flying in cyberspace" (floating in a galaxy)—then brought it all down literally with Sparky's comment: "No you're not—you're sitting in front of a computer screen." We may believe it, in some sense, when we're told metaphorically that we're cruising the information superhighway or surfing waves of data; we may fear attacks by hackers or our children's cyberseduction; but these are largely projections of our own desires and fears. When we're hooked into the Internet (like so many fish, or perhaps patients on IVs) we are in places of no more and no less danger or safety than our offices or homes. The primary physical dangers are those of carpal tunnel syndrome and visual fatigue from staring at screens. As for the computer, it's just a machine: GIGO (Garbage In, Garbage Out); the information superhighway's a stream of bits moving over wires at the speed of light, in patterns that we read and assign meaning to.

Well, I don't know. In the first place, there's nothing "mere" about machines. There's also the matter of whatever psychological and social dangers are created by the rhetoric we read and create. We should not underestimate these—culturally created dangers might include global warming or the destruction of the ozone layer by our taste for personal convenience, reliance on SUVs to drive us one by one to work, preference for air conditioning, and so on. Moreover, we are "inside" of our perceptions of the world (if I can use a metaphor of containment).
As George Lakoff has shown, metaphors are not just decoration, but are constitutive of the reality being described: "Since much of our social reality is understood in metaphorical terms, and since our conception of the physical world is partly metaphorical, metaphor plays a very significant role in determining what is real for us." These metaphorical characterizations of the Internet are constitutive of the reality we are creating—not virtual reality or verbal reality, but just plain reality. In other words, it's not that there are the material words and signs over here, and some ideal Reality over there which they approximate but never map exactly. The material words and signs are what we have. And our recirculation of these systems of metaphors has material consequences.

When our universities are spending hundreds of thousands on labs and then hiring work-study students as the only available assistance for students, when students routinely start their "research" on any topic by using a search engine rather than visiting the campus library, when the University of Phoenix rises from the ashes to offer "distance education" to a "class" of students who are never in physical proximity to each other or to an instructor—there's nothing virtual about that. The metaphors of danger have served to discourage the anxious from initiating their Internet use; the metaphors of safety have served to encourage others to adopt electronic modes of education, among other uses; the metaphors of tools
mostly serve as a claim to the authority of common sense to dispel the other two. And all three categories of metaphors serve to create, not simply color or influence, education (and discourse) as we approach and move beyond the year 2000.

To help establish the power of metaphor to shape public discourse about the Internet, I want to introduce some of the technology-as-threat variety (I will not be able to give equal time to the others). This narrative about technology reaches back at least as far as the introduction of industrialism in the later 18th–early 19th century (think: Frankenstein, Hard Times, "Life in the Iron Mills") and perhaps back to the Luddites (who generally get a bad rap), and it's been kept familiar for us in Western culture through fictional narratives (think of films: Metropolis, Brave New World, Nineteen Eighty-Four, Terminator, 2001: A Space Odyssey, Brazil, 12 Monkeys). So long as the Internet was an offshoot of U.S. defense technology, with scientists and university personnel talking only to each other, it was below the radar, so to speak, in cultural terms. But when PCs became as common as microwave ovens and VCRs, when the net went visual with the World Wide Web, when the net as communications system went public with Prodigy and America On-Line and tag lines in advertising and promotions, then we had to deal with it somehow. And technology-as-threat was one story at hand (especially if you yourself weren't connected).

Calling technology-as-threat a story doesn't mean that there aren't real threats. The principal ones aren't individual in origin. Increased use of e-mail and Internet commerce multiplies the potential for surveillance—witness the use of company e-mail in personnel disputes, legal and criminal disputes; witness the occurrence of credit card numbers intercepted from e-commerce; witness the use of "cookies" to generate further promotions; witness the dispute over the Pentium-3 chip. The Internet allows like-minded people to find each other and communicate in ways previously much more difficult, which generates not only positives such as breast cancer support groups and e-mail from ordinary people under attack in places like Bosnia and Kosovo, but negatives (to my mind, at least) such as neo-Nazi groups, the so-called Nuremberg Files tacitly encouraging abortion protesters to harass or kill medical personnel, and on-line atomic bomb or nerve gas recipes.

There are playful invocations of danger which reflect anxieties: I found 33 web sites under the general heading "Ate My Balls," linked to photographic images of the likes of Barney, Batman, Beanie Babies, Bigfoot, Bob Barker, Doctor Who, Garth Brooks, the Spice Girls… All this castration anxiety is modified slightly in a few cases ("Bill Gates Bought Our Balls," "His Holyness, Pope John Paul II, Excommunicated My Balls," etc.). There are news stories about cyber-stalkers, real dangers of Internet viruses like Melissa or the Chernobyl virus (multiplied by panic factors and misrepresentations such as the old canard that "Opening this e-mail message will trash your hard drive!"), and parents' concerns about what their young technology savants are really up to on-line. (For example, are they looking up instructions for building pipe bombs, like the two Columbine High School students?)

One set of parental anxieties was signaled by a 1995 cover story from Time magazine about "Cyberporn," for which I have some slides. (Time's story from archives includes the illustrations.)
Here we see a child's face staring in horrified wonder at the brave new technological world confronting him. (Perhaps the child's face can be construed not as looking through the computer screen, as was the apparent intention of those producing Time's cover, but as looking at the hundreds of thousands of Time readers.)

Inside the magazine was a sensational story by Philip Elmer-Dewitt, based on a report represented as coming from Carnegie-Mellon University, one of the U.S.'s premier institutions in computer science. The report asserted that a high percentage of Internet users were downloading pornographic sites, available to children by any search engine (given the content, I'd recommend Yahoo). The article was accompanied by what might be called artist's conceptions of sex with the machine.

When Steven Spielberg released Jaws in 1975, beachgoers were driven off the sand and into the movie-houses; perhaps Time's intent was to chase people off their computers and back to the magazines and television. This example can stand for a limited but significant trend in narratives about the net—along with the recurrent Ann Landers accounts of husbands and wives seduced by time spent in chat
rooms and by their virtual honeys. The crucial problem with the Time story is that it was built from bad data: the "study" was produced by a Carnegie-Mellon undergraduate, Martin Rimm—not even a major in the social sciences, but in the engineering department—who had apparent ambitions to project himself into a national position of expertise. When the problems with the study surfaced, thanks in large part to people from the Electronic Frontier Foundation and two Internet researchers at Vanderbilt, he was crossed off a witness list for a congressional subcommittee gathering testimony on the dangers of cyberspace. The study did not follow established social science protocols, and the population was not typical of the U.S. (undergraduate males in a Pittsburgh January at a mostly male, mostly technological school—what would you think they'd be downloading, eh? Advice from Martha Stewart?); yet on the basis of this he'd manipulated his way into a major magazine cover story. And Time went along with the story in part because it reinforced negative aspects of a competing medium.

You wouldn't find this story given such treatment as of 1999. Existing print and broadcast media have largely made their peace with the net: it's here, so they may as well develop web sites and try to pull some net users back to their familiar forms of text. Some of these media have embraced interactivity.

The technology-as-threat story is so well established in the culture that a lot of positive stories are required as counter-balance: these can be seen in the smarmy Microsoft ads (e.g., the recent schlocky, syrupy one about schools in Arizona that teachers are lined up to work for, because they use technology to interest students in learning), the space-suited guys from Intel, and many other promotions; but a lot of the positives are coming from faculty wanting to encourage use of the new technology as well. We have technology-as-progress: there's an underlying anxiety that we have to encourage our students to use new technologies or they will be overtaken by others from different regions, different classes, different nations. There's the convenience argument, for which I'd offer as example the web site Syllabus: Taming the Electronic Frontier from Brad Cox [formerly?] on the faculty at George Mason University.

We can find models for the creation of positive spin in the introduction of television. Cecelia Tichi has shown how an expensive and potentially dangerous technology—remember the Cold War anxieties about radiation?—was brought into public consciousness by designs linking the television screen to the hearth, the icon for family togetherness, and by ads showing television in a social, frequently a familial, context. The U.S. middle class was convinced by images such as these not only to bring a source of radiation into their homes, but to make television the centerpiece of the nuclear family. Television is "really" both—there are idealized moments, such as those shown in the introductory advertising, and there are moments which inspire cynicism, such as the Diane Arbus-like photos of Lloyd DeGrane's Tuned In: Television in American Life.

And the Internet is "really" both threat and promise—it's capacious enough to support many contradictory versions simultaneously. The metaphors through which we describe safety and danger create the conditions they describe. Moreover, language is not separable from the technology—it is an intrinsic part of the technology of the net and of language.
There can be no separating the material devices which in one sense make up the net (thousands of PCs, servers, phone and cable lines, and so on) from the mental devices by which we think of them as one unified phenomenon. This is analogous to, say, the way we think of government even though it's "really" a group of marble and steel-and-concrete-and-glass buildings, a few million office workers, some gentlemen who like bright lights and microphones and the sound of their own voices, and so on. These buildings and people, material as they are, are linked conceptually to abstractions: law, justice, public service, along with some others. All that brings these disparate and contrary pieces into a unity is the metaphor conceiving them as one thing—and this is equally true, though more recent, of the Internet.

None of us can significantly affect the Internet, any more than we can affect government (assuming we would want to). Our participation is voluntary to a large extent, just as (in the U.S. at least) voting is not compulsory. But we are subject to large cultural phenomena nonetheless: we pay taxes, observe laws, recognize the authority of prime ministers and
presidents, and even if we move to Montana and march around in fatigues we're still subjects of culture. What we can do is become more conscious about the conditions of our participation, by noticing that there are options.

In the time remaining, I would like to focus a little more specifically on educational uses for the Internet. Much discussion about technology and education is discipline-bound—that is, for a variety of reasons, papers at professional conferences and published essays are grounded first in the expectations of discourse for composition, for communication, for sociology, and so on, and are open only secondarily to cross-disciplinary concerns. My own field is rhetoric, which is a little more cross-disciplinary than some; my examples are from the area of writing instruction, and may not extend easily to other academic fields.

Chris Anson, for example, in writing about "Teaching and Writing in a Culture of Technology," mixes positives and negatives about computers and writing classes (positive: increased fluency, more openness to revision, a decentered classroom, the potential for more or different social interaction; negative: accentuated class differences on the basis of prior use of computers, less face-to-face interaction, potential for increased abuse of the academic underclass). But increased fluidity of the medium does not extend to fluidity across disciplines: there's much play with metaphors but not much critical attention to metaphors.

Richard Lanham sees the development of electronic text as the material embodiment of a central development in 20th-century culture:

[T]echnology isn't really leading us [towards democratization]. The arts and the theoretical debate that tags along after them have done the leading, and digitization has emerged as their condign embodiment… The central issue to be explained is the extraordinary convergence of twentieth-century thinking over its whole intellectual spectrum with the digital means that now give it expression. It is the computer as fulfillment of social thought that needs explication (242–243; his italics).
In other words, those who see the Internet as a threat to humanistic education should not primarily be concerned that technology will crowd out material presence in text; rather, they should be concerned that technology's easy adoption in writing and other classes is an indication that there was nothing necessarily "humanistic" about them to begin with.

In general, I think it's safe to say that both those inside educational institutions and those outside think of education as a separate enterprise from other areas of the culture such as media (Althusser's Ideological State Apparatuses). There was considerable public outcry in the early '90s when Channel One was introduced into U.S. classrooms, not only because of the time incursion on teachers' mission but also because of the injection of commercialism into a space supposed to be free of its influences (as is the case in U.S. public schools with religious practices).

Computers, however, are a legitimized incursion into the educational sphere—metaphorically a Trojan Horse for other, noneducational matters. First, their entry is as physical machines, used for word processing and (more rarely) courses in the sciences. For these uses they are conceived of as tools analogous to typewriters or laboratory equipment. But computers become something other than tools when they serve as media—e.g., for playing CD-ROM disks in libraries and for connecting to the Internet. Such uses draw on their capability of offering visual and auditory as well as verbal resources: the argument is that students who might not sift through print resources such as encyclopedias or books will access comparable material via multimedia. And as for the Internet, no public school library is likely to avoid the temptation of allowing students to reach on-line resources. Electronic media have transformed the delivery of information.

The metaphor of Internet as dangerous place, however, has led many schools and some public libraries to place restrictions on students' access to the net. For example, students may be under surveillance, sites may be blocked, and time of use may be restricted. In addition to the widely publicized "dangers" of pornography, schools may be concerned that students will waste time in chat rooms or on entertainment sites (e.g., playing on-line games) rather than working on school projects.
REFERENCES

Anson, Chris M., Distant voices: teaching and writing in a culture of technology, College English 61, no. 3, pp. 261–79, January 1999.
Lakoff, George and Johnson, Mark, Metaphors We Live By, Chicago, IL: University of Chicago Press, 1980.
Lanham, Richard, Literacy Online: The Promise (and Peril) of Reading and Writing with Computers, Tuman, Myron C., Ed., Pittsburgh, PA: University of Pittsburgh Press, pp. 221–43, 1992.
Time, July 3, 1995.
HOW TO MANAGE GEEKS*

* Russ Mitchell ([email protected]), a senior writer for U.S. News & World Report, writes about business and technology from Silicon Valley. You can visit Novell Inc. on the Web (www.novell.com). Copyright © 2004 Gruner + Jahr U.S.A. Publishing. All rights reserved. Fast Company, 375 Lexington Avenue, New York, NY 10017.

There's a saying in Silicon Valley: "The geeks shall inherit the earth." That's a sign, if you needed one, that we have permanently entered a new economy. Once a term of derision, the label "geek" has become a badge of honor, a mark of distinction. Anyone in any business in any industry with any hope of thriving knows that he or she is utterly dependent on geeks—those technical wizards who create great software and the powerful hardware that runs it. The geeks know it too—a fact that is reflected in the rich salaries and hefty stock options that they now command.

But how do you manage these geek gods? Perhaps no one knows better than Eric Schmidt, CEO of Novell Inc. Schmidt, 44, is a card-carrying geek himself: his resume boasts a computer-science Ph.D. and a stint at Sun Microsystems, where he was the chief technology officer and a key developer of the Java software language. And, as if his technical skills weren't enough to prove the point, Schmidt even looks the part, with his boy-genius face, his wire-rim spectacles, and his coder's pallid complexion.

Two years ago, Schmidt left Sun and took charge at Novell, where he has engineered an impressive turnaround. After years of gross mismanagement, the $1 billion networking-software company, headquartered in Provo, Utah, had been written off by competitors and industry observers alike. Since Schmidt's arrival, however, the company has become steadily profitable, its stock price has more than doubled, and, within its field, Novell has again come to be seen as a worthy competitor to Microsoft.

A good deal of the credit for Novell's turnaround must go to Schmidt, who excels at getting the best out of his geeks. He has used his tech savvy to bring focus to Novell's product line and his geek-cred to reenergize a workforce of highly skilled but (until recently) deeply dispirited technologists. In general, Schmidt speaks of his geeks in complimentary terms, while acknowledging their vulnerabilities and shortcomings. "One of the main characteristics of geeks is that they are very truthful," says Schmidt (who, in fact, uses the term "geek" only occasionally). "They are taught to think logically. If you ask engineers a precise question, they will give you a precisely truthful answer. That also tends to mean that they'll only answer the question that you asked them. If you don't ask them exactly the right question, sometimes they'll evade you—not because they're lying but because they're being so scrupulously truthful."

With that rule of geek behavior in mind, Fast Company went to Novell headquarters to ask Schmidt a series of precise, carefully worded questions. His answers add up to a short course in how to bring out the best in your geeks.
YOU'VE GOT TO HAVE YOUR OWN GEEKS

Today innovation drives any business. And since you don't want to outsource your innovation, you need to have your own geeks. Look at trends in e-commerce: who would have thought that all of these "old" companies would have to face huge new distribution-channel issues, all of which are
driven by technology? The truth is, you need to have a stable of technologists around—not just to run your systems but also to help you figure out which strategies to pursue, which innovations to invest in, and which partnerships to form.

The geeks control the limits of your business. It's a fact of life: if the technologists in your company invent something ahead of everybody else, then all of a sudden your business will get bigger. Otherwise, it will get smaller. You simply have to recognize and accept the critical role that technologists play. All new-economy businesses share that property.
GET TO KNOW YOUR GEEK COMMUNITY

According to the traditional stereotype, geeks are people who are primarily fascinated by technology and its uses. The negative part of that stereotype is the assumption that they have poor social skills. Like most stereotypes, it's true in general—but false at the level of specifics. By society's definition, they are antisocial. But within their own community, they are actually quite social. You'll find that they break themselves into tribes: mainframe-era graybeards, UNIX people who started out 20 years ago, the new PC-plus-Web generation. They're tribal in the way that they subdivide their own community, but the tribes don't fight each other. In fact, those tribes get along very well—because all of them fight management.

Perhaps the least-becoming aspect of the geek community is its institutional arrogance. Remember, just because geeks have underdeveloped social skills doesn't mean that they don't have egos. Tech people are uppity by definition: a lot of them would like to have been astronauts. They enjoy the limelight. In a power relationship with management, they have more in common with pro basketball players than they do with average workers. Think of your techies as free agents in a highly specialized sports draft. And the more specialized they are, the more you need to be concerned about what each of them needs as an individual.
LEARN WHAT YOUR GEEKS ARE LOOKING FOR

This is a golden era for geeks—it doesn't get any better than this. In the early 1970s, an engineering recession hit, and we reached a low point in engineering and technical salaries. Ever since then, salaries have been going way up. Geeks have figured out that increasing their compensation through stock options is only fair: they expect to share in the wealth that they help to create through technology. Today technology salaries are at least twice the national average. In fact, tech salaries are going through the roof, and nontech salaries are not—which presents a serious problem for many companies.

But, as important as money is to tech people, it's not the most important thing. Fundamentally, geeks are interested in having an impact. They believe in their ideas, and they like to win. They care about getting credit for their accomplishments. In that sense, they're no different from a scientist who wants credit for work that leads to a Nobel Prize. They may not be operating at that exalted level, but the same principle applies.
CREATE NEW WAYS TO PROMOTE YOUR GEEKS

If you don't want to lose your geeks, you have to find a way to give them promotions without turning them into managers. Most of them are not going to make very good executives—and, in fact, most of them would probably turn out to be terrible managers. But you need to give them a forward career path, you need to give them recognition, and you need to give them more money.

Twenty years ago, we developed the notion of a dual career ladder, with an executive career track on one side and a technical career track on the other. Creating a technical ladder is a big first step. But it's also important to have other kinds of incentives, such as awards, pools of stock, and nonfinancial kinds of compensation. At Novell, we just added a new title: distinguished engineer. To become a distinguished engineer, you have to get elected by your peers.
That requirement is a much tougher standard than being chosen by a group of executives. It’s also a standard that encourages tech people to be good members of the tech community. It acts to reinforce good behavior on everyone’s part.
EITHER GEEKS ARE PART OF THE SOLUTION—OR THEY'RE THE PROBLEM

Here's another thing you need to know about the geek mind-set: because tech people are scientists or engineers by training, they love to solve really hard problems. They love to tackle a challenge. The more you can get them to feel that they're helping to come up with a solution to a tough problem, the more likely they are to perform in a way that works for you. When you talk with them, your real goal should be to engage them in a dialogue about what you and they are trying to do. If you can get your engineering team to agree with what you're trying to accomplish, then you'll see them self-organize to achieve that outcome. You'll also need to figure out what they're trying to accomplish—because, no matter what you want, that's probably what they're going to do.

The next thing you need to remember is that you can tell them what to do, but you can't tell them how to do it. You might as well say to a great artist, "I'll describe to you what a beautiful painting is. Then I'll give you an idea for a particular painting. I'll tell you which colors to use. I'll tell you which angle to use. Now you just paint that painting." You'd never get a great painting out of any artist that way—and you'll never get great work out of your geeks if you try to talk to them like that. You need to give them a problem or a set of objectives, provide them with a large amount of hardware, and then ask them to solve the problem.
THE BEST JUDGES OF GEEKS ARE OTHER GEEKS

Make sure that there is always peer-group pressure within your project teams. For example, if you want to motivate your project leaders, just require them to make presentations to each other. They care a great deal about how they are perceived within their own web of friends and by the professional community that they belong to. They're very good at judging their own. And they're often very harsh: they end up marginalizing the people who are terrible—for reasons that you as a manager may not quite understand.

It sounds like I'm touting tech people as gods, but there are plenty of bad projects, and there is plenty of bad engineering and bad technology. You're always going to encounter "techies" who are arrogant and who aren't as good as they think they are. A team approach is the best way to deal with that problem. Tech people know how to deal with the wild ducks in their group—on their own and with the right kind of peer pressure.
LOOK FOR THE NATURAL LEADERS AMONG YOUR GEEKS

In a high-tech company that is run by engineers, what matters most is being right. And what's "right" is determined by outcomes. You can listen to lots of exceptionally bright people talk about their brilliant vision. I've done it for the past 25 years. But what matters is, Which ones deliver on their vision? When a project is on the line, who actually gets the job done?

Every team has a natural leader—and often that leader is not a team's official manager. Your job is to get the team motivated. Once you do that, the natural leaders will emerge very quickly. If you keep an eye on the team, you can figure out who those natural leaders are—and then make sure that they're happy and that they have everything they need to do their job.

For instance, natural leaders need to feel that they have access to the company's senior managers. Don't forget: they feel like they're changing the world—so you need to make them feel like you're helping them do that. There are easy ways that you can help them out. For example, encourage them to bypass layers of middle management and to send you e-mail directly. Sure, that will piss off the people in middle management, but it's better to piss off those people than to piss off your key project leaders.
BE PREPARED FOR WHEN THE GEEKS HIT THE FAN

You can divide project teams into two categories. First, there is the preferred variety: you get an engineering team that's absolutely brilliant, that executes well, and that's exactly right in its assumptions. Second, there is the more usual variety: you get an engineering team that has a very strong opinion about what it's trying to do—but that's on the wrong track, because some of its assumptions are wrong. That second kind of team is what you have to focus your attention on. But often you can't intervene from the top down. You have to find a way to come at the problem from the side.

At Novell, we have a series of checkpoints at which our teams get lateral feedback—feedback that comes from outside of the management hierarchy. Every six weeks, we have three days of product reviews. But it's not just management conducting those reviews. We also bring in smart technologists with good memories: they remind us of what everybody committed to.

In most technology companies, there are always a few people who, everyone agrees, have better taste than anyone else. Those are the people whom everyone goes to; they serve as reviewers or advisers. At Sun Microsystems, for instance, it's Bill Joy. At Novell, it's Drew Major, the founder and chief scientist. Everyone knows that when Drew gets involved in a project, he'll size up quickly what needs to get done, and people will listen to him.

In general, as long as you consider everyone's ideas, most teams react well to management decisions. If you have to make a business decision that conflicts with what your engineers want to do, they'll accept it—as long as it is truly a business decision. On the other hand, if the decision is based on a technology analysis by someone whom the engineers do not respect professionally, then they'll never agree to it. So, if you're facing a decision that you know will affect a team negatively, you must vet that decision through a technologist who has that team's respect.
TOO MANY GEEKS SPOIL THE SOUP

If you want your geeks to be productive, keep your teams small. The productivity of any project is inversely proportional to the size of the project team. In the software business, most problems draw on the efforts of large numbers of people. Typically, companies deal with a problem by putting together a large team and then giving that team a mission. But in this industry, that approach almost never works. The results are almost invariably disappointing. Still, people keep doing it that way—presumably because that's the way they did it last year. The question is, How do you break out of that mode? It seems to be a cancer on the industry.

On a large team, the contributions of the best people are always smaller, and overall productivity is always lower. As a general rule, you can count on each new software project doubling in team size and in the amount of code involved—and taking twice as long—as the preceding project. In other words, the average duration of your projects will go from 2 years to 4 years to 8 years to 16 years, and so on. You can see that cycle with almost any technology. Two or three people invent a brilliant piece of software, and then, five years later, 1,000 people do a bad job of following up on their idea. History is littered with projects that follow this pattern: Windows, UNIX, Java, Netscape Navigator.

The smaller the team, the faster the team members work. When you make the team smaller, you make the schedule shorter. That may sound counterintuitive, but it's been true for the past 20 years in this industry, and it will be true for another 20 years. The only method that I've found that works is to restrict the size of teams arbitrarily and painfully. Here's a simple rule of thumb for techie teams: no team should ever be larger than the largest conference room that's available for them to meet in. At Novell, that means a limit of about 50 people.

We separate extremely large projects into what we call "Virtual CDs." Think of each project as creating a CD-ROM of software that you can ship. It's an easy concept: each team has to ship a CD of software in final form to someone
else—perhaps to another team, perhaps to an end user. When you treat each project as a CD, you enable one group to say to another, “Show me the schedule for your CD. When is this deliverable coming?” It’s the kind of down-to-earth approach that everyone can understand, that techies can respect and respond to, and that makes almost any kind of project manageable.
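Schmidt's doubling rule lends itself to simple arithmetic. The sketch below (in Python) is purely illustrative: the starting values of a two-person team and a two-year schedule are assumptions chosen for the example, not figures from the interview, but it shows how quickly the compounding he describes runs away.

# A minimal sketch of the doubling pattern Schmidt describes: each new
# project doubles its team size and its schedule. The starting values
# are illustrative assumptions only.

def project_generations(team_size=2, duration_years=2, generations=5):
    """Yield (generation, team size, duration) under the doubling heuristic."""
    for generation in range(generations):
        yield generation, team_size, duration_years
        team_size *= 2
        duration_years *= 2

for generation, team, years in project_generations():
    print(f"Generation {generation}: team of {team}, roughly a {years}-year schedule")

Run as written, this prints the 2-, 4-, 8-, 16-year progression quoted above; by the fourth generation the "schedule" has reached 32 years, which is the absurdity behind Schmidt's insistence on capping teams at conference-room size.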
THE COMING COLLAPSE OF THE AGE OF TECHNOLOGY*

* By David Ehrenfeld. Reprinted from Tikkun, 2/28/1999. © Institute for Labor and Mental Health.

A little-noticed event of exceptional importance occurred on the 8th of May, 1998. The conservative, power-oriented champion of science, progress, and reason, Science magazine, published an article by the distinguished British scientist James Lovelock which said:

We have confidence in our science-based civilization and think it has tenure. In so doing, I think we fail to distinguish between the life-span of civilizations and that of our species. In fact, civilizations are ephemeral compared with species.
Lovelock, originator of the Gaia Hypothesis—about the central role of life in the earth's self-regulating system that includes atmosphere, climate, land, and oceans—went on to recommend that we "encapsulate the essential information that is the basis of our civilization to preserve it through a dark age." The book would be written not on ephemeral digital magnetic or optical media but on "durable paper with long-lasting print." It would record in simple terms our present knowledge of science and technology, including the fundamentals of medicine, chemistry, engineering, thermodynamics, and natural selection. As the monasteries did in the Dark Ages, the book would help to keep our culture from vanishing during a prolonged period of chaos and upheaval.

Set aside the question of whether such a task could be done, or whether science ought to be described for future generations in a neutral way. What commands our attention first is that Science magazine was willing to print two precious pages based on the premise that our scientific-technological civilization is in real danger of collapse.
CAN THE MACHINE STOP?
Nearly everyone in our society, experts and lay people alike, assumes that the events and trends of the immediate future—the next five to twenty-five years—are going to be much like those of the present. We can do our business as usual. In the world at large, there will be a continued increase in global economic, social, and environmental management; a continued decrease in the importance of national and local governments compared with transnational corporations and trade organizations; more sophisticated processing, transfer, and storage of information; more computerized management systems along with generally decreased employment in most fields; increased corporate consolidation; and a resulting increase in the uniformity of products, lifestyles, and cultures. The future will be manifestly similar to today.

Power carries with it an air of assured permanence that no warnings of history or ecology can dispel. As John Ralston Saul has written, "Nothing seems more permanent than a long-established government about to lose power, nothing more invincible than a grand army on the morning of its annihilation." The present economic-technical-organizational structure of the industrial and most of the nonindustrial world is the most powerful in history. Regardless of one's political orientation, it's very difficult to imagine any other system, centralized or decentralized, ever replacing it. Reinforcing this feeling is the fact that our technology-driven economic system has all the trappings of royalty and empire, without the emperor. It rolls on inexorably, a giant impersonal machine, devouring and processing the world, unstoppable.
Futurists of all political varieties, those who fear and loathe the growing power as well as those who welcome it, share faith in its permanence. Even those who are aware of the earth's growing social and environmental disasters have this faith. Robert D. Kaplan, originally writing in the Atlantic Monthly in 1994, is an example. "We are entering a bifurcated world," said Kaplan in "The Coming Anarchy." Part of it, in West Africa, the Indian subcontinent, Central America, and elsewhere in the underdeveloped world, will be subject to ethnic conflict, food scarcity, massive overcrowding, militant fundamentalism, the breakdown of national governments and conventional armies, and the resurgence of epidemic disease, all against a backdrop of global climatic change. But the other part of the world will be "healthy, well-fed, and pampered by technology." We'll be all right, those of us with the money and the technology. The system will not fail us.

Despite the grip of the idea of irreversible progress on the modern mind, there are still some people who believe in the cyclical view of history. Have they generated a different scenario of the future? Not necessarily. The archaeologist Joseph Tainter notes in his book, The Collapse of Complex Societies, that collapse and disintegration have been the rule for complex civilizations in the past. There comes a time when every complex social and political system requires so much investment of time, effort, and resources just to keep itself together that it can no longer be afforded by its citizens. Collapse comes when, first, a society "invests ever more heavily in a strategy that yields proportionately less" and, second, when "parts of a society perceive increasing advantage to a policy of separation or disintegration."

Forget the Mayan and Roman Empires: what about our own? Certainly the problem of spending more and getting less describes our present condition. Are we receiving full value from an international banking and finance system that shores up global speculators with billions of dollars of public money, no matter how recklessly they gamble and whether they win or lose? Our NAFTA strategy has cost this country tens of thousands of jobs, reduced our food security, and thrown our neighbor, Mexico, into social, economic, and environmental turmoil; is this an adequate repayment for the dollars and time we have spent on free trade? More than 70 percent of government-supported research and development is spent on weapons that yield no social, and a highly questionable military, benefit; the Pentagon loses—misplaces—a billion dollars' worth of arms and equipment each year. Is this a profitable investment of public funds? We may ask whether the decline in returns on investment in this system has reached the critical point. Tainter quotes from a popular sign: "Every time history repeats itself the price goes up." The price is now astronomical.

If we follow Tainter, however, we need not worry about our future. In the curiously evasive final chapter of his book, he states that "Collapse today is neither an option nor an immediate threat." Why not? Because the entire world is part of the same complex system. Collapse will be prevented, in effect, by everyone leaning on everyone else. It reminds me of that remote island, described by the great British humorist P. G. Wodehouse, where the entire population earned a modest but comfortable living by taking in each other's washing. I don't have this kind of blind faith.
I don't believe in the permanence of our power. I doubt whether the completely globalized, totally managed, centralized world is going to happen. Techno-economic globalization is nearing its apogee; the system is self-destructing. There is only a short but very damaging period of expansion left.

Now if I were playing it comparatively safe, I would stick to the more obvious kinds of support for my argument, the things I know about as an ecologist. I would write about our growing environmental problems, especially certain kinds of pollution and ecosystem destabilization: global soil erosion; global deforestation; pollution and salinization of freshwater aquifers; desertification; saline seeps, like those that have ruined so much prime agricultural land in Australia; growing worldwide resistance of insects to insecticides; acid rain and snow; on-farm transfer of genes for herbicide resistance from crops to weeds; the loss of crop varieties; the collapse of world fisheries; the decline, especially in Europe, of mycorrhizal fungi needed for tree growth; the effects of increasing CO2 and introduced chemicals in the atmosphere, including but not limited to global warming; the hole in the ozone layer; the extinction and impending extinction of keystone species
such as the pollinators needed for the propagation of so many of our crops and wild plants; the accelerated spread of deleterious exotic species such as the Asian tiger mosquito; the emergence of new, ecologically influenced diseases, and the resurgence of old diseases, including, for example, the recent discovery of locally transmitted malaria in New Jersey, New York City, Michigan, Toronto, California, and Texas; the spread of antibiotic resistance among pathogenic bacteria; and finally the catastrophic growth of the human population, far exceeding the earth's carrying capacity—all of these things associated with the techno-economic system now in place.

Some of the problems I mentioned are conjectural, some are not; some are controversial, some are not; but even if only half or a fifth of them materialize as intractable problems, that will be quite enough to bring down this technological power structure.
THE FORCES OF INTERNAL BREAKDOWN
But I am not going to dwell on the ecological side effects of our technology, important as they are; most of them have already received at least some attention. I am leaving this comparatively safe turf to discuss the forces of internal breakdown that are inherent in the very structure of the machine. Part of the system's power comes from our faith in its internal strength and cohesiveness, our bland and infuriating confidence that somebody is at the wheel, and that the steering and brakes are working well. The causes of the problems affecting our global system are numerous, overlapping, and often obscure—I will not try to identify them. The problems themselves, however, are clear enough. I have grouped them in six broad categories.

1. The Misuse of Information. One of the most serious challenges to our prevailing system is our catastrophic loss of ability to use self-criticism and feedback to correct our actions when they place us in danger or give bad results. We seem unable to look objectively at our own failures and to adjust the behavior that caused them. I'll start with three examples.

The first example: in 1997, NASA launched the Cassini space probe to Saturn. After orbiting the earth, it is programmed to swing around Venus to gain velocity, then head back toward earth at tremendous speed, grazing us, if all control thrusters function exactly as planned, at a distance of only 312 miles, using our gravity to accelerate the probe still more and turn it into a Saturn-bound trajectory. The space probe cost $3.5 billion and carries in its nuclear energy cell seventy-two pounds of plutonium-238, the most deadly substance in existence. Alan Kohn, former emergency-preparedness operations officer at the Kennedy Space Center, described Cassini as "criminally insane." Yet this dramatic criticism from a NASA insider, plus similar concerns expressed by many outside scientists, did not stop the project.

The second example: on February 15, 1996, President Clinton launched his Technology Literacy Challenge, a $2 billion program which he hoped would put multimedia computers with fiber-optic links in every classroom. "More Americans in all walks of life will have more chances to live up to their dreams than in any other time in our nation's history," said the president. He singled out a sixth-grade classroom in Concord, New Hampshire, where students were using Macintosh computers to produce a very attractive school newspaper. Selecting two editorials for special notice, he praised the teacher for "the remarkable work he has done." An article in New Jersey's Star-Ledger of February 26, 1996 gave samples of the writing in those editorials. The editorial on rainforest destruction began: "Why people cut them down?" The editorial about the president's fights with Congress said, "Conflicts can be very frustrating. Though, you should try to stay away from conflicts. In the past there has been fights."

The third example: around the world, funds are being diverted away from enormously successful, inexpensive methods of pest control, such as the use of beneficial
insects to attack pests, to the costly, risky, and unproven technologies favored by multinational, biotechnology corporations. Hans Herren, whose research in the biological control of the insect pests of cassava helped avert a famine threatening 200 million Africans, said: “When I visit [African] agricultural research institutes, I find the biological control lab half empty, with broken windows ... but the biotechnology lab will be brand new with all the latest equipment and teeming with staff.”

These examples, superficially quite different, show that we are not using the information at hand about the results of our past actions to guide and direct what we plan to do next. This inability to correct ourselves when we go astray is exacerbated by the dangerously high speed of our decision-making (Jeremy Rifkin calls it the “nanosecond culture”), a consequence of modern, computer-assisted communications. This speed short-circuits the evolutionary process of reasoned decision-making, eliminating time for empirical feedbacks and measured judgment. Messages arriving by express mail, fax, and e-mail all cry out for an immediate response. Often it is better to get a night’s sleep before answering.

A final example of the misuse of information is information glut. We assume these days that information is like money: you can’t have too much of it. But, in fact, too much information is at least as bad as too little: it masks ignorance, buries important facts, and incapacitates minds by overwhelming the critical capacity for brilliant selectivity that characterizes the human brain. That quantity and quality are so often inversely related in today’s information flow compounds this problem. If our feedback alarm bells were sounding properly, we would curtail the flow of junk—instead, we worship it.

2. The Loss of Information. The acceleration of obsolescence is a plague afflicting all users of contemporary technology. Although obsolescence is an inherent part of any technology that isn’t moribund, several factors have combined in the last few decades to exaggerate it out of manageable proportions. One factor is the sheer number of people involved in technology, especially information technology—each has to change something or make something new to justify a salary. Another factor is the market’s insistence on steadily increasing sales, which in turn mandates an accelerated regimen of planned obsolescence. The social disruption caused by accelerated obsolescence is well known. A less familiar, yet equally important, result is the loss of valuable knowledge. The technical side of this was described by Jeff Rothenberg in an article in the January 1995 issue of Scientific American, entitled “Ensuring the Longevity of Digital Documents.” It turns out that neither the hardware nor the software that underlie the information revolution has much staying power. “It is only slightly facetious,” says Rothenberg, “to say that digital information lasts forever—or five years, whichever comes first.” The most durable digital storage medium, the optical disk, has a physical lifetime of only thirty years and an estimated time to obsolescence of ten years. Digital documents are evolving rapidly, and shifts in their basic form are frequent. Translation backwards or forwards in time becomes difficult, tedious, and expensive—or impossible. The result is the loss of much of each previous generation’s work, a generation being defined as five to twenty years.
There is always something “better” coming; as soon as it arrives, we forget all about it. One striking example of the obsolescence nightmare, documented by Nicholson Baker in The New Yorker and by Clifford Stoll in his book Silicon Snake Oil, concerns the widespread conversion of paper library card catalogs to electronic ones. Having spent a fortune to convert their catalogs, libraries now find themselves in an electronic-economic Catch-22. The new catalogs don’t work very well for many purposes, and the paper catalogs have been frozen or destroyed. Better electronic systems are always on the horizon. Consequently, libraries spend a third or more of their budgets
on expensive upgrades of software and hardware, leaving little money for books and journals.

A second example of the effects of obsolescence is the wholesale forgetting of useful skills and knowledge—everything from how to operate a lathe to how to identify different species of earthworms. Whole branches of learning are disappearing from the universities. The machine is jettisoning both knowledge and diversity (a special kind of information) simultaneously. To illustrate the loss of biodiversity, biologists Stephen Hall and John Ruane have shown that the higher the GNP in the different countries of Europe—the more integrated into “the system” they are—the higher the percentage of extinct breeds of livestock. I’m sure that the same relationship could be shown for agricultural crop varieties or endangered languages. The system is erasing our inheritance.

Another problem involving the loss of information is incessant reorganization, made easier by information technology and causing frequent disruption of established social relationships among people who work and live together. Changes occur too rapidly and too often to permit social evolution to work properly in business, in government, in education, or in anything touched by them. An article by Dirk Johnson in the “Money and Business” section of The New York Times of March 22, 1998 described some recent problems of the Leo Burnett advertising agency, which gave the world the Jolly Green Giant and the Marlboro Man. Johnson described one especially serious trouble for a company that prides itself on its long-term relationships with clients: “No one at Burnett can do much about a corporate world that shuttles chief executives in and out like managers on a George Steinbrenner team and that has an attention span that focuses on nothing older than the last earnings report. It is not easy to build client loyalty in such a culture, as many other shops can attest.”

3. Increasing Complexity and Centralized Control. A third intrinsic problem with the techno-economic system is its increasing complexity and centralized control, features of much of what we create—from financial networks to nuclear power plants. Nature, with its tropical rainforests, temperate prairies, and marine ecosystems, is also complex. But nature’s slowly evolved complexity usually involves great redundancy, with duplication of functions, alternative pathways, and countless, self-regulating, fail-safe features. Our artificial complexity is very different: it is marked by a high degree of interlinkage among many components with little redundancy, by fail-safe mechanisms that are themselves complex and failure-prone, and by centralized controllers who can never fully comprehend the innumerable events and interactions they are supposed to be managing. Thus, our artificial systems are especially vulnerable to serious disturbances. System-wide failures—what the Yale sociologist Charles Perrow calls “normal accidents”—occur when one component malfunctions, bringing down many others that are linked in ways that are poorly observed and understood. The fruits of this complexity and linkage are everywhere, from catastrophic accidents at chemical and nuclear plants to the myriad effects of accelerated climatic change. Accidents and catastrophes are the most noticeable results of running a system that is beyond our full understanding, but the more routine consequences, those that don’t necessarily make the front-page headlines, may be more important.
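A minimal reliability sketch, assuming independent component failures, makes the contrast concrete. If a system works only when all n of its components work, each with probability r, while a redundant design survives so long as any one of n duplicates works, then

\[
R_{\text{series}} = r^{\,n}, \qquad R_{\text{redundant}} = 1 - (1 - r)^{n}.
\]

A chain of n = 100 components, each 99 percent reliable, functions only about 0.99^100 ≈ 0.37 of the time; a single backup, by contrast, raises one component’s reliability from 0.99 to 1 − (0.01)² = 0.9999. Nature’s duplicated pathways buy the second kind of robustness; our tightly interlinked artificial systems mostly gamble on the first.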
Many of these consequences stem from a phenomenon first described by John von Neumann and Oskar Morgenstern in 1947, and applied to social systems by the biologist Garrett Hardin in 1968. Von Neumann and Morgenstern pointed out that it is mathematically impossible to maximize more than one variable in an interlinked, managed system at any particular time. If one variable is adjusted to its maximum, it limits the freedom to maximize other variables—in other words, in a complex system we cannot make everything “best” simultaneously. As we watch economists desperately juggling stock prices, wages, commodity prices, productivity, currency values, national debts, employment, interest rates, and technological investments on a global scale, trying to maximize them all, we might think about von Neumann’s and Morgenstern’s theory and its implications for the fate of our complex but poorly redundant techno-economic machine.
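The underlying mathematics admits a one-line illustration; this toy version, assuming just two objectives drawing on one shared budget B, is a sketch rather than von Neumann and Morgenstern’s own formulation:

\[
\max \; u_1 = x \quad \text{and} \quad \max \; u_2 = y, \qquad \text{subject to} \quad x + y = B, \;\; x, y \ge 0.
\]

Setting x = B maximizes u_1 but forces u_2 = 0, and along the constraint every gain in one objective is an exactly offsetting loss in the other (du_2 = −du_1). With dozens of interlinked variables rather than two, the best any controller can achieve is a trade-off, never a simultaneous maximum of everything.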
4. Confusing Simulation with Reality. The fourth group of problems is the blurring of the distinction between simulation and reality. With globalization, the ease of making large changes in a simulation on the computer screen is accompanied by a corresponding ease of ordering these large changes in the real world—with disastrous results in activities as different as the planning of massive public works projects, the setting of monetary policy, and the conduct of war. Beginning with the Vietnam war, all contemporary American military adventures have had this through-the-looking-glass quality, in which the strategies and simulations conform poorly to actual events on the ground. As Martin van Creveld shows in his book, The Transformation of War, the simulated war of the military strategists is increasingly different from the realities of shifting, regional battlefields with their “low-intensity conflicts” against which high-tech weapons and classic military strategies are often worse than useless. As we attempt to exert more complicated controls over our world, more modeling, with its assumptions and simplifications, is needed. This in turn causes all kinds of errors, some serious, most hidden. According to James Lovelock, years before the ozone hole was discovered by a lone pair of British observers using an old-fashioned and inexpensive instrument, it was observed, measured, and ignored by very expensive satellite-borne instruments which had been programmed to reject data that were substantially different from values predicted by an atmospheric model. Simulation had triumphed over reality.

What I call the “pseudocommunity problem” is another illustration of the fuzzy line that exists between simulation and reality. It began with television, which surrounded viewers with friends they did not have and immersed them in events in which they did not participate. The philosopher Bruce Wilshire, an articulate and charismatic lecturer, has observed that students who are otherwise polite and attentive talk openly and unselfconsciously during his lectures, much as if he were a figure on a television screen who could not be affected by their conversation. E-mail and the Internet have made this situation much worse. E-mail has opened up a world of global communications that has attracted many of our brightest and most creative citizens, especially young people. Considerable good has come of this—for the handicapped, for those who have urgent need of communicating with people in distant places, for those living in politically repressed countries, and others. But the ease and speed of e-mail are traps that few evade. Real human interaction requires time, attention to detail, and work. There is a wealth of subtlety in direct conversation, from body language to nuances of voice to choice of words. In e-mail this subtlety is lost, reduced to the level of the smiley face used to indicate a joke. The result is a superficial, slipshod substitute for effective communication, often marked by careless use of language and hasty thought. Every hour spent online in the “global village” is an hour not spent in the real environment of our own communities.
It is an hour of not experiencing the love and support of good neighbors; an hour of not learning how to cope with bad neighbors, who cannot be erased by a keystroke; an hour of not becoming familiar with the physical and living environment in which we actually live. Perhaps this is why a recent study of the social involvement and psychological well-being of Internet users, published in American Psychologist, found a significant decrease in the size of their social circle and a significant increase in their depression and loneliness after one to two years on-line. There are no good substitutes for reality.
5. The Unnecessary Exhaustion of Resources. Our techno-economic system is distinguished by its exceptionally high consumption of renewable and nonrenewable resources. When this is pointed out, advocates of the system answer that substitutes for depleted resources will be found or new technologies will eliminate the need for them. To date, neither of these claims has been demonstrated to be true in any significant case. Meanwhile, resources—from food to forests, from fresh water to soil—are disappearing quickly. Blindness to warnings of impending shortage limits our options: by not responding while there is still time and there are resources left to take corrective action, we are forced to work under crisis conditions, when there is little that can be done.

The problem is too familiar to require much elaboration, but the most conspicuous example deserves brief mention. The global production of oil will probably peak—and thereafter decline—sometime between 2000 and 2010, regardless of new oil field development. Almost every part of our technology, including nuclear technology, depends on oil. Oil is not going to disappear any time soon. But we have already used more than half of the world’s supply, with most of this consumption in the last few decades. As the recent report of Petroconsultants S. A. and the book, The Coming Oil Crisis, by the distinguished petroleum geologist C. J. Campbell make plain, we have only a very few years left of cheap oil. The loss of cheap oil will strike far more deeply than can be predicted by economists’ price-supply curves; it will fatally damage the stability of the transnational corporations that run our global techno-economic system.

Transnational corporations are, ultimately, economic losers. Too often they rely on the sale of products that don’t work well and don’t last, that are made in unnecessarily expensive ways (usually as a result of making them very quickly), that are expensively transported, carry high environmental and human costs, and are purchased on credit made available by the seller. At present, these products are subsidized by subservient, lobbyist-corrupted governments through tax revenues and favorable regulation; their flaws are concealed by expensive advertising promotions which have coopted language and human behavioral responses in the service of sales; and they are imposed on consumers by the expensive expedient of suppressing alternative choices, especially local alternatives. All of this depends on the manipulation of a huge amount of surplus wealth by the transnationals, wealth that has been generated by cheap oil. When the oil becomes expensive, with no comparably inexpensive energy substitutes likely, when jobs disappear and the tax base shrinks, when consumers become an endangered species, and when corporate profits dwindle and the market values of their stocks decline, the fundamental diseconomies of global corporations will finally take their toll and we will begin to see the transnationals disintegrate.

6. The Loss of Higher Inspiration. There is one final, internal problem of the system, maybe the most important: namely, a totally reductionist, managed world is a world without its highest inspiration. With no recognized higher power other than the human-made system that the people in charge now worship, there can be no imitation of God, no vision of something greater to strive for. Human invention becomes narrow, pedestrian, and shoddy; we lose our best models for making lasting, worthy societies.
One such model is the noble dream of people and their communities functioning nondestructively, justly, and democratically within a moral order. The other—long a reality—is nature itself, whose magnificent durability we will never totally comprehend, but which has much to teach us if we want to learn. When people and communities become mere management units and nature is only something to exploit, what is left worth striving after? We become no better than our machines, and just as disposable.
THE END OF GLOBAL MANAGEMENT

The reductionist idea of a fully explainable and manageable world is a very poor model of reality by any objective standard. The real world comprises a few islands of limited understanding in an endless sea of mystery. Any human system that works and survives must recognize this. A bad model gives bad results. We have adopted a bad model and now we are living with the terrible consequences. The present global power system is a transient, terminal phase in a process that began 500 years ago with the emerging Age of Reason. It has reached its zenith in the twentieth century, powered by the global arms trade and war and enabled by a soulless, greed-based economics together with a hastily developed and uniquely dangerous technology. This power system, with its transnational corporations, its giant military machines, its globalized financial system and trade, its agribusiness replacing agriculture—with its growing numbers of jobless people and people in bad jobs, with its endless refugees, its trail of damaged cultures and ecosystems, and its fatal internal flaws, is now coming apart. Realization of the machine’s mortality is the necessary first step before we begin to plan and work for something better. As the great British philosopher Mary Midgley says, “The house is on fire; we must wake up from this dream and do something about it.” Looming over us is an ominous conjunction of the internal sources of breakdown I have just described with the many, interlinked ecological and social threats that I only briefly listed.

What can we do? Obviously a crash as comprehensive as the one that’s coming will affect all of us, but that doesn’t mean that there is nothing that can be done to soften the blow. We should begin by accepting the possibility that the system will fail. While others continue to sing and dance wildly at the bottom of the avalanche slope, we can choose to leave the insane party. I do not mean going back to some prior state or period of history that was allegedly better than the world today. Even if going back were possible, there is no halcyon period that I would want to regain. Nor do I mean isolating ourselves in supposedly avalanche-proof shelters—gated communities of like-minded idealists. No such shelter could last for long; nor would such an isolated existence be desirable. In the words of my friend, geographer Meg Holden, we should be unwilling “to admit defeat in the wager of the Enlightenment that people can create a nation based not on familial, racial, ethnic, or class ties, but on ... the betterment of self only through the betterment of one’s fellow citizens.”

There is no alternative but to move forward—a task that will place the highest demands on our ability to innovate and on our humanity. Moving forward requires that we provide satisfying alternatives to those who have been most seriously injured by the present technology and economics. They include farmers, blue-collar workers suddenly jobless because of unfair competition from foreign slave labor or American “workfare,” and countless souls whose lives and work have been made redundant by the megastores in the shopping malls. If good alternatives are not found soon, the coming collapse will inevitably provoke a terrible wave of violence born of desperation.
CREATING A SHADOW SYSTEM

Our first task is to create a shadow economic, social, and even technological structure that will be ready to take over as the existing system fails. Shadow strategies are not new, and they are perfectly legal. An illustration is Winston Churchill’s role in Britain before the start of World War II. Churchill was a member of the governing Conservative Party but was denied power, so he formed his own shadow organization within the party. During the 1930s, while Hitler was rearming Germany and the Conservative leadership was pretending that nothing was happening, Churchill spoke out about the war he knew was coming, and developed his own plans and alliances. When Hitler’s paratroopers landed in Holland and Belgium in 1940, and Prime Minister Neville Chamberlain’s silk umbrella could not ward them off, Churchill was chosen by popular acclaim to replace
him as prime minister. He created a dynamic war cabinet almost overnight, thanks to his shadow organization.

The shadow structure to replace the existing system will comprise many elements, with varying mixes of the practical and theoretical. These elements are springing up independently, although linkages among them are beginning to appear. I will give only two examples; both are in the early stages of trial and error development.

The first is the rapid growth of community-supported agriculture (CSA) and the return of urban farmers’ markets. In CSAs, farmers and local consumers are linked personally by formal agreements that guarantee the farmers a timely income, paid before the growing season, and the consumers a regular supply of wholesome, locally grown, often organic, produce. The first CSA project in the United States was started in 1985, in western Massachusetts, by the late Robyn Van En—just thirteen years later, there are more than 600 CSAs with over 100,000 members throughout the United States. Urban farmers’ markets similarly bring city-dwellers into contact with the people who grow their food, for the benefit of both. Although difficulties abound—economic constraints for farmers whose CSAs lack enough members, the unavailability of the benefits of CSAs to the urban poor, who do not have cash to advance for subsequent deliveries of produce—creative solutions seem possible. A related development has been the burgeoning of urban vegetable gardening in cities across the country. One of the most exciting examples is the garden project started by Cathrine Sneed for inmates of the San Francisco Jail and subsequently expanded into the surrounding urban community.

On another front, less local and immediate but equally important, is the embryonic movement to redefine the rights of corporations, especially to limit the much-abused legal fiction of their “personhood.” The movement would take away from corporations the personal protections granted to individuals under the U.S. Constitution and Bill of Rights. The Constitution does not mention corporations; state charter laws originally made it plain that corporations could exist and do business only through the continuous consent of state governments. If a corporation violated the public trust, its charter could be revoked. The loss of the public right to revoke charters of corporations and the emergence of the corporation as an entity with limited liability and the property and other personal rights of citizens, was a tragic and often corrupt chapter in nineteenth-century American law. It has led to our present condition, in which transnational corporations with no local or national allegiances control many of the government’s major functions, subverting democracy and doing much to create the unstable conditions I have described. Recapturing the government’s right to issue and cancel corporate charters should be a primary goal of those trying to build a more durable and decent social, economic, and technical system. Michael Lerner, editor of Tikkun, has suggested that we add a Social Responsibility Amendment to the Constitution containing the key provision that each corporation with annual revenues of $20 million or more must receive a new corporate charter every twenty years. Similar ideas are being advanced by Richard Grossman and Ward Morehouse of the Program on Corporations, Law & Democracy; by Peter Montague, editor of Rachel’s Environment & Health Weekly; and by others in the United States and Canada.
Although still embryonic, the movement has drawn support from both conservatives and liberals—the shadow structure is neither of the right nor the left, but is an emerging political alliance that may gain power when the transnationals decline.

In the words of Vaclav Havel, president of the Czech Republic, spoken in Philadelphia’s Independence Hall on July 4, 1994: “There are good reasons for suggesting that the modern age has ended. ... It is as if something were crumbling, decaying and exhausting itself, while something else, still indistinct, were arising from the rubble.” What is crumbling is not only our pretentious techno-economic system but our naive faith in our ability to control and manage simultaneously all the animate and inanimate functions of this planet. What is arising—I hope in time—is a new spirit and system rooted in love of community, and love of the land and nature that sustain community. And the greatest challenge will be to make this spirit and system truly new and truly enduring by finding ways to develop our love of nature and community without returning to destructive nationalisms, without losing our post-Enlightenment concern for the common good of the rest of humankind and nature.
3
Public Sector Perspectives on Technology
CONTENTS

Chapter Highlights
Industrial Competitiveness and Technological Advancement: Debate over Government Policy
  Summary
  Most Recent Developments
  Background and Analysis
    Technology and Competitiveness
    Federal Role
    Legislative Initiatives and Current Programs
    Increased R&D Spending
    Industry–University Cooperative Efforts
    Joint Industrial Research
    Commercialization of the Results of Federally Funded R&D
    A Different Approach?
    107th Congress Legislation
  Legislation
Science and Technology Policy: Issues for the 107th Congress, Second Session
  Summary
  Introduction
  Issues
    Research and Development Budgets and Policy
    National Institutes of Health (NIH)
    Defense Science and Technology
    Public Access to Federal R&D Data
    Quality of Federal R&D Data
    Government Performance and Results Act (GPRA) and the President’s Management Agenda
    Science and Technology Education
    Homeland Security
    Aviation Security Technologies
    Critical Infrastructure
    Advanced Technology Program
    Technology Transfer
    Federal R&D, Drug Costs, and Availability
    Telecommunications and Information Technology
    Slamming
    Broadband Internet Access
    Spectrum Management and Wireless Technologies
    Internet Privacy
    E-Government
    Federal Chief Information Officer (CIO)
    Information Technology R&D
    Voting Technologies
    Biotechnology: Privacy, Patents, and Ethics
    Global Climate Change
    Aeronautics R&D
    Space Programs: Civil, Military, and Commercial
    Commercial Satellite Exports
  Related CRS Reports
    Research and Development Budgets and Policy
    Homeland Security
    Technology Development
    Telecommunications and Information Technology
Rethinking Silicon Valley: New Perspectives on Regional Development
  Abstract
  Introduction
  Conventional Modeling of Silicon Valley
  Problems with Modeling
  Issues Central to the Project of Developing a Model to Produce Silicon Valley Effects in Southeast Asia
  Starting Points for an Alternative Model
  States and Silicon Valley Models in Southeast Asia
    States in Southeast Asia
    Problematic State Effects
    Control/Direction Orientation
    Outcomes Driven
    Short-Term Focus
  Universities and Silicon Valley Models in Southeast Asia
    Need for Universities
  Firms and Silicon Valley Models in Southeast Asia
    Culture
  Conclusion
  Notes
GMOs: Generating Many Objections
  Advantages of GMOs
  Human Implications
  Testing for the Presence of GMOs
  Scientific Testing of the GMO Process
  International Objections to GMOs
  International Conferences on GMOs
  International Regulation of GMOs
    European Economic Community
    The World Trade Organization
    Regulations in the United States
  Patents on GMOs
  Summary
  References
Technologies in Transition, Policies in Transition: Foresight in the Risk Society
  Abstract
  Introduction
  The Risk Society and Its Implications for Science Policy
  The Move to Foresight
  Foresight in More than One Country
  Tensions in Foresight
  Foresight, Risk and the NHS
  Conclusion
  Acknowledgments
  References
New Technology and Distance Education in the European Union: An EU–U.S. Perspective
  Facts and Trends in the Higher Education Sector
  European Schools Are Going Online
  What Next for Education in Europe?
  References
Information Technology and Elementary and Secondary Education: Current Status and Federal Support
  Summary
  Recent Action
  Introduction
  Current Interest in Technology for Elementary and Secondary Education
  Status of Technology in Schools
  Major Issues
    Impact of Technology
    Cost of Technology
    Differences in Access to Technology
    Access to the Internet
    Amount of Technology Use and Types of Uses
    Training of the Teaching Force
    Curriculum Development
  Federal Support for Technology in Schools
    Characteristics of Current Federal Support
    Selected Federal Programs and Activities Supporting Technology in Education
    U.S. Department of Education
    U.S. Department of Agriculture
    U.S. Department of Commerce
    National Aeronautics and Space Administration
    National Science Foundation
    Federal Communications Commission—E-Rate Program
    Other Federal Activities
  Federal Policy Questions
    Should the Federal Government Provide Support for Applying Technology to Elementary and Secondary Education?
    What Activities, If Any, Should the Federal Government Support?
    How Should Federal Support Be Provided?
    What Level of Federal Support Should Be Provided?
  References
Strategic Management of Government-Sponsored R&D Portfolios: Lessons from Office of Basic Energy Sciences Projects
  Abstract
  Introduction
  A “Portfolio” Approach to Government R&D Management and Evaluation
    A “Constrained” Portfolio Approach
    Contrasting Government R&D Portfolio Approaches
  Research Design and Procedures: The Research Value Mapping Project
  Portfolio One: Output Maximization
    Output Portfolio: Basic Research and Scientific Knowledge
    Output Portfolio: Technology Development and Transfer
    Output Portfolio: Software and Algorithms
  Portfolio Two: Balanced Portfolio
    Balanced Portfolio, Part One: Graduate Student Training
    Balanced Portfolio, Part Two: Causes and Effects of Balance
  Conclusions: Outputs, Impacts, and Portfolio Strategy
    Funding Stability
    CRADAs and Cooperative Research
    The Basic/Applied False Dichotomy
    Manager Leverage
    Developing a Balanced Portfolio
  References
Technology Transfer: Use of Federally Funded Research and Development
  Summary
  Most Recent Developments
  Background and Analysis
    Technology Transfer to Private Sector: Federal Interest
    Technology Transfer to State and Local Governments: Rationale for Federal Activity
    Current Federal Efforts to Promote Technology Transfer
    Federal Laboratory Consortium for Technology Transfer
    P.L. 96-480, P.L. 99-502, and Amendments
    P.L. 100-418, Omnibus Trade and Competitiveness Act
    Patents
    Small Business Technology Transfer Program
    Further Considerations
    107th Congress Legislation
  Legislation
    P.L. 108-7, H.J.Res. 2 Omnibus FY2003 Appropriations Act
    H.R. 175 (Royce)
China: Possible Missile Technology Transfers from U.S. Satellite Export Policy—Actions and Chronology
  Summary
  Introduction and Issues for Policy
  Security Concerns
    China Great Wall Industry Corporation
    Missile Technology or Expertise
  Administration and Congressional Action
    Policies on Sanctions and Space Launch Agreement
    Waivers for Post-Tiananmen Sanctions
    Additional Congressional Mandates
  Hearings of the 105th Congress
  Investigations
    Cox Committee
    Clinton Administration’s Response
    Senate Task Force
    Clinton Administration’s Response
    Export Controls and Intelligence
  Legislation to Revise Export Controls
    105th Congress
    106th Congress
    107th Congress
  Denied and Pending Satellite Exports
    Role of Congress
    Chinasat-8
    Others
  Chronology of Major Events (year by year, 1988–2002)
  References
Change does not take time; it takes commitment.
Thomas Crum
Decisions to invest in technology solely for technology reasons rarely support improved business performance.
Ronald J. Stupak
CHAPTER HIGHLIGHTS
Themes threading throughout this chapter center around:

† Information management and the bittersweet emergence of information as a commodity
† Technology-driven partnerships and partnering between private and public sector organizations ... the potential and the pitfalls
† The impact that technology is having on organizational structure, function, and cost
† Values conflicts arising as the private and public sectors continue to intersect nationally and internationally
† The highly charged and gray world of public–private collaboration (Note: Differing approaches from nation-to-nation and administration-specific oscillations within nations contextualize the complexity of this area)
† Ownership rights and the funding and management of technology through national and international “co-opetition”
† Acquisition, education, and implementation issues surrounding operationalization of promulgated policy
† Security concerns that the next terrorist incident in the United States will involve national infrastructure, namely our information processing acumen
Public sector perspectives are necessarily many, varied, and complex. To capture the essence of concerns, a panoply of settings (international, federal, state, and local) and topics (law, politics, policy, economics, and education) are pursued. Patterns of policy issues connect the seemingly disparate topics, including (1) public–private collaboration, (2) ownership rights arising from research and development efforts, (3) technology acquisition, education, and implementation, and (4) security concerns. Examples of the interesting work being done in the public sector follow.

From an international perspective:

† In many industrialized countries, information and communication technology is seen as a means through which governments can address issues of social exclusion. Beyond rhetorical concerns over bridging the perceived "digital divide" and alleviating disparities between the information "rich" and "poor," little critical consideration has been given as to how technology is achieving socially inclusive aims.
† The organizational principle of the triple helix is the expectation that the university will play a greater role in society, the so-called "third mission." The triple helix thesis is that university–industry–government interaction is the key to improving the conditions for innovation in a knowledge-based society. Industry is a member of the triple helix as the locus of production; the government, as the source of contractual relations that guarantee stable interactions and exchange; the university, as a source of new knowledge and technology, the generative principle of knowledge-based economies.

From the federal perspective:

† Policy ideas serve as a prism that determines a policy maker's perception of a situation and the parameters for possible action. Ideas exert their greatest impact through their influence on agenda setting, be it a public or private setting.
† With more than $100 billion in play, the promulgation of science and technology policy at the federal level has significant private sector and public sector implications. Congress has a wide range of science and technology issues with which to deal. E-government, Internet privacy, broadband access, biotechnology, and national/homeland security are among the weighty issues.
† There is ongoing interest in the pace of U.S. technological advancement due to its influence on economic growth, productivity, and industrial competitiveness. Because of the lack of consensus on the scope and direction of national policy (perhaps most pointedly debated through funding augmentation of private sector technological development), Congress has taken an incremental approach to funding new and ongoing research.
† The federal government has shown a heightened commitment to IT security, but the lack of information sharing between federal and state governments remains a critical problem. Without seamless sharing of information among and through layers of government, the United States will be hard-pressed to successfully move forward in critical infrastructure protection.

From state and local perspectives (including education):

† Larger local municipal governments are likely to be more proactive and strategic in advancing e-government. Of those that have entered the e-world, many municipal governments are still in either stage one or stage two, where they simply post and disseminate government information over the Web or provide on-line channels for two-way communication, particularly for public service requests. Overall, the current state of non-federal e-government is still very primitive.
† Rapid advances in communication and technology mean that the capacity to learn throughout life has become as important to human survival—if not quite so immediate—as access to food, water, and shelter. It is now generally acknowledged that a highly educated population is essential for an economy to thrive. Education is therefore demonstrably an investment in the future economic health of a society. In many industrialized countries, information and communication technology is now being seen as a ready means through which governments can address issues of social exclusion. Yet beyond rhetorical concerns over bridging the perceived "digital divide" and alleviating disparities between the information rich and poor, little critical consideration has been given as to how technology is being used to achieve socially inclusive aims.
† Are universities moving from "brick and mortar" to "click and mortar"? New challenges call for a better understanding of the issues at stake and for appropriate concerted decisions among all actors involved. The use of information and communication technologies will continue to follow a progressive evolutionary route. The university of tomorrow is more likely to offer a mix of on-campus and Internet-based off-campus courses to respond to the increasing need for accessibility, diversity, flexibility, and affordability of education services. One of the critical success factors for the universities lies in their ability to address the lifelong learning market in collaboration amongst themselves but also with private partners.
The inwardly directed (operational) view of technology:

† Organizational leaders have been largely ineffective in administering the information technology function with the same rigor they bring to running the business generally. The management of information technology is still left to information technology leaders, who struggle to balance the changing, multiple demands of the companies for which they work. Business leaders must assertively and actively engage the information technology function by owning decisions about information technology instead of just making them and assuming someone else will be accountable.
† Whether military, commercial, or non-profit, enterprises must react to changes in their environments by leveraging available information to make effective decisions. Consequently, managing information plays a central role in any enterprise, public or private, and is a major factor influencing the quality of its decisions.
† Operations are becoming increasingly fast-paced and diverse as the management of information moves closer to the center of power. Communication systems move, transport, and store information in such massive amounts that we speak of information overload, that is, having rapid and volumetric access to more information than we can effectively process and use. Information management in the private sector must focus on leveraging information by having it in the right place, at the right time, and in the right format to facilitate collaborative partnering and to gain or maintain market advantage.

The outwardly directed (market) view of technology:

† The contemporary model for success in the private sector and non-profit worlds is partnership. Businesses, foundations, non-profits, and government agencies are encouraging the development of partnerships to streamline operations, connect with markets, enhance capabilities, and reduce costs.
† The public and private sectors are increasingly sustained by networks: soft networks of social interaction supported by hard networks of computerware that plugs into the Internet. The blurring that is occurring underscores the need to be purposeful in recognizing, managing, and evaluating these networks.
† The opportunistic public sector individual will gain insight and perspective by critically assessing technology development, application, and evaluation in the private sector. If there is one point for public "environmental scanning" of private interests, it is critically assessing technology while at the same time continually adapting best practices into the public sector.
† Start-up companies face the dilemma of whether to build technology infrastructure from scratch or to buy prepackaged applications at a higher cost to get up and running quickly; a simple cost comparison of the kind sketched after this list can frame the choice. To properly determine direction, start-ups need to study both the competition that already exists and the likelihood of other competitors developing.
† If government agencies face incentives to fund the most commercially promising proposals they receive, they will be inclined to support projects that would be privately profitable—and thus would be undertaken anyway—rather than projects that would benefit society but are privately unprofitable.
† "Management-based regulation" is gaining favor. To a significant degree, management-based regulation shifts the locus of policy decision-making from the government to private parties. Instead of specifying technology or performance standards, regulators outline criteria for private sector planning and conduct varying degrees of oversight to ensure that firms are engaging in effective planning and implementation that satisfies the established regulatory criteria.
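The build-versus-buy dilemma noted above lends itself to a simple illustrative calculation. The following Python sketch is purely hypothetical (the function, figures, and the single delay-cost parameter are illustrative assumptions, not a prescribed method), but it shows how a faster, costlier prepackaged option can still win once time-to-market is priced in:

    def compare_build_vs_buy(build_cost, build_months, buy_cost, buy_months,
                             monthly_delay_cost):
        """Toy comparison: total cost of each option is its direct cost plus
        the opportunity cost of every month spent before going to market."""
        build_total = build_cost + build_months * monthly_delay_cost
        buy_total = buy_cost + buy_months * monthly_delay_cost
        return ("build" if build_total < buy_total else "buy",
                build_total, buy_total)

    # Hypothetical start-up: building is cheaper up front but six months slower.
    print(compare_build_vs_buy(200_000, 9, 350_000, 3, 40_000))
    # -> ('buy', 560000, 470000): the prepackaged route wins despite its price.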
INDUSTRIAL COMPETITIVENESS AND TECHNOLOGICAL ADVANCEMENT: DEBATE OVER GOVERNMENT POLICY*

SUMMARY

There is ongoing interest in the pace of U.S. technological advancement due to its influence on U.S. economic growth, productivity, and international competitiveness. Because technology can contribute to economic growth and productivity increases, congressional interest has focused on how to augment private-sector technological development. Legislative activity over the past decade has created a policy for technology development, albeit an ad hoc one. Because of the lack of consensus on the scope and direction of a national policy, Congress has taken an incremental approach aimed at creating new mechanisms to facilitate technological advancement in particular areas and making changes and improvements as necessary. Congressional action has mandated specific technology development programs and obligations in federal agencies that did not initially support such efforts. Many programs were created based upon what individual committees judged appropriate within the agencies over which they had authorization or appropriation responsibilities. The use of line item funding for these activities, including the Advanced Technology Program and the Manufacturing Extension Partnership of the National Institute of Standards and Technology, as well as for the Undersecretary for Technology at the Department of Commerce, is viewed as a way to ensure that the government encourages technological advance in the private sector.

The Clinton–Gore Administration articulated a national technology policy during its first term and continued to follow its guidance. This policy included both direct and indirect governmental support for private sector activities in research, development, and commercialization of technology. Many of the ideas reflected past congressional initiatives. Some legislative activity beginning in the 104th Congress has been directed at eliminating or significantly curtailing many of these federal efforts. Although this approach was not successful, the budgets for several programs declined. Similar questions were raised concerning the proper role of the federal government in technology development and the competitiveness of U.S. industry. As the 108th Congress develops its budget priorities, how the government encourages technological progress in the private sector again may be explored and/or redefined.

* From The CRS Report for Congress, by Wendy H. Schacht, Resources, Science, and Industry Division. © Copyright 2003 Congressional Research Service.
MOST RECENT DEVELOPMENTS

The former Clinton Administration adopted a strategy for technological advancement as part of a defined national economic policy, an approach initially supported by congressional initiatives that supplemented funding for various technology development activities including the Advanced Technology Program (ATP) and the Manufacturing Extension Partnership (MEP) at the National Institute of Standards and Technology. However, many of these efforts have been revisited since the 104th Congress, given the Republican majority's statements in favor of indirect measures such as tax policies, intellectual property rights, and antitrust laws to promote technological advancement; increased government support for basic research; and decreased direct federal funding for private sector technology initiatives. While no program was eliminated, several were financed at reduced levels.

In the 107th Congress, the Small Business Technology Transfer Program was extended through FY2009 by P.L. 107-50. P.L. 107-77 appropriated $106.5 million for the Manufacturing Extension Partnership during FY2002 and provided $184.5 million for the Advanced Technology Program. The President's FY2003 budget requested $108 million for ATP and $13 million for MEP. This latter figure, a significant reduction from the previous fiscal year, was based on the understanding that all manufacturing extension centers that have operated more than six years should continue without federal funding. Several Continuing Resolutions funded these programs at FY2002 levels until the 108th Congress enacted P.L. 108-7, which provided FY2003 appropriations of $180 million for ATP and $106.6 million for MEP. H.R. 175, introduced January 7, 2003, would abolish ATP. The FY2004 budget proposed by the Administration includes $27 million for ATP to cover ongoing commitments; no new projects would be funded. Manufacturing extension would be financed at $12.6 million to operate centers that have not reached six years of federal support.
BACKGROUND AND ANALYSIS

Technology and Competitiveness

Interest in technology development and industrial innovation increased as concern mounted over the economic strength of the nation and over competition from abroad. For the United States to be competitive in the world economy, U.S. companies must be able to engage in trade, retain market shares, and offer high quality products, processes, and services while the nation maintains economic growth and a high standard of living. Technological advancement is important because the commercialization of inventions provides economic benefits from the sale of new products or services, from new ways to provide a service, or from new processes that increase productivity and efficiency.
It is widely accepted that technological progress is responsible for up to one-half the growth of the U.S. economy, and is one principal driving force in long-term growth and increases in living standards. Technological advances can further economic growth because they contribute to the creation of new goods, new services, new jobs, and new capital. The application of technology can improve productivity and the quality of products. It can expand the range of services that can be offered as well as extend the geographic distribution of these services. The development and use of technology also plays a major role in determining patterns of international trade by affecting the comparative advantages of industrial sectors. Since technological progress is not necessarily determined by economic conditions—it also can be influenced by advances in science, the organization and management of firms, government activity, or serendipity—it can have effects on trade independent of shifts in macroeconomic factors. New technologies also can help compensate for possible disadvantages in the cost of capital and labor faced by firms.

Federal Role

In the recent past, American companies faced increased competitive pressures in the international marketplace from firms based in countries where governments actively promote commercial technological development and application. In the United States, the generation of technology for the commercial marketplace is primarily a private sector activity. The federal government traditionally becomes involved only for certain limited purposes. Typically these are activities that have been determined to be necessary for the "national good" but which cannot, or will not, be supported by industry. To date, the U.S. government has funded research and development (R&D) to meet the mission requirements of the federal departments and agencies. It also finances efforts in areas where there is an identified need for research, primarily basic research, not being performed in the private sector. Federal support reflects a consensus that basic research is critical because it is the foundation for many new innovations. However, any returns created by this activity are generally long term, sometimes not marketable, and not always evident. Yet the rate of return to society as a whole generated by investments in research is significantly larger than the benefits that can be captured by the firm doing the work.

Many past government activities to increase basic research were based on a "linear" model of innovation. This theory viewed technological advancement as a series of sequential steps starting with idea origination and moving through basic research, applied research, development, commercialization, and diffusion into the economy. Increases in federal funds in the basic research stage were expected to result in concomitant increases in new products and processes. However, this linear concept is no longer considered valid. Innovations often occur that do not require basic or applied research or development; in fact, most innovations are incremental improvements to existing products or processes. In certain areas, such as biotechnology, the distinctions between basic research and commercialization are small and shrinking.
In others, the differentiation between basic and applied research is artificial. The critical factor is the commercialization of the technology. Economic benefits accrue only when a technology or technique is brought to the marketplace where it can be sold to generate income or applied to increase productivity. Yet, while the United States has a strong basic research enterprise, foreign firms appear more adept at taking the results of these scientific efforts and making commercially viable products. Often U.S. companies are competing in the global marketplace against goods and services developed by foreign industries from research performed in the United States. Thus, there has been increased congressional interest in mechanisms to accelerate the development and commercialization processes in the private sector. The development of a governmental effort to facilitate technological advance has been particularly difficult because of the absence of a consensus on the need for an articulated policy.
Technology demonstration and commercialization have traditionally been considered private sector functions in the United States. While over the years there have been various programs and policies (such as tax credits, technology transfer to industry, and patents), the approach had been ad hoc and uncoordinated. Much of the program development was based upon what individual committees judged appropriate for the agencies over which they have jurisdiction. Despite the importance of technology to the economy, technology-related considerations often have not been integrated into economic decisions.

There have been attempts to provide a central focus for governmental activity in technology matters. P.L. 100-519 created, within the Department of Commerce, a Technology Administration headed by a new Undersecretary for Technology. In November 1993, President Clinton established a National Science and Technology Council to coordinate decision-making in science and technology and to ensure their integration at all policy levels. However, technological issues and responsibilities remain shared among many departments and agencies. This diffused focus has sometimes resulted in actions which, if not at cross-purposes, may not have accounted for the impact of policies or practices in one area on other parts of the process. Technology issues involve components that operate both separately and in concert. While a diffused approach can offer varied responses to varied issues, the importance of interrelationships may be underestimated and their usefulness may suffer.

Several times, Congress has examined the idea of an industrial policy to develop a coordinated approach on issues of economic growth and industrial competitiveness. Technological advance is both one aspect of this and an altogether separate consideration. In looking at the development of an identified policy for industrial competitiveness, advocates argue that such an effort could ameliorate much of the uncertainty with which the private sector views future government actions. It has been argued that consideration and delineation of national objectives could encourage industry to engage in more long-term planning with regard to R&D and to make decisions as to the best allocation of resources. Such a technology policy could generate greater consistency in government activities. Because technological development involves numerous risks, efforts to minimize uncertainty regarding federal programs and policies may help alleviate some of the disincentives perceived by industry.

The development of a technology policy, however, would require a new orientation by both the public and private sectors. There is widespread resistance to what could be and has been called national planning, due variously to doubts as to its efficacy, to fear of adverse effects on our market system, to political beliefs about government intervention in our economic system, and to the current emphasis on short-term returns in both the political and economic arenas. Yet proponents note that planning can be advisory or indicative rather than mandatory. The focus provided by a technology policy could arguably provide a more receptive or helpful governmental environment within which business can make better decisions. Advocates assert that it could also reassure industry of government's ongoing commitment to stimulating R&D and innovation in the private sector.
Consideration of what constitutes government policy (both industrial policy and technology policy) covers a broad range of ideas, from laissez-faire to special government incentives to target specific high-technology, high-growth industries. Suggestions have been made for the creation of federal mechanisms to identify and support strategic industries and technologies, and various federal agencies and private sector groups have developed critical technology lists. However, others maintain that such targeting is an unwanted, and unwarranted, interference in the private sector, which will cause unnecessary dislocations in the marketplace or a misallocation of resources. In this view, the government does not have the knowledge or expertise to make business-related decisions. Instead, these critics argue, the appropriate role for government is to encourage innovative activities in all industries and to keep market-related decision-making within the business community, which has ultimate responsibility for commercialization and where such decisions have traditionally been made.

The relationship between government and industry is a major factor affecting innovation and the environment within which technological development takes place. This relationship often has been adversarial, with the government acting to regulate or restrain the business community, rather than to facilitate its positive contributions to the nation.
However, the situation is changing; it has become increasingly apparent that lack of cooperation can be detrimental to the nation as it faces competition from companies in countries where close government–industry collaboration is the norm. There are an increasing number of areas where the traditional distinctions between public and private sector functions and responsibilities are becoming blurred. Many assumptions have been questioned, particularly in light of the increased internationalization of the U.S. economy. The business sector is no longer being viewed in an exclusively domestic context; the economy of the United States is often tied to the economies of other nations. The technological superiority long held by the United States in many areas has been challenged by other industrialized countries in which economic, social, and political policies and practices foster government–industry cooperation in technological development.

A major divergence from the past was evident in the approach taken by the former Clinton Administration. Articulated in two reports issued in February 1993 (A Vision of Change for America and Technology for America's Economic Growth, A New Direction to Build Economic Strength), the proposal called for a national commitment to, and a strategy for, technological advancement as part of a defined national economic policy. This detailed strategy offered a policy agenda for economic growth in the United States, of which technological development and industrial competitiveness are critical components. In articulating a national technology policy, the approach initially recommended and subsequently followed by the Administration was multifaceted and provided a wide range of options while for the most part reflecting current trends in congressional efforts to facilitate industrial advancement. This policy increased federal coordination and augmented direct government spending for technological development. While many past activities focused primarily on research, the new initiatives shifted the emphasis toward development of new products, processes, and services by the private sector for the commercial marketplace. In addition, a significant number of the proposals aimed to increase both government and private sector support for R&D leading to the commercialization of technology.

To facilitate technological advance, the Clinton approach focused on increasing investment: investment in research, primarily civilian research, to meet the nation's needs in energy, environmental quality, and health; investment in the development and commercialization of new products, processes, and services for the marketplace; investment in improved manufacturing to make American goods less expensive and of better quality; investment in small, high-technology businesses in light of their role in innovation and job creation; and investment in the country's infrastructure to support all these efforts. To make the most productive use of this increased investment, the Administration supported increased cooperation among all levels of government, industry, and academia to share risk, to share funding, and to utilize the strengths of each sector in reaching common goals of economic growth, productivity improvement, and maintenance of a high living standard.
On November 23, 1993, President Clinton issued Executive Order 12881 establishing a National Science and Technology Council (NSTC), a cabinet-level body to "... coordinate science, space, and technology policies throughout the federal government."

The approach adopted by the former Administration has been questioned by recent Congresses and by the current Bush Administration. However, despite the continuing debate over the appropriate role of government and what constitutes a desirable government technology development policy, it remains an undisputed fact that what the government does or does not do affects the private sector and the marketplace. The various rules, regulations, and other activities of the government have become de facto policy as they relate to, and affect, innovation and technological advancement. It has been argued that these actions are not sufficiently understood or analyzed with respect to the larger context within which economic growth occurs. According to critics, these actions also are not coordinated in any meaningful way so that they promote an identifiable goal, whether that goal is as general as the "national welfare" or as specific as the growth of a particular industry.
Legislative Initiatives and Current Programs

Over the past several years, legislative initiatives have reflected a trend toward expanding the government's role beyond traditional funding of mission-oriented R&D and basic research toward the facilitation of technological advancement to meet other critical national needs, including the economic growth that flows from new commercialization and use of technologies and techniques in the private sector. An overview of recent legislation shows federal efforts aimed at (1) encouraging industry to spend more on R&D, (2) assisting small high-technology businesses, (3) promoting joint research activities between companies, (4) fostering cooperative work between industry and universities, (5) facilitating the transfer of technology from federal laboratories to the private sector, and (6) providing incentives for quality improvements. These efforts tend toward removing barriers to technology development in the private sector (thereby permitting market forces to operate) and providing incentives to encourage increased private sector R&D activities. While most focus primarily on research, some also involve policies and programs associated with technology development and commercialization.

Increased R&D Spending

To foster increased company spending on research, the 1981 Economic Recovery Tax Act (P.L. 97-34) mandated a temporary incremental tax credit for qualified research expenditures. The law provided a 25% tax credit for the increase in a firm's qualified research costs above the average expenditures for the previous three tax years. Qualified costs included in-house expenditures such as wages for researchers, material costs, and payments for use of equipment; 65% of corporate grants toward basic research at universities and other relevant institutions; and 65% of payments for contract research. The credit applied to research expenditures through 1985.

The Tax Reform Act of 1986 (P.L. 99-514) extended the research and experimentation (R&E) tax credit for another three years. However, the credit was lowered to 20% and is applicable to only 75% of a company's liability. The 1988 Tax Corrections Act (P.L. 100-647) approved a one-year extension of the research tax credit. The Omnibus Budget Reconciliation Act (P.L. 101-239) extended the credit through September 30, 1990 and made small start-up firms eligible for the credit. The FY1991 Budget Act (P.L. 101-508) again continued the tax credit provisions through 1992. The law expired in June 1992 when former President Bush vetoed H.R. 11 that year. However, P.L. 103-66, the Omnibus Budget Reconciliation Act of 1993, reinstated the credit through July 1995 and made it retroactive to the former expiration date. The tax credit again was allowed to expire until P.L. 104-188, the Small Business Job Protection Act, restored it from July 1, 1996 through May 31, 1997. P.L. 105-34, the Taxpayer Relief Act of 1997, extended the credit for 13 months from June 1, 1997 through June 30, 1998. Although it expired once again at the end of June, the Omnibus Consolidated Appropriations Act, P.L. 105-277, reinstated the tax credit through June 30, 1999. During the 105th Congress, various bills were introduced to make the tax credit permanent; other bills would have allowed the credit to be applied to certain collaborative research consortia. On August 5, 1999, both the House and Senate agreed to the conference report for H.R. 2488, the Financial Freedom Act, which would have extended the credit for five years through June 30, 2004. This bill also would have increased the credit rate applicable under the alternative incremental research credit by one percentage point per step. While the President vetoed this bill on September 23, 1999, the same provisions are included in Title V of P.L. 106-170, signed into law on December 17, 1999.
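The incremental structure of the credit can be made concrete with a small worked sketch. The following Python fragment is a simplification under stated assumptions: the base is proxied by the prior three-year average, as in the original 1981 design, and the many later statutory adjustments to the base are ignored; all dollar figures are hypothetical.

    def incremental_research_credit(current_qre, prior_three_years_qre, rate=0.20):
        """Sketch of the incremental R&E credit: the rate applies only to
        qualified research expenditures (QRE) above a base amount, proxied
        here by the average QRE of the prior three tax years."""
        base = sum(prior_three_years_qre) / len(prior_three_years_qre)
        excess = max(0.0, current_qre - base)
        return rate * excess

    # Hypothetical firm: $10M current QRE against a three-year average of $8M.
    credit = incremental_research_credit(10_000_000,
                                         [7_000_000, 8_000_000, 9_000_000])
    print(f"${credit:,.0f}")  # $400,000: 20% of the $2M increase, not of all QRE

The point of the incremental design is visible in the arithmetic: the credit rewards growth in research spending, not the level of spending a firm would have sustained anyway.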
The Small Business Development Act (P.L. 97-219), as extended (P.L. 99-443), established a program to facilitate increased R&D within the small-business, high-technology community. Each federal agency with a research budget was required to set aside 1.25% of its R&D funding for grants to small firms for research in areas of interest to that agency. P.L. 102-564, which reauthorized the Small Business Innovation Research (SBIR) program, increased the set-aside to 2.5%, phased in over a five-year period. Funding is, in part, dependent on companies obtaining private sector support for the commercialization of the resulting products or processes. The authorization for the program was set to terminate October 1, 2000. However, the SBIR activity was reauthorized through September 30, 2008 by P.L. 106-554, signed into law on December 21, 2000. A pilot effort, the Small Business Technology Transfer (STTR) program, was also created to encourage firms to work with universities or federal laboratories to commercialize the results of research. This program is funded by a 0.15% (phased in) set-aside. Set to expire in FY1997, the STTR was originally extended for one year until P.L. 105-135 reauthorized this activity through FY2001. Passed in the current Congress, P.L. 107-50 extends the program through FY2009 and expands the set-aside to 0.3% beginning in FY2004. Also in FY2004, the amount of individual Phase II grants increases to $750,000. (See CRS Report 96-402, Small Business Innovation Research Program.)
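The set-aside arithmetic described above scales directly with an agency's R&D budget. A back-of-the-envelope Python sketch follows; the budget figure is hypothetical, and real obligations follow agency-specific rules that this ignores:

    SBIR_RATE = 0.025  # 2.5% set-aside after the phase-in under P.L. 102-564
    STTR_RATE = 0.003  # 0.3% set-aside beginning in FY2004 under P.L. 107-50

    def minimum_set_asides(extramural_rd_budget):
        """Return the minimum SBIR and STTR reservations for an agency budget."""
        return (extramural_rd_budget * SBIR_RATE,
                extramural_rd_budget * STTR_RATE)

    # Hypothetical agency with a $4 billion R&D budget.
    sbir, sttr = minimum_set_asides(4_000_000_000)
    print(f"SBIR: ${sbir:,.0f}; STTR: ${sttr:,.0f}")
    # -> SBIR: $100,000,000; STTR: $12,000,000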
The Omnibus Trade and Competitiveness Act of 1988 (P.L. 100-418) created the Advanced Technology Program (ATP) at the Department of Commerce's National Institute of Standards and Technology. ATP provides seed funding, matched by private sector investment, for companies or consortia of universities, industries, and/or government laboratories to accelerate development of generic technologies with broad application across industries. The first awards were made in 1991. To date, 642 projects have been funded, representing approximately $1,960 million in federal expenditure matched by a similar amount of private sector financing. Over 60% of the awardees are small businesses or cooperative efforts led by such firms. (For more information, see CRS Report 95-36, The Advanced Technology Program.)

Appropriations for the ATP include $35.9 million in FY1991, $47.9 million in FY1992, and $67.9 million in FY1993. FY1994 appropriations increased significantly to $199.5 million, and even further in FY1995 to $431 million; however, P.L. 104-6 rescinded $90 million from this amount. There was no FY1996 authorization. The original FY1996 appropriations bill, H.R. 2076, which passed the Congress, was vetoed by the President, in part, because it provided no support for ATP. The appropriations legislation finally enacted, P.L. 104-134, did fund the Advanced Technology Program at $221 million. For FY1997, the President's budget request was $345 million. Again, there was no authorizing legislation. However, P.L. 104-208, the Omnibus Consolidated Appropriations Act, provided $225 million for ATP, later reduced by $7 million to $218 million by P.L. 105-18, the FY1997 Emergency Supplemental Appropriations and Rescission Act. For FY1998, the Administration requested $276 million in funding. P.L. 105-119 appropriated FY1998 financing of ATP at $192.5 million, again at a level less than the previous year. The Administration's FY1999 budget proposal included $259.9 million for this program, a 35% increase. While not providing such a large increase, P.L. 105-277 did fund ATP for FY1999 at $197.5 million, 3% above the previous year. This figure reflected a $6 million rescission contained in the same law that accounted for "deobligated" funds resulting from early termination of certain projects.

In FY2000, the President requested $238.7 million for ATP, an increase of 21% over the previous year. S. 1217, as passed by the Senate, would have appropriated $226.5 million for ATP. H.R. 2670, as passed by the House, provided no funding for the activity.
The report to accompany the House bill stated that there was insufficient evidence "... to overcome those fundamental questions about whether the program should exist in the first place." Yet P.L. 106-113 eventually did finance the program at $142.6 million, 28% below prior year funding. The Clinton Administration's FY2001 budget included $175.5 million for the Advanced Technology Program, an increase of 23% over the earlier fiscal year. Once again, the original version of the appropriations bills that passed the House did not contain any financial support for the activity. However, P.L. 106-553 provided $145.7 million in FY2001 support for ATP, 2% above the previous funding level.

For FY2002, President Bush's budget proposed suspending all funding for new ATP awards pending an evaluation of the program. In the interim, $13 million would have been provided to meet the financial commitments for ongoing projects.
H.R. 2500, as initially passed by the House, also did not fund new ATP grants but offered $13 million for prior commitments. The version of H.R. 2500 that originally passed the Senate provided $204.2 million for the ATP effort. P.L. 107-77 funds the program at $184.5 million, an increase of almost 27% over the previous fiscal year. The Administration's FY2003 budget request would have funded the Advanced Technology Program at $108 million, 35% below the FY2002 appropriation level. The 107th Congress passed no relevant appropriations legislation; however, a series of Continuing Resolutions funded the program until the 108th Congress enacted P.L. 108-7, which financed ATP at $180 million for FY2003. H.R. 175 would abolish the program. In its FY2004 budget, the Administration proposes to provide $17 million to cover ongoing commitments to ATP, but would not support any new projects.

Industry–University Cooperative Efforts

The promotion of cooperative efforts between academia and industry is aimed at increasing the potential for the commercialization of technology. (For more information, see CRS Issue Brief IB89056, Cooperative R&D: Federal Efforts to Promote Industrial Competitiveness.) Traditionally, basic research has been performed in universities or in the federal laboratory system while the business community focuses on the manufacture or provision of products, processes, or services. Universities are especially suited to undertake basic research. Their mission is to educate, and basic research is an integral part of the educational process. Universities generally are able to undertake these activities because they do not have to produce goods for the marketplace and therefore can do research not necessarily tied to the development of a commercial product or process.

Subsequent to the Second World War, the federal government supplanted industry as the primary source of funding for basic research in universities. It also became the principal determinant of the type and direction of the research performed in academia. This resulted in a "disconnect" between the university and industrial communities. The separation and isolation of the parties involved in the innovation process is thought to be a barrier to technological progress. The difficulties in moving an idea from the concept stage to a commercial product or process are compounded when several entities are involved. Legislation to stimulate cooperative efforts among those involved in technology development is viewed as one way to promote innovation and facilitate the international competitiveness of U.S. industry.

Several laws have attempted to encourage industry–university cooperation. Title II of the Economic Recovery Tax Act of 1981 (P.L. 97-34) provided, in part, a 25% tax credit for 65% of all company payments to universities for the performance of basic research. Firms were also permitted a larger tax deduction for charitable contributions of equipment used in scientific research at academic institutions. The Tax Reform Act of 1986 (P.L. 99-514) kept this latter provision, but reduced the credit for university basic research to 20% of all corporate expenditures for this over the sum of a fixed research floor plus any decrease in non-research giving. The 1981 Act also provided an increased charitable deduction for donations of new equipment by a manufacturer to an institution of higher education. This equipment must be used for research or research training in the physical or biological sciences within the United States. The tax deduction is equal to the manufacturer's cost plus one-half the difference between the manufacturer's cost and the market value, as long as it does not exceed twice the cost basis.
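The deduction rule just described reduces to a one-line formula: cost plus half the spread between cost and market value, capped at twice the cost basis. A minimal Python sketch with hypothetical figures (the rule also carries eligibility conditions not modeled here):

    def equipment_donation_deduction(manufacturer_cost, market_value):
        """Deduction for donated research equipment, per the rule above:
        cost plus one-half the cost-to-market difference, capped at 2x cost."""
        deduction = manufacturer_cost + 0.5 * (market_value - manufacturer_cost)
        return min(deduction, 2 * manufacturer_cost)

    print(equipment_donation_deduction(40_000, 60_000))   # 50000.0, under the cap
    print(equipment_donation_deduction(40_000, 140_000))  # 80000.0, cap binds

A small consequence of the formula is that the cap binds only when market value exceeds three times the manufacturer's cost.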
These provisions were extended through July 1995 by the Omnibus Budget Reconciliation Act of 1993, but then expired until restored by the passage of P.L. 104-188, P.L. 105-277, and P.L. 106-170, as noted above.

Amendments to the patent and trademark laws contained in P.L. 96-517 (commonly called the "Bayh–Dole Act") were also designed to foster interaction between academia and the business community. This law provides, in part, for title to inventions made by contractors receiving federal R&D funds to be vested in the contractor if they are small businesses, universities, or not-for-profit institutions.
Certain rights to the patent are reserved for the government, and these organizations are required to commercialize within a predetermined and agreed-upon time frame. Providing universities with patent title is expected to encourage licensing to industry, where the technology can be manufactured or used, thereby creating a financial return to the academic institution. University patent applications and licensing have increased significantly since this law was enacted. (See CRS Report RL30320, Patent Ownership and Federal Research and Development, and CRS Report 98-862, R&D Partnerships and Intellectual Property: Implications for U.S. Policy.)

Joint Industrial Research

Private sector investments in basic research are often costly, long term, and risky. Although not all advances in technology are the result of research, it is often the foundation of important new innovations. To encourage increased industrial involvement in research, legislation was enacted to allow for joint ventures in this arena. It is argued that cooperative research reduces risks and costs and allows for work to be performed that crosses traditional boundaries of expertise and experience. Such collaborative efforts make use of existing resources, facilities, knowledge, and skills, and support the development of new ones.

The National Cooperative Research Act (P.L. 98-462) encourages companies to undertake joint research. The legislation clarifies the antitrust laws and requires that a "rule of reason" standard be applied in determinations of violations of these laws; cooperative research ventures are not to be judged illegal "per se." It eliminates treble damage awards for those research ventures found in violation of the antitrust laws if prior disclosure (as defined in the law) has been made. P.L. 98-462 also makes changes in the way attorney fees are awarded. Defendants can collect attorney fees in specified circumstances, including when the claim is judged frivolous, unreasonable, without foundation, or made in bad faith. However, the attorney fee award to the prevailing party may be offset if the court decides that the prevailing party conducted a portion of the litigation in a manner that was frivolous, unreasonable, without foundation, or in bad faith. These provisions were included to discourage frivolous litigation against joint research ventures without simultaneously discouraging suits of plaintiffs with valid claims. Over 700 joint research ventures have filed with the Department of Justice since passage of this legislation.

P.L. 103-42, the National Cooperative Production Amendments Act of 1993, amends the National Cooperative Research Act by, among other things, extending the original law's provisions to joint manufacturing ventures. These provisions are only applicable, however, to cooperative production when (1) the principal manufacturing facilities are "... located in the United States or its territories," and (2) each person who controls any party to such venture "... is a United States person, or a foreign person from a country whose law accords antitrust treatment no less favorable to United States persons than to such country's domestic persons with respect to participation in joint ventures for production."

Commercialization of the Results of Federally Funded R&D

Another approach to encouraging the commercialization of technology involves the transfer of technology from federal laboratories and contractors to the private sector where commercialization can proceed.
Because the federal laboratory system has extensive science and technology resources and expertise developed in pursuit of mission responsibilities, it is a potential source of new ideas and knowledge which may be used in the business community. (See CRS Issue Brief IB85031, Technology Transfer: Utilization of Federally Funded Research and Development, for more details.) Despite the potential offered by the resources of the federal laboratory system, however, the commercialization level of the results of federally funded R&D remained low. Studies indicated that only approximately 10% of federally owned patents were ever utilized. There are many reasons for this low level of usage, one of which is the fact that some technologies and/or patents have no market application.
However, industry unfamiliarity with these technologies, the "not-invented-here" syndrome, and, perhaps more significantly, the ambiguities associated with obtaining title or exclusive license to federally owned patents also contribute to the low level of commercialization.

Over the years, several governmental efforts have been undertaken to augment industry's awareness of federal R&D resources. The Federal Laboratory Consortium for Technology Transfer was created in 1972 (from a Department of Defense program) to assist in transferring technology from the federal government to state and local governments and the private sector. To expand on the work of the Federal Laboratory Consortium, and to provide added emphasis on the commercialization of government technology, Congress passed P.L. 96-480, the Stevenson–Wydler Technology Innovation Act of 1980. Prior to this law, technology transfer was not an explicit mandate of the federal departments and agencies, with the exception of the National Aeronautics and Space Administration. To provide "legitimacy" to the numerous technology activities of the government, Congress, with strong bipartisan support, enacted P.L. 96-480, which explicitly states that the federal government has the responsibility "... to ensure the full use of the results of the nation's federal investment in research and development." Section 11 of the law created a system within the federal government to identify and disseminate information and expertise on what technologies or techniques are available for transfer. Offices of Research and Technology Applications were established in each federal laboratory to identify technologies and ideas with potential applications in other settings.

Several amendments to the Stevenson–Wydler Technology Innovation Act have been enacted to provide additional incentives for the commercialization of technology. P.L. 99-502, the Federal Technology Transfer Act, authorizes activities designed to encourage industry, universities, and federal laboratories to work cooperatively. It also establishes incentives for federal laboratory employees to promote the commercialization of the results of federally funded research and development. The law amends P.L. 96-480 to allow government-owned, government-operated laboratories to enter into cooperative R&D agreements (CRADAs) with universities and the private sector. This authority is extended to government-owned, contractor-operated laboratories by the Department of Defense FY1990 Authorization Act, P.L. 101-189. (See CRS Report 95-150, Cooperative Research and Development Agreements [CRADAs].) Companies, regardless of size, are allowed to retain title to inventions resulting from research performed under cooperative agreements; the federal government retains a royalty-free license to use these patents.

The Technology Transfer Improvements and Advancement Act (P.L. 104-113) clarifies the dispensation of intellectual property rights under CRADAs to facilitate the implementation of these cooperative efforts. The Federal Laboratory Consortium is given a legislative mandate to assist in the coordination of technology transfer.
To further promote the use of the results of federal R&D, certain agencies are mandated to create a cash awards program and a royalty sharing activity for federal scientists, engineers, and technicians in recognition of efforts toward commercialization of this federally developed technology. These efforts are facilitated by a provision of the National Defense Authorization Act for FY1991 (P.L. 101-510), which amends the Stevenson–Wydler Technology Innovation Act to allow government agencies and laboratories to develop partnership intermediary programs to augment the transfer of laboratory technology to the small business sector.

Amendments to the Patent and Trademark law contained in Title V of P.L. 98-620 make changes designed to improve the transfer of technology from the federal laboratories—especially those operated by contractors—to the private sector and increase the chances of successful commercialization of these technologies. This law permits the contractor at government-owned, contractor-operated laboratories (GOCOs) to make decisions at the laboratory level as to the granting of licenses for subject inventions. This has the potential of effecting greater interaction between laboratories and industry in the transfer of technology. Royalties on these inventions are also permitted to go back to the contractor to be used for additional R&D, awards to individual inventors, or education.
While there is a cap on the amount of the royalty returning directly to the laboratory, in order not to disrupt the agency's mission requirements and congressionally mandated R&D agenda, the establishment of discretionary funds gives contractor-operated laboratories added incentive to encourage technology transfer. Under P.L. 98-620, private companies, regardless of size, are allowed to obtain exclusive licenses for the life of the patent. Prior restrictions allowed large firms exclusive use of licenses for only 5 of the 17 years (now 20 years) of the life of the patent. This change should encourage improved technology transfer from the federal laboratories or the universities (in the case of university-operated GOCOs) to large corporations, which often have the resources necessary for development and commercialization activities. In addition, the law permits GOCOs (those operated by universities or non-profit institutions) to retain title to inventions made in the laboratory within certain defined limitations. Those laboratories operated by large companies are not included in this provision.

P.L. 106-404, the Technology Transfer Commercialization Act, alters current practices concerning patents held by the government to make it easier for federal agencies to license such inventions. The law amends the Stevenson–Wydler Technology Innovation Act and the Bayh–Dole Act to decrease the time delays associated with obtaining an exclusive or partially exclusive license. Previously, agencies were required to publicize the availability of technologies for three months using the Federal Register and then provide an additional sixty-day notice of intent to license to an interested company. Under the new legislation, the time period was shortened to fifteen days in recognition of the ability of the Internet to offer widespread notification and the time constraints faced by industry in commercialization activities. The government retains certain rights, however. The bill also allows licenses for existing government-owned inventions to be included in CRADAs.

The Omnibus Trade and Competitiveness Act (P.L. 100-418) mandated the creation of a program of regional centers to assist small manufacturing companies in using knowledge and technology developed under the auspices of the National Institute of Standards and Technology and other federal agencies. Federal funding for the centers is matched by non-federal sources, including state and local governments and industry. Originally, seven Regional Centers for the Transfer of Manufacturing Technology were selected. The initial program was expanded in 1994 to create the Manufacturing Extension Partnership (MEP) to meet the new and growing needs of the community. In this more varied approach, the Partnership involves both large centers and smaller, more dispersed organizations sometimes affiliated with larger centers, as well as the NIST State Technology Extension Program, which provides states with grants to develop the infrastructure necessary to transfer technology from the federal government to the private sector (an effort also mandated by P.L. 100-418), and a program that electronically ties the disparate parties together along with other federal, state, local, and academic technology transfer organizations. There are now centers in all 50 states and Puerto Rico. Since the manufacturing extension activity was created in 1989, awards made by NIST have resulted in the creation of approximately 400 regional offices.
[It should be noted that the Department of Defense also funded 36 centers through its Technology Reinvestment Project (TRP) in FY1994 and FY1995. When the TRP was terminated, NIST took over support for 20 of these programs in FY1996 and funded the remaining efforts during FY1997.]

Funding for this program was $11.9 million in FY1991, $15.1 million in FY1992, and $16.9 million in FY1993. In FY1994, support for the expanded Manufacturing Extension Partnership was $30.3 million. In the following fiscal year, P.L. 103-317 appropriated $90.6 million for this effort, although P.L. 104-19 rescinded $16.3 million from this amount. While the original FY1996 appropriations bill, H.R. 2076, was vetoed by the President, the $80 million funding for MEP was retained in the final legislation, P.L. 104-134. The President's FY1997 budget request was $105 million. No FY1997 authorization legislation was enacted, but P.L. 104-208 appropriated $95 million for Manufacturing Extension while temporarily lifting the six-year limit on federal support for individual centers. The Administration requested FY1998 funding of $123 million.
Again no authorizations were passed. However, the FY1998 appropriations bill, P.L. 105-119, financed the MEP program at $113.5 million. This law also permitted government funding, at one-third of a center's total annual cost, to continue for additional periods of one year beyond the original six-year limit if a positive evaluation was received. The President's FY1999 budget included $106.8 million for the MEP, a 6% decrease from current funding. The Omnibus Consolidated Appropriations Act, P.L. 105-277, appropriated the $106.8 million. The decrease in funding reflects a reduced federal financial commitment as the centers mature, not a decrease in program support. In addition, the Technology Administration Act of 1998, P.L. 105-309, permits the federal government to fund centers at one-third of cost after the six years if a positive, independent evaluation is made every two years.

For FY2000, the Administration requested $99.8 million in support of the MEP. Again, the lower federal share reflected the smaller statutory portion required of the government. S. 1217, as passed by the Senate, would have appropriated $109.8 million for the Manufacturing Extension Partnership, an increase of 3% over FY1999. H.R. 2670, as passed initially by the House, would have appropriated $99.8 million for this activity. The version of H.R. 2670 passed by both House and Senate provided FY2000 appropriations of $104.8 million. While the President vetoed that bill, the legislation that was ultimately enacted, P.L. 106-113, appropriated $104.2 million after the mandated rescission.

The Clinton Administration's FY2001 budget requested $114.1 million for the Partnership, an increase of almost 9% over current support. Included in this figure was funding to allow the centers to work with the Department of Agriculture and the Small Business Administration on an e-commerce outreach program. P.L. 106-553 appropriates $105.1 million for FY2001, but does not fund any new initiatives. The FY2002 Bush Administration budget proposed providing $106.3 million for MEP. H.R. 2500, as originally passed by the House, would have funded MEP at $106.5 million. The initial version of H.R. 2500 passed by the Senate would have provided $105.1 million for the program. The final legislation, P.L. 107-77, funds the Partnership at $106.5 million.

For FY2003, the Administration's budget included an 89% decrease in support for MEP. According to the budget document, "... consistent with the program's original design, the President's budget recommends that all centers with more than six years experience operate without federal contribution." A number of Continuing Resolutions supported the Partnership at FY2002 levels until the 108th Congress enacted P.L. 108-7, which appropriates $106.6 million for MEP in FY2003. The President's FY2004 budget requests $12.6 million for MEP, to finance only those centers that have not reached six years of federal support. (For additional information, see CRS Report 97-104, Manufacturing Extension Partnership Program: An Overview.)
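The cost-share arithmetic behind the mature-center rule is straightforward. A hypothetical Python sketch of the one-third federal share for a post-six-year center follows (the figures are illustrative only, and actual center budgets are negotiated, not computed this mechanically):

    def mature_center_shares(total_annual_cost):
        """Split a mature (post-six-year) center's budget per the one-third
        federal funding limit described above."""
        federal = total_annual_cost / 3
        non_federal = total_annual_cost - federal  # states, localities, industry
        return federal, non_federal

    # Hypothetical center with a $3 million annual operating budget.
    fed, non_fed = mature_center_shares(3_000_000)
    print(f"Federal: ${fed:,.0f}; non-federal match: ${non_fed:,.0f}")
    # -> Federal: $1,000,000; non-federal match: $2,000,000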
Supporters of indirect approaches argue that the market is superior to government in deciding which technologies are worthy of investment; mechanisms that enhance the market's opportunities and abilities to make such choices are therefore preferred. Advocates further state that reliance on agency discretion to assist one technology in preference to another will inevitably be subject to political pressure from entrenched interests. Proponents of direct government assistance maintain, conversely, that indirect methods can be wasteful and ineffective and that they can compromise other goals of public policy in the hope of stimulating innovative performance. Advocates of direct approaches argue that it is important to put the country's scarce resources to work on those technologies that have the greatest promise, as determined by industry and supported by its willingness to match federal funding.
In the past, while Republicans tended to prefer reliance on free market investment, competition, and indirect support by government, participants in these debates generally did not make definite (or exclusionary) choices between the two approaches, nor consistently favor one over the other. For example, some proponents of a stronger direct role for the government in innovation are also supporters of enhanced tax preferences for R&D spending, an indirect mechanism. Opponents of direct federal support for specific projects (e.g., SEMATECH, flat panel displays) may nevertheless back similar activities focused on more general areas such as manufacturing or information technology. However, the 104th Congress directed its efforts at eliminating or curtailing many of the efforts that had previously enjoyed bipartisan support. Initiatives to terminate the Advanced Technology Program, funding for flat panel displays, and agricultural extension reflected concern about the role of government in developing commercial technologies. The Republican leadership stated that the government should directly support basic science while leaving technology development to the private sector. Instead of federal funding, proponents argue, changes to the tax laws will provide the capital resources and incentives necessary for industry to invest further in R&D.

During the 105th and 106th Congresses many of the same issues were considered. While funding for several programs decreased, particularly in FY1998, support for most ongoing activities continued, some at increased levels. At the close of the 107th Congress, funding remained at FY2002 levels, as many of the relevant appropriations bills had not been enacted. How the debate over federal funding evolves in the 108th Congress may serve to redefine thinking about the government's efforts to promote technological advancement in the private sector.

107th Congress Legislation

† P.L. 107-50, H.R. 1860 (Small Business Technology Transfer Program Reauthorization Act of 2001): Reauthorizes the Small Business Technology Transfer Program through FY2009. In FY2004 the set-aside is to increase to 0.3%, while Phase II awards may expand to $750,000 (a brief arithmetic sketch follows this list). Introduced May 16, 2001; referred to the House Committees on Small Business and Science. Reported, amended, by the Committee on Small Business and discharged from the Committee on Science on September 21, 2001. Passed the House, amended, September 24, 2001. Received in the Senate on September 25 and passed the Senate, without amendment, on September 26, 2001. Signed into law by the President on October 15, 2001.

† P.L. 107-77, H.R. 2500: Makes FY2002 appropriations for the National Institute of Standards and Technology, among other things. The Manufacturing Extension Partnership is funded at $106.5 million, while the Advanced Technology Program is provided $184.5 million. Introduced July 13, 2001; referred to the House Committee on Appropriations. Reported to the House on the same day. Passed the House, amended, on July 18, 2001. Received in the Senate July 19 and passed the Senate, with an amendment, on September 13, 2001. Measure amended in the Senate after passage by unanimous consent on September 13 and September 21, 2001. Conference held. The House agreed to the conference report on November 14, 2001; the Senate agreed the following day. Signed into law by the President on November 28, 2001.
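To make the new STTR set-aside concrete, the following is a minimal arithmetic sketch in Python. Only the 0.3% rate and the $750,000 Phase II ceiling come from the bill as described above; the agency budget is a hypothetical figure chosen purely for illustration.

# STTR set-aside arithmetic under P.L. 107-50 (illustrative only).
extramural_rd_budget = 5_000_000_000   # hypothetical agency extramural R&D budget, in dollars
set_aside_rate = 0.003                 # 0.3% set-aside, effective FY2004
phase_ii_cap = 750_000                 # expanded Phase II award ceiling

set_aside = extramural_rd_budget * set_aside_rate
print(int(set_aside))                  # 15000000: dollars reserved for STTR awards
print(int(set_aside // phase_ii_cap))  # 20: Phase II awards fundable at the new cap

Under these assumptions, a hypothetical agency spending $5 billion on extramural R&D would reserve $15 million for the program, enough for at most twenty Phase II awards at the expanded ceiling.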
LEGISLATION

† P.L. 108-7, H.J.Res. 2 (Omnibus FY2003 Appropriations Act): Among other things, funds the Advanced Technology Program at $180 million and the Manufacturing Extension Partnership at $106.6 million. Introduced January 7, 2003; referred to the House Committee on Appropriations. Passed the House on January 8, 2003. Passed the Senate,
amended, on January 23, 2003. House and Senate agreed to the conference report on February 13, 2003. Signed into law by the President on February 20, 2003.

† H.R. 175 (Royce): A bill to abolish the Advanced Technology Program. Introduced January 7, 2003; referred to the House Committee on Science.
SCIENCE AND TECHNOLOGY POLICY: ISSUES FOR THE 107TH CONGRESS, SECOND SESSION*

SUMMARY

Science and technology have a pervasive influence over a wide range of issues confronting the nation. Decisions on how much federal funding to invest in basic and applied research and in research and development (R&D), and determining which programs have the highest priority, may have implications for homeland security, new high-technology industries, government/private sector cooperation in R&D, and myriad other areas. This report provides an overview of key science and technology policy issues pending before Congress, and identifies other CRS reports that treat them in more depth.

For FY2003, the President is requesting $112.1 billion for R&D, an increase of $8.9 billion over FY2002. Of that amount, defense R&D (for the Department of Defense, and Department of Energy military/nuclear programs) would receive $58.8 billion, while non-defense R&D would receive $53.3 billion. Most of the increase is for the Department of Defense (DOD) and the National Institutes of Health (NIH). Some of the DOD and NIH funding will be spent on counter terrorism R&D. The White House Office of Science and Technology Policy (OSTP) is playing a major supporting role in coordinating some of the federal counter terrorism R&D activities. OSTP Director John Marburger told Congress in February 2002 that counter terrorism R&D funding is likely to increase from about $1.5 billion in FY2002 to about $3 billion for FY2003. Although total R&D spending is rising, non-NIH, non-defense R&D spending would fall by 0.2%, a pattern that raises concern among some scientists who argue that the physical sciences, chemistry, social sciences, computer sciences, and related fields are not being given the same attention as health sciences research. They believe such a pattern could eventually undermine the knowledge base needed to sustain growth in biomedical research and across all fields of science.

Apart from R&D funding and priorities, many other science and technology policy issues are pending before Congress. For example, a major debate is ongoing over the deployment of “broadband” technologies to allow high-speed access to the Internet. The issue is what, if anything, should be done at the federal level to ensure that broadband deployment is timely, that industry competes on a “level playing field,” and that service is provided to all sectors of American society. Other issues include slamming (an unauthorized change in a subscriber's telephone service provider), Internet privacy, electronic government, spectrum management, and voting technologies.

Congress is also debating what role the government should play in drug pricing. Because the federal government funds basic research in the biomedical area, some believe that the public is entitled to commensurate consideration in the prices charged for resulting drugs. Others believe government intervention in setting drug prices would be contrary to long-standing technology development policies. The role of the federal government in technology development is being debated as well.
* From the CRS Report for Congress, by Marcia S. Smith, Coordinator, Specialist in Aerospace and Telecommunications Policy, Resources, Science, and Industry Division. © 2002 Congressional Research Service.
INTRODUCTION

Science and technology are an underpinning of, and have a pervasive influence over, a wide range of issues confronting the nation. Decisions on how much federal funding to invest in basic and applied research and in research and development (R&D), and determining which programs have the highest priority, could have implications for homeland security, new high-technology industries, government/private sector cooperation in R&D, and myriad other areas. Following are brief discussions of some of the key science and technology issues pending before the second session of the 107th Congress. More in-depth CRS reports and issue briefs on these topics, many of which are frequently updated, are identified at the end of the report. For brevity's sake, the titles of the referenced reports are not included in the text, only their numbers; the list at the end gives both titles and product numbers, arranged by topic. This report continues the series of annual CRS reports on science and technology issues for Congress initiated and coordinated by former CRS Senior Specialist Richard E. Rowberg.
ISSUES

Research and Development Budgets and Policy

FY2003 Research and Development (R&D) Budget: For FY2003, the President is requesting $112.1 billion for R&D, an increase of $8.9 billion over FY2002. (See CRS Issue Brief IB10088.) Defense funding covers R&D at the Department of Defense (DOD) and the Department of Energy's (DOE's) military/nuclear programs; non-defense R&D covers all other agencies. For FY2003, the Administration is requesting $58.8 billion for defense R&D and $53.3 billion for non-defense R&D. Funding for DOD and the National Institutes of Health (NIH) accounts for most of the R&D funding increase. (Details on R&D funding by agency, for most of the agencies discussed in this report, can be found in CRS Issue Brief IB10100.)

The FY2003 request continues a pattern from the FY2002 budget, in which Congress approved a record $11.5 billion increase for federal R&D, raising the federal R&D budget to an estimated $103.2 billion. As it did last year, the Administration identified a subset of the R&D budget—called the “Federal Science and Technology” (FS&T) budget—totaling $57 billion, that focuses on basic and applied research leading to the creation of new knowledge. It includes some education and training funding, and excludes most development funding. This conceptualization is similar, but not identical, to a proposal made by the National Academy of Sciences in 1995.

Some of the funding increases in the FY2003 budget are for counter terrorism R&D, laboratory security, and basic research. The basic research budget would increase about 9%, to $25 billion, the highest level ever reached. The Administration is seeking funding for three interagency R&D initiatives: nanoscale science, engineering, and technology, requested at $710 million, an increase of 17.5% over FY2002; networking and information technology R&D, $1.89 billion, up 2.5% over FY2002; and the U.S. Global Change Research Program, $1.71 billion, an increase of 2.6%.

The Office of Management and Budget (OMB) is proposing deficit spending for FY2003, after four years of budget surpluses. Consequently, congressional debate could focus on discretionary spending priorities for R&D versus other areas, including tax cuts, funding for domestic programs, and homeland defense. Election year politics could increase pressure for more discretionary spending. Debates continue about the balance between health sciences-related funding in NIH and non-NIH, non-defense funding. Non-NIH, non-defense R&D would fall by 0.2%, a pattern which continues to raise concern among some scientists who argue that the physical sciences, chemistry, social sciences, computer sciences, and other related fields are not being given the same attention as health sciences research. They believe such a pattern could eventually undermine the knowledge base needed to sustain growth in biomedical research as well as across all fields of science.
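Because totals, components, and year-over-year changes are quoted side by side above, a short illustrative check in Python (using only figures quoted in this report) shows how they fit together:

# Consistency check of the FY2003 R&D budget figures quoted above
# (all amounts in billions of dollars, as given in the text).
defense_fy03 = 58.8       # DOD plus DOE military/nuclear programs
non_defense_fy03 = 53.3   # all other agencies
total_fy03 = round(defense_fy03 + non_defense_fy03, 1)
print(total_fy03)                                 # 112.1, the requested total

increase_over_fy02 = 8.9
print(round(total_fy03 - increase_over_fy02, 1))  # 103.2, the estimated FY2002 base

fst = 57.0                # the "Federal Science and Technology" subset
print(round(100 * fst / total_fy03, 1))           # roughly 50.8% of the request

The point of the sketch is simply that the defense and non-defense components sum to the requested total, and that the FS&T subset amounts to about half of it.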
In the House, most of the FY2003 appropriations bills dealing with R&D have not yet been reported. House-approved defense appropriations would increase DOD R&D funding by 8.1% over the President's requested level. Senate committee action is complete for all 13 appropriations bills and would increase R&D funding by 12.4% over FY2002, which is $4.7 billion more than the President requested. Most of the Senate increase was allocated to defense R&D and NIH. (See CRS Issue Briefs IB10088, IB10100, and IB10062.)

National Institutes of Health (NIH)

The President requested a total of $27.3 billion for the NIH (part of the Department of Health and Human Services—HHS) for FY2003, enough to complete the planned doubling of the NIH budget over the five-year period since the FY1998 appropriation of $13.6 billion. The requested amount was an increase of $3.7 billion, or 15.7%, over the comparable FY2002 appropriation of $23.6 billion. NIH's plans for its FY2003 budget had to be adjusted after the terrorist attacks of September 2001. Of the $3.7 billion increase in the request, $1.5 billion, or 40%, was devoted to bioterrorism-related activities, which totaled $1.75 billion, up from $275 million in FY2002.

Issues facing Congress included the need to weigh its previous commitment to completing the five-year doubling of NIH against the many new needs for discretionary resources across the federal government. The $3.7 billion increase requested for NIH was larger than the increase ($2.4 billion) requested for total HHS discretionary programs; several other public health and human services agencies are proposed for decreased funding. In addition, there is a continuing disparity between funding for health research and support of other fields of science, including many areas where advances are critical for progress in biomedical research. Finally, contentious issues in several areas of research oversight continue to draw attention: research on human stem cells, human embryo research, cloning, human subjects' protection, gene therapy, and possible conflicts of interest on the part of researchers.

Defense Science and Technology

Last year, DOD conducted two major reviews—the congressionally mandated Quadrennial Defense Review, and the Administration's own internal strategic military review. During the 2000 election campaign, the Bush campaign suggested that the U.S. military was still too wedded to cold war structures, tactics, and equipment, and too slow in addressing more unconventional threats. The September 11 terrorist attacks appear to have underscored that concern, and the war in Afghanistan is proving to be a laboratory for new technologies and tactics. Last year's reviews and the subsequent war on terrorism, however, have not resulted in any tectonic shift in the allocation of science and technology resources. While more funds have been allocated to areas such as unmanned aerial vehicles (including increasing their capabilities beyond surveillance), networking of sensors and communications, and technologies for the war-fighter (i.e., technologies that an individual combatant would carry with him), many traditional cold war era systems (e.g., the F-22, the Comanche helicopter) are also continuing to be developed. The Administration has fulfilled its promise to increase defense research and development. Its FY2003 Research, Development, Test and Evaluation (RDT&E) budget request is higher than the previous historic peak in FY1987, in both current and constant dollars.
Its Science and Technology (S&T) request (for basic and applied research), however, is slightly below last year’s appropriation. The Administration is also committed to increasing funding for missile defense development and has restructured that program to pursue a layered global defense system that could engage a limited number of ballistic missiles at any point along their flight path. The Administration is pushing a concept called “evolutionary acquisition.” Evolutionary acquisition has existed in various forms for many years as “block” developments and preplanned product improvements.
The Administration has also floated a concept called “capabilities-based management” of systems development, as opposed to the current practice of “requirements-based management.” The Administration, however, is still in the process of articulating the differences. It is not yet clear what the implications of these conceptual changes may be for the allocation of research and development resources and the development and insertion of new technologies. (See CRS Issue Brief IB10062.)

Public Access to Federal R&D Data

The FY1999 omnibus appropriations bill (P.L. 105-277) required OMB to establish procedures for the public to obtain access to data from federally funded research through provisions of the Freedom of Information Act (FOIA). This was a major change from traditional practice. While permitted to do so, federal agencies typically have not required grantees to submit research data, and pursuant to a 1980 Supreme Court decision, agencies, under FOIA, did not have to give the public access to research data that were not part of agency records.

There was considerable debate in Congress and the scientific community about this legislation. Opponents said that FOIA was an inappropriate vehicle for allowing wider public access. They argued that using it would harm the traditional process of scientific research: human subjects would refuse to participate in experiments, believing that the federal government might obtain access to confidential information; researchers would have to spend additional time and money preparing data for submission to the government, thereby interfering with ongoing research; and government/university/industry partnerships would be jeopardized, because jointly funded data would be made available under FOIA. Proponents of the amendment said that “accountability” and “transparency” were paramount; the public should have a right to review scientific data underlying research funded by taxpayers and used in making policy or setting regulations.

OMB released its final guidelines (as revisions to OMB Circular A-110), as directed by law, on September 30, 1999. After considerable public comment, OMB limited access under FOIA to selected research data that the federal government cites or uses in actions having the force and effect of law. Legislation was introduced in the 106th Congress to repeal the law and hearings were held, but the bill did not pass. It had been anticipated that court challenges would be raised to the OMB guidelines, to the extent that they represent a narrow interpretation of the law. Reportedly, William L. Kovacs, vice president of environmental and regulatory affairs for the U.S. Chamber of Commerce and a major supporter of the legislation, predicted that the OMB regulations, which some see as being too narrow in allowing access to research data, could be revisited by the new Bush Administration. This has not yet occurred.

Quality of Federal R&D Data

Final guidelines implementing the “Data Quality Act,” Section 515 of P.L. 106-554 (the FY2001 Treasury and General Government Appropriations Act), were published in the Federal Register on January 2, 2002. This section required OMB to issue government-wide guidelines to ensure the “quality, objectivity, utility and integrity” of information disseminated by the government. Some say the law strengthens the position of industrial opponents of some federal health and environmental policies, who would be able to challenge the scientific quality of data and reports used to develop regulations.
During the rule-writing stage for the new law, scientific groups sought to have the rules written in a way that would prevent “harassment” of scientists working on controversial research and avoid imposing new obstacles to the publication of research results. The final guidelines addressed some of these issues, but still allow challenges to research results underlying official agency policies. The guidelines allow peer-reviewed findings to be challenged on a case-by-case basis. According to the New York Times (March 21, 2002), agencies should have promulgated their
own implementing regulations by October 1, 2002, and the National Academy of Sciences and other groups held meetings to discuss agency implementation procedures.

Government Performance and Results Act (GPRA) and the President's Management Agenda

The Government Performance and Results Act of 1993 (GPRA), P.L. 103-62, is intended to produce greater efficiency, effectiveness, and accountability in federal spending and to ensure that an agency's programs and priorities meet its goals. It also requires agencies to use performance measures for management and, ultimately, for budgeting. Agencies are required to provide Congress with annual performance plans and performance reports. All major R&D funding agencies have developed performance measures to assess the results of their R&D programs. Commentators have pointed out that it is particularly difficult to define priorities for most research and to measure the results quantitatively, since research outcomes cannot be defined well in advance and often take a long time to demonstrate. Recent actions could force agencies to identify goals for research, and measures of research outcomes, more precisely.

The Bush Administration has emphasized the importance of performance measurement, including for R&D, as announced in The President's Management Agenda, FY2002 and in the FY2003 budget request. However, most observers say that more analytical work and refinement of measures are needed before performance measures can be used to recommend budget levels for research. In the FY2003 budget request, OMB used performance measures for management processes, and issued a color-coded chart indicating how departments and agencies were performing in five areas: human capital, competitive sourcing, improved financial management, electronic government (e-government), and integrating budget and performance. Green signified success, yellow indicated mixed results, and red meant unsatisfactory performance. Only one green rating was awarded—to the National Science Foundation for financial management. Most departments and agencies received red ratings in all five categories, although a few yellows were issued.

In addition, as part of a pilot test, six performance criteria were used to evaluate the Department of Energy's applied R&D programs. Although OMB reported that not enough data were available for a valid assessment, the measures used indicated areas possibly meriting increased funding, including research to control greenhouse gases, and areas where funding might be decreased, including oil drilling technology and high wind-speed power research (FY2003 Budget, Analytical Perspectives, Section 8). OMB also identified seven “fundamental [performance] principles” that motivated the development of FY2004 R&D budgets. OMB cosponsored a conference with the National Academy of Sciences (NAS) to develop performance criteria for assessing basic research, which it says it wants agencies to use eventually in their budget requests. The NAS has issued two reports to assist agencies in developing performance measures for research; the most recent is entitled Implementing the Government Performance and Results Act for Research: A Status Report, 2001. The House Science Committee's science policy report, Unlocking Our Future, 1998, commonly called the Ehlers report, recommended that a “portfolio” approach be used when applying GPRA to basic research. P.L.
106-531 mandated that agency heads assess the completeness and reliability of the performance data used in reports to Congress, and the House, with the passage of H. Res. 5, adopted a rule requiring all “committee reports [to] include a statement of general performance goals and objectives, including outcome-related goals and objectives for which the measure authorizes funding.” (See CRS Report RL30905 and CRS Report RS20257.)

Cooperative R&D: As R&D becomes more expensive, collaborative efforts among government, industry, and academia continue to expand. While various laws encourage such efforts, additional issues have developed as a consequence of the implementation of those laws. Congress has addressed cooperative R&D within the context of patent reform, federal R&D
funding, the future of the research and experimentation tax credit, and amendments to the Stevenson–Wydler Technology Innovation Act concerning cooperative research and development agreements (CRADAs). Recently, changes were made in the patent laws, the research and experimentation tax credit was extended, and the Small Business Technology Transfer Program was reauthorized. It is expected that during the second session of the 107th Congress, some Members of Congress may consider a review of collaborative R&D, particularly in relation to facilitating the expansion of high-tech industries, including pharmaceuticals, biotechnology, telecommunications, and computers. Critics, however, believe the government should not fund research that supports the development of commercial products. (See CRS Issue Brief IB89056 and CRS Report 98-862.)

Science and Technology Education

An important aspect of U.S. efforts to maintain and improve economic competitiveness is the existence of a capable scientific and technological workforce. Global competition and rapid advances in science and technology require a workforce that is increasingly more scientifically and technically proficient. A September 2000 report of the National Commission on Mathematics and Science Teaching for the 21st Century, Before It's Too Late, states that jobs in the computer industries and health sciences requiring science and mathematics skills will increase by 5.6 million by the year 2008. Also, 60% of all new jobs in the early twenty-first century will require skills held by just 20% of the current workforce. An important education focus of the 107th Congress may be the ability of the United States to educate the workforce needed to generate the technological advances deemed necessary for continued economic growth.

Hearings were held during the second session of the 107th Congress to address the reported needs in science and mathematics education. On March 7, 2002, the House Subcommittee on Research held a hearing to examine the current state of undergraduate mathematics, science, and engineering education. The hearing examined the variety of responses by colleges and universities, discussed the types of programs that address the relevant problems in science and mathematics education, and discussed federal programs that could be developed to stimulate additional change. On April 22, the House Subcommittee on Research held field hearings on strengthening and improving K-12 and undergraduate science, mathematics, and engineering education. In addition, the hearing discussed industry needs for a diverse and scientifically literate workforce for the twenty-first century.

Several pieces of legislation have been introduced that focus on improving certain aspects of science and mathematics education. H.R. 3130, the Technology Talent Act of 2001, passed the House on July 9, 2002. H.R. 3130 authorizes the awarding of grants, on a competitive basis, to colleges and universities with science, mathematics, engineering, or technology programs for the purpose of increasing the number of students earning degrees in established or emerging fields within those disciplines. Not fewer than 10 grants are to be awarded each year. The awards are for a three-year period, with the third year of funding contingent on the progress made during the first two years of the grant period. H.R. 1858, the National Mathematics and Science Partnerships Act, passed the House on July 30, 2002.
One of the purposes of this bill is to make improvements in science and mathematics education by awarding competitive grants to institutions of higher education to evaluate and enhance the effectiveness of information technologies in elementary and secondary science and mathematics education. An added purpose is to make awards for outreach grants (for partnerships between community colleges and secondary schools) that give priority to proposals involving secondary schools with a significant number of students from groups that are underrepresented in the scientific and technical fields. On January 8, 2002, President Bush signed into law the Elementary and Secondary Education Act, P.L. 107-110 (H.R. 1, No Child Left Behind Act of 2001). The legislation provides $12.5 million for math and science partnerships between schools and colleges. Funding is targeted for use by schools to recruit and train science and
mathematics teachers. Also, colleges and universities will receive support for assisting in the training and advising of teachers in the scientific disciplines.

Foreign Science and Engineering Presence in U.S. Institutions and the Labor Force: The increased presence of foreign students in U.S. graduate science and engineering programs continues to be of concern to many in the scientific community. Enrollment of U.S. citizens in graduate science and engineering programs has not kept pace with that of foreign students in those programs. In addition, a significant share of university faculty in the scientific disciplines is foreign, and foreign doctorate holders are employed in large numbers by industry. National Science Foundation data reveal that in 2000, foreign students earned approximately 30.3% of the doctorate degrees in the sciences and approximately 52.4% of the doctorate degrees in engineering. Trend data for science and engineering degrees for the years 1991–2000 reveal that, within the non-U.S.-citizen population, students with temporary resident status have consistently earned the majority of the doctorate degrees.

Industry leaders contend that because of the lack of U.S. workers with skills in scientific and technical fields, high-technology companies have to rely more heavily on foreign workers on H-1B visas. The American Competitiveness in the Twenty-First Century Act of 2000 (P.L. 106-313) raised the number of H-1B visas by 297,500 over a period of three years. While a portion of the fees collected from these visas would be used to provide technical training for U.S. workers and to establish a K-12 science, mathematics, and technology education grant program, many in the scientific and engineering communities believe that the legislation lessens the pressure to encourage more U.S. students, especially minorities, to pursue scientific and technical careers. In addition, they contend that U.S. workers can be retrained to do these new jobs, but that high-technology companies prefer foreign workers because they will work for less money. Company officials counter that the technologies are evolving too rapidly to permit retraining and that they must hire workers who have the skills now or risk losing market share to companies that do have the necessary workforce (see CRS Report RL30498).

Homeland Security

Counter Terrorism R&D: The White House Office of Science and Technology Policy (OSTP), the Office of Management and Budget (OMB), and the Office of Homeland Security have played major roles in coordinating federal counter terrorism R&D budgets and major activities. The National Science and Technology Council (NSTC) has established an Antiterrorism Task Force with four subgroups, each of which is developing R&D priorities for specific subject areas. The $3 billion FY2003 budget request for counter terrorism R&D was about double the amount appropriated for FY2002. According to the Office of Management and Budget's Annual Report to Congress on Combating Terrorism, FY2002, $44.802 billion was requested for combating terrorism for FY2003. Of this, about $2.905 billion—or roughly 6.5% of the total—was requested for R&D to develop technologies to deter, prevent, or mitigate terrorist acts. This is an increase over FY2002, when appropriated funds, combined with the Emergency Response Fund, totaled $36.468 billion, with R&D funding at $1.162 billion, or 3.2% of the total.
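The R&D shares just quoted follow directly from the dollar figures; a brief check in Python, using only the amounts given in the text:

# Share of combating-terrorism funding devoted to R&D
# (amounts in billions of dollars, as quoted above).
total_fy03, rd_fy03 = 44.802, 2.905   # FY2003 request
total_fy02, rd_fy02 = 36.468, 1.162   # FY2002, including the Emergency Response Fund
print(round(100 * rd_fy03 / total_fy03, 1))  # 6.5 (percent of the FY2003 request)
print(round(100 * rd_fy02 / total_fy02, 1))  # 3.2 (percent of the FY2002 total)

In other words, between FY2002 and FY2003 the R&D slice of combating-terrorism funding roughly doubled as a percentage of the total, and more than doubled in absolute terms.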
OSTP has also identified some examples of the Administration's science and technology-related antiterrorism priorities for FY2003.* According to OMB, the three largest funding increases in the FY2003 request were for the Department of Health and Human Services (DHHS), the Environmental Protection Agency (EPA), and the National Science Foundation (NSF). The largest increase was for bioterrorism-related R&D at the National Institutes of Health (part of DHHS). (See CRS Report RS21270 and CRS Report RL31202.)

* See http://www.ostp.gov/html/AntiTerrorismS&T.pdf.

It was estimated that the Department of Homeland Security created in H.R. 5005, which passed the House in August 2002, would be responsible for counter terrorism R&D totaling about $300 to $500 million. The Department of National Homeland Security that was
proposed in S. 2452 would be responsible for a larger amount of R&D, which would be transferred to the new department. Among the issues Congress considered were: coordination among agency programs to avoid duplication and overlap; coordination between existing interagency counter terrorism R&D mechanisms and those included in the proposed departments; and the possible negative effects on scientific information exchange and scientific inquiry of placing security controls on scientific and technical information. (See CRS Report RL31354.)

Aviation Security Technologies

The September 11 terrorist attacks heightened congressional interest in technologies for aviation security (CRS Report RL31151). In February 2002, the newly formed Transportation Security Administration took over a long-established aviation security R&D program previously conducted by the Federal Aviation Administration. The main emphasis of this program in recent years has been the development of explosives-detection equipment for screening airline passengers' checked baggage. Other technologies under development include equipment for passenger screening, biometrics and other technologies for airport access control, and aircraft hardening. The Aviation and Transportation Security Act (ATSA, P.L. 107-71) requires that explosives-detection equipment be used to screen all checked baggage by December 31, 2002. Until then, the severe challenge of procuring and deploying enough equipment to meet the deadline, and the debate over whether to extend it, may overshadow efforts to develop improved equipment types. In particular, funding for aviation security R&D did not increase significantly in FY2003, despite the September 11 attacks, because of the pressure of ATSA's near-term operational deadlines and the focus on immediate needs as the Transportation Security Administration takes over security responsibility from the private sector.

Critical Infrastructure

Following the September 11 terrorist attacks, the Bush Administration articulated its approach to protecting the nation's information systems and the critical infrastructure that depends on them. Executive Order 13228, signed October 8, 2001, established the Office of Homeland Security. Executive Order 13231, signed October 16, 2001, established the President's Critical Infrastructure Protection Board. The Office of Homeland Security has overall authority for coordinating activities to protect the nation, including its critical infrastructures, from terrorist attacks. The President's Critical Infrastructure Protection Board focuses primarily on the information infrastructure upon which much of the nation's critical physical infrastructure relies. The Executive Orders leave in place many of the activities initiated under the Clinton Administration's Presidential Decision Directive 63 (PDD-63).

In June 2002, the Administration proposed establishing a new Department of Homeland Security. The new Department would bring together numerous agencies from other departments in an effort to better coordinate the nation's counter-terrorism response. In the area of critical infrastructure protection, a number of entities established by, or in support of, PDD-63 would be transferred, including the Critical Infrastructure Assurance Office, the Federal Computer Incident Response Center, and parts of the National Infrastructure Protection Center. The proposal for the new Department leaves in place the entities established by the earlier Executive Orders.
While the reorganization is meant to increase coordination among these groups, the question remains how well their activities are being implemented and coordinated. Meanwhile, Version 2 of the National Infrastructure Plan, previously due in 2001, is still being developed. This plan is supposed to contain the private sector's strategy for protecting the infrastructure it owns and operates. It is not entirely clear whether it will address cyber security primarily or will also include strategies for protecting assets from physical attacks. Bills have been introduced to help facilitate the exchange of information between the private sector and the federal government by exempting the shared information
from the Freedom of Information Act. The Administration's proposal to establish the new Department (see above), and the two congressional bills being debated, would include such an exemption. The exemption proposals, however, have raised concerns among groups committed to open access to government information. (See CRS Report RL30153.)

Technology Development

Intellectual Property/Patent Reform: Interest in the protection of intellectual property has grown as its ownership becomes more complex because of increasing joint public and private support of research. A particular focus of that concern is cooperative R&D among the federal government, industry, and academia. Issues continue to be raised in Congress about the right of drug companies to set prices on drugs that were developed in part with federal funding or in conjunction with federal agencies. Conflicts have also surfaced in Congress over federal laboratories patenting inventions that each collaborating party believes to be its own. For some federal agencies, delays continue in negotiating cooperative research and development agreements (CRADAs) because of disagreements over the ownership and control of any intellectual property. Problems have been encountered by NIH in obtaining, for use in its research, new experimental compounds that have been developed and patented by drug companies; the companies are concerned that their intellectual property rights could be eroded if new applications are discovered by NIH. These and other issues are expected to be explored as Congress addresses technology transfer, drug pricing, and/or the implications of the patent reform legislation passed last session (CRS Report 98-862, CRS Report RL30451, and CRS Report RL30572).

Advanced Technology Program

The Advanced Technology Program (ATP), a key element in the former Clinton Administration's efforts to promote economic growth through technology development, has been targeted for elimination since the start of the 104th Congress. Critics argue that R&D aimed at the commercial marketplace should be funded by the private sector, not by the federal government. This controversy was evident in the activities of the 106th Congress, when the original House-passed appropriations legislation contained no funding for ATP. While FY2000 funding for ATP was 28% below the previous year, a small increase was provided for FY2001. Funding for the program increased 27% in FY2002. During the upcoming authorization and/or appropriation debates, similar questions may arise as to the appropriateness of federal government support for the Advanced Technology Program. The broader issues associated with determining the proper role of the federal government in technology development may also be explored. (See CRS Issue Brief IB91132, CRS Report 95-36, and CRS Report 95-50.)

Technology Transfer

As technology transfer activities between federal laboratories and the private sector become more widespread, additional issues are surfacing including, among others, fairness of opportunity, dispensation of intellectual property, and participation of foreign firms. Congressional concerns about competing claims on rights to patents arising from federally funded research and development may generate oversight of the policies and practices of various federal research establishments. Congressional interest in health-related R&D has also led to questions about what role the transfer of government-supported research plays in the creation of pharmaceuticals and biotechnology products.
The implications of the laws associated with technology transfer in these and other industrial sectors are expected to be of continuing concern during the 107th Congress. (See CRS Issue Brief IB85031, CRS Report RL30585, and CRS Report RL30320.)

Federal R&D, Drug Costs, and Availability

Congressional interest in methods to provide drugs at lower cost, particularly through Medicare for the elderly, has rekindled discussion over the role the federal government plays in facilitating
the creation of new pharmaceuticals for the marketplace. In the current debate, some argue that the government's financial, scientific, and/or clinical support of biomedical research and development (R&D) entitles the public to commensurate consideration in the prices charged for any resulting drugs. Others view government intervention in price decisions based upon initial federal R&D funding as contrary to a long-term trend of government promotion of innovation, technological advancement, and the commercialization of technology by the business community, leading to new products and processes for the marketplace.

Various federal laws facilitate commercialization of federally funded R&D through technology transfer, cooperative R&D, and intellectual property rights. These laws are intended to encourage the additional private sector investment often necessary to further develop marketable products. The current approach to technology development policy attempts to balance the public sector's interest in new and improved technologies with concerns over providing companies valuable benefits without adequate accountability or compensation. However, questions have been raised in Congress about whether this balance is appropriate, particularly with respect to drug discovery. Critics maintain that the need for technology development incentives in the pharmaceutical and/or biotechnology sectors is mitigated by industry access to government-supported work at no cost, monopoly power through patent protection, and additional regulatory and tax advantages such as those conveyed through the Hatch–Waxman Act (P.L. 98-417) and the Orphan Drug Act (P.L. 97-414). Supporters of the existing approach argue that these incentives are precisely what is required and that they have given rise to robust pharmaceutical and biotechnology industries. It remains to be seen whether Congress will change the nature of the current approach to government–industry–university cooperation through an attempt to legislate the costs associated with prescription drugs. (See CRS Report RL31379, CRS Report RL30756, CRS Report RL30585, CRS Report RS21129, and CRS Issue Brief IB10105.)

Telecommunications and Information Technology

Bell Entry into Long Distance: Present laws and regulatory policies restrict the Bell operating companies (BOCs) from offering long-distance (interLATA) services within their service regions until certain conditions are met. A BOC seeking to provide such services must file an application with the Federal Communications Commission (FCC) and the appropriate state regulatory authority demonstrating compliance with a fourteen-point checklist. The FCC, after consultation with the Justice Department and the relevant state regulatory authority, determines whether the BOC is in compliance and can be authorized to provide in-region, interLATA services. To date, three BOCs, Verizon, SBC Communications, and BellSouth, have been authorized to provide such services in fifteen states. Concerns have been raised about whether such restrictions are overly burdensome and discourage needed investment in and deployment of broadband services. Proponents of lifting these restrictions argue that doing so will accelerate the deployment of and access to broadband services, particularly in rural and underserved areas.
Opponents argue that such restrictions are necessary to ensure the growth of competition in the provision of telecommunications services and that lifting them would have an adverse effect on the broadband marketplace. Legislation (H.R. 1542) seeking to ease these regulatory restrictions, as applied to high-speed data services, passed the House (273–157), as amended, on February 27, 2002. Two measures (S. 2430 and S. 2863) addressing broadband deregulation, but not containing provisions specific to BOC interLATA service entry, have been introduced in the Senate. (See CRS Issue Brief IB10045 and CRS Report RL30018.)

Slamming

Slamming is the unauthorized change of a subscriber's telephone service provider. Measures (S. 58 and S. 1084) to strengthen the slamming regulations issued by the FCC were introduced in the 106th Congress, but were not enacted. During that period, the FCC promulgated additional regulations to
further strengthen its slamming rules. Whether the FCC's slamming rules will be a sufficient deterrent to the practice, and whether they will negate congressional interest in enacting legislation, remains to be seen. To date, no legislation to modify slamming regulations has been introduced in the 107th Congress. (See CRS Issue Brief IB98027.)

Broadband Internet Access

Broadband Internet access gives users the ability to send and receive data at speeds far greater than conventional “dial up” Internet access over existing telephone lines. New broadband technologies—primarily cable modem and digital subscriber line (DSL), as well as satellite and fixed wireless Internet—are currently being deployed nationwide by the private sector. Many observers believe that ubiquitous broadband deployment is an important factor in the nation's future economic growth. At issue is what, if anything, should be done at the federal level to ensure that broadband deployment is timely, that industry competes on a “level playing field,” and that service is provided to all sectors of American society. Currently, legislation in Congress centers on two approaches: easing certain legal restrictions and requirements (imposed by the Telecommunications Act of 1996) on incumbent telephone companies that provide high-speed data (broadband) access (H.R. 1542, passed by the House on February 27, 2002; S. 2430; S. 2863), and providing federal financial assistance—such as grants, loans, or tax credits (H.R. 267, S. 88, S. 1731, S. 2448)—for broadband deployment in rural and economically disadvantaged areas. (See CRS Issue Brief IB10045 and CRS Report RL30719.)

Spectrum Management and Wireless Technologies

Managing utilization of the radio spectrum to meet increased spectrum demands efficiently and effectively during an era of rapidly growing wireless telecommunications has become a major challenge for government and industry. Interested parties want to ensure that competition is maximized and that all consumer, industry, and government groups are treated fairly. The radio spectrum, a limited and valuable resource, is used for all forms of terrestrial and satellite wireless communications, including radio and television broadcast, mobile telephone services, paging, radio relay, and aeronautical and maritime navigation. The spectrum is used by federal, state, and local governments and the commercial sector. A vast array of commercial wireless services and new technologies are being developed to provide voice, data, and video transmissions in analog and digital formats for broadcast and interactive communications. Spurred by the growth of electronic commerce, many wireless service providers are developing wireless Internet access services. Spectrum used for public safety, similarly, needs to support data and video transmissions as well as voice communications to respond effectively to emergency situations. As a result, competition for spectrum is increasing.

Due mainly to the combination of different technology standards operating on different radio frequencies, communications between—and even within—local, state, and federal agencies are not always assured. Achieving interoperability—the ability to communicate among public safety telecommunications networks—is an important goal of the public safety community.
In the last decade, significant advances in technology and in funding to purchase communications equipment have eased—but not eliminated—the problems of incompatible systems, inadequate technology in the hands of first responders, insufficient funding, and limited spectrum. President Bush's FY2003 budget request for Homeland Security included $1.4 billion to enhance communications infrastructure to support interoperability. This sum is part of $3.5 billion that went to the Federal Emergency Management Agency (FEMA) and the Department of Justice to be used in “first responder” grants to states. As currently planned, the Department of Homeland Security would absorb FEMA and the ongoing Office of Justice Programs efforts that include interoperability among federal, state, and local public safety agencies.
Title III of the Balanced Budget Act of 1997 (P.L. 105-33) is intended to promote the transition from analog to digital television broadcasting. In that Act, Congress directed the FCC to designate spectrum for public safety agencies in the channels to be cleared (channels 60–69). The FCC is working with the broadcasting industry and wireless carriers on a market-driven approach for voluntary clearing of spectrum assigned for future use by public safety agencies. When it allocated this spectrum, the FCC specified that part would be used to assure interoperability for wideband networks used by public agencies. Congress is preparing to review national policies for managing spectrum, including spectrum allocation, the promotion of spectrally efficient technology, and the availability of sufficient spectrum for public safety operations. Additional legislation has been proposed to assure that public safety receives its designated spectrum in the Upper 700 MHz range in a “timely manner”; H.R. 3397 addresses this matter. The roles of the FCC (which manages spectrum for commercial, state, and local government uses) and the National Telecommunications and Information Administration (which manages spectrum for the federal government) may also be revisited in the second session of the 107th Congress. (See CRS Report RL31375 and CRS Report RS20993.)

Internet Privacy

Internet privacy issues encompass concerns about the collection of personally identifiable information (PII) from visitors to websites, as well as debate over law enforcement or employer monitoring of electronic mail and Web usage. In the wake of the September 11 terrorist attacks, debate over the issue of law enforcement monitoring has intensified, with some advocating increased tools for law enforcement to track down terrorists, and others cautioning that fundamental tenets of democracy, such as privacy, not be endangered in that pursuit. The Department of Justice authorization bill (H.R. 2215), as passed by the House and Senate, requires the Justice Department to report to Congress on its use of Internet monitoring software such as Carnivore/DCS 1000. But Congress also passed the USA PATRIOT Act (P.L. 107-56) which, inter alia, makes it easier for law enforcement to monitor Internet activities. Congress and public interest groups are expected to monitor how law enforcement officials implement that Act. (See CRS Report RL31289 and CRS Report RL31408.)

The parallel debate over website information policies concerns whether industry self-regulation or legislation is the best route to assure consumer privacy protection on commercial sites, and whether amendments to the 1974 Privacy Act are needed to protect visitors to government websites. The issue is how to balance consumers' desire for privacy with the needs of companies and the government to collect certain information on visitors to their websites. Although many in Congress and the Clinton Administration preferred industry self-regulation for commercial websites, slow industry response led the 105th Congress to pass legislation to protect the privacy of children under 13 (the Children's Online Privacy Protection Act [COPPA], P.L. 105-277) as they use commercial websites. Many bills have been introduced since that time to protect those not covered by COPPA, but the only legislation that has passed addresses information collection practices by federal, not commercial, websites. Many Internet privacy bills are pending and hearings have been held (see CRS Report RL31408 for legislative status).
E-Government

Electronic government (e-government) is an evolving concept, meaning different things to different people. However, it has significant relevance to four important areas of governance: (1) delivery of services (government-to-citizen, or G2C); (2) providing information (also G2C); (3) facilitating the procurement of goods and services (government-to-business, or G2B, and business-to-government, or B2G); and (4) facilitating efficient exchanges within and between agencies (government-to-government, or G2G). For policy makers concerned about e-government, a central issue is
developing a comprehensive but flexible strategy to coordinate the disparate e-government initiatives across the federal government. Just as the private sector is undergoing significant change due, in part, to the convergence of technology, these same forces are transforming the public sector as well. E-government initiatives vary significantly in their breadth and depth from state to state and agency to agency. So far, states such as California, Minnesota, and Utah have taken the lead in developing e-government initiatives. However, there is rapidly increasing interest and activity at the federal level as well. Perhaps the best-known federal example is the September 2000 launch of the FirstGov website [http://www.firstgov.gov/]. FirstGov, which underwent a significant redesign in March 2002, is a web portal designed to serve as a single point of access for finding federal government information on the Internet. The FirstGov site also provides access to a variety of state and local government resources.

The movement to expand the presence of government online raises as many issues as it provides new opportunities. Some of these issues concern security, privacy, management of governmental technology resources, accessibility of government services (including “digital divide” concerns arising from a lack of skills or access to computers, or from disabilities), and preservation of public information (maintaining freedom of information procedures for digital documents comparable to those that exist for paper documents). Although these issues are neither new nor unique to e-government, they do present the challenge of performing governance functions online without sacrificing the accountability of, or public access to, government that citizens have grown to expect. (See CRS Report RL31057.)

Federal Chief Information Officer (CIO)

A growing interest in better managing government technology resources, combined with recent piecemeal efforts to move governmental functions and services online, has led some observers to call for an “e-government czar,” or federal Chief Information Officer (CIO), to coordinate these efforts. In the private sector, a CIO usually serves as the senior decision-maker providing leadership and direction for information resource development, procurement, and management, with a focus on improving efficiency and the quality of services delivered. During the 106th Congress, two bills were introduced in the House calling for the establishment of a federal CIO position, but neither passed. The issue is being revisited in the 107th Congress.

On May 1, 2001, Senator Lieberman introduced S. 803, the E-Government Act of 2001. Among its many provisions, S. 803 originally called for the establishment of a federal CIO, to be appointed by the President and confirmed by the Senate. The federal CIO would be in charge of a proposed Office of Information Policy and would report to the Director of OMB. S. 803 would also establish the CIO Council in law, with the federal CIO as Chair. The bill was referred to the Senate Governmental Affairs Committee, which held a hearing on it on July 11, 2001. Also on July 11, 2001, Representative Turner introduced an identical companion bill, H.R. 2458, the E-Government Act of 2001, which was referred to the House Committee on Government Reform. On March 21, 2002, the Senate Governmental Affairs Committee reported S. 803 (now renamed the E-Government Act of 2002) with an amendment. As amended, S.
803 now calls for the establishment of an office of Electronic Government within OMB. The new office is to be headed by a Senate-confirmed administrator, who in turn, is to assist OMB’s Director, and Deputy Director of Management, and work with the Administrator of the Office of Information and Regulatory Affairs (OIRA) “in setting strategic direction for implementing electronic Government.”. The Senate passed the amended version of S. 803 unanimously on June 27, 2002. At this time, no additional action has been taken on the House companion bill, H.R. 2458. (See CRS Report RL30914.) On June 14, 2001, OMB announced the appointment of Mark Forman to a newly created position, the Associate Director for Information Technology and E-Government. According to the OMB announcement, as “the leading federal e-government executive,” the new Associate
The Associate Director will also "lead the development and implementation of federal information technology policy." The new position will report to the Deputy Director for Management at OMB, who in turn will be the federal CIO.

Information Technology R&D
For FY2002, almost all of the funding for federal information science and technology and Internet development is part of a single government-wide initiative. This is called the Information Technology Research and Development (IT R&D) initiative, and it is the successor to the federal High Performance Computing and Communications Initiative begun in FY1991. The IT R&D initiative continues the effort begun in FY1991 by providing support for federal high-performance computing science and technology, information technology software and hardware, networks and Internet-driven applications, and education and training for personnel. In the current fiscal year, seven federal agencies will receive a total of $1.84 billion under the IT R&D initiative, with the NSF receiving about a third of that total. The Bush Administration is proposing that for FY2003 the IT R&D initiative receive $1.89 billion. The 107th Congress has so far supported this initiative, and H.R. 3400, which amends the High Performance Computing Act of 1991 to authorize appropriations for fiscal years 2003 through 2007, was reported favorably out of the House Committee on Science and placed on the House Calendar on June 18, 2002.

Voting Technologies
The 2000 Presidential election raised the question of whether changes are needed in the voting systems used in the United States (see CRS Reports RL30773 and RS20898). Elections in the United States are administered at the state and local level, and the federal government does not currently set mandatory standards for voting technologies. Five different kinds of technologies are now used: paper ballots, lever machines, punch cards, mark-sense forms, and electronic systems. Most states use more than one kind. For some of these technologies, punch card ballots in particular, concerns have been raised about ballot design, voter errors, and counting accuracy. Questions have also been raised about voter registration systems and the impacts of remote voting, including absentee and mail-in balloting. One form of remote voting currently in development is Internet voting (see CRS Report RS20639), which so far has been used only on an experimental basis.

The House and Senate have both passed election reform legislation (H.R. 3295), which is now in conference. Issues currently being debated include the degree to which the federal government should set mandatory as opposed to voluntary national standards (CRS Report RS21156); whether punch card and lever voting systems should be eliminated; whether precincts should be required to have voting machines that are fully accessible to blind and otherwise disabled voters; whether states should adopt computerized statewide voter registration systems; what kinds of identification should be required of first-time voters; and what federal funding should be made available for upgrading voting systems and for administering federal elections. (See the CRS Election Reform Electronic Briefing Book for details.)
Biotechnology: Privacy, Patents, and Ethics
Much debate currently focuses on how genetic privacy and discrimination, gene patenting, and ethical issues will affect the application of advances in biotechnology. Those advances hold great promise for providing extraordinary benefits through agricultural, medical, industrial, and other applications, but they have also raised concerns. The advances are based mostly on research in molecular biology and genetics. The genetic basis of biotechnology is the source not only of much of its promise, but also of many of the concerns. That is because the genetic code contains the basic information used to produce the chemical building blocks of life, and it is inherited.
Biotechnology provides methods to identify and manipulate that code, including the transfer of genes between species.

One major issue is how individual privacy can best be protected, and discrimination prevented, in the face of major advances in genetic testing that are increasingly revealing predisposition to disease as well as other genetic traits. The application of existing privacy statutes to genetic information appears limited (CRS Report RL30006). One of the issues being debated is whether genetics should be included in broader medical privacy legislation or whether legislation specific to genetic privacy is more appropriate. The potential for genetic discrimination in both employment and insurance has led to the introduction of numerous bills, as well as hearings. Issues include whether such discrimination currently exists and whether the Americans with Disabilities Act (ADA) would cover it.

Another important issue concerns the public policy implications of gene patenting and other forms of intellectual property protection in biotechnology. While patents have long been granted in the biotechnology industry, several issues are currently being debated (CRS Reports RL30648 and RL30585). They include ethical concerns, environmental impacts, and questions about the impacts of current patent practice. Some observers question whether patents should be granted at all for living things, genetic materials, and other biotechnologies. Supporters counter that trade secret protection is a less attractive alternative and, in a broader sense, question whether patent law is the appropriate vehicle to address the social consequences of biotechnology. Internationally, a major issue is how intellectual property protection can affect equity in the distribution of biotechnology applications. Some nations are increasingly fearful that the use of agricultural biotechnology could leave their food production at the mercy of a few corporations; others demand, equally forcefully, that developed nations commit to ensuring equal access to the benefits of biotechnology.

A third set of issues concerns identification of the ethical questions raised by research in biotechnology and ways to address them. Some of the thorniest ethical issues faced by Congress are associated with biomedical research, especially genetic testing, gene therapy, stem cell research (CRS Report RL31015, CRS Report RL31142, and CRS Report RS21044), the development of controversial crop technologies such as the "Terminator" gene (CRS Report RL30278), and cloning (CRS Report RL31358). Debate centers on the limits that should be placed on such research and the applications deriving from it, on the regulation of those activities, and on the extent to which the federal government should fund them.

Global Climate Change
Congress has maintained an active and continuing interest in the implications for the United States of, and the issues associated with, possible global climate change. In December 1997, the parties to the United Nations Framework Convention on Climate Change (UNFCCC) agreed to the Kyoto Protocol to establish binding commitments for reductions in greenhouse gases for the thirty-eight developed countries of the world, including the United States, and the economies in transition (former communist nations). However, the Kyoto Protocol has not yet received the required number of ratifications to enter into force.
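The arithmetic behind "entry into force" is worth making explicit, because it drives the diplomacy described in the following paragraphs. Under Article 25 of the Protocol, the treaty takes effect only after ratification by at least 55 parties, including Annex I (developed and transition) parties that together accounted for at least 55% of Annex I carbon dioxide emissions in 1990. The minimal sketch below works through that test; the approximate 1990 emissions shares and the helper function are our own illustration, not part of the discussion above.

# Back-of-envelope check of the Kyoto Protocol's entry-into-force test
# (Article 25): at least 55 parties must ratify, including Annex I parties
# accounting for at least 55% of Annex I CO2 emissions in 1990. The two
# shares below are approximate figures from the UNFCCC's 1990 Annex I
# emissions table; treat them as illustrative, not authoritative.
US_SHARE = 36.1      # United States, % of Annex I 1990 CO2 (approximate)
RUSSIA_SHARE = 17.4  # Russian Federation, % of Annex I 1990 CO2 (approximate)

def threshold_met(ratifying_shares):
    """Return True if the ratifying Annex I parties clear the 55% test."""
    return sum(ratifying_shares) >= 55.0

# Everything that is not the United States or Russia, lumped together:
everyone_else = 100.0 - US_SHARE - RUSSIA_SHARE  # 46.5%

# Without the United States, entry into force is still possible, but only
# if nearly every other large emitter (notably Russia) ratifies:
print(threshold_met([RUSSIA_SHARE, everyone_else]))  # True  (63.9 >= 55)
print(threshold_met([everyone_else]))                # False (46.5 <  55)

On these rough figures, the Protocol can enter into force without the United States only if nearly all other large Annex I emitters, Russia in particular, ratify; that is the arithmetic behind the ratification drive described below.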
If the Protocol were to enter into force, and if the United States were ever to ratify it, the nation would be committed to reducing its net average annual emissions of six greenhouse gases to 7% below baseline levels (1990 for carbon dioxide) during the 2008–2012 period. At present, U.S. emissions are above baseline levels. The United States signed the Protocol, but President Clinton did not submit it to the Senate for advice and consent to ratification because the Senate had passed a resolution stating that the United States should not agree to a protocol that did not impose similarly binding requirements on developing countries or that would "result in serious harm to the U.S. economy or possibly produce little environmental benefit." Work continued under United Nations auspices on many of the methodologies and procedures needed to implement the Convention and to ensure that the Protocol will be fully operational at such time as it might enter into force.
Seven "conference of parties" (COP) meetings have been held to resolve outstanding issues. COP-6 negotiations collapsed in November 2000, however, and the meeting was suspended without agreement. It was anticipated that talks would resume in 2001. In March 2001, however, the Bush Administration indicated its opposition to the Kyoto Protocol, declared it a failed effort, and essentially rejected it, citing possible harm to the U.S. economy and lack of developing country participation. COP-6 negotiations resumed in July 2001. The United States attended but, for the most part, did not participate in discussions related to the Protocol. The United States continued to act as an observer at COP-7 later in 2001, declining to participate in negotiations. At COP-7, most major issues were resolved, and a goal emerged of bringing the Kyoto Protocol into force, without the United States if necessary, by the August 26–September 4, 2002, meeting of the World Summit on Sustainable Development (WSSD) in Johannesburg, South Africa. Although Protocol proponents fell short of that goal, the drive continues internationally, spearheaded mainly by the European Union, to acquire the requisite number of ratifications so that the Kyoto Protocol might enter into force at some future date.

On February 14, 2002, President Bush announced a U.S. policy framework for climate change, the Global Climate Change Initiative (announced alongside the separate "Clear Skies Initiative" on air quality), as a new approach for meeting the long-term challenge of climate change. The centerpiece of this announcement was a plan to reduce the greenhouse gas intensity of the U.S. economy by 18% over the next 10 years. Greenhouse gas intensity measures the ratio of greenhouse gas emissions to economic output, and it has been declining in the United States over the past several years. The Administration stated that the goal, to be met through voluntary action, was to achieve efficiency improvements that would reduce emissions from 183 metric tons per million dollars of gross domestic product to 151 in 2012 (a decline of (183 − 151)/183, or roughly 17.5%, consistent with the stated 18% goal). The plan noted that "if, in 2012, we find that we are not on track toward meeting our goal, and sound science justifies further policy action, the United States will respond with additional measures that may include a broad, market-based program" and other incentives and voluntary measures to accelerate technology development. President Bush also outlined a U.S. Climate Change Research Initiative and a National Climate Change Technology Initiative, along with a new Cabinet-level management structure to oversee their implementation. (For a full description of this announcement, see http://www.whitehouse.gov/news/releases/2002/02/climatechange.html.)

Discourse in Congress over the prospect of global warming, the extent to which it might occur, and what the United States could or should do about it has yielded a range of legislative proposals from both sides of the issue. Moreover, several committees in the House and the Senate have held hearings to review the details of those proposals. In that milieu, arguments were presented that policy actions to reduce emissions of carbon dioxide and other greenhouse gases should be taken now, in line with the intent of the Kyoto Protocol. Alternative arguments called for delay, citing challenging issues that were regionally complex, politically delicate, and scientifically uncertain; the need to expand technological options for mitigating or adapting to the effects of any climate change; and the associated high cost of certain mitigation schemes that would prematurely replace existing capital stock before the end of its economic life.

Interest in the 107th Congress has focused on the scientific evidence for global warming and the uncertainties associated with future climate projections; the performance and results of federal spending on climate change technology programs and, more broadly, on global change research programs; the implications for the U.S. economy of various options for complying with the emissions reductions in the Protocol, if it were ever to be ratified; the extent to which carbon dioxide is considered a "pollutant" and whether the government has the authority to regulate it; the pros and cons of granting American companies credit for early action to reduce their emissions of greenhouse gases; and long-term research and development programs to develop new technologies to help stabilize greenhouse gas emissions. (For more information, see "CRS Products" in the CRS Electronic Briefing Book on Global Climate Change [http://www.congress.gov/brbk/html/ebgcc1.shtml], and CRS Report RL30452.)
Aeronautics R&D
In February 2002, the National Aeronautics and Space Administration (NASA) presented its technology vision for aviation in a report entitled "The NASA Aeronautics Blueprint" (http://www.aerospace.nasa.gov/aero_blueprint/). Noting that aviation accounts for about 6% of the U.S. gross domestic product, the report highlights the role of new technologies in increasing air traffic capacity, reducing the impact of aircraft noise and emissions, improving aviation safety and security, and meeting other needs such as national defense and commercial competitiveness. Unlike a similar document issued by the European Union in January 2001, "European Aeronautics: A Vision for 2020" (http://www.europa.eu.int/comm/research/growth/aeronautics2020/en/), the Blueprint does not call specifically for increases in government funding. Despite a modest increase in FY2002, the NASA budget for aeronautics R&D is down by about half from its FY1998 peak. The overall funding level, as well as funding for certain activities of particular congressional interest, continues to receive close attention in the second session of the 107th Congress. A related issue may be the coordination of NASA's aeronautics R&D activities with those of the Federal Aviation Administration, which has a smaller program focused primarily on support of its regulatory activities. (See CRS Report RL31347.)

Space Programs: Civil, Military, and Commercial
NASA's Long-Term Goals: On December 20, 2001, the Senate confirmed Mr. Sean O'Keefe as the new Administrator of NASA. Mr. O'Keefe's background is in public administration and financial management. He has made clear in testimony to Congress that his top priority at NASA is improving management, particularly in the space station program, which has experienced significant cost growth (see below). Responding to criticism that he lacked "vision" for the agency, Mr. O'Keefe gave a speech at Syracuse University on April 12, 2002, outlining that vision. Unlike many previous NASA administrators and space program advocates, he declined to identify human missions back to the Moon or to Mars as NASA goals, insisting that NASA's program should be driven by science, not destination. Mr. O'Keefe's more "nuts and bolts" focus makes some space advocates wonder what the future holds for NASA under his leadership. Some NASA supporters believe that the Bush Administration's budget for NASA suggests that bold goals are not envisioned. The FY2003 request is $15 billion (see CRS Report RL31347), less than 1% higher than FY2002. The "out-year" budget projections show an agency that is either level-funded or declining (depending on the rate of inflation). Others, however, are relieved that in this tight fiscal environment the NASA budget has not fared worse.

Mr. O'Keefe also has stated that he wants to focus on NASA's role as part of the national security community. To some, that comment is worrisome because NASA, by statute, is a civilian space agency. While NASA and DOD routinely cooperate on technology activities, particularly in aeronautics and space transportation, NASA's identity as an open, civilian agency has remained unchanged since it was created in 1958. Some wonder to what extent NASA's mandate may change under the Bush Administration.

Space Station: One NASA program that continues to generate controversy is the International Space Station (ISS) program. (See CRS Issue Brief IB93017.) When ISS was approved in 1993 (replacing the earlier "Freedom" program begun in 1984), NASA said it would cost $17.4 billion to build, and the result would be a laboratory in space for "world class" scientific research, housing seven astronauts. By 2000, that cost had grown to $24.1–$26.4 billion. In response, Congress imposed a $25 billion cap on building the space station (not including the cost of space shuttle launches to take the various segments and crews into orbit).
In 2001, however, NASA revealed another $5 billion in cost growth. Following a study by an independent task force (see CRS Report RL31216), the Bush Administration put the program on "probation" and gave the space station program office two years to demonstrate credibility in its cost estimating and program management practices. Until then, NASA has been instructed to truncate construction of the space station at a stage the Administration calls "core complete." At that point, the space station could support only three crewmembers instead of the seven planned. The crew size limitation would significantly reduce the amount of research that could be conducted, and it would affect all the international partners in the program (the United States, Europe, Canada, Japan, and Russia). All of the partners have expressed deep concern. The non-U.S. partners are seeking a commitment from the Administration that the seven-person configuration ultimately will be built, even if there is no deadline for completing it. The Administration has not been willing to make that commitment, however. How Mr. O'Keefe will tame ISS costs, or whether he will find himself in the same quandary as his predecessor (attempting to build a useful space station that meets international commitments, while staying within the congressionally mandated cap and protecting other NASA programs), remains to be seen.

The Space Shuttle and the Space Launch Initiative: The U.S. government and private sector companies need space launch vehicles to place satellites of varying sizes into different orbits or onto interplanetary trajectories. In the case of NASA, humans also must be launched. NASA's space shuttle is the only U.S. launch vehicle capable of placing humans in space, and the only operational reusable launch vehicle (RLV) in the world. All others are expendable launch vehicles (ELVs) that can be used only once. Several U.S. companies compete in the world market to provide ELV launch services to government and commercial customers. (See CRS Issue Brief IB93062.)

The U.S. government and the private sector want to develop launch vehicles with lower operational costs. In the 1990s, the government and the private sector embarked on joint efforts to create less costly ELVs, but many observers believe that to reduce costs significantly, a new RLV design is needed. Government, private sector, and joint government–private sector efforts to do so have failed so far. NASA began its most recent attempt, the Space Launch Initiative (SLI), in FY2001. SLI is funding technology development activities that are expected to allow a decision in 2006 as to what design to choose for a "2nd generation" RLV. Because of the earlier program failures, and because of SLI's goals and timeline (which many consider optimistic), the program is under considerable scrutiny. The availability of a 2nd generation RLV is intertwined with decisions on how long the space shuttle will be needed, and therefore how much to spend on safety and supportability upgrades to it. NASA asserts that the new vehicle will achieve initial operational capability in 2012, but many argue that this is too optimistic, particularly since the choice of design will not be made until 2006. That would leave only 6 years to develop and test the new vehicle.

Cost estimates for the new vehicle are notional at this time, but NASA suggests it will be on the order of $10 billion, raising issues about whether expected budgets could support such an investment. If the new vehicle will not be ready until after 2012, additional shuttle upgrades may be needed. In the nearer term, an independent advisory group that oversees safety in NASA's human spaceflight programs, the Aerospace Safety Advisory Panel, said in its March 2002 annual report that "current and proposed budgets are not sufficient to improve or even maintain the safety risk levels of operating the Space Shuttle or the ISS." (The report is at http://www.hq.nasa.gov/office/codeq/codeq-1.htm.)

National Security Space Programs: DOD and the intelligence community conduct a space program roughly equal in size to that of NASA, although for FY2003 DOD is requesting more than NASA: $18.5 billion, compared with $15 billion for NASA. This "national security space program," often referred to simply as the military space program, involves building and launching satellites for communications, navigation, weather, intelligence collection, and other purposes. (See CRS Issue Brief IB92011.)
space program,” often referred to simply as the military space program, involves building and launching satellites for communications, navigation, weather, intelligence collection, and other purposes. (See CRS Issue Brief IB92011.) One program that is especially controversial is the Space Based InfraRed System (SBIRS) program of early warning satellites (see CRS Report RS21148). SBIRS consists of two separate but related programs, SBIRS-High and SBIRS-Low. SBIRS-High, using satellites in geostationary orbit (22,500 miles above the equator) and in highly elliptical orbits, would replace the existing series of early warning satellites that alert the National Command Authority to foreign missile launches. SBIRS-Low, consisting of 20–30 satellites in low Earth orbit, would be dedicated to missile defense, tracking the missile from launch, though its “mid-course” phase when warheads are released, to its terminal phase when warheads reenter the atmosphere. Technical and cost issues on both programs have made them very controversial. Commercial Satellite Exports Commercial communications satellites are used by countries and companies around the world for data, voice, and broadcast services. U.S. companies are the major manufacturers of such satellites and want to continue their market dominance. Many of the satellites are not launched by U.S. launch vehicles, however, but are exported to Europe, Russia, China, or elsewhere for launch. Export licenses are required to ship the satellites to the launch site, as well as for technical discussions among the companies, their customers, and insurers. The State Department had responsibility for issuing export licenses for commercial communications satellites until 1992. Between 1992 and 1996, that responsibility was transferred to the Commerce Department. In the late 1990s, Congress became concerned that U.S. satellite manufacturers were transferring technology to China in the course of investigating launch failures that involved their satellites. The resulting controversy led Congress to transfer export responsibility for these satellites back to the State Department as of March 15, 1999. U.S. space industry representatives and others claim that the State Department takes much longer to decide on export licenses, causing customers to buy from foreign companies instead. They are trying to convince Congress to return jurisdiction to the Commerce Department. Supporters of keeping State Department in control argue that the Commerce Department is not sufficiently strict in ensuring that technology is not transferred to other countries, and shifting responsibility again would add another element of uncertainty to U.S. policy, which could adversely affect a customer’s willingness to buy from a U.S. company. This issue is being debated as part of the Export Administration Authorization Act, H.R. 2581. (See CRS Issue Brief IB93062.)
RELATED CRS REPORTS

Research and Development Budgets and Policy

Research and Development Budget
† CRS Issue Brief IB10088. Federal Research and Development: Budgeting and Priority-Setting Issues, 107th Congress. By Genevieve J. Knezo.
† CRS Issue Brief IB10100. Federal Research and Development Funding: FY2003. Coordinated by John Dimitri Moteff and Michael E. Davey.
† CRS Issue Brief IB10062. Defense Research: DOD's Research, Development, Test and Evaluation Program. By John Dimitri Moteff.
† CRS Report 95-307. U.S. National Science Foundation: An Overview. By Christine Matthews.

DOD Science and Technology
† CRS Issue Brief IB10062. Defense Research: DOD's Research, Development, Test and Evaluation Program. By John Dimitri Moteff.

Government Performance and Results Act
† CRS Report RS20257. Government Performance and Results Act: Brief History and Implementation Activities. By Genevieve J. Knezo.

Cooperative R&D
† CRS Issue Brief IB89056. Cooperative R&D: Federal Efforts to Promote Industrial Competitiveness. By Wendy H. Schacht.
† CRS Report 98-862. R&D Partnerships and Intellectual Property: Implications for U.S. Policy. By Wendy H. Schacht.

Foreign Science and Engineering Presence in U.S. Institutions and the Labor Force
† CRS Report RL30498. Immigration: Legislative Issues on Nonimmigrant Professional Specialty (H-1B) Workers. By Ruth Ellen Wasem.

Homeland Security

Counter Terrorism R&D
† CRS Report RL31202. Federal Research and Development for Counter Terrorism: Organization, Funding, and Options. By Genevieve J. Knezo.

Aviation Security Technologies
† CRS Report RL31151. Aviation Security Technologies and Procedures: Screening Passengers and Baggage. By Daniel Morgan.

Critical Infrastructure
† CRS Report RL30153. Critical Infrastructure: Background, Policy, and Implementation. By John Dimitri Moteff.

Information Technology Management
† CRS Report RS21260. Information Technology (IT) Management: The Clinger-Cohen Act and Homeland Security Proposals. By Jeffrey W. Seifert.

Technology Development

Intellectual Property/Patent Reform
† CRS Report 98-862. R&D Partnerships and Intellectual Property: Implications for U.S. Policy. By Wendy H. Schacht.
† CRS Report RL30451. Patent Law Reform: An Analysis of the American Inventors Protection Act of 1999 and Its Effect on Small, Entrepreneurial Firms. By John R. Thomas.
† CRS Report RL30572. Patents on Methods of Doing Business. By John R. Thomas.
† CRS Report RL31281. Patent Quality and Public Policy: Issues for Innovative Firms in Domestic Markets. By John R. Thomas.

Advanced Technology Program
† CRS Issue Brief IB91132. Industrial Competitiveness and Technological Advancement: Debate Over Government Policy. By Wendy H. Schacht.
† CRS Report 95-36. The Advanced Technology Program. By Wendy H. Schacht.
† CRS Report 95-50. The Federal Role in Technology Development. By Wendy H. Schacht.

Technology Transfer
† CRS Issue Brief IB85031. Technology Transfer: Use of Federally Funded Research and Development. By Wendy H. Schacht.
† CRS Report RL30585. Federal R&D, Drug Discovery, and Pricing: Insights from the NIH–University–Industry Relationship. By Wendy H. Schacht.
† CRS Report RL30320. Patent Ownership and Federal Research and Development (R&D): A Discussion on the Bayh–Dole Act and the Stevenson–Wydler Act. By Wendy H. Schacht.

Federal R&D, Drug Costs, and Availability
† CRS Report RL31379. The Hatch–Waxman Act: Selected Patent-Related Issues. By Wendy H. Schacht and John R. Thomas.
† CRS Report RL30756. Patent Law and Its Application to the Pharmaceutical Industry: An Examination of the Drug Price Competition and Patent Term Restoration Act of 1984 (the "Hatch–Waxman Act"). By Wendy H. Schacht and John R. Thomas.
† CRS Report RL30585. Federal R&D, Drug Discovery, and Pricing: Insights from the NIH–University–Industry Relationship. By Wendy H. Schacht.
† CRS Report RS21129. Pharmaceutical Patent Term Extensions: A Brief Explanation. By Wendy H. Schacht and John R. Thomas.
† CRS Issue Brief IB10105. The Hatch–Waxman Act: Proposed Legislative Changes. By Wendy H. Schacht and John R. Thomas.

Telecommunications and Information Technology

Bell Entry Into Long Distance
† CRS Report RL30018. Long Distance Telephony: Bell Operating Company Entry Into the Long Distance Market. By James R. Riehl.

Slamming
† CRS Issue Brief IB98027. Slamming: The Unauthorized Change of a Consumer's Telephone Service Provider. By Angele A. Gilroy.

Broadband Internet Access
† CRS Issue Brief IB10045. Broadband Internet Access: Background and Issues. By Angele A. Gilroy and Lennard G. Kruger.
† CRS Report RL30719. Broadband Internet Access and the Digital Divide: Federal Assistance Programs. By Lennard G. Kruger.
† CRS Issue Brief IB98040. Telecommunications Discounts for Schools and Libraries. By Angele A. Gilroy.

Spectrum Management and Wireless Technologies
† CRS Report RL30829. Radiofrequency Spectrum Management: Background, Status, and Issues. By Lennard G. Kruger.
† CRS Report RS20993. Wireless Technology and Spectrum Demand: Third Generation (3G) and Beyond. By Linda K. Moore.
† CRS Report RL31375. Meeting Public Safety Spectrum Needs. By Linda K. Moore.
† CRS Report RL31260. Digital Television: An Overview. By Lennard G. Kruger.

Internet Privacy
† CRS Report RL31408. Internet Privacy: Overview and Pending Legislation. By Marcia S. Smith.
† CRS Report RL31289. The Internet and the USA PATRIOT Act: Potential Implications for Electronic Privacy, Security, Commerce, and Government. By Marcia S. Smith et al.

E-Government
† CRS Report RL30661. Government Information Technology Management: Past and Future Issues (the Clinger-Cohen Act). By Jeffrey W. Seifert.
† CRS Report RL31057. A Primer on E-Government: Sectors, Stages, Opportunities, and Challenges of Online Governance. By Jeffrey W. Seifert.
† CRS Report RL31088. Electronic Government: Major Proposals and Initiatives. By Harold C. Relyea.
† CRS Report RL30745. Electronic Government: A Conceptual Overview. By Harold C. Relyea.

Federal Chief Information Officer (CIO)
† CRS Report RL30914. Federal Chief Information Officer (CIO): Opportunities and Challenges. By Jeffrey W. Seifert.

Voting Technologies
† CRS Report RL30773. Voting Technologies in the United States: Overview and Issues for Congress. By Eric A. Fischer.
† CRS Report RS20898. Elections Reform: Overview and Issues. By Kevin Joseph Coleman and Eric A. Fischer.
† CRS Report RS20639. Internet Voting: Issues and Legislation. By Kevin Joseph Coleman.
† CRS Report RS21156. Federal Voting System Standards: Congressional Deliberations. By Eric A. Fischer.

Biotechnology: Privacy, Patents, and Ethics
† CRS Report RL30006. Genetic Information: Legal Issues Relating to Discrimination and Privacy. By Nancy L. Jones.
† CRS Report RL30648. An Examination of the Issues Surrounding Biotechnology Patenting and Its Effect Upon Entrepreneurial Companies. By John R. Thomas.
† CRS Report RL30585. Federal R&D, Drug Discovery, and Pricing: Insights from the NIH–University–Industry Relationship. By Wendy H. Schacht.
† CRS Report RL31015. Stem Cell Research. By Judith A. Johnson.
† CRS Report RL31142. Stem Cell Research and Patents: An Introduction to the Issues. By Wendy H. Schacht and John R. Thomas.
† CRS Report RS21044. Background and Legal Issues Related to Stem Cell Research. By Diane Theresa Duffy.
† CRS Report RL30278. The "Terminator Gene" and Other Genetic Use Restriction Technologies (GURTs) in Crops. By Alejandro E. Segarra and Jean M. Rawson.
† CRS Report RL31358. Human Cloning. By Judith A. Johnson.

Global Climate Change
† CRS Report RL30452. Climate Change: Federal Research, Technology, and Related Programs. By Michael M. Simpson.

Aeronautics R&D
† CRS Report RL31347. The National Aeronautics and Space Administration's FY2003 Budget Request: Description, Analysis, and Issues for Congress. By Marcia S. Smith and Daniel Morgan.

Space Programs: Civil, Military, and Commercial
† CRS Issue Brief IB92011. U.S. Space Programs: Civil, Military and Commercial. By Marcia S. Smith.
† CRS Issue Brief IB93017. Space Stations. By Marcia S. Smith.
† CRS Report RL31216. NASA's Space Station Program: The IMCE ("Young") Report. By Marcia S. Smith.
† CRS Issue Brief IB93062. Space Launch Vehicles: Government Activities, Commercial Competition, and Satellite Exports. By Marcia S. Smith.
† CRS Report RL31347. The National Aeronautics and Space Administration's FY2003 Budget Request: Description, Analysis, and Issues for Congress. By Marcia S. Smith and Daniel Morgan.
† CRS Report RS21148. Military Space Programs: Issues Concerning DOD's Space-Based InfraRed System (SBIRS). By Marcia S. Smith.
RETHINKING SILICON VALLEY: NEW PERSPECTIVES ON REGIONAL DEVELOPMENT*
* By Ian Cook and Richard Joseph.

ABSTRACT
Silicon Valley in Northern California has, over the past 30 years, become a model for high-technology development in many parts of the world. Associated with Silicon Valley is a common rhetoric and mythology that explains the origins of this area of high-technology agglomeration and indeed the business and entrepreneurial attributes needed for success. Governments in many parts of the world (including Southeast Asia and Australia) have tried to emulate this growth through various industry and regional development mechanisms, in particular, the science or technology park.
More recently, promoting developments in information technology has come to be seen as an integral feature of these parks' activities. In this paper, we argue that the modeling process used by governments to promote Silicon Valley-like regional development has tended to model the wrong things about Silicon Valley. The models have tended to be mechanical and have failed to reflect the nature of information and information industries. While we have not sought to develop a model for Silicon Valley in this paper, we address a number of issues that require attention from anyone serious about this project. After discussing problems with previous attempts to model Silicon Valley, and problems associated with the activity of modeling itself, we move on to consider four issues that must be addressed in any real attempt to model Silicon Valley in Southeast Asia. The first is the role of the state and the problems that state involvement may create. The second concerns the contribution that universities can make to the project. The third is the role of firms, particularly Chinese firms. The fourth is the cultural context within which the "model" will sit. Since technology parks are seen by governments as a popular way of promoting high-technology development, the revised history suggested in this paper provides fresh thinking about modeling Silicon Valley in the Southeast Asian region.
INTRODUCTION
Economic development is not a new phenomenon, but it can be argued that the context in which it is being promoted, the new information economy, has transformed the way we understand its basic principles. In an investment-hungry world, where there appears to be an ever increasing "digital divide" between rich and poor, this poses very real problems for policy makers in developed and developing countries alike. The temptation for developing countries to copy or model the successes they see in the developed world is very great, and it carries with it the advantage of "not reinventing the wheel." However, there are significant problems associated with basing economic development policy on models of what is happening elsewhere, and indeed on models that may be downright misleading. An example of the role of models in policy making is the spectacularly influential role that Silicon Valley has had on the policies of many countries (more recently in Southeast Asia) that are aiming for development through high technology. This paper is about these policy problems.

So influential has the Silicon Valley model been that it is very difficult to see its constraints and to envisage where it is possible to depart from it. A model can be a very powerful guide, but it can be misleading at times too. That governments in many countries in the region are seeking to create their own Silicon Valleys is a clear indication that the model has enormous intellectual and political power. This makes it particularly important that the "model" be understood for what it is.

This paper is in three parts. First, the conventional model of Silicon Valley is discussed, and it is argued that the conventional wisdom is flawed. These flaws are discussed with the aim of highlighting the challenges they present for the promotion of technology-based economic development in Asia specifically. Second, the process of modeling itself is discussed with the aim of identifying the interests behind the Silicon Valley model and the implications that modeling has for policy makers. Finally, we consider the issues associated with promoting technology-based economic development in the Southeast Asian region. We argue that it is necessary to "rethink Silicon Valley" from a regional development perspective.

Specifically, there are a number of areas that need attention in this process of "rethinking" models of Silicon Valley. The first stems from the discussion of modeling in the first two sections: modeling needs to move from a mechanistic form to a focus on the sorts of effects that are sought. Further, those who design "models" need to accept that the state will play a role in any initiative of this sort in the region, but they need to pay particular attention to the problems that states can create for Silicon Valley models. Another issue that "modelers" need to address concerns how universities can contribute to the attempt to create Silicon Valley effects.
Universities can make a positive contribution in this context, but what this contribution might look like requires careful consideration. Another issue that "modelers" must consider is the nature of firms and, in particular, of firms in the region. Finally, "modelers" must consider the cultural context within which their initiatives are being developed. While we do not consider culture to be some form of permanent inscription, it does represent a contextual factor that requires consideration.
CONVENTIONAL MODELING OF SILICON VALLEY
It is widely accepted that Silicon Valley has been a model for economic development. While Silicon Valley has had a relatively short history, some 50 years, policy makers have yearned for its benefits: job growth, new start-up firms, a wealth of venture capital, and innovation. We call the process by which policy makers of other regions and countries attempt to copy Silicon Valley "modeling." However, what is being modeled requires some conception of what is worth modeling or, more pragmatically, of what can be easily modeled. We shall call this conception or framework a "model." As a result, a Silicon Valley model can be identified. The Silicon Valley model incorporates a narrative about how the region came about, its future, and its elements of success. Typical features of the Silicon Valley model are:

† A faith in entrepreneurialism
† A vital role for venture capital
† A critical role played by research universities
† A healthy supply of highly qualified researchers
† Benefits from firms co-locating (agglomeration economies)
† A strong role for the free market, with limited government interference
Paradoxically, regions trying to stimulate high-technology growth along the lines of Silicon Valley have generally had to do so with state involvement. While there are many industry development measures available to regional and state governments, one of the more popular has been the technology park. A technology park can be seen as a refinement of the more familiar industrial park. It is a vehicle for attracting high-technology development by providing the right sort of environment for the growth of such firms: a campus-like setting; proximity to a major university; and a setting that allows personnel to interact informally to create innovative "synergies." Nearly all developed countries have technology parks (many private and some sponsored by the state). They were very popular in the early 1970s in the United States, and during the 1980s the idea spread to Europe, Australia and elsewhere. Their popularity seems not to have waned, with new ventures being announced in Asia over the past couple of years.

Our concern here is not so much with the success or failure of technology parks, nor with the extent to which they have been used as an industry development measure in many countries. Rather, our concern is with the accuracy of the underlying premises of the Silicon Valley model that has given rise to them. We have four concerns here, three of which reflect the observations of Stuart Macdonald in his recent book Information for Innovation.1

First, policy makers who have modeled Silicon Valley have consistently misunderstood the crucial role that information plays in Silicon Valley itself. Policy makers interpreted the essence of Silicon Valley as being such things as a clean environment, good universities, and pleasant weather, and somehow missed what was important. Macdonald points out, with respect to Silicon Valley, that:

there was no understanding at all of what makes its industry tick. Even at the most practical level comprehension was missing … Policy makers saw in Silicon Valley and high technology not so much what they wanted to see as what they were prepared to see. What they missed were the intricate networks of surging information channels that supply high technology firms with their basic equipment.
Without these, Silicon Valley would be nothing special, and without these this is just what most of the myriads of pseudo-Silicon Valleys have become.2
Second, the Silicon Valley model provides for a misinterpretation of the sort of information that is important to high technology. It had been thought that, for example, scientific information, based on basic research, provided the necessary sort of information for innovation. To this end, technology park initiatives have frequently been located next to universities or research institutes. Macdonald observes that this belief promotes a view of innovation that perpetuates the linear model of innovation. What is more to the point is that a location next to a university provides a prestigious address for companies desperately seeking credibility in the marketplace.3 Furthermore, Macdonald notes the importance of tacit and uncodified information:

While they are certainly dependent on information, high-technology firms are not dependent on the sort of information available from university science and engineering departments. Even if they were, it would be unrealistic to expect any more than a tiny fraction of this information to be contained within the departments of a single university. The blend of commercial and technical information has always been of more use to high-technology firms than the purely technical. Blending comes through personal experience and results in a package of tacit and uncodified information.4
Third, the Silicon Valley model has given rise to a view that the Stanford Industrial Park, established by Stanford University, somehow caused the growth of Silicon Valley itself. Consequently, technology parks, as a mechanism for promoting high-technology growth, took on an importance that far exceeded their potential. Macdonald points out that Stanford Industrial Park is very much the product of Silicon Valley's industrial prosperity, rather than vice versa. Yet Silicon Valley, and Route 128 around Boston, both quite unplanned high-technology concentrations with nothing to do with technology parks, were commonly used to justify technology park development elsewhere.5

Finally, the Silicon Valley model has placed considerable emphasis on the role of individual entrepreneurship. In emphasizing the role of the entrepreneur, the Silicon Valley model has correspondingly de-emphasized the important role of government. This has implications for development, especially if countries are trying to copy Silicon Valley. McChesney observes that such thinking plays the role (in the United States at least) of reinforcing the view that the market is the natural order of things: corporations are meant to shape the future, not governments.6 It must be remembered that much of the initial impetus for research in Silicon Valley grew from the extensive U.S. federal government support for military research during the Second World War7 and that "at one point fully 85% of research and development in the U.S. electronic industry was subsidized by the federal government, although the eventual profits accrued to private firms."8 Evidently, the state has little role to play in the Silicon Valley model, but this can be interpreted more as a rhetorical feature of a peculiarly U.S. political situation than as a view based on a careful understanding of history or the development process.

In sum, our argument is that the Silicon Valley model is persuasive but, nevertheless, flawed. We are suggesting that, while there may be "a grain of truth in this sanitized version of capitalism,"9 it is not enough to use it as the basis for a policy for development, as many countries would seem to have done. We argue that a more theoretical, historical and critical approach is needed. Compounding the problem is the modeling process itself. We turn our attention to this problem next.
PROBLEMS WITH MODELING
The discussion in this section follows the work of Joseph,10 which, in turn, draws on the work of Braithwaite.11 We have argued that, while there are flaws in the Silicon Valley model itself, there are also problems with the process of modeling.
Modeling is defined as "action(s) that constitute a process of displaying, symbolically interpreting and copying conceptions of action (and this process itself). A model is a conception of action that is put on display during such a process of modeling. A model is that which is displayed, symbolically interpreted and copied."12 We believe that the process of attempting to copy Silicon Valley is an act of modeling.

There are some major problems with modeling as a policy technique, and these present difficulties for policy makers. First, since time and capacity are often constraints in decision-making processes, proponents of models will usually present a solution to a problem that is merely "good enough." This means that models are often not well thought through; worse still, such models are attractive to governments that are after a quick and easy solution. Second, those doing the modeling will often misunderstand what they are modeling. This is our argument above concerning the flawed nature of the Silicon Valley model. Finally, models usually gain acceptance if they resonate with symbols that give them legitimacy. In practice, this means that models often reflect the symbols of progress that come from rich or dominant countries. In addition, the policy process itself often allows policy makers to obscure mistakes when models do not work.13
ISSUES CENTRAL TO THE PROJECT OF DEVELOPING A MODEL TO PRODUCE SILICON VALLEY EFFECTS IN SOUTHEAST ASIA
Thus far, we have discussed general problems associated with modeling Silicon Valley. These will undoubtedly affect any attempt to model Silicon Valley successfully in the Southeast Asian region. The following discussion of the issues that must be addressed in designing Silicon Valley models for the Southeast Asian region begins with some preliminary observations that we feel are essential to any planning process. Avoiding the mistakes associated with modeling must be a prerequisite. Understanding and providing for the involvement of states in Southeast Asia is also important, as is imagining a role for universities or research institutes. Dealing with the general characteristics of firms, and the specific characteristics of firms in the Asian region, comprises another set of issues that must be addressed in the modeling process. The final issue that we discuss is culture. While culture affects the operation of states, universities and firms in the Southeast Asian region, its importance is such that it deserves separate and special consideration in the modeling process. Imagining a role for states, universities and firms in a Silicon Valley model in Southeast Asia will also require addressing issues of regional culture, and these issues are profound enough to deserve extended consideration.
STARTING POINTS FOR AN ALTERNATIVE MODEL
The first point to emerge from our discussion of the problems with modeling is that, in any attempt to model Silicon Valley, it is necessary to move away from mechanical models. Indeed, we believe it necessary to change our approach to modeling; it may even become necessary to cease referring to the model as a model of Silicon Valley. If we move away from mechanical models, the question that must be addressed concerns the nature of that which we are seeking to model. The second point, then, is that rather than focus on the nature or form of the model, we prefer to focus our attention on the effects that we are seeking to create. This directs our attention to the more important issue of what these models are intended to produce. A model, then, must be one in which what may be described as Silicon Valley effects are sought, not one in which something called Silicon Valley is replicated.14

The third point that must be made in this context is that careful attention needs to be paid to the different types of effects that are often associated with attempts to model Silicon Valley. Three effects seem to be important to initiatives of the sort that we are discussing. The first are invention effects. "Invention is the act of insight, a new combination of pre-existing knowledge, by which a new and promising technical possibility is recognized and worked out in its essential, most rudimentary form."15 These relate to the stimulation of imagination or creativity16 on the part of those who are engaged in these initiatives.
The effects we seek to create may include the imagination of products or services that have not been previously supplied, or the imagination of services that have not been previously supplied in the form in which they are imagined.

A second set of effects that initiatives of this sort are often designed to produce is product innovation effects. This refers both to the imagination of a new product or service (invention) and to the development of new firms, or changes in existing firms, such that these products and services are produced and delivered on a marketable scale. The final effects that might be associated with initiatives of this sort are process innovation effects. Process innovation effects are produced where there is a mechanism through which existing firms can take advantage of new, in this case information technology-based, techniques in the production or distribution processes within their enterprises.

The three effects may be interrelated, but they are unlikely to be produced in the same manner. Invention need not necessarily result in marketable products or services and may have little role in process innovation. New firms will require new products and services, but they will require more than the invention of new products or services. Process innovation does not have to occur as a result of new inventions, but may simply reflect the effective adoption of an existing invention. Differentiating between these three possible effects is important in that different mechanisms may be required for their generation. While some choices may be made concerning the priorities attached to each of these effects, it is more likely that the benefit of differentiating them is that it allows for the development of a model that may provide for all of them.

Identifying the particular sorts of effects that are desired from a Silicon Valley model is a vital preliminary to any attempt to produce Silicon Valley effects. Few of the attempts to create Silicon Valleys in Southeast Asia have involved an active engagement with this issue. Mechanical models, in which a variety of parts are agglomerated and expected to produce the desired effects, are the normal procedure. While determining the particular sorts of effects that are desired is an important step in the modeling process, further issues need to be considered, and they must be considered in light of the specific regional context into which any initiative is introduced. These concern the inputs, and possible problems with the inputs, from states, universities and firms. Another issue that merits attention is the possible consequences that cultures in the region will have for the "model." Each of these issues is considered in the following sections.
STATES AND SILICON VALLEY MODELS IN SOUTHEAST ASIA
While some disagreement may exist as to the role played by the state in the development of Silicon Valley, we believe that the state was important to Silicon Valley.17 Even if we are wrong in this respect, we see little hope that Silicon Valley models will be created in the Southeast Asian region without the participation of states. As Wade has argued, industry policy, in which states identify and promote the development of specific desired industries, is an important part of economic development in the Southeast Asian region.18

States in Southeast Asia
If states are to be involved, an issue that needs to be addressed is the tendency for states in the Asian region to adopt interventionist and authoritarian political and social practices. A debate has already emerged with respect to the destabilizing effects of information technologies on authoritarian-style regimes.19 If this is the case, one question that must be addressed is whether it is possible to separate the thirst for information that might drive innovation in the context of information technologies from the thirst for information that might result in unconventional social and political attitudes. Silicon Valley is pervaded by anti-state mentalities that privilege individual entrepreneurial spirit over any apparent commitment to a particular nation-state. This may not create problems if the anti-state mentality is both reflected in society more generally and part of the hegemonic ideology in the country concerned (both claims can be defended in the context of the United States).
the hegemonic ideology in the country concerned (both claims can be defended in the context of the United States). A significant problem may arise if the promotion of an individualist entrepreneurial spirit, as may be required in any attempt to reproduce innovation effects, produces an anti-state mentality that key state actors reject. An option, if a regional approach is to be adopted, may be to situate the core element of the initiative in countries that have less interventionist or weaker states. However, this option may not satisfy the needs of states that are more oriented to control and direction, and almost all states in the region will tend to be so oriented, as we will discuss. The leadership in these states may be reluctant to relinquish control and may not encourage participation by their best people.

Despite their problems, states are likely to have a significant input into initiatives of the sort for which we are seeking to develop a model. One of the differences between the cultures of the Asian region and that of Silicon Valley is a greater willingness to accept a role for states in economic and technological development. While free market ideology is part of the rhetoric of a number of governments in the region, there remains a tendency for states to take up significant roles in economic development.20 Whether the notion of the developmental state21 is still the best characterization of states in Southeast Asia may be questioned.22 Certainly, an argument can be put that the Asian economic crisis has severely affected the developmental state.23 This may open up the possibility that the developmental state is in a process of transition in at least some of the countries in the region. Whatever its precise form, however, the state remains an important player, and where it has ceased to play a leading role, it has sometimes been replaced by unclear relationships between private and public sectors.24

The strong links between regional entrepreneurs and regional states are such that these entrepreneurs will tend to expect the state to take up an important role in the development of their projects. This has been the experience in Malaysia's Multi-Media Super-Corridor, Singapore's ONE, and Hong Kong's Cyberport project. States have tended to adopt central positions in initiatives designed to stimulate the development of information technology. There can be little doubt, then, that states will be active in attempts to produce Silicon Valley effects in the region. State participation may be limited to providing a favorable taxation environment, rezoning, and infrastructure supply. It may extend to supplying funds or acting as underwriter for loans. Political support will also be a likely form of state participation. All these forms of state participation will create a symbolic and, possibly, monetary investment in these projects that will give state actors a stake in the outcomes of the initiative. This will be a likely source of the problems that are associated with state participation. If nothing else, states are important in providing a suitable political and social environment. The jurisdiction offering the most desirable taxation regime would be the most likely location for a successful Silicon Valley model.25 Such a regime, however, might not come with the most attractive residential environment.
Problematic State Effects

That states will be involved in Silicon Valley models means that one of the most important tasks that must be addressed, if Silicon Valley effects are to be modeled, is to deal with a particular set of state effects. These state effects may be understood to be a function of the tendencies of those who occupy central positions in states, rather than a function of states themselves. Nonetheless, the modeling of Silicon Valley in the Asian region is unlikely to be successful if it is not supported and facilitated by states, and this will create the potential for unhelpful state effects.

Three state effects will constitute problems for any attempt to model Silicon Valley. The first is the tendency on the part of those who occupy central positions in states to require controlling and directing capacities. Those in states may want to feel that they are "in control" or otherwise directing the enterprise. Those involved directly in policy making in this context may seek to maintain a level of oversight and a regulatory capacity that may conflict with the modeling attempt itself. Another problem that the presence of states may introduce is a preoccupation with
outcomes and outcome measurement. Much pressure is placed on states to measure the outcomes of their policies, despite the fact that the statistics thereby produced may not make much sense.26 The point is that states increasingly generate legitimacy for their policies by producing statistics that allegedly measure the outcomes of those policies. While a concern with outcomes is acceptable, the way that states tend to measure outcomes, and their predisposition toward producing measurable outcomes rather than less tangible effects, is important. The final problem with states is that they often have a preoccupation with the short term. This is particularly true of those states that are organized in terms of representative democratic procedures. However, all members of states will tend to want results, and results in time to provide them with some form of political advantage. Any attempt to model Silicon Valley in this region will need to engage with each of these features of states.

Control/Direction Orientation

The first state effect that is unlikely to be beneficial in any attempt to model Silicon Valley is the control/direction effect. If the state is involved, it is likely that state representatives will seek to exert control over, or impose direction on, the model. That they may see themselves as having driven the initiative gives politicians a particular interest in the enterprise. Irrespective of their capacity to claim the initiative as their own, few people in senior positions in the public or private sector are there because they do not want to be in control. That the initiative is likely to be expensive means that those to whom politicians are accountable will almost certainly require that they account for it. Politicians are not the only problem in this regard, however, for any initiative of this sort will be open to identification as relevant to the expertise of one or more departments of the public service. These departments have long been engaged in developing policies and regulations designed to produce the effects that Silicon Valley models are designed to produce. One of the central tenets of some recent discussion of the behavior of senior officials in the public sector is the view that those engaged in the public sector must be understood as agents who seek to promote their own values and interests.27 Thus, their decisions and policies reflect certain desires to maximize personal outcomes (which, in this case, may be understood to be either control or authority). This cannot simply be understood as an effect of highly interventionist states, as it can also be understood to reflect the interplay between departments concerned with service provision and those involved in fiscal management.28 Whatever their motives, the institutional position and general stake that people in senior positions in the public sector have in initiatives make them a serious obstacle to attempts to produce Silicon Valley effects.

Outcomes Driven

The second state effect that may create problems with respect to initiatives of this sort is that states, and their leading officials, place great value on the production of measurable outcomes. Politicians, who are oriented to direction and control, will tend to want to prefigure outcomes and to create measures that will identify success with respect to those outcomes.
A preoccupation with measurement creates a tendency to value that which can be measured (which may be less of a problem in terms of inventions, but more of a problem with respect to product and process innovation). Another problem created by a preoccupation with measurement is the cost it adds: measurement leads either to a reduction in expenditure on core activities or to an increase in overall expenditure. The problems associated with measurement itself are probably the more important of these, as some of the most desirable Silicon Valley effects that are sought will be less tangible. An even more important problem with an outcome orientation is that it is rare for such an approach to accept failure as an outcome. Cash-burn in Silicon Valley can be understood to be
a direct reflection of failure. While statistics concerning cash-burn are not easily extracted, the fact that Silicon Valley absorbed something in the order of $7 billion during the second quarter of 2000 provides some indication of the money required to fuel it.29 Only a limited number of good ideas result in start-ups. Only a limited number of the start-ups created will become viable firms. Only a limited number of viable firms survive the middle term. In short, failure is an essential part of the activity (and not an undesirable outcome). Failures may have positive effects, in terms of the information flows that they create and the interpersonal connections that they produce. However, failures will still be measured as failures. While many public officials are responsible for monumental failures, they are not prone to champion initiatives on the basis that they will have significant failure rates. Attempts to engender political support based on less tangible outcomes may be adopted, but this will require particular skills on the part of public officials.

Short-Term Focus

The final state effect that might create problems for any attempt to produce Silicon Valley effects is the preoccupation with short-term outcomes. While he may have had the development of a much larger region in mind, it is salutary to bear in mind Herbig's argument that any attempt to create a hotspot like Silicon Valley ought to be thought of as involving a 15–25 year process.30 Politicians in representative democracies are particularly concerned with the short term. An election cycle of 3–4 years means that initiatives, which will generally have taken more than a year to introduce, have little more than 2 years in which to produce the sorts of outcomes that politicians will tend to want to use to justify their re-election. Governments in those countries in which representative democratic practices are weak also require legitimacy and will therefore be prone to seeking short-term outcomes from their initiatives that they can use to justify their political control.
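The arithmetic behind this mismatch is worth making explicit. The following sketch (in Python) is purely illustrative: the $7 billion quarterly cash-burn figure, the 3–4 year election cycle, and Herbig's 15–25 year horizon come from the text, while the naive annualization, the 1.5-year lead time, and the pre-campaign buffer are assumptions of ours, not figures from the sources cited.

# Illustrative comparison of political and developmental time horizons.
# Only the quarterly cash-burn figure, the election-cycle length, and the
# 15-25 year horizon come from the text; all other numbers are assumptions.

quarterly_cash_burn = 7e9                    # dollars, Silicon Valley, Q2 2000 (text)
annual_cash_burn = 4 * quarterly_cash_burn   # naive annualization (assumption)

election_cycle = 4.0     # years (the text says 3-4)
lead_time = 1.5          # "more than a year" to introduce an initiative (assumed)
campaign_buffer = 0.5    # results wanted before the next campaign (assumed)
results_window = election_cycle - lead_time - campaign_buffer

for horizon in (15, 25):                     # Herbig's estimate (text)
    print(f"A {horizon}-year build-out spans {horizon / election_cycle:.1f} "
          f"election cycles; politicians want results within ~{results_window:.0f} years.")

print(f"Implied funding scale: roughly ${annual_cash_burn / 1e9:.0f} billion per year.")

On these assumptions, an initiative on Herbig's timescale outlasts four to six governments and consumes funding on a scale few treasuries would commit to without measurable interim outcomes, which is precisely the tension described above.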
UNIVERSITIES AND SILICON VALLEY MODELS IN SOUTHEAST ASIA
While the significance of Stanford's role in the emergence of Silicon Valley is open to question, the desirability of linking universities to models designed to promote Silicon Valley effects remains.31 One reason for this is that a respected university provides status and connotes a connection to a significant research capacity and technical expertise. While the former is important, and probably real, the latter is open to question. This is not to suggest that those in universities play no role, but it does require that the nature of that role be carefully considered. Universities are, at best, complex organizations within which a variety of interests are embedded.32 Understanding these interests provides a basis upon which the role of a university, or a number of universities, might be approached. There can be little doubt, however, that universities can provide a means by which talented people can be encouraged to contribute to the production of Silicon Valley effects.

Need for Universities

Yet, apart from a good address, universities can offer much to a Silicon Valley model. One offering is the potential to link the model to an accreditation process that will allow it to attract those who seek qualifications that will position them for lucrative employment in information industries. Universities also contain people who possess, and can bring others to possess, skills that may be attractive to those who are desirable participants in the Silicon Valley model. Most important, however, is the fact that universities contain, among both students and staff, information seekers, and it is these who are most likely to contribute to a Silicon Valley model. Universities have always played a role in bringing together people with like interests and allowing them to participate in communities seeking particular sorts of
information and skills. In short, universities contain people who can be made central to the activity of encouraging individuals to contribute to the production of Silicon Valley effects.

The basic question that arises in Southeast Asia concerns the quality of the universities found in the region. Certainly, there are many good universities in this region, but whether they will attract the staff and provide the resources necessary to participate effectively in attempts to produce Silicon Valley effects may be open to question. The educational cultures of universities in this region must be carefully examined if they are to make a positive contribution to the creation of Silicon Valley effects. While Diez was referring to research institutes, he makes the important point that it is the embeddedness of these institutes in national and international scientific or knowledge networks that constitutes their most important contribution to stimulating innovation.33 It is their place in the information network or "information milieu" that is crucial to their contribution to the production of Silicon Valley effects.

Universities seem to offer one conduit through which information may flow in order to produce Silicon Valley effects. Simply clustering firms will not provide for the flow of information that is important for Silicon Valley effects. This is a point that Antonelli has emphasized: "Agglomeration is not a sufficient condition for a clustering of technological innovation and a diffusion of technological externalities. A number of important communication channels are necessary, and only their combination provides a conducive environment for encouraging the rate of accumulation of collective knowledge and the eventual introduction of technological innovations."34 Universities, with research, teaching, and seminar capabilities, might be understood as a meaningful source of a variety of communication channels.

Diez's work provides some valuable insights into the issues that must be addressed in this context, though his study was focused on research institutes. The first of his findings was that research institutes tended to be more oriented to facilitating product innovation.35 The second was that research institutes tended to support larger firms. He concluded with the following comment:

If research institutes are to play a leading role in supporting regional innovation processes, ... then the incentive structures for research institutes must change in such a way that co-operation with local small and medium-sized businesses becomes a matter of course. In view of the fact that the technology fields of research institutes and businesses differ greatly, the question must be asked whether research institutes ought not to be aimed to a far greater extent at the support of the fostering of university spin-offs, instead of supporting existing local businesses which operate in technology fields that cannot be covered by the local research institutes. One possibility might be to motivate and support current students in the start up of business.36
That people in universities will be important to Silicon Valley models, and that universities represent difficult organizations with which to work, means that they must be approached carefully. Indeed, the most important first step in conceiving of the role of the university may be to disaggregate the institution. Universities, in short, are composed of people. Some of them will be useful to a Silicon Valley model and some will not. Some of those most likely to offer something are those who have acquired significant status in technical or other fields that relate to information industries. Many of these, however, may be too far from the "game" to render their expertise current. Their status may be important in attracting people to a Silicon Valley model, but their expertise may not; their access to technology and knowledge networks and various information or knowledge milieus may be their most important contribution in this context. Other members of universities might provide the skills associated with fostering product and process innovation. Another important contribution comes from those people in universities who can provide an environment that fosters invention. There can be little doubt that few single universities could provide all of these people. The Silicon Valley model must itself function as an attractor for those members of university
communities who are bearers of the various forms of knowledge that will be useful to those who are attracted to a Silicon Valley model. It may well be the case that activities that allow for the selection and attraction of members of university staffs must accompany those that allow for the selection and attraction of members of academic communities. One possibility that might allow for disaggregation, without losing a prestigious connection, is to associate a consortium of universities in the region with the project. Such a consortium would reflect the potential of these universities to supply staff with the various forms of expertise required for the success of initiatives of the sort that we are describing. This would create problems of management and organization, however. Yet universities in the region are already embedded in knowledge networks and milieus, so the problem may not be insurmountable. That this might be possible is reinforced by Tornquist and Kallsen's finding that "the proximity of firms and higher education institutions is not as important in the knowledge and technology transfer arena as has been commonly assumed."37 Their findings, which related to the aircraft and electronic equipment industries, would appear to provide even greater support for declining to treat proximity as a vital issue in the context of information technology industries involved in software creation and Internet services generation (which can rely more readily on virtual networks).
FIRMS AND SILICON VALLEY MODELS IN SOUTHEAST ASIA
Firms must also play an important role in the sorts of initiatives that we are trying to model. Local firms offer important contributions both to an environment in which invention may be facilitated and to one in which product innovation can occur. They are central to process innovation. Multinational or foreign firms may also play a role, and many of these initiatives, such as Hong Kong's Cyberport, have been based upon the participation of multinational or foreign firms. Firms provide both personnel and facilities that may be useful in the context of Silicon Valley models. They may provide experience with respect to production and distribution that may be unavailable through any other means. Certainly, key enterprises have been associated with the development and success of regions like Silicon Valley.38 Firms often possess imaginative entrepreneurs whose skills and understanding would be vital to invention and product innovation.

Firms create a variety of problems for the success of these initiatives, however, such that their involvement will have to be carefully managed. Firms can often constitute rigid structures that prevent the permeation and, more importantly, the escape of information. They constitute points of resistance to both product and process innovation. One of the problems with firms is that they sometimes fail to acknowledge the importance of those with tacit knowledge of production and distribution processes, yet it is these people who are central to process innovation.39 Competing firms may transpose their rivalries into the Silicon Valley model and create a difficult environment for invention and innovation. Firms are not simple organizations; they embed a variety of networks and power relationships that must be understood and dealt with for successful product and process innovation. As Hislop, Newell, Scarborough, and Swan suggest, sensitivity must be demonstrated with respect "to complex ways in which the use of power is shaped by the specificities of the organizational context."40 This is even more significant in the context of process innovation. From their study of the appropriation of Enterprise Planning Systems in two firms, Hislop et al. concluded that "for the type of innovations examined not only was the development and use of networks and knowledge of central importance..., but that the knowledge utilized and the networks developed were inextricably linked. The typically embodied nature of the knowledge utilized during the course of the appropriation process examined meant that accessing it involved the development of personal networks."41

The fact that firms are power structures in which people maintain their identity through maintenance of a controlling position led Suchman and Bishop to conclude that innovation
could be understood as a conservative project. In their view, rather than innovation being about fundamental change to an organizational structure, "change agendas may actually be directed at least as much at the reproduction of existing organizational and economic orders as at their transformation."42 This point seems particularly salient in the context of the Southeast Asian region.

If we treat the dominant form of firm in the Southeast Asian region as reflecting a Chinese management style, then the characteristics of this style of management need to be understood in the context of initiatives designed to create product and/or process innovation affecting local firms. Lee has suggested that four key features distinguish Chinese management: "human-centredness, family-centredness, centralization of power and small size."43 Pun seems to support such a characterization. In Pun's view, Chinese cultural values have "strongly influenced the Chinese management systems, and centralized authority, hierarchical structures as well as informal co-ordination and control mechanisms prevail in both the Mainland Chinese government and the overseas Chinese business."44 These factors are also important in the context of attempts to bring local firms into contact with multinational corporations. Xing has suggested that any firm seeking to do business in China must understand that, amongst other things, "Confucianism, family-ism, group orientation... have heavily influenced the direction of business practices."45 These characteristics do not appear very different from those that Pun identified as typical of overseas Chinese businesses, so they must be provided for if Silicon Valley effects are to be produced.

Culture

We do not want to appear to be obsessed with the issue, but we believe that cultural factors are likely to be important. The cultures in the region, specifically Chinese cultures, may create problems for an attempt to create a model designed to produce Silicon Valley effects in the region. We wish to discuss three cultural factors in this final section of our paper: a tolerance of failure, individualism, and language. Confucian dynamism may ameliorate these cultural effects to some extent, but Herbig's conclusion in this regard suggests that the greater innovative capacities associated with Confucian dynamism apply more to lower-order innovations,46 with "lower order" constituting a combination of continuous innovation ("involving only the introduction of a modified product") and modified innovation (which "is more disruptive than continuous innovation, but stops short of altering behavioral patterns").47

Before we develop these points, we feel it necessary to point out that, while these cultural values are dominant in countries in the region, we do not presume that culture maps directly onto individuals. Indeed, the contribution that people from the region make to the development of Silicon Valley is a clear demonstration that culture need not be treated as a sole determinant of identity. Saxenian's Silicon Valley's New Immigrant Entrepreneurs identifies the contribution that Chinese and Indian entrepreneurs are making to the region.48 Culture, from our perspective, is a possible source of constraint on the behavior of people in the region.
If we assume that there are significant differences between the culture in which Silicon Valley is located and those that pertain in Asian countries, then these cultural differences are likely to have a significant impact on any attempt to model Silicon Valley in the Asian region. Even if we deny cultural differences, we must still be aware of a tendency on the part of state elites in these countries to promote a sense of cultural distinctiveness (sometimes played out in terms of the "Asian values" position). We are not suggesting that Asians are different, more social and community oriented, but that the cultures in which they find themselves may preclude the values associated with innovation.

The first characteristic that has often been associated with Silicon Valley, and which may create problems in this region, is a tolerance of failure. We have already discussed this point in terms of the tendency on the part of those in prominent positions within states to reject failure. The question
that arises in this context concerns whether there may be problems associated with accepting failure in societies in which hierarchy and authority are significantly valued. Herbig's suggestions that the Japanese are more risk averse than their American counterparts and that this affects their capacity for entrepreneurial activity49 may not transpose directly to the Southeast Asian region. It deserves some consideration in this context, however.

The next cultural factor that must be considered concerns the promotion of individualism. According to Herbig,50 collectivist societies are less prone to produce innovation than societies in which individualist values are strong. If Southeast Asian societies are more collectivist (and Herbig seems more willing to denote "Oriental cultures" as collectivist51), and workers show greater levels of loyalty and connection to communities and firms, then workers may not flow between firms as freely as they have done in Silicon Valley. Commitment to a firm may reduce movement on the part of those who might be engaged in a Southeast Asian version of Silicon Valley. A tendency not to connect with those from other communities or firms may also be a problem. The flow of information and personnel around and across firms in Silicon Valley seems to constitute one of the region's distinctive features.

The third cultural issue relates to the language that might be necessary if desirable information effects (including process innovation and innovation stimulation effects) are to be produced. This is a greater problem if the effects are to be regional rather than country specific, but it remains an issue for country-specific initiatives. The English language may prove something of a necessity, especially if foreign firms, usually American firms, are to contribute to the production of Silicon Valley effects. However, the effects of this on process innovation, in particular, may create problems. Innovation may well be stimulated in the context of information products and services that are tailored to a domestic market; innovation to create internationally desirable information products and services, however, may not be facilitated if the dominant idiom is not English.
CONCLUSION

Much of the alleged modeling of Silicon Valley in this and other regions appears to us to be fundamentally flawed. Insufficient consideration has been given to the models of Silicon Valley that have dominated planning in this context. These problems are compounded by a failure to consider problems associated with the very activity of modeling itself (including those introduced by the interests of those involved in the modeling process). Mechanistic copying will not prove, in our view, to be a viable approach.

Concentrating on producing Silicon Valley effects is a more promising starting point. Careful consideration of the specific innovation effects that are sought is essential. Considering this question may even lead to the conclusion that innovation in the region is likely to reflect that of a "second mover" rather than a rapid product innovator. A next step is to give due consideration to the fruitful employment of the resources of states and universities. The dominant form of firms in the region is another factor that must be taken into account in this context. Dominant cultures in the region are yet another contextual factor that requires careful consideration. Certainly, technical "know-how" is a vital ingredient, but, in our view, only one ingredient of what is inevitably a complex mix. While we believe that we have dealt with the most important issues that face any attempt to produce Silicon Valley effects in Southeast Asia, other issues will also need attention. Intellectual property, for example, constitutes yet another issue that must be addressed in this context.52

Our intention in this paper was not to provide a model for Silicon Valley, but to draw attention to the many factors that need to be taken into account in any attempt to create a model. We do not think that the project ought to, or will, be abandoned. However, we believe that it must be more carefully thought through if a Silicon Valley model is to be created that will successfully contribute to development in the Southeast Asian region.
NOTES

1. Macdonald, S., Information for Innovation: Managing Change from an Innovation Perspective, Oxford University Press, Oxford, pp. 160–88, 1998.
2. Macdonald, S., Information for Innovation: Managing Change from an Innovation Perspective, Oxford University Press, Oxford, p. 171, 1998.
3. Macdonald, S., Information for Innovation: Managing Change from an Innovation Perspective, Oxford University Press, Oxford, p. 179, 1998.
4. Macdonald, S., Information for Innovation: Managing Change from an Innovation Perspective, Oxford University Press, Oxford, p. 181, 1998.
5. Macdonald, S., Information for Innovation: Managing Change from an Innovation Perspective, Oxford University Press, Oxford, pp. 180–81, 1998.
6. McChesney, R., The Internet and U.S. communication policy-making in historical and critical perspective, J. Commun., 46(1), 98–124, 1996.
7. Saxenian, A., The genesis of Silicon Valley, In Silicon Landscapes, Hall, P. and Markusen, A., Eds., Allen and Unwin, Winchester, MA, p. 22, 1985.
8. McChesney, The Internet and U.S. communication policy-making in historical and critical perspective, J. Commun., 46(1), 109–10, 1996.
9. Ibid., p. 109.
10. Joseph, R. A., Political myth, high technology and the information superhighway: an Australian perspective, Telematics and Informatics, 14(3), 289–301, 1997.
11. Braithwaite, J., A sociology of modeling and the politics of empowerment, Br. J. Sociol., 45(3), 445–80, 1994.
12. Braithwaite, J., A sociology of modeling and the politics of empowerment, Br. J. Sociol., 45(3), 450, 1994.
13. Joseph, R. A., New ways to make technology parks more relevant, Prometheus, 12(1), 46–61, 1994.
14. An issue that may need to be addressed before any attempt to produce Silicon Valley effects is begun concerns whether these effects are more likely to be forthcoming in the context of software products and Internet-delivered services, rather than hardware. Silicon Valley itself may be undergoing a shift toward a focus on software, rather than hardware. See Lee, C.-M., Miller, W. F., Hancock, M. G., and Rowen, H. S., The Silicon Valley Edge: A Habitat for Innovation and Entrepreneurship, Stanford University Press, Stanford, CA, 2000.
15. Herbig, P. A., The Innovation Matrix: Culture and Structure Prerequisites to Innovation, Quorum Books, Westport, CT, p. 5, 1994.
16. Invention may be understood as closely related to creativity. The relationship between creativity and innovation is discussed extensively in the work by Ford (Ford, C. M., A theory of individual creative action in multiple social domains, Acad. Manage. Rev., 21(4), 1112–42, 1996).
17. "Both Silicon Valley and Route 128 took off after the Second World War with heavy support from military and space programs. It is obvious that government funds, support, and encouragement provided strong incentives and play the role of a catalyst for high-tech innovation." Taken from Herbig, The Innovation Matrix: Culture and Structure Prerequisites to Innovation, Quorum Books, Westport, CT, p. 238, 1994.
18. Wade, R., The visible hand: the state and East Asia's economic growth, Curr. Hist., 92(578), 431–41, 1993.
19. See Hertling, J., Internet growth challenges China's authoritarian ways, The Chronicle of High. Educ., 41(39), 22, 9 June 1995; Robinson, J., Technology, change, and the emerging international order, SAIS Rev., 15(1), 153–74, 1995; Rodan, G., The Internet and political control in Singapore, Polit. Sci. Q., 113(1), 63–90, 1998.
20. Stubbs even suggests that capitalism has taken a different form in the Asia Pacific region, characterized by "the strong, developmental state; ... the structure of industry; and ... the role of Japanese and Chinese firms" (Stubbs, R., Asia Pacific regionalisation and the global economy: a third form of capitalism?, Asian Surv., 35(9), 785–98, 1995, quote from p. 788).
21. See Leftwich, A., Bringing politics back in: towards a model of the developmental state, J. Dev. Stud., 31(3), 400–13, 1995.
22. Kim, Y. T., Neoliberalism and the decline of the developmental state, J. Contem. Asia, 29(4), 441–61, 1999.
23. See Pang, E.-S., The financial crisis of 1997–98 and the end of the Asian developmental state, Contem. Southeast Asia, 22(3), 570–87, 2000.
24. See Peng, M. W., How entrepreneurs create wealth in transition economies, Acad. Manage. Exec., 15(1), 95–108, 2001, in particular pp. 100–1 and 102.
25. Herbig, The Innovation Matrix: Culture and Structure Prerequisites to Innovation, Quorum Books, Westport, CT, pp. 157–68, 1994.
26. See Eisner, R., Black holes in the statistics, Challenge, 40(1), 6–10, 1997.
27. This theory has been stated in a variety of forms. Breton has suggested that "bureaucrats seek to maximize the relative size of their bureaus... [in order] to achieve the highest possible income and prestige" (Breton, A., The Economic Theory of Representative Government, Aldine, Chicago, p. 162, 1974). Public choice theory, or at least its New Right or Virginia school variants, represents other iterations of this position. (See Dunleavy, P., Democracy, Bureaucracy and Public Choice: Economic Explanations in Political Science, Prentice Hall, New York, pp. 154–56; and Lane, J.-E., The Public Sector: Concepts, Models and Approaches, Second Edition, Sage, London, pp. 201–16.)
28. See Schwartz, H. M., Public choice theory and public choices: bureaucrats and state reorganization in Australia, Denmark, New Zealand and Sweden in the 1980s, Admin. Soc., 26(1), 48–77, 1994.
29. Business: a squeeze in the valley, The Econ., 357(8191), 71–72, 2000.
30. Herbig, The Innovation Matrix: Culture and Structure Prerequisites to Innovation, Quorum Books, Westport, CT, p. 240, 1994.
31. See Lee et al., The Silicon Valley Edge: A Habitat for Innovation and Entrepreneurship, Stanford University Press, Stanford, CA, 2000; Herbig, The Innovation Matrix: Culture and Structure Prerequisites to Innovation, Quorum Books, Westport, CT, 1994; and Diez, J. R., The importance of public research institutes in innovative networks—empirical results from the metropolitan innovation systems Barcelona, Stockholm and Vienna, Eur. Plan. Stud., 8(4), 451–63, 2000.
32. Bird, Hayward, and Allen provide an interesting discussion of the tensions that can emerge amongst university staff and between university staff and staff in firms that reflect different values, status, and workload considerations (see Bird, B., Hayward, D., and Allen, D., Conflicts in the commercialization of knowledge: perspectives from science and entrepreneurship, Entrep. Theory Pract., pp. 57–77, 1993, see especially pp. 59–61).
33. Diez, The importance of public research institutes in innovative networks—empirical results from the metropolitan innovation systems Barcelona, Stockholm and Vienna, Eur. Plan. Stud., 8(4), 459, 2000.
34. Antonelli, C., Collective knowledge communication and innovation: the evidence of technological districts, Regional Stud., 34(6), 535–47, 2000, quote from p. 544.
35. Diez, The importance of public research institutes in innovative networks—empirical results from the metropolitan innovation systems Barcelona, Stockholm and Vienna, Eur. Plan. Stud., 8(4), 461, 2000.
36. Diez, The importance of public research institutes in innovative networks—empirical results from the metropolitan innovation systems Barcelona, Stockholm and Vienna, Eur. Plan. Stud., 8(4), 462, 2000.
37. Tornquist, K. M. and Kallsen, L. A., Out of the ivory tower: characteristics of institutions meeting the research need of industry, J. High. Educ., 65(5), 523–39, 1994, quote from p. 533.
38. Herbig, The Innovation Matrix: Culture and Structure Prerequisites to Innovation, Quorum Books, Westport, CT, p. 237, 1994.
39. Macdonald, Information for Innovation: Managing Change from an Innovation Perspective, Oxford University Press, Oxford, pp. 160–88, 1998.
40. Hislop, D., Newell, S., Scarborough, H., and Swan, J., Networks, knowledge and power: decision making, politics and the process of innovation, Technol. Anal. Strategic Manage., 12(3), 399–411, 2000, quote from p. 410.
41. Hislop, D., Newell, S., Scarborough, H., and Swan, J., Networks, knowledge and power: decision making, politics and the process of innovation, Technol. Anal. Strategic Manage., 12(3), 399–411, 2000, quote from p. 410.
42. Suchman, L. and Bishop, L., Problematising "innovation" as a critical project, Technol. Anal. Strategic Manage., 12(3), 327–33, 2000, quote from p. 331.
43. Lee, J., Culture and management—a study of small Chinese family businesses in Singapore, J. Small Bus. Manage., pp. 63–67, 1996, quote from p. 63.
44. Pun, K.-F., Cultural influences on total quality management adoption in Chinese enterprises: an empirical study, Total Qual. Manage., 12(3), 323–42, 2001, quote from p. 329.
45. Xing, F., The Chinese cultural system: implication for cross-cultural management, SAM Adv. Manage. J., 60(1), 14–21, 1995. Chen and Boggs have suggested that firms engaged in joint ventures in China must address the issue of building trust: "Western MNCs tend to trust contracts, but in China, because of weak property rights laws and an uncertain, dynamic institutional environment, informal relationships and the development of trust between partners may play a more important role than contracts" (Chen, R. and Boggs, D. J., Long term cooperation prospects in international joint ventures: perspectives of Chinese firms, J. Appl. Manage. Stud., 7(1), 111–27, 1998).
46. Herbig, The Innovation Matrix: Culture and Structure Prerequisites to Innovation, Quorum Books, Westport, CT, p. 108, 1994.
47. Herbig, The Innovation Matrix: Culture and Structure Prerequisites to Innovation, Quorum Books, Westport, CT, p. 7, 1994.
48. Saxenian, A., Silicon Valley's New Immigrant Entrepreneurs, Public Policy Institute of California, San Francisco, CA, 1999. See also Herbig, The Innovation Matrix: Culture and Structure Prerequisites to Innovation, Quorum Books, Westport, CT, p. 233, 1994.
49. Herbig, The Innovation Matrix: Culture and Structure Prerequisites to Innovation, Quorum Books, Westport, CT, p. 97, 1994.
50. Herbig, The Innovation Matrix: Culture and Structure Prerequisites to Innovation, Quorum Books, Westport, CT, pp. 91–95, 1994.
51. Herbig, The Innovation Matrix: Culture and Structure Prerequisites to Innovation, Quorum Books, Westport, CT, p. 91, 1994.
52. Like most parts of the world, countries in the Asian region are net importers of the intellectual property associated with high-technology hardware and applications. Silicon Valley is located in the United States, which is a net exporter of IP. Firms in Silicon Valley do not develop IP in order for that IP to be repatriated elsewhere. Silicon Valley models in countries other than the United States that rely on the participation of American firms run the risk of contributing to IP imbalances. The problems associated with generating IP and retaining it in the Asian region require some consideration in the context of the sort of modeling that we are discussing in this paper.
GMOS: GENERATING MANY OBJECTIONS*

Genetically modified organisms (GMOs) have the potential to dramatically change the agricultural world. Through advancements in agricultural crop production, we now have the ability to produce crops with greater yields; longer shelf life in stores; greater resistance to insects, disease, and droughts; as well as lower production costs. While GMOs are greatly impacting the world food markets, some individuals object to their introduction into the food stream. Many scientists praise the biological opportunities, but reception by the general public is mixed. While the opportunities for benefits are many, the possibility of devastatingly adverse impacts on health, the environment, and world order must be addressed. Governments around the world are gearing up for GMOs with new laws and regulations. Even the European Economic Community (EEC) is wrestling with the subject. Like it or not, however, GMOs are here to stay. This paper addresses some of the above issues.

* By Leslie E. Nunn. Dr. Nunn is Assistant Professor of Business Law, School of Business, University of Southern Indiana, 8600 University Blvd., Evansville, IN 47712, U.S.A.; Tel.: 812-465-1205; Fax: 812-465-1044; e-mail: [email protected].
ADVANTAGES OF GMOS
A survey of 800 farmers in Iowa by the National Agricultural Statistics Service for the 1997 crop found that the net return for GMO soybeans was $144.50 per acre, whereas the net return for non-GMO soybeans was $145.75 per acre.1 As with any science, improvement comes with time. Now, only a few short years later, as much as 60% of the soybeans grown in the United States have been genetically modified.

While this improvement has been substantial, it does come with caveats. On the one hand is the improvement of life through the genetic modification of living organisms: health can be improved; even life can be prolonged. On the other hand are the dangers that all of this "toying" with science may bring. Any alteration of an ecosystem requires an adjustment period. A study of any of the geosciences demonstrates how the earth and its biology have changed over the centuries and millennia, but that change has occurred naturally. What happens when humans enter the picture and speed up the change artificially? A study of Australia over the past 100 years clearly demonstrates the adverse effects that can occur when new organisms are introduced into an existing ecosystem. The abundance of mice, rabbits, and other species has wreaked untold havoc. Can this happen with the introduction of genetically modified organisms?
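The economics of the Iowa survey cited above are worth working through. The following sketch (in Python) uses the per-acre returns quoted in the text; the 1,000-acre farm is a hypothetical size we assume purely for illustration, not a figure from the survey.

# Net returns per acre from the 1997 Iowa survey cited in the text.
gmo_return = 144.50          # dollars per acre, GMO soybeans
non_gmo_return = 145.75      # dollars per acre, non-GMO soybeans

difference = non_gmo_return - gmo_return
print(f"Per-acre difference: ${difference:.2f} in favor of non-GMO beans")

acres = 1000                 # hypothetical farm size (assumption, not from the survey)
print(f"On a {acres}-acre farm: ${difference * acres:,.2f} per year")

The 1997 gap thus amounted to only $1.25 per acre, small enough that savings in time, fuel, and equipment use could plausibly tip the decision toward GMO beans even before yields improved.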
HUMAN IMPLICATIONS

The social concerns relating to GMOs primarily involve four areas: health, environmental, social, and economic. From a health standpoint, one of the most frequently raised objections relates to allergies. If people are allergic to a trait that is artificially introduced into the new organism, they
will then also be allergic to the new organism. If notice is not given of the transplanted trait, a person allergic to that trait will not know of its inclusion in the new product. The same holds true for individuals with dietary restrictions: many religious groups forbid the eating of pork, so what would be the impact of a food that has been genetically modified with genes from swine?

Environmental issues are severalfold. Newly minted crops may prove detrimental or fatal to beneficial insects; for instance, it has been suspected that the pollen of Bt corn has damaged the Monarch butterfly population.2 Another concern is that GMO crops will cross-pollinate with wild plants to create "super" weeds against which known herbicides will prove useless. Additionally, there are concerns that herbicide-tolerant crops will encourage increased use of herbicides, resulting in environmental contamination and speeding up the development of weed resistance to those herbicides. The counterargument is that herbicide-resistant crops decrease the need for herbicides. Ultimately, the real issue is: Will GMOs truly upset the biological diversity that exists today? Scientific research conducted in Britain concluded that plants genetically engineered to resist aphids had serious effects on the fertility and life span of ladybird beetles, since the ladybirds feed on aphids.3

Socially, the impact of GMOs could be great. The world's agriculture has evolved over thousands of years. The development of new varieties of crops has occurred both naturally and intentionally. However, when crops were intentionally modified, the process took years to accomplish, allowing plenty of time for any adverse results to be observed. Now, genetic changes can be accomplished in a matter of days, weeks, or at most months. The long-term effects of these changes may not be known until long after a proliferation of the offending modified species has occurred.

The jury is still out on many of the economic results of GMOs. While initially it may be more costly to raise genetically modified plants and animals, the industry can be expected to refine the process to the point where GMOs can be raised and produced much more efficiently and less expensively. In time, the world may come to see the benefits of GMOs, with fears quelled and more food available for the world's rapidly growing population. There is concern that crops engineered to resist pesticides and herbicides will cause farmers to rely only on the specific chemicals to which the crops have been modified to be resistant.4 Another concern is that there will be a reduction in the varieties of crops available for the world's food production—a loss of biodiversity.

It has been estimated that there are approximately 100,000 genes in a mammal.5 These genes can be moved around within the same species as well as between species. One of the more notable transferences is a gene from fish being placed into tomatoes to increase shelf life and enhance the tomato's tolerance of cold temperatures. With GMO technology, science can now add to, subtract from, change, exchange, alter, and otherwise manipulate organisms. While all of this can be to society's benefit, unwanted situations can likewise result. Among the issues is how to prevent GMO crops from detrimentally cross-breeding with other plants.
The United States Department of Agriculture defines genetically modified crops as those that have been transformed by:

† Insertions of DNA, which could occur in nature by normal breeding but have been accelerated by laboratory splicing
† Insertions of DNA from other plant species
† Insertions of DNA from microorganisms or the animal kingdom.6
TESTING FOR THE PRESENCE OF GMOS
Simply detecting GMO crops and meat is a difficult, if not economically infeasible, task. The U.K. Ministry of Agriculture, Fisheries and Food has created the Central Science Laboratory (CSL) (London, U.K.), which offers a commercial testing service for a number of GMOs. Specifically, the CSL can test coarsely ground cereals, flour, soya lecithin, unrefined oil and fat, chocolate, soya
sauce, tofu, cakes and other baked products, cornflakes, popcorn, meat and sausages, tomato and ketchup, and animal feeds. Additionally, 28 approved products can be tested for GMOs: chicory (1), corn (6), cotton (4), papaya (1), potato (2), rapeseed (4), soybean (2), squash (2), tobacco (1), and tomato (5). (Data courtesy of the Central Science Laboratory, Ministry of Agriculture, Fisheries and Food.)

The most popular genetically modified grain is the Roundup Ready soybean produced by Monsanto (St. Louis, MO). This soybean is resistant to Roundup, the company's highly successful and profitable herbicide. With Roundup Ready beans, the farmer can spray the growing soybean crop with Roundup, killing the noxious and other weeds that would otherwise interfere with the growth and profitability of the soybean crop without harming the soybean plant itself. This bean has been a tremendous benefit to farmers in that it allows them to spend less time, fuel, equipment usage, and money to produce a weed-free soybean crop. Nevertheless, it is a GMO, to which some strongly object. Because of the objections, some consumers have refused to purchase foodstuffs made with Roundup Ready soybeans.

A Roundup Ready soybean cannot be detected by visual inspection with the naked eye, nor is microscopic examination of any use. Strategic Diagnostics, Inc. (SDI) (Newark, NJ) has developed a field test to determine the presence of Roundup Ready soybeans. The test uses antibodies to detect the Roundup Ready protein and can be administered easily, with quick results. In 1999, the company marketed other tests to determine the presence of genetically modified Bt corn and other genetically altered traits in corn. Antibodies are used to find the GMOs via a method known as immunodiagnostics, similar to home pregnancy diagnosis kits. Grain elevators can purchase packs of 100, while farmers can obtain smaller packs.7 Another testing company, Genetic ID, Inc. (Fairfield, IA), has become a worldwide concern with offices in the United States, Japan, Germany, and other locations on five continents. Numerous other companies are beginning to appear around the world.
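Because kits of this sort report only how many sampled kernels test positive, deciding whether a grain lot is "GMO-free" is at bottom a sampling problem. The sketch below (in Python) is our own illustration, not SDI's published protocol: it assumes a perfectly accurate test and a simple binomial model, and the sample numbers are invented. The 1% threshold echoes the Swiss labeling limit mentioned later in this paper.

import math

def estimate_gmo_fraction(positives, sample_size):
    # Point estimate of the GMO fraction and an approximate 95% margin of
    # error, assuming a perfectly accurate test and a binomial model.
    # Real immunoassay kits have error rates that are not modeled here.
    p = positives / sample_size
    margin = 1.96 * math.sqrt(p * (1 - p) / sample_size)
    return p, margin

# Invented example: 7 positive kernels out of 400 tested.
p, margin = estimate_gmo_fraction(7, 400)
threshold = 0.01   # e.g., the 1% Swiss limit discussed later in this paper

print(f"Estimated GMO fraction: {p:.1%} (margin of error about {margin:.1%})")
if p - margin > threshold:
    print("Lot exceeds the 1% threshold")
else:
    print("Cannot conclude the lot exceeds 1%")

Even this crude model shows why lot-level certification is expensive: at contamination rates near a 1% threshold, a few hundred kernels leave the estimate too uncertain to settle the question either way.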
SCIENTIFIC TESTING OF THE GMO PROCESS
The Rowett Research Institute (Bucksburn, Aberdeen, U.K.) conducted a study of GMOs and concluded that: "The Audit Committee (of the Medical Research Council of Great Britain) is of the opinion that the existing data do not support any suggestion that the consumption by rats of transgenic potatoes expressing GNA has an effect on growth, organic development or the immune function."8

The American Society for Microbiology (ASM) represents more than 42,000 microbiologists worldwide. Its members pioneered molecular genetics and were principals in the discovery and application of recombinant DNA procedures. On July 19, 2000, admitting that nothing in life is totally free of risk, the ASM issued its statement on GMOs, concluding that it was not aware of any acceptable evidence that food produced with biotechnology, and subject to Food and Drug Administration oversight, constituted high risk or was unsafe. In fact, the statement declared: "Those who resist the advance of biotechnology must address how otherwise to feed and care for the health of a rapidly growing global population forecast to increase at a rate of nearly 90 million people per year... Indeed it is doubtful that there exists any agriculturally important product that can be labeled as not genetically modified by traditional breeding procedures or otherwise."9
INTERNATIONAL OBJECTIONS TO GMOS
One World.Net, in its "Campaigns: Biosafety Protocol" Web page,10 reported the following news items:

† The U.K. branch of Greenpeace celebrated after a jury found 28 of its activists not guilty of charges relating to the destruction of a test plot of genetically modified corn last summer.
† Friends of the Earth accused the United Kingdom's Government of ignoring the public's views and gambling with the countryside by allowing genetically modified crops to be grown and used.
† Friends of the Earth has launched a major European campaign to halt GMO pollution and to ensure that citizens' concerns are addressed.
† Five Greenpeace activists had been arrested off Britain's west coast after staging a protest on board a cargo ship carrying 60,000 tons of genetically modified soya.
† U.S. farmers planned to plant 16 percent less genetically modified corn than they did last year, a new market survey revealed.
† Germany's Health Minister, Andrea Fischer, called a last-minute halt to the licensing of a genetically modified brand of corn produced by biotech firm Novartis (Basel, Switzerland).
In the July 1999 booklet entitled "An activist's handbook on genetically modified organisms and the WTO," the authors raise a number of objections to GMOs.11 Among them are that GMOs may exhibit increased allergenic tendencies, toxicity, altered nutritional value, or even mutations with unknown results. The danger posed to people and the environment when GMOs are released is also questioned.

The French are quite active in expressing their concerns about GMOs. When the European Union banned hormone-treated beef, the United States imposed a 100% import tax on Roquefort cheese as well as other European food and luxury items. Roquefort cheese is specially made from the milk of one breed of sheep raised in only one part of France. In response to the American action, the French town of St. Pierre-de-Trivisy, where Roquefort cheese is produced, imposed its own 100% tax on all Coca-Cola sold in the town. Other French and European towns have imposed similar taxes on Coca-Cola and McDonald's restaurants. One restaurant in the Dijon mustard-growing part of France imposed its own tax on Coca-Cola.12 Similar protests have occurred in other parts of Europe.
INTERNATIONAL CONFERENCES ON GMOS
There has been a concerted international effort to control the development and use of GMOs. This effort has resulted in several international meetings, such as the one at Cartagena, Colombia, in February 1999 involving 140 nations.13 In April 1999, more than 60 American and European consumer groups met as the Transatlantic Consumer Dialogue (TACD), demanding mandatory labeling of all GMO foods.14

In November 1999, a group of international experts in science, agriculture, medicine, law, philosophy, and ecology met in Bryansk, Russia. This group issued a Declaration demanding a halt to the testing, production, and use of genetically modified foods and organisms, and the products of those organisms, “until an international protocol has been implemented in all our countries.” Further, they demanded that there be no releases of GMOs “nor entry of GMOs into our market places unless the biosafety, desirability, necessity, and sustainability of such creations and products have been demonstrated to the satisfaction of the members of our societies.”15

In January 2000, an international conference on a Biosafety Protocol was held in Montreal, Canada, at which a treaty to regulate the international transport and release of GMOs was agreed. The treaty further entitled nations to block the entry of GMOs “if there is ‘reasonable doubt’ that there could be risks to public health or the environment.”16
INTERNATIONAL REGULATION OF GMOS

As genetically modified products and organisms are transported to and from many places in the world, both in bulk state and in the form of finished products, questions have arisen as to the import
and export of GMOs. To address this, some individual countries have passed their own legislation on the subject. Among them are:

1. Australia. The Australian Environment Protection and Biodiversity Conservation Act 1999 was modified in 2000 to set up a Gene Technology Regulator to review applications for the use of GMOs within Australia. At the same time, as the Act provides, the government “places a high priority on the development of a prosperous and innovative biotechnology industry that will generate wealth and employment for Australians.”
2. Czech Republic. Act No. 153/2000 was passed by the Czech Parliament to regulate the use of GMOs, imposing obligations on persons using GMOs and their products.
3. Great Britain. The U.K. has enacted legislation that requires risk assessments to be performed on all projects that involve GMOs. The U.K. Genetically Modified Organisms Contained Use Amendment Regulations 1996 require that any person experimenting with GMOs must first assess the risks to human health and the environment; notify the appropriate authorities, including the Health and Safety Executive, of the proposed GMO work; adopt controls to prevent accidents; and draw up emergency plans in case of accidents.
4. India. The Supreme Court of India has prohibited experimental use of GMO cotton until rules have been promulgated to make certain that the environment is protected.17
5. Japan. The Japanese Ministry of Agriculture, Forestry and Fisheries has issued requirements for the labeling of GMO foods.18
6. Mexico. Mexico’s Congress has enacted legislation to require the labeling of GMOs to inform consumers of what they are eating.19
7. The Netherlands. The Dutch government has created its own Genetically Modified Organisms Bureau as a part of the Centre for Substances and Risk Assessment under the National Institute of Public Health and the Environment.20
8. New Zealand. New Zealand’s approach to the GMO dilemma is similar to Australia’s.
9. Switzerland. On May 2, 2001, the Swiss Federal Office for Public Health issued a 1% limit for GMOs in food.21

European Economic Community

One of the strongest economic regions of the world is the European continent. The formation of the European Economic Community has brought with it myriad problems, both great and small. Importing, exporting, and transporting both raw materials and finished products from member country to member country has at times been difficult for the EEC to manage. From its infancy, this international organization has sought to promote the entrepreneurial activities of the businesses within its boundaries while at the same time protecting its member citizens. The Governing Council has faced the issue of GMOs head on with various directives.

The Council of the European Economic Community issued Council Directive 90/219 on April 23, 1990. This directive addressed the issue of GMOs in detail. Among its provisions are:

† Any person, before undertaking for the first time the contained use of a genetically modified microorganism in a particular installation, should forward to the competent authority a notification so that the authority may satisfy itself that the proposed installation is appropriate to carry out the activity in a manner that does not present a hazard to human health and the environment.22
† Article 6 requires that member states ensure that appropriate measures are taken to avoid adverse effects on human health and the environment that might arise from the contained use of GMOs, including that a risk assessment be completed prior to activities involving GMOs.
† Other articles require member states to ensure that no accidents occur in the use of GMOs, that notification be given to other member nations when an accident involving GMOs does occur, and that certain reports be sent to the EU Commission on an annual basis. The Directive also includes provisions to protect trade secrets and similar business-related concerns.
† Finally, Article 22 requires that member states pass appropriate laws to comply with the directive.

This directive was amended in 199423 and in 199724 to require member states to regulate the deliberate release of GMOs into the environment. Additionally, if one member state approves a GMO product consistent with the EU Directives, then all other member states must accept that product within their own territorial boundaries. In 1998, the Council of the European Union enacted Directive 98/81 EC,25 amending the original Directive 90/219 EC to strengthen its provisions by designating an authority or authorities in each member nation to examine the conformity of entities within that nation with these directives. Matters such as the necessity of emergency plans in the respective member nations and consultations among the member nations were addressed, and the Directive was updated to reflect more recent scientific work applicable to GMOs.

The World Trade Organization

The World Trade Organization (WTO) has placed the burden of proof on importing countries seeking health safeguards to prove that GMOs are unsafe, instead of requiring the exporting country to prove that GMOs are not harmful. Article 2.2 of the Agreement on Technical Barriers to Trade (the TBT Agreement)26 requires that member nations not create unnecessary obstacles to international trade. Any regulations or laws of a member nation that restrict international trade must do so only to the extent necessary to fulfill a legitimate objective of that nation.

Regulations in the United States

In the United States, the Food and Drug Administration must first approve foods for human consumption; once this approval is given, there are few further laws specific to GMOs. Purdue University (Lafayette, IN), among various universities and public and private concerns, is presently conducting research on the topic of genetically modified foods.

On May 29, 1992, the Food and Drug Administration published its “Statement of Policy: Foods Derived From New Plant Varieties.”27,28 The FDA did not give the public an opportunity for comment. An environmental group, the Alliance for Bio-Integrity, filed a lawsuit against the United States government, claiming that the policy statement was not properly issued because of the lack of opportunity for public comment. On September 29, 2000, the U.S. District Court for the District of Columbia (116 F.Supp.2d 166) held that there is a difference between an agency policy statement and a substantive rule. A substantive rule must undergo the formal notice-and-comment process, since it has the same effect as a law; a policy statement is simply that, a statement of policy, and no notice-and-comment process is required. The FDA’s policy statement did not require labeling for genetically engineered foods, and the Court upheld this as likewise being proper.29

The FDA reaffirmed its position of not requiring labels in January 2001. Products that are free from GMOs can be labeled as such, but these labels cannot use the terms “GM” or “genetically modified,” because the FDA considers that all foods are genetically modified through traditional breeding techniques.
Some individual states, such as the Commonwealth of Massachusetts, have enacted their own legislation regarding GMOs. Massachusetts requires the following label to be added in a conspicuous location: “This product contains genetically engineered material or was produced with genetically engineered material.”30
PATENTS ON GMOS
While it may be difficult to envision obtaining a patent on a living thing, the U.S. Patent Office has issued a number of patents for genetically modified organisms. Under Section 102(g)(2) of Title 35 of the United States Code,31 an applicant will not be issued a patent if, before the applicant’s invention thereof, “the invention was made by another who had not abandoned, suppressed, or concealed it.” Therefore, to be entitled to a patent, the applicant must show either that he/she was the first to put the invention to practice, or that he/she was the first to conceive of the idea and then exercised reasonable diligence in attempting to put the invention into practice. In order to prove conception of the idea, the applicant must provide corroborating evidence, such as a contemporaneous disclosure of the idea to someone else. “An idea is definite and permanent when the inventor has a specific, settled idea, a particular solution to the problem at hand, not just a general goal or research plan he hopes to pursue. The conception analysis necessarily turns on the inventor’s ability to describe his invention with particularity. Until he can do so, he cannot prove possession of the complete mental picture of the invention.”32

This general statement of patent law has peculiar application to the world of GMOs, for the technology of genetically altering an organism is plainly complicated. More is needed than to define a gene solely by its principal biological property (such as encoding human erythropoietin), because an alleged conception having no more specificity than that is simply a wish to know the identity of any material with that biological property.33 There is a true distinction between having a hope or expectation (the statement of a problem, for instance) on the one hand and having an inventive conception on the other. The hope or statement of a problem is not, in and of itself, an inventive conception.34 (See also Hitzman v. Rutter, decided on March 21, 2001.35) The mere belief by an inventor that his/her invention will indeed work, and his/her reasons for choosing a particular way to solve the problem, are irrelevant to conception.

Times do change, and the technology of today will be far surpassed by the technology of tomorrow. So what happens if an inventor works on something today that he/she recognizes to have true patentable possibilities 10 years down the road? This very issue was addressed in the federal courts almost 40 years ago. In the 1964 case Heard v. Burton,36 the Court held that “an inventor who failed to appreciate the claimed inventive features of a device at the time of alleged conception cannot use his later recognition of those features to retroactively cure his imperfect conception.”

Can an amended patent application cure a previously defective application on which a patent was granted? This too was considered by the courts, in a 2000 case in which the Court cited 35 U.S.C. Section 132.37,38 This section provides that “No amendment shall introduce new matter into the disclosure of the invention.” The question to be decided in reviewing an amendment, then, is: was the new material in the amended application inherently contained in the original application for the patent? If so, there is no problem. If not, however, the amendment fails, and the original patent can probably be struck down as well.
SUMMARY

Genetically modified organisms are created by the biological process of taking the gene for a desired trait from one species and inserting it into another species, giving the recipient species the desired trait. There are a number of advantages in doing this, since it is a much faster process of developing desired characteristics than the conventional method of selective
breeding. Food products can now be grown that give fruits and vegetables longer shelf life and resistance to diseases, pests, droughts, and other natural calamities. By genetically modifying crops, it is anticipated that food production costs, as well as chemical usage, will be reduced.

At the same time that the benefits of GMOs are being realized, the number of detractors is increasing. Public concerns center on four areas: health, environment, social, and economic. Health fears include allergic reactions to previously nonallergenic foods. The environmental issues are greater in number: the possibility of creating “super” weeds that are resistant to known herbicides would upset the ecological balance, as would harmful effects on wildlife and humans. From a social standpoint, the long-term impacts of GMOs on society are not yet known. Finally, the economic advantages and disadvantages are still being debated. Tests are being developed to determine the presence of GMOs, and the process for developing genetically modified organisms is constantly being reviewed by the scientific and environmental community.

A number of environmentally active groups, some domestic and many international, have arisen that passively and actively keep the general public apprised of the dangers of GMOs. As for international impact, several worldwide conferences have been held in the past several years to determine how best to address the dangers of GMO technology, with the hope of restricting the movement and introduction of GMO products from country to country. Individual countries have passed legislation regarding genetically modified organisms. Most effective are the efforts of the World Trade Organization and the European Economic Community. The EEC has required its member nations to ensure the safe development of GMOs through a definite approval, monitoring, and reporting system. The WTO has mainly encouraged GMO technology as a means of producing more food for more people in the world, as well as for its economic benefits.

While the right to patent a naturally occurring biological phenomenon may be questioned, it does happen. Obtaining a patent on a process to genetically modify organisms is both difficult and costly. If not done properly at the outset, a defective patent cannot be cured by amendment, and patent disputes are both lengthy and expensive.

The technology of genetically modifying organisms will advance with the passage of time, likely overcoming most of its present negative aspects. Through the proper use of this technology, mankind, society, and the environment all have the potential to be improved. With any change come problems, and we are presently in the early development stage, with its accompanying problems, of genetically modifying organisms.
REFERENCES

1. Successful Farming, November 1999.
2. In Motion, Bill Christison interview by Nic Paget-Clarke, Chillicothe, MO, October 1999.
3. Meikle, J. and Brown, P., Friend in need—the ladybird, an agricultural ally whose breeding potential may be reduced by GM crops, The Guardian, London, March 4, 1999.
4. Stillwell, M. and Van Dyke, B., An Activist’s Handbook on Genetically Modified Organisms and the WTO, Center for International Environmental Law, March 1999.
5. http://www.sac.ac/ul
6. http://www.agric.nsw.gov.au
7. Successful Farming, http://www.agric.nsw.gov.au
8. Press release, Rowett Research Institute, Bucksburn, Aberdeen, U.K., October 28, 1998.
9. http://www.asmusg.org
10. One World.Net, Biosafety Protocol Web page.
11. Stillwell, M. and Van Dyke, B., One World.Net, Biosafety Protocol Web page, July 1999.
12. Washington Post, p. 28, August 28, 1999.
13. Maldonado, R., Biotech industry discusses trade, Associated Press, New York, February 22, 1999.
14. http://www.citizen.org/pctrade
15. Bryansk Declaration, European strategy session on genetic engineering, A SEED Europe, The Netherlands, November 1999.
16. Global Issues, press release by Anup Shah, May 17, 2001; updated July 1, 2001.
17. Noronha, F., India’s high court stops field trials of biotech cotton, Environmental News Service, February 23, 1999.
18. Nenon, S., Scientific comments on science and policy: the USDA perspective on labeling, U.S. Department of Agriculture, Washington, DC.
19. OneWorld.Net/campaigns/biosafety/front/html
20. http://www.rivm.nl/csr/bggo.html
21. http://www.admin.ch/bag/verbrau/levensmi/gvo/d/index
22. Directive 90/219 of European Economic Community Council, April 23, 1990.
23. Directive 94/15 of European Economic Community Council, April 15, 1994.
24. Directive 97/35 of European Economic Community Council, June 18, 1997.
25. Directive 98/81 of European Economic Community Council, October 26, 1998.
26. Agreement on Technical Barriers to Trade, Article 2.2.
27. Statement of policy: foods derived from new plant varieties, Food and Drug Administration, Washington, DC, May 29, 1992.
28. Federal Register 22,984.
29. Alliance for Bio-Integrity v. Shalala et al., 116 F.Supp.2d 166.
30. Section 329, chapter 94, General Laws of Massachusetts.
31. United States Code, Title 35, Section 102(g)(2).
32. Burroughs Wellcome Co. v. Barr Laboratories, Inc., 40 F.3d 1223.
33. Amgen, Inc. v. Chugai Pharm. Co., 927 F.2d 1200.
34. Alpert v. Slatin, 305 F.2d 891.
35. Hitzman v. Rutter, 243 F.3d 1345.
36. Heard v. Burton, 333 F.2d 239.
37. Schering v. Amgen, 222 F.3d 1347.
38. United States Code, Title 35, Section 132.
TECHNOLOGIES IN TRANSITION, POLICIES IN TRANSITION: FORESIGHT IN THE RISK SOCIETY*

ABSTRACT

The emergence of formal Foresight programmes in science policy across Europe is examined in terms of government’s response to the changes in, and especially the uncertainties of, contemporary innovation. The paper explores this through deploying Beck’s notion of the “risk society”. It shows, through a discussion of the social management of new health technologies, how a tension arises between the priorities and regimes of the new “negotiation” state and those of the former “provident” (or welfare) state. The emergence of new technologies will be shaped by the institutional assumptions and processes operating within these different policy regimes.
INTRODUCTION

Technologies today are often said to be undergoing a radical shift in the way they are configured, in the way they embody high levels of intellectual density as “knowledge-based”, in the way they cut through conventional geographical and physical barriers and demand new forms of engineering and design skills, and in the way they are increasingly interactive and interdependent on other technologies for their very survival as working machines, devices, kits, databases, and so on. Technologies are said to be information rich, where knowledge resides in and depends upon the orchestration of large

* By Andrew Webster, Professor in Sociology of Science and Technology, and Director of the Science and Technology Studies Unit (SATSU) at Anglia University, Cambridge. His most recent texts include Capitalising Knowledge (SUNY Press, New York, 1998, co-editor) and Valuing Technology (Routledge, London, 1999, co-author). ©1999 Elsevier Science Ltd. All rights reserved.
in silico forms of data, as in the human genome project. New forms of science and technology such as bio-informatics or molecular genetics become new forms of information science, as do communications technologies relying on the digitization of communications systems. Information units in binary code, to which genetic or telecomms lines are often reduced, become powerful drivers of an increasing range of technological systems.

Yet, paradoxically, the arrival of informated innovation as a common denominator shaping the design and production of new technology has not been accompanied by an increasing sense of control over the sort of technologies this innovation produces. On the contrary, the economic value, environmental impact and social utility of these new technologies are more likely to generate more, not less, uncertainty among those who confront them, have to manage them and have to think about their development 10–20 years from now. Those of a rationalist, scientistic persuasion will put such doubts and feelings of insecurity down to ignorance or misunderstanding, but even the rationalist will at times experience the sense of being simultaneously overloaded with information, short of the right information, and burdened with obsolescent information. No wonder so much attention is given to the need for knowledge brokers who can filter, evaluate and distribute what are regarded as the most relevant forms of knowledge within organizations.

The emergence of the paradox of knowledge-based uncertainty has, of course, been associated with wider changes in late modernity, as Beck’s (1995) account of “reflexive modernisation” has argued, and in this paper I want to draw on some of the insights Beck offers in relation to the social management of uncertainty to interrogate the Technology Foresight programme, for, in its way, it is a very explicit response made by what Beck calls the “negotiation state” to the demands posed by the risk society. At the substantive level, I shall do this through considering one of the main areas of Foresight: health.
THE RISK SOCIETY AND ITS IMPLICATIONS FOR SCIENCE POLICY

The future is, of course, always shot through with uncertainty, and always has been. As Bell (1996) says, “There is no knowledge of the future. … Although there are past facts, present options and future possibilities, there are no past possibilities and no future facts”. However, there is a sense today that our futures are in some sense more uncertain than they were in the past, or, more accurately, that we experience a different type of uncertainty than before. This may be because the capacity to shape future agendas is more widely distributed than before, and therefore a much wider range of futures are up for debate; it also reflects a view that it is increasingly difficult to evaluate the impact and risks of new science and technology, which are always two-edged, and whose unintended effects, such as antibiotic-resistant superbugs, are a creation of the very science itself.

Over recent years Ulrich Beck has made a major contribution to understanding the nature of the risks we face today. He has produced a series of texts that describe and explain his concept of “the risk society”, which he believes best characterizes the condition of late modern states today (see, e.g., Beck 1991, 1995, 1996, 1998). Beck has argued that science and technology are a combination of promise and threat, able to meet our needs for food, warmth, and transport but doing so in such a way as to threaten the very basis of our collective, global survival. Beck’s work on “the risk society” is, therefore, about this fundamental contradiction of science in late modernity. He argues that there has been a major shift since the Enlightenment in the way science and technology relate to risks in society: until recently science and technology were regarded as the means through which our dependency on and vulnerability towards nature could be overcome; today (within the past two decades) science and technology must respond to the risks that they themselves create. The state is no longer able to act as guarantor of safety or freedom from risk because, “in contrast to early industrial risks, nuclear, chemical, ecological and genetic engineering risks (a) can be limited neither in time nor place and (b) are not accountable according to established rules of causality, blame, and liability and cannot be compensated or insured against” (Beck 1996, p. 31). There are, in other words, significant changes in the capacity of the state and the wider
political system to manage these developments: during the period of the welfare or “provident” state, political institutions acted to oversee and reduce the hazards, both social and economic, of industrial society.

It should be clear from this that Beck’s belief in a dramatic change in the state’s capacity to manage science goes beyond the conventional notion that scientific and technological development are always shot through with uncertainties and unintended effects. Such a position can be found in much science policy writing, notably, for example, in the seminal contributions by Collingridge (1984, 1986). Collingridge was one of the first to argue against the view, dominant in the post-war period, that policy making for science was the end result of a rational decision-making process. He argued, on the contrary, that the political management of technologies can never, in principle or practice, be based on claims to know their outcome in advance. Instead, uncertainty should be a welcome guest at the science policy table. Rather than acting in some rational, strategic way, policy is full of contingency, and as a result science policy makers continually need to engage in “repair work”. In a similar vein, Rip (1990) argues that scenarios about future technology developments can only, at best, be understood as “story-telling [about the future] which conveys some intelligence,” rather than conveying any sense of certainty or clear, optimal choices.

In contrast to (though in some ways complementing) these commentaries, Beck’s argument is that contemporary techno-economic development is qualitatively different in its effects and influence on society from anything that has gone before. This is precisely because of the character of the innovations themselves, as in genetic engineering, whose uncertain outcomes as a field of innovation are unknown not only in terms of their unintended effects, but even in terms of those that are intended: the effects which are anticipated are still in themselves experimental and as such inherently uncertain in their outcomes beyond a limited range of predictability. Today, political institutions that are supposed to manage new science find they cannot keep up with the pace of “techno-economic development”. Indeed, the force of this development is such that it has the capacity to structure society, where, for example, “microelectronics permits us to change the social constitution of the employment system” (Beck 1991, p. 190). The former provident state becomes disemboweled, and its political institutions “… become the administrators of a development they neither have planned for nor are able to structure, but nevertheless must somehow justify” (pp. 186–7). Indeed, the locus for the development and social management of new technologies shifts to a new sub-political arena, outside of parliament or political party.

The structuring of the future is taking place indirectly and unrecognizably in research laboratories and executive suites, not in parliament. … Everyone else … more or less lives off the crumbs of information that fall from the planning tables of technological sub-politics (Beck 1991, p. 223).
So extensive is this region of sub-politics lying outwith the institutional structures of the state that Beck has described it as tantamount to the “re-invention of politics”. To ensure it retains some sort of legitimacy, the state must now entertain and facilitate a more complex and institutionally problematic form of “governance”: “the authoritarian decision and action state gives way to the negotiation state.” In these circumstances the state must redefine its position in relation to techno-economic development and the risks it creates, adopting a more circumspect and limited capacity in regard to the control and direction of techno-economic innovation.

If Beck’s account of the risk society can be accepted, it is perhaps not surprising that we see a mushrooming of “futures” analysis, of horizon-scanning, of scenario thinking about the development of new technologies, precisely because of the desire to try to tie things down, to reduce uncertainties and risks, or at least to be seen to be doing so as much as possible. This often involves a glossing over of the risks of technology itself: uncertainty is presented as lying less with the technology per se and more with the social and economic circumstances within which it is to be deployed (see Miles 1997). Such technocratic arguments are often closely associated with the deficit model of the “public (mis)understanding of science”.
While its discourse (at least in the face of the public) may often appear technocratic, this futures analysis is typically associated with, and expressed through, both the transitory and more long-lasting socio-technical networks that help to construct and reconstruct the future agenda for technologies. This can of course mean that any technocratic line is difficult to secure, especially in new areas of innovation where embryonic networks generate competing rather than singular agendas: since there are many new networks coming together in what we might call “times of Foresight”, the stories of the future unfold in many different, competing directions. As de Laat and Laredo (1998) observe:

… a foresight exercise is a forum where actors put forward their anticipations and postulate not only “technological options” but, often implicitly, also the scripts and scenarios that correspond to these options. Moreover, it is a hybrid forum since in most of the cases it appears that no absolute criterion (shared by all actors) have yet been constructed that would allow for comparison between scripts and selection of techno-economic networks (p. 157).
This hybrid forum finds its most formal expression in the Technology Foresight programmes now found in most “late modern” states. These programmes identify technological options yet, in principle, keep options open: if there is no “absolute criterion” for evaluating and subsequently managing technologies in transition, the “negotiation state” can use this particular science policy instrument to negotiate its way round the maze of technological futures.

At the same time, the “re-invention of politics” means that contemporary negotiation states will develop an approach to new policies (such as Foresight) which is quite distinct from the past. A defining characteristic of the contemporary state is its tendency to alter the terms on which it has responsibility for whole areas of policy making. Typically, there is a tendency to devolve responsibility to local or regional levels and to new (non-traditional) political actors. The privatisation and deregulation of state agencies such as research institutes, railway systems and power plants mean that, while their activities may be subject to both national and international conventions and directives, they are shaped by competing localized interests not subject to any single form of governance or accountability, since what it is to be “accountable” is now itself subject to negotiation. This movement towards a localised, sub-national policy regime creates what Beck might call “forms of organized irresponsibility” (Beck 199, p. 15): while the negotiation state might provide the general steer and strategy for policy, it does not set the terms on which sub-national policy regimes must execute that policy. This separation between strategy and execution is a defining feature of contemporary politics, and has, of course, raised considerable debate over contemporary forms of “governance” (Kooiman 1993; Rhodes 1997). It is also a defining feature of the shift from the provident to the negotiation state, since in the former, policies such as employment, welfare and health policies were centrally driven.

In light of this, we might expect to find that institutions created by the provident state have some difficulty in responding to the technology futures opened up by the negotiation state’s Foresight programme. This may be particularly true of institutions which are heavily dependent on new technologies, such as national health care systems. As the progeny of the provident state, these institutions have sought to reduce the uncertainties and risks of modern life. The provident (welfare) state is, or at least was, based on a set of assumptions that certain needs, such as health care, can be collectively defined and so provided on a rational, albeit rationed, basis. The ambiguities of the risk society cast doubt on what these needs are, precisely because of the technological innovation, such as genetic therapy or xenotransplantation, that is associated with it. How, in other words, are the modern institutions of the provident state, such as the British National Health Service, to respond to the uncertainties (technological, organizational and jurisdictional) created by the contemporary policy regime (see Rip and van der Meulen 1996) and prevailing within the negotiation state? And how do they respond to the technological scripts of the future written
by socio-technical networks directly associated with Foresight itself? But before we discuss this, it is necessary briefly to contextualise the setting within which Foresight has developed.
THE MOVE TO FORESIGHT

The momentum behind current Foresight programmes in both public (government) and private sectors can be said to derive from the need to confront, take stock of and engage with the risks and uncertainties of the innovation system, which, according to Beck’s analysis, are quite distinct from the past. Contemporary innovation poses new problems for science policy regimes (Edquist 1997), including:

† The need for institutional flexibility in response to future demands;
† The enabling of organizational and managerial change;
† The encouragement of new types of network;
† The effective and appropriate selection of socio-technologies for the future;
† The effective and appropriate management of knowledge flows within and between innovation actors.
Not only, then, are we seeing a transition to new types of (information-dependent) technologies, we are also witnessing changes in the social context in which they are to be developed: a socio-technical transition. In principle, Foresight programmes are supposed to be able to meet some if not all of the demands that this transition throws up through building a consensus on priorities, encouraging an anticipatory culture, providing a means through which to determine the optimum selection of technology development through a careful evaluation of innovative capacity, and defusing the tensions associated with uncertainty by redefining uncertainty as a positive rather than negative feature of the planning process, typically by recasting it as “vision”, or even as a process of “visioning”. In general, Foresight involves four processes:

† Deriving a list of “critical” or “generic” technologies which can underpin several different areas of innovation;
† A consensus-driven consultation exercise (firmly located in Beck’s “sub-political” arena) that tries to identify possible developments in science and technology which may help meet societal needs over the next 30 years;
† A priority-setting process for the science and engineering base;
† The identification and encouragement of fields of “technological fusion” which might otherwise be marginalized by conventional disciplinary and institutional structures.

These aspirations create a political discourse which legitimates the new role of the negotiation state. A series of rhetorical claims are made on behalf of the Foresight programmes. In the U.K., the exercise is credited with the creation of a new cultural configuration, or, as the Office of Science and Technology calls it, “a foresight culture”. Innovation actors are encouraged to become instilled with a future-oriented gaze fixed on long-term health and wealth creation. This also involves encouraging the generic conditions within which innovation competencies can prosper, rather than “picking winners”; here it marks itself off from the predictive forecasting of the 1970s. In addition, the programme seeks to promote the aggregation of ideas and initiative across newly formed networks through encouraging informal links; “go out and network” has become the clarion call of the U.K.’s Department of Trade and Industry at all its sponsored workshops. Networking is, of course, only as good as the networks it produces, and typically networks self-organise into relatively closed relationships of like-minded actors sharing similar socio-economic and political interests, though in a fast-moving innovation environment such networks can come and go quite quickly once they have served their purpose (Gibbons et al. 1995).
Perhaps in response to the criticism often directed at British policy makers that decisions merely reflect an “old-boys’ network”, the U.K. programme has claimed as one of its primary objectives the broadening of participation in the priority-setting process by increasing representation from as wide a range of constituencies as possible; this in turn is said to ensure the social accountability of the programme. Finally, the long-term prospective anticipation of distant policy conditions, a culture of forward looking, is seen as an essential driver behind the whole programme. Indeed it is the programme’s primary rationale, since it is premised on the belief that one can endeavour to reduce the uncertainties of the future by responding in advance to the conditions that create them. A prominent proponent of Foresight programmes describes them as “systematic attempts to look into the longer-term future of science, technology, the economy, the environment and society with a view to identifying the emerging generic technologies and the underpinning areas of strategic research likely to yield the greatest economic and strategic benefit” (Martin 1995).

Technology Foresight in the U.K. itself originates from the 1993 White Paper, Realising Our Potential (OST 1993). Established in 1994, the TF programme was seen as a way of managing science and technology capacity and prioritizing research. It was meant to be less a means of making detailed predictions about markets and technical advances than a way of looking at a range of possible future scenarios which would be influenced by policy decisions made today. The U.K. programme was organized into 15 (later 16) sectoral panels with an overall Steering Group. As part of the process, Foresight included a variety of means of consulting research and industry, such as regional workshops and the use of Delphi surveys. Delphi involves successive rounds of questionnaires directed at key individuals seen as representative of their sector. With each round, interviewees are asked to revise their prioritizations in the light of the other respondents’ recommendations (a schematic sketch of this iterative aggregation is given below). In the U.K.’s case, over 3000 respondents were included. Delphi is a good example of the state deploying transitory, non-institutionalized mechanisms in an attempt to construct a consensus on, and legitimacy for, its actions: it is the negotiation state going about its business outside of formal political processes.

The first set of priorities published by the Panels and Steering Group in 1995 attempted to identify those areas of strategic research likely to yield the greatest economic and social benefit in 10–20 years. As the name implies, “Technology Foresight” was supposed primarily to concern itself with the identification of future technological opportunities which had market potential, rather than giving prime attention to developments in basic science. As the OST commented, “the Steering Group decided to follow a largely market-driven approach by first identifying future markets and then the technologies, and related scientific research, which underpin them” (OST 1995, p. 22). The 1993 White Paper declared that Foresight would inform the decisions of individual firms and research organizations about future technologies and markets, help inform the policy-making process, increase communications between interested parties, and help inform the government’s own decisions about, and priorities for, the scientific base.
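The Delphi mechanism just described can be made concrete with a small simulation. The sketch below, in Python, is purely illustrative: the topic names, the 1–9 scoring scale, the number of rounds, and the “pull” parameter governing how far each expert moves toward the group median are all assumptions made for the example, not features of the actual U.K. survey instrument.

import random
import statistics

def delphi(ratings, rounds=3, pull=0.5):
    """Toy Delphi: `ratings` maps expert -> {topic: score on a 1-9 scale}.
    Each round, every expert sees the group median per topic and revises
    his or her score toward it; `pull` sets how strongly (an assumed
    parameter). Returns the topics ranked by final median score."""
    topics = list(next(iter(ratings.values())))
    for _ in range(rounds):
        # Feedback step: compute the group median for each topic ...
        medians = {t: statistics.median(r[t] for r in ratings.values())
                   for t in topics}
        # ... then let each expert revise toward that median.
        for r in ratings.values():
            for t in topics:
                r[t] += pull * (medians[t] - r[t])
    final = {t: statistics.median(r[t] for r in ratings.values())
             for t in topics}
    return sorted(topics, key=final.get, reverse=True)

# Hypothetical panel: 30 experts scoring three assumed topic areas.
random.seed(1)
topics = ["health informatics", "molecular genetics", "telemedicine"]
panel = {"expert %d" % i: {t: random.randint(1, 9) for t in topics}
         for i in range(30)}
print(delphi(panel))

Run with different seeds, the loop exhibits the characteristic Delphi effect: repeated feedback of the group median compresses the spread of individual scores round by round, which is one reason the method appeals to a state seeking consensus outside formal political processes.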
The process of consultation with scientists, business representatives, and government officials was not only supposed to help derive a set of recommendations for setting priorities but also to improve understanding between the scientific community, industry and government in turbulent times. In every respect the Foresight Programme placed an unprecedented degree of emphasis upon the execution of policy options at the local level, on terms which public and private actors themselves set locally, something quite distinct from previous high-level U.K. planning in science and technology (Elliot 1996). In short, the epitome of the “negotiation state” at work.
FORESIGHT IN MORE THAN ONE COUNTRY

The U.K. Foresight programme is just one of a number of national Foresight programmes (see Gavigan and Cahill 1997; Hetman and Kamata 1996; OECD 1996), such as those in Germany (Grupp 1994) and The Netherlands (van der Meulen 1996), and new programmes are currently
being considered in various countries, such as Sweden and Hungary (Balazs 1998). These programmes differ in the specific strategy they deploy. As history shows, radical or revolutionary ideas often spread from one country to another: while its “revolutionary” credentials may be questionable, Foresight has become one of the most successful policy manifestos of recent years, and is now found throughout the world, a sort of Foresight domino-effect (Hetman and Kamata 1996). Programmes exist currently in Japan, the U.S.A., The Netherlands, Germany, France, the U.K., Italy and Australia. These have varying pedigrees, but a real momentum has come to them all over the past 6 years. Although they vary in the particular processes used to fashion a Foresight agenda (Japan favors Delphi, while the U.K. puts more weight on sectoral Panels), the Foresight agendas that eventually appear are, not surprisingly, strikingly similar. As the POST (1997) report notes, “all clearly recognise the importance of information technology, communications, biological and other ‘core’ technologies” (p. 34).

This convergence of innovation frameworks reflects the internationalisation of R&D in many sectors, driven by global research networks (such as the Human Genome Project), international specifications that set the standards for new technology, and international regimes of governance that shape the terms on which new technology is to be developed, deployed or commercialized (such as the GATT/TRIPs agreement). The convergence is so strong that reading tables providing comparative lists of who is pursuing what where becomes (because it is so repetitive) a rather tedious task. Along the way, of course, many items have been squeezed off the list, presumably because they are regarded as too parochial, demand too many resources, lack the stability of existing R&D infrastructures, or favour groups other than industry and academia, the programmes’ principal agenda-setters. Had these other interests come through, it is quite likely that Foresight would be a much more diverse set of “story-telling” than the one currently on offer. Negotiations seem always to have similar outcomes, presumably because they favour some, rather than other, socio-technical networks championing particular techno-economic agendas.

Despite the convergent agendas, the programmes do differ across countries in important respects. Variation is evident not only in the way in which Foresight consultations have been conducted; it is also shown by the ways in which expert panels within different countries gave differing weight to particular types of constraint on future technological developments identified through the Foresight surveys (Cameron et al. 1996). Experts were typically asked to consider whether technical, economic or social and ethical constraints were more or less likely to act as obstacles to the achievement of technological priorities. It is noticeable that, over a wide range of technology fields, Japanese panels gave, in general, a higher rating to technical constraints than did similar panels in other countries (Gavigan 1997; Kuwahara 1996). While this might reflect a need for more basic research in key fields in Japan, it may also point, in Beck’s terms, to a less reflexive societal culture where the state is still committed to a strong executive, and not merely steering, role in setting technological futures. The Japanese case also suggests that there may be forms of Foresight which can prevail outside of the negotiation state.
This in turn suggests that we may be able to differentiate Foresight practices along a spectrum running from the provident state to the negotiation state. But this is beyond the ambitions of this paper.
TENSIONS IN FORESIGHT

Whatever specific national Foresight programme one is considering, most seem to carry a number of tensions that are difficult to resolve.

First, there is a tension between a reflexive, post-modern (Rip and van der Meulen 1996) strategy towards building a future innovation agenda through a rolling programme of non-linear, aggregative co-ordination (epitomized by consensus conferences, Delphi, scenarios, etc.), and a
linear, dirigiste approach where innovation strategy is steered from the centre (epitomized by national plans, agreed lists of “critical technologies” and a selection of priorities to be pursued). The tendency, of course, is for the latter to take precedence once “lists” are in place, generating both a technology and a policy path-dependency that become self-confirming. For example, as a recent review of Foresight has observed, “when [Japan’s] Science and Technology Agency study of 1971 predicted liquid crystal displays as a successor to the cathode ray tube, it was far from clear that this was the right technical assessment, but the resulting weight of Japanese investment in LCD production actually brought that forecast about” (POST 1997, Annex B, p. 12).

Second, there is a tension between the requirement to facilitate new networks on the one hand and, on the other, the need to use the available mechanisms (both public and private) that comprise the infrastructure of the national research system: Research Councils, Departments, universities, corporate labs, and so on. In these circumstances, the novelty of the networks might be much more limited than the programme managers would hope, and the agendas that are pursued merely extrapolations of network members’ existing R&D activities. While this may reduce one potential source of uncertainty (indeterminate and unstable networking), it is likely to mean that the programme makes a very limited contribution to the innovation system’s requirement for diverse and flexible institutional relationships between organizations in the R&D infrastructure.

Third, if it is to have any significant impact, Foresight has to be able to facilitate the translation of innovation agendas and needs across the different time-frames that derive from the different priorities of R&D actors in the innovation system. Foresight tends to assume a 10–20-year time frame in determining the technology options to be pursued (in some cases, such as Germany and Japan, 30 years), whereas few actors in the R&D system work on such time lines. There are considerable differences between technology sectors in the time taken for new product development: one new pharmacological compound may take 8 years to bring to market, during which time four generations of IT software have come and gone. Public sector organizations tied into the innovation system, such as a country’s national health care system, may be required to plan on an annualized budgetary basis, even in areas which relate to R&D. The vagaries of time impact on the programme itself: the rolling Foresight exercise can push the future into the future, where new developments are seen to require further time, or into the past, where promising options are dropped. The tensions between time frames that can slip, or are discordant with each other, mean that “the horizon” is more kaleidoscope than unitary.

These three features of TF programmes (relating to forms of control, the fostering of transitory networks and the alignment of different timeframes) mean that Foresight cuts across conventional, institutionalized structures and processes relating to the co-ordination and management of R&D. In doing so, it sets in train new relationships, not only technically based but also of an organizational and ultimately political nature, which are more difficult to co-ordinate than conventional R&D domains and which require new “stages” and new types of “conversations” among the players.
A recent example, drawn from the U.K.’s programme, is the establishment in September 1998 of a new Virtual Informatics Institute, which has (virtually) brought together academic, industrial and public health groups. This initiative has been taken by a number of actors who are trying to develop a new techno-economic network, with a new script and future scenario, in the area of health informatics, bringing foresight stories into the forum of health and medical research. This raises the question: how will these scripts fare in the institutional networks that make up the NHS, networks premised on the risk-reducing “provident state” whose political career, according to Beck, is now waning?
FORESIGHT, RISK AND THE NHS
What, then, is the relationship between the U.K.’s Technology Foresight programme and one of the country’s most important institutions heavily involved in researching and deploying new technologies, the National Health Service? As suggested above, there may well be some
dis-alignment between the two as a result of tensions between the emergent practices of the “negotiation state”, as expressed via Foresight, and the long-established practices of the provident state upon which the NHS has depended.

The U.K. Foresight programme’s Health and Life Sciences Panel produced a large number of core priorities for future RTD in basic and clinical health science. For example, priority has been given to neurosciences, molecular genetics, rDNA technologies, health informatics (such as telemedicine) and the impact of new demographic shifts, such as ageing, on medical delivery and research. Much of this agenda might be regarded as searching for answers to the “problems” (risks) caused by modern, scientific medicine: as Beck (and many others) notes, the success of medicine this century has been to eject people out of acute sickness into long-term chronic illness for which there is no obvious remedy, but with which is associated an increasingly aged population making higher demands on health care. The TF agenda, in its emphasis on neurosciences and genetics, seeks answers to these problems by encouraging R&D on the sources of chronic illness and disease in order to prevent and/or more effectively manage them.

The Health and Life Sciences Panel has, since its inception, been shaped by the academic research constituency within health and life sciences along with the RTD agendas of larger (primarily pharmaceutical) firms, reflecting the well-established academic-industry complex in this field in the U.K. It has produced a range of initiatives, sought to develop new networks and established various Working Groups to develop specific strands within the programme. As a result, the level of alignment among these actors has grown, and the localized agendas associated with the original expert groups have been gradually opened up and decontextualised such that other health and life science RTD actors are not only able to participate but, in some cases (such as public agencies expected to respond to government initiatives), required to do so.

However, as van Lente and Rip (1997) say, “the key phenomenon is the way in which actors position themselves and others in relation to a future technology” (p. 244). This positioning will reflect actors’ localized priorities and the activities they engage in to manage local RTD agendas and the knowledge-based needs these produce. As such, actors within the NHS R&D Executive have a range of localized practices that will shape their response to the recommendations and initiatives generated by the TF programme, a response that is driven primarily by the demands of clinical delivery. The innovative and ambitious agendas of Foresight become translated into the more prosaic agendas and language of health provision; as one NHS officer associated with the DoH Health Technology Assessment programme has observed:

You see, if you say to people in the health service “We’ve got to deal with this ageing population,” that’s too hard. It needs breaking down into: what you need to think about is fractured femurs.*

* Data from fieldwork associated with “Knowledge Sourcing and Foresight” project, SATSU, 1998.

While the Health and Life Sciences TF programme is mobilized around an innovation-led agenda with a 10–15-year timeframe, the NHS R&D Executive has to develop its RTD strategy with a much closer (3-year) horizon and with priority given to supporting a Health Technology Assessment programme whose broad aim is to reduce the costs of new technologies, a concern that can be found in many other health care delivery systems. Patient need and equity across all health care needs are portrayed as the proper basis for allocating resources within the NHS, even if what this means in practice is far from straightforward. Following the restructuring of 1991 and subsequent reforms towards “evidence-based research”, the NHS Executive, through its Research and Development Directorate, is trying to source and select existing and new clinically related research which can best meet NHS needs. These needs are prioritised in terms of effectiveness and “consumer” (patient) requirements. How this is to be done
was first outlined in the document Research for Health (DoH 1992). This was a major departure from previous practice, and has led to the identification of 21 priority areas which the Health Service’s Central Research and Development Committee agrees carry the best potential to meet its future requirements.

The language of “equity” and “patient need” is clearly derived from the lexicon of the provident state on which the NHS was built. The normative and institutional practices this generates are difficult to align with the timeframe, costs, uncertainties and risks of the innovation agenda inspired by Foresight. The positioning of NHS actors in relation to future technology is likely, thereby, to be quite different. At the same time, we should not romanticise the degree to which the NHS in practice has been able to meet patient “needs”. A modernist social management of risk, rather than a reflexive embracing of uncertainty, characterizes the way in which health risks are handled within the NHS, both by the formal procedures deployed by the Health Technology Assessment committees at national and regional level, and by the informal clinical procedures adopted at the point of delivery in primary and secondary care.

The language and discourse of “risk” have, in fact, occupied a prominent place in the policy lexicon of the NHS (and related social services) in recent years. But this should not be seen as a grasping of the Beckian script; on the contrary, it is a rationing-driven move to redefine “needs assessment” as “risk assessment” in order to use cash-limited budgets as effectively as possible: patient needs can then be more easily defined as within or outside of the responsibility of the Service. As Higgs (1998) has argued:

Assessment forms would distinguish between needs that were not important enough to warrant intervention and those that could result in harm if no action was taken (p. 184).

Those deemed to be “at risk” can be “surveyed” and “kept safe”, suggesting that “the utilisation of a risk discourse … has flowed from a modernist belief in the control of nature and social phenomena” (p. 185). In short, it is the prevention of risk, rather than its embrace, which is the order of the day.

Yet, if Beck’s risk society is upon us, it would seem that the NHS itself has to respond to the new uncertainties which it brings. Indeed, although the assessment of new technologies is driven by cash-limited budgets and the need for “evidence-based research”, we can see the NHS taking up, at least in form if not substance, some of the future scripts which Foresight story-telling encourages. This has been primarily in terms of the growing popularity of “horizon scanning” and “scenario building” among U.K. Health Authorities and within the NHS at a national level, as in the so-called “Madingley Scenarios” (Ling 1998). This latter document reflects the concern within the NHS over the transition from stable provident-state support for health care to a much more uncertain future, as is clear when it declares that “its primary purpose is to stimulate debate within the NHS about how best to respond to changes in the healthcare environment which are, to a large extent, not only beyond the control of the NHS itself but also beyond the control of governments” (emphasis added).
Nevertheless, even here, we can see that such scenarios are rather different from the prospective future options mapped out by Foresight, inasmuch as they are premised on a range of explicit socio-political conditions against which various technological futures are to be compared. One of these, for example, presupposes the maintenance, as far as possible, of socialized health care; another, in direct contrast, presupposes the privatisation and individualisation of health care provision. This explicit contextualisation of future technologies according to counterposing scenarios forces those involved to consider how much of the provident state is “up for negotiation”, dismantling and replacement by new innovation, new providers and new health care networks. It also forces those involved in scenario work in the NHS to consider the interests and boundaries of the constituency to be served. While the current U.K. Foresight programme makes much play of its emphasis on the “quality of life”, its natural constituency is everyone and no-one, precisely because it occupies a place in Beck’s sub-political arena.
CONCLUSION
I have tried to show in this paper how Beck’s concept of the risk society can be used to interrogate the emergence of Foresight within the science policy regimes of late modern “negotiation” states. I have argued that Foresight can be seen to express the attempt by the state to socially manage the uncertainties generated by the transition of technologies within the contemporary innovation system, while fostering the heterogeneity and risk-laden nature of this system. I have suggested, however, that the insertion of Foresight visions and practices within the interstices of modern institutions is highly problematic, and indicates a dis-alignment between the modernist provident state and the late-modern negotiation state, illustrating this through a brief discussion of the distinct priorities, timeframes and, crucially, languages of risk that separate the scripts and scenarios of Foresight from those of health innovation and delivery in the NHS. Ultimately, therefore, the risk society is one which produces innovation policies such as Foresight which, as Giddens (1998) would say, “manufacture” risk, while simultaneously fostering practices on the ground which attempt to prevent it. New technologies, and their associated techno-economic networks, are caught between these two, and can only hope to innovate successfully when they achieve a degree of socio-technical alignment between them.
ACKNOWLEDGMENTS
This paper is based on research, currently funded by both the Economic and Social Research Council and the European Commission, being conducted at SATSU in collaboration with partners in The Netherlands and Spain. I am grateful for the comments of SATSU colleagues Nik Brown, Annermiek Nelis, and Brian Rappert on earlier drafts of this paper.
REFERENCES
Balazs, K., Technology Foresight in Hungary, Mimeo, Technopolis, Brighton, 1998.
Beck, U., The Risk Society, Sage, London, 1991.
Beck, U., The re-invention of politics, In Reflexive Modernisation, Beck, U. et al., Eds., Polity Press, Cambridge, 1995.
Beck, U., Risk society and the provident state, In Risk, Environment and Modernity: Towards a New Ecology, Szerszynski, B. et al., Eds., Sage, London, pp. 27–43, 1996.
Beck, U., Politics of risk society, In The Politics of Risk Society, Franklin, J., Ed., Polity Press, Cambridge, 1998.
Bell, W., What do we mean by future studies?, In New Thinking for a New Millennium, Slaughter, R., Ed., Routledge, London, 1996.
Cameron, H. et al., Technology Foresight: Perspectives for European and International Co-operation, PREST Report, University of Manchester, commissioned by DGXII, European Commission, Brussels, 1996.
Collingridge, D., The Social Control of Technology, 1984.
Collingridge, D., When Science Speaks to Power, 1986.
Department of Health, Research for Health, NHS Executive, Leeds, 1992.
Edquist, C., Ed., Systems of Innovation, Pinter, London, 1997.
Gavigan, J. and Cahill, E. A., Overview of Recent European and Non-European National Technology Foresight Studies, Technical Report No. TR97/02, European Commission Joint Research Centre, Institute for Prospective Technological Studies, Seville, 1997.
Gibbons, M. et al., The New Production of Knowledge, Sage, London, 1995.
Giddens, A., Risk society: the context of British politics, In The Politics of Risk Society, Franklin, J., Ed., Polity Press, Cambridge, 1998.
Grupp, H., Technology at the beginning of the 21st century, Technol. Anal. Strategic Manage., 6(4), 379–411, 1994.
Hetman, F. and Kamata, H., Introduction: initiatives in futures research at the OECD, STI Rev., 17, 7–13, 1996.
Higgs, P., Risk, governmentality and the reconceptualisation of citizenship, In Modernity, Medicine and Health, Scambler, G. and Higgs, P., Eds., Routledge, London, pp. 176–197, 1998.
Kooiman, J., Modern Governance, Sage, London, 1993.
Kuwahara, T., Technology Foresight in Japan: a new approach in methodology and analysis, STI Rev., 11, 51–70, 1996.
de Laat, B. and Laredo, P., Foresight for research and technology policies: from innovation studies to scenario confrontation, In Technological Change and Organisation, Coombs, R. et al., Eds., Edward Elgar, Cheltenham, pp. 150–179, 1998.
van Lente, H. and Rip, A., The rise of membrane technology, Social Stud. Sci., 28(2), 1997.
Ling, T., The Madingley Scenarios: Two Scenarios for the Future Context of Healthcare, NHS Confederation, Cambridge, 1998.
Martin, B., Foresight in science and technology, Technol. Anal. Strategic Manage., 7(2), 139–168, 1995.
Miles, I., Foresight in social science, Evaluation Report, ESRC, January 1997.
OST [Office of Science and Technology], Realising our Potential, HMSO, London, 1993.
OST [Office of Science and Technology], Realising our Potential, HMSO, London, 1995.
OECD, Special issue on government technology foresight exercises, STI Rev., 17(1), 1996.
POST [Parliamentary Office of Science and Technology], Science Shaping the Future? Technology Foresight and its Impacts, HMSO, London, June 1997.
Rhodes, R. A. W., Understanding Governance, Open University Press, Milton Keynes, 1997.
Rip, A., An exercise in foresight: the research system in transition—to what?, In The Research System in Transition, Cozzens, S. et al., Eds., Kluwer, Dordrecht, 1990.
Rip, A. and Van Der Meulen, B. J. R., The postmodern research system, Sci. Public Policy, 23(6), 343–352, 1996.
Van Der Meulen, B. J. R., Heterogeneity and co-ordination: the experience of the Dutch Foresight Committee, STI Rev., 11, 161–188, 1996.
NEW TECHNOLOGY AND DISTANCE EDUCATION IN THE EUROPEAN UNION: AN EU–U.S. PERSPECTIVE*
Today, European institutions in the education sector must provide greater access to their services, with constrained budgets, to an increasing and diversifying population. In this context, the benefits of developing and using complementary online facilities for education are gradually being recognised. Strategy and policy now aim at delivering complementary educational resources for primary and secondary schools, and high-standard courses, complete programs or certification for higher education and adult training, substantially based on information and communication technologies. Online facilities in higher education are also expected to give scope for low-cost expansion through large economies of scale, or for new revenue centres. Education is seen as an important emerging value-added sector in which the “E-learning” segment would play a significant role.** Numerous pilot experiments have been conducted in Europe in the last decade. They show that the Internet and new media, when coupled with innovative pedagogical thinking, revitalise the processes of teaching and learning. However, most of these experiences have not been lodged in the mainstream of the academic effort. Findings relate to a limited range of learning outcomes, which limits comparative-effectiveness analysis and restrains large-scale transfer of results.
* Dr. Alain Dumort, European Commission, EU Fellow at the University of Southern California. Taken from “New Media in Higher Education and Learning,” International Information, Communication and Society Conference, 27–30 October 1999, Annenberg School of Communication, USC, Los Angeles. ** According to the International Data Corporation, the market would be worth $2 billion in 2001, representing over 15% of the total education market. However, the market in Europe remains small (2% of the total education budget) and fragmented because of the diversity of school curricula and languages.
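The economies-of-scale expectation noted above is easy to make concrete: a fixed course-development cost, spread over growing enrollment, drives the average cost per student towards the marginal cost of serving one more learner. The following is a minimal sketch of that arithmetic; all figures are invented for illustration and are not taken from the studies cited here.

```python
# Hypothetical illustration of economies of scale in online course delivery:
# a one-off development cost amortized over enrollment, plus a small
# marginal cost per student. All numbers are invented for illustration.

FIXED_DEVELOPMENT = 200_000.0    # assumed one-off cost to build the course
MARGINAL_PER_STUDENT = 30.0      # assumed tutoring/bandwidth cost per student

def cost_per_student(students: int) -> float:
    """Average cost per student at a given enrollment."""
    return FIXED_DEVELOPMENT / students + MARGINAL_PER_STUDENT

for n in (100, 1_000, 10_000):
    print(f"{n:>6} students: {cost_per_student(n):8.2f} per student")
# 100 students -> 2030.00; 10,000 students -> 50.00
```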
The challenge today in Europe is to go beyond the trial stage. Convergent driving forces allow a rapid pace of change towards the general use of new media for education through a sustainable market:
† change in pedagogical thinking, in which learners play a much more active role;
† decrease of equipment and communication prices, with special conditions for educational institutions, e.g., flat-rate telecommunication tariffs, reduced Internet access fees;
† fast-growing Internet expansion;*
† commitment of public authorities and development of partnerships between institutions and businesses;
† new thinking on the mission and “core” functions of universities.
Most European countries have concentrated on developing information and communication technology policy in the schools (2), and it is only recently that they have begun to realise the importance of initiatives for the higher education sector (1).
FACTS AND TRENDS IN THE HIGHER EDUCATION SECTOR
A whole new sector of higher education is emerging alongside traditional, national and state-regulated systems, through branch campuses, franchising and now by electronic means. The sector is mainly driven by American providers, which fully exploit the attraction of American universities for European students.** Concerns are growing in Europe about the inertia of European institutions and their capacity actually to provide online education facilities. The risk would be to lose pedagogical credibility in the new networked environment, and therefore prestige, as well as market opportunities in a potentially highly profitable segment. Until recently, however, the trend was largely ignored by public authorities as well as by universities in Europe. In contrast to the U.S. situation, almost all European universities are publicly funded. Education is considered to play a central role in ensuring social equality, a role which must be fulfilled by institutions. While higher education cannot be a business-as-usual market from which profit can be raised, universities enjoy greater autonomy but share budgetary restrictions, which lead them to re-examine which kinds of services can be provided better, and how. Most universities in Europe are now planning to use, or are already using, Internet facilities to complement current activities. The introduction of online facilities has been partly linked to strategy on Open Universities. Five countries have created a national Open University since 1970: Germany (Fern Universitat), the Netherlands (Open Universiteit), Portugal (Universidade Aberta), Spain (Universidad Nacional de Educacion a Distancia) and the United Kingdom (Open University). The other members of the European Union promote open and distance learning directly through existing universities or networks of universities (Consorzio Nettuno in Italy, or the virtual open university in Finland through the Finnish University Network, FUNET, for example). Networking and bench-marking apply at the European level as well, with the European Association of Distance Teaching Universities, which aims at pooling and providing distance education programs to 1 million students through 18 non-profit institutions from 14 countries. Co-operation between European higher education institutions is driven through joint programs and common diplomas, an increasing part of which are provided online.***
* +55% in 1997/98 in Europe, +34% in the U.S.A.; +40% expected in Europe in 1998/99, +27% in the U.S.A. Morgan Stanley Dean Witter, The European Internet Report, June 1999.
** The number of Europeans studying in U.S. universities exceeds by far the number of U.S. students in Europe.
*** For example, the MBA for executives jointly offered part-time online by the Institut d’Administration des Entreprises de l’Universite d’Aix-Marseille and the Dutch Open University.
However, current practices rely mainly on new systems for delivering information and communication, with few innovations in pedagogy or in the accreditation and certification of knowledge. Most of the experiments are still technology-driven and based on a traditional academic paradigm. The institutions offer online courses and programs as an extension of their campus-based programs. Recent surveys reveal that a majority of university teachers use technologies in their own research and in communication and office tasks. Actual use in teaching remains quite limited, with the exception of particular topics (statistics, for example). Shortage of time is the most important obstacle. Shortcomings in skills and lack of pedagogical and technical support are other common obstacles raised by teachers. Policy makers are increasingly concerned about how universities should respond to the new E-learning environment. A major study is being completed on the impact of virtual and corporate universities in the United Kingdom. Commissioned by the Higher Education Funding Council for England, the report examines implications particularly for the regulatory framework in which higher education institutions operate, accreditation and quality assurance issues. A working group has been established at European level with a view to presenting recommendations to the European Commission on the opportunity to constitute a Virtual European Higher Education Space through extensive co-operation between universities.
EUROPEAN SCHOOLS ARE GOING ONLINE
In 1995, less than 3% of European schools were connected to the Internet and the number of students per computer in secondary schools was over 25. By the late 1990s, around 90% of secondary schools are equipped and wired, the number of students per computer has dropped to an average of 10, and, crucially, every new teacher and an increasing number of on-the-job teachers are being trained in using information and communication technologies. Equipment rates in European schools are growing quickly, despite high regional and local disparities, and the quality of the European educational system is not contested. European and U.S. records are converging: while equipment is less widespread in European classrooms than in the U.S.A., it is often more recently updated and upgraded, offering better capacity and functionality.* These significant changes are the result of action plans implemented at national and regional level since 1995 to equip and network schools (an exclusive competence of local authorities), to train teachers, and to promote the development of appropriate electronic content through public–private partnerships.** All these initiatives have been supported and complemented at European level within the action plan “Learning in the information society 1996–1998” and the 1996 resolution on educational multimedia of the Council of Ministers, in order to foster European cooperation in education. Beyond the dramatic success of the equipment strategy, many obstacles remain in Europe as well as in the U.S.A. Teachers’ ability to integrate new media effectively into classroom instruction is still limited. In the U.K., the British Educational Communications and Technology Agency reported that less than one third of trained teachers actually use technology in day-to-day practice. In the U.S.A., according to a survey conducted in 1998 by the Department of Education, 80% of teachers reported not feeling prepared to integrate technology into the classroom.
* Fifty-one percent of U.S. public school instructional rooms had Internet access in 1998. No figures are available at the European level, but the penetration of the Internet into secondary school classrooms should be lower than 20%, given preferences for equipping special lab-rooms in the school.
** Amongst others, “Programme d’Action Gouvernementale pour la Societe de l’Information” in France, “The National Grid for Learning” in the United Kingdom, “Schulen ans Netz” in Germany, “Schools IT 2000” in Ireland, “Towards an Information Society” and “National Training Strategy for Education and Research” in Finland, “Nonio XXI Century Programme” in Portugal.
A 1999 survey by USA Today shows that only 30% of teachers consider that technology fits the school curricula. Results are comparable in Finland and in other pioneer countries in this field. Lessons from these experiences show that school teachers, like their university colleagues, often find it hard to change from traditional teaching methods. Technology itself improves neither the learning nor the teaching process; it must be embedded in the complex school environment and in the curricula. In this context, a technology-push strategy aimed at quickly imposing a virtual-learning paradigm may lead to fundamental misconceptions. Much remains to be done in Europe to ensure equal use of technology as a tool for learning. Access to and use of information and communication technologies still depend on family income. Despite the high growth rate of Internet expansion, the gap in home Internet access based on education and income level has increased. Computers and the Internet are also still mainly used by boys and men. These social and gender differences in home use tend to be duplicated at school for students and also for teachers, the very large majority of whom are women.
WHAT NEXT FOR EDUCATION IN EUROPE?
The challenge is no longer to “learn to master new technologies,” but rather to acquire the other fundamental competencies needed to live and work in the networked society. Schools and universities must therefore support each youngster in handling information in a critical and constructive way, in order to communicate and work successfully in a multicultural environment. The successful spread of educational technology depends partly on policy issues. One is finance and public investment to sustain demand from educational institutions: how much should be allocated to content compared to technology? Resolving the legal issues of taxation, intellectual property rights, security of networks and data, and privacy is also decisive to stimulate supply. Beyond the economic and legal issues, particularly for broadcasters and multimedia publishers, the evaluation of the educational and social benefits of new media remains challenging.
† How can the Internet reinforce education, through broadening content delivery, without sacrificing quality?
† What are the socio-economic benefits of “tailor-made” education? Private, demand-led supply of online learning material increases the risk of inequity between people and between countries in access to education and knowledge. Closing the “Digital Divide” is at the top of the political agenda. Pro-competition policies, to reduce prices of equipment and communication services, and universal service policies, to ensure affordable access for community institutions such as schools, will continue to be part of the solution in Europe and the U.S.A.
† How can diversity of culture and language be valorized in an emerging market dominated by Anglo-American content, supply and technology investment?
New challenges call for a better understanding of the issues at stake and for appropriate concerted decisions between all the actors involved. The European Commission is engaged in facilitating the process of co-operation between and with the 15 Member States of the European Union in order to continually improve the effectiveness and efficiency of the educational system. Encouraging the use and development of information and communication technologies constitutes another priority for the new Commission. A new “Digital Initiative” is being prepared under the leadership of President Prodi to ensure that all young Europeans will benefit from the opportunities offered by new technologies.
REFERENCES
Confederation of European Rectors’ Conferences, Trends in learning structures in higher education in Europe, Report to the European Commission for the Meeting of the Council of Ministers of Education, Tampere, Finland, September 1999.
Mutschler, D., in collaboration with Amor, P. and Laget, P., Curriculum for the 21st Century—Technology in Education, Delegation of the European Commission to the U.S.A., Washington, 1999.
European Commission, Information Society Project Office, Measuring Information Society, 1997 Survey, Belgium, 1998.
European Commission, Learning in the information society, Communication to the European Parliament and the Council of Ministers, Belgium, 1996.
European Commission, Report of the Task-Force on Educational Multimedia, Belgium, 1996.
European Parliament, Scientific and Technological Options Assessment, The Application of Multimedia Technologies in School: Use, Effect and Implications, Belgium, 1997.
Fisher, C., Dwyer, D., and Yocam, K., Education and Technology: Reflections on Computing in Classrooms, Apple Press, U.S.A., 1996.
Hakkarainen, K. and Vosniadou, S., Evaluation report on the Netd@ys Europe 1998 experience, Report to the European Commission, Belgium, 1999.
Haymore Sandholtz, J., Ringstaff, C., and Dwyer, D., Teaching with Technology, Teachers College Press, U.K., 1997.
Institute for Higher Education Policy, What’s the Difference?, U.S.A., 1999.
Lehtinen, E. and Sinko, M., The Challenge of ICT in Finnish Education, Atena, Sitra Edition, Finland, 1999.
Pouts-Lajus, S. and Riche-Magnier, M., L’enseignement ouvert et a distance en Europe: mythe et realites, Revue internationale d’education, numero 23, Sevres, France, 1999.
Pouts-Lajus, S. and Riche-Magnier, M., L’ecole a l’heure d’Internet—les enjeux du multimedia dans l’education, Nathan Edition, France, 1998.
INFORMATION TECHNOLOGY AND ELEMENTARY AND SECONDARY EDUCATION: CURRENT STATUS AND FEDERAL SUPPORT*
SUMMARY
Interest in the application of information technology to education has risen among federal policy makers, sparked partly by concern over the poor performance of U.S. elementary and secondary school students and a growing perception that technology might improve that performance. Since the 1980s, schools have acquired technology at a fast pace. Today the ratio of students to computers is 6 to 1. Despite these gains, schools have a sizeable stock of old, outdated technology. Further, students have substantially different degrees of access to technology. Perhaps of greater concern is that, even when students have access to the technology, relatively little use is made of it in schools. Research suggests that beneficial effects of technology on achievement are possible, but the effects appear to depend largely upon local school factors. Strengthening teachers’ capabilities with technology is considered one essential step. Another is to develop curricula that integrate technology into instruction. The financial cost of acquiring, maintaining, and using technology in schools is likely to be a significant hurdle. Estimates of these costs vary widely. Any estimate must be approached with caution because it will be based upon widely varying assumptions about such elements as the configuration of hardware, software, training, and curriculum development. While there is no set figure on the amount of federal investment being made in technology, the federal government appears to be providing a billion dollars or more annually in support of educational technology through a fragmented effort, with support flowing through multiple agencies and many different programs. A large proportion of that assistance comes from federal programs for which technology is not a primary focus. Additionally, the E-rate program, established through the Telecommunications Act of 1996, has provided billions of dollars in discounts for telecommunications services, Internet access, and internal connections to schools and libraries.
* Patricia Osorio-O’Dea, Analyst in Social Legislation, Domestic Social Policy Division.
These discounts are funded by interstate telecommunications carriers. The program has been challenged in the Congress for, among other reasons, being more expansive than was intended. Discount commitments totaling over $3.5 billion for its first and second years have been made; the third year of the program is now being implemented. Shaping federal policy in this area, particularly given that elementary and secondary education is a state and local responsibility, requires addressing at least four major questions: Should the federal government provide support? What activities, if any, should it support? How should this support be provided? What level of support should be provided? This document updates the original CRS Report 96-178, “Information Technology and Elementary and Secondary Education: Current Status and Federal Support,” originally written by James Stedman.
RECENT ACTION
The 106th Congress reviewed proposals to amend several of the existing federal education technology programs, such as the Technology Literacy Challenge Fund, as it considered reauthorization of the Elementary and Secondary Education Act (ESEA).1 None of these proposals was enacted. The 107th Congress is expected to once again consider education technology programs as it reauthorizes the ESEA.
INTRODUCTION
In their quest for ways of making elementary and secondary schools more effective, policy makers at all levels have looked to new information technology. As they do, they are faced with myriad claims about technology’s actual and potential impact on schools, ranging from assertions that it may revolutionize schooling to warnings that technology may have little impact and, at worst, may exacerbate current problems.2 This report provides an analysis of issues involving the application of information technology to elementary and secondary education, and federal policy making in this area. The report includes the following topics:
† sources of the current interest in bringing technology to education,
† status of technology in schools,
† major issues involving the integration of technology into schools, such as the impact of technology on achievement and the cost of technology,
† federal support for technology in schools, and
† major federal policy questions.
This report will be updated periodically to reflect substantive action on federal programs and policies described below. For this report, the terms “information technology” and “technology” are used to identify a broad array of different equipment and materials, such as: computer hardware; compact disc (CD-ROM) and video disc players; computer software; electronic databases; television; video material; satellites, modems, and other telecommunications equipment; and electronic networks based on that telecommunications equipment.
CURRENT INTEREST IN TECHNOLOGY FOR ELEMENTARY AND SECONDARY EDUCATION
For nearly two decades, policy makers at the federal, state, and local levels have been concerned about the poor academic performance of U.S. students relative to their counterparts in many other
industrialized nations.3 At the same time, policy makers have increasingly recognized that technology is becoming a central component of many jobs, changing the skills and knowledge needed to be successful in the workplace.4 This anxiety about the academic competitiveness of U.S. students coupled with changes in needed work skills has heightened interest in integrating technology into the elementary and secondary curriculum in an effort to address both sets of needs. With regard to academic performance, many policy makers and analysts believe that new information technology can increase students’ achievement and mastery of challenging curricula by providing students and teachers with new environments for learning, new ways of instructing, expanded access to resources, constructive contact with other students or teachers, and new tools to manipulate and present data.5 Concurrently, many contend that, with an increasingly technological workplace, schools should be equipping students with a different mix and level of skills. These, it is argued, include familiarity with technology and the ability to use it in the workplace. Further, the integration of technology into work is seen as creating demands for higher levels of mathematics and science competence, and such other skills as being able to work in teams, exercise judgment on work-related issues, and quickly master new skills.6 In the eyes of many critics, schools as currently structured are relics of an industrial age that is passing, inappropriately engaged in “prepar[ing] students for a world that no longer exists, developing in students yesterday’s skills for tomorrow’s world.”7
STATUS OF TECHNOLOGY IN SCHOOLS
Information technology is spread broadly, but not deeply, across elementary and secondary education. Despite nearly two decades of influx of technology, the extent to which elementary and secondary schools provide students with continuing and effective access to new information technology remains limited. Throughout the 1980s and 1990s, much of the focus has been on the presence of computers in schools. Today there are an estimated 8.2 million instructional computers in schools, with an additional 600,000 used for administrative purposes.8 This acquisition of computers has dramatically cut the ratio of students to instructional computers. The most recent figures indicate that in 1999 the students-to-computer ratio was 6 to 1.9 Despite this sizeable reduction, many experts believe that current ratios still do not provide the level of access necessary to realize this technology’s educational benefits. For example, the students-to-computer ratio for computers with Internet access was 9 to 1 in 1999.10 Further, some analysts believe that a lower students-to-computer ratio—four or five to one—should be the target for schools.11 As is explored in a later section of this report, the ratio of students to computers continues to provide relevant information for policy making, particularly when it is disaggregated to consider the access that different kinds of schools and students have to computer technology.12 A substantial number of schools have acquired the newest information technology during the 1990s. Between 1994 and 1999, the percentage of public schools with access to the Internet rose from 35% to 95%.13 In 1999, the percentage of classrooms, computer labs, and library/media centers connected to the Internet was 63%, 21 times greater than the 1994 percentage (3%). Internet access is considered in more detail later in this report. Despite substantial acquisition of other elements of the new technology during the 1990s, their availability may still be relatively limited.14
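The ratios cited above are straightforward to reproduce. A minimal sketch follows; the enrollment figure is an assumption added for illustration (the report itself does not give one), while the count of instructional computers comes from the text.

```python
# Reproducing the students-to-computer ratios discussed above.
# The enrollment figure is an illustrative assumption; the count of
# instructional computers is taken from the text.

enrollment = 49_000_000               # assumed K-12 enrollment (illustrative)
instructional_computers = 8_200_000   # from the text

print(f"students per instructional computer: "
      f"{enrollment / instructional_computers:.1f}")   # ~6.0, i.e., 6 to 1

# A 9-to-1 ratio for Internet-connected computers implies roughly
# enrollment / 9 such machines:
print(f"implied Internet-connected computers: {enrollment / 9:,.0f}")  # ~5.4 million
```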
MAJOR ISSUES
In this section, we briefly consider a number of the major issues that directly affect the effort to integrate technology into elementary and secondary education:
† impact of technology on academic achievement,
† cost of technology,
† differences in access to technology,
† access to the Internet,
† amount of technology use and kinds of uses in schools,
† technology-related knowledge and skills of the teaching force, and
† integration of technology into the curriculum.
Impact of Technology
One of the key questions for policy making at all levels is whether information technology has, or can have, beneficial effects on education. Overviews of available research suggest that positive effects are possible, not only for students, but also for teachers and the schooling process in general. Among the reported outcomes for students are increased academic achievement, more positive attitudes about school work, and achievement gains by special populations of students (e.g., those with learning disabilities). Consequences for teachers reportedly include greater interaction with colleagues, and changes in teaching styles from lecturing to coaching and facilitating students’ work. It is suggested that information technology can be an important force for restructuring the educational process, not just the role of teachers, but also such aspects of education as how time is used in schools; how lines of authority are drawn among teachers, administrators, schools, school districts, etc.; and where schooling occurs.15

Some analysts have reached somewhat less positive conclusions concerning the current or potential impact of technology in elementary and secondary schools. They raise concerns about such issues as whether educational institutions will be able to use technology effectively, whether all groups of students will be able to take advantage of technology, whether technology will isolate students rather than bring them together, whether technology will be a distraction from more serious academic learning, whether technology investment will divert resources from other critical school needs, whether the evidence is really persuasive that technology can improve academic performance, and whether technology will be used to support current educational practice and structure, rather than to promote change.16

Policy makers are particularly interested in the effects of information technology on students’ academic achievement. Traditional analysis of the academic effects of technology seeks to address this interest by following the “horse race” approach of comparing the educational impact of one kind of technology with another or with conventional instruction. The focus is on identifying winners and losers. Available data from such studies suggest that some uses of technology, such as computer-assisted instruction, are found to be either more effective than, or equally as effective as, conventional instruction.17 However, such a generalization has serious limitations. For example, studies covering shorter periods of time have found stronger positive results than have studies assessing effects over a longer period of time. This led one analyst to suggest that “novelty effects boost performance with new technologies in the short term but tend to wear off over time.”18 Perhaps even more important is the growing understanding that “horse race” studies may not provide sound guidance for policy making because they fail to account for the local context within which technology is applied in schools. The elementary and secondary education enterprise is exceedingly complex and the circumstances under which technology may be introduced into the instructional process will vary among the approximately 14,800 school districts, over 85,000 public schools, and 26,000 private schools in which children are educated. Linda Roberts, the director of the Office of Educational Technology in the U.S.
Department of Education (ED), has asserted that “under the right conditions new interactive technologies contribute to improvements in learning.”19 For policy makers, the research on the effects of technology in education may be most important for identifying those right conditions under which technology will be effective. Among the conditions that may make technologies more likely to be effective in schools are careful and systematic planning, direct access for students and teachers to a broad variety of technologies and supporting materials, and the time and opportunities for teachers to become
well trained and comfortable in the use of these technologies. For instance, a recent study by the Educational Testing Service (ETS) found that technology may be an important learning tool if teachers are well-skilled in its use.20 Using data from the math section of the 1996 National Assessment of Educational Progress (NAEP), the study found that 8th graders whose teachers used computers for “simulations and applications,” which are generally associated with higher-order thinking skills, performed better on the NAEP than did students whose teachers did not use computers in this manner. Students whose teachers used computers mostly for “drill and practice” performed worse on the NAEP.

Cost of Technology
Estimates of the total cost associated with equipping schools with advanced technology (including access to networks and to data, voice, graphical, and video services) vary greatly. They provide a moving target for policy analysis because they are dependent upon a host of assumptions concerning the configuration of technology being established; they become outdated quickly as prices change for some elements in the technology configuration. Further, comparing different estimates by different analysts is often exceedingly difficult. As a result, any estimate should be used with caution. Note that the data discussed below, while providing an example of the estimated cost of equipping schools with technology, are fairly dated; they are not intended to reflect the current potential cost of equipping K-12 schools with technology, particularly given the substantial recent investments that have been made. The most recent data are from a 1995 analysis of several options for connecting schools to the Internet; these data suggest how wide the range in cost estimates can be.21 According to this analysis, a “baseline” model for Internet connection, which provides students and teachers with the ability to engage fully in telecommunications opportunities, involves networking of the computers in each school, connections for every classroom, equipment and resources at the school site necessary to permit multiple connections to the Internet and to permit the school to reduce its dependence upon the school district’s central computers, significant renovation of school facilities, and extensive teacher training. The range of estimated up-front costs is between $9.35 billion and $22.05 billion, with annual costs ranging from $1.75 billion to $4.61 billion. At the high end of this range of estimated costs is a model that builds on the baseline model by providing each student and teacher with a technology-rich environment complete with a high-capacity computer and other technology, and full access to the Internet. According to this analysis, estimated up-front costs range between $51.2 billion and $113.52 billion; annual costs may range from $4.02 billion to $10.03 billion. These and other cost estimates depend upon several key factors, including the configuration of technology being acquired;22 the amount of renovation and repair (retrofitting) necessary for school buildings; and how the technology is to be supported and to what extent.

Differences in Access to Technology
Certain types of schools, as well as the schools attended by certain groups of students, are less likely to be able to provide access to technology than are other schools. This has been a recurring issue since personal computers first became commercially available in the late 1970s and began making their way into schools.
Of particular concern to educators and policy makers in the 1980s was that schools serving substantial populations of low-income or minority students had fewer computers relative to the size of their enrollment than did schools with more affluent students or fewer minority students.23 Other kinds of schools that appeared to provide significantly less technology access to students included: (1) large schools, (2) urban schools, (3) private schools, and (4) elementary schools.24 More recent data suggest that somewhat similar patterns of uneven access to technology still apply to different groups of schools, but that some changes may have also occurred. Perhaps one of
the most significant findings of the 1992 IEA Computers in Education Study was that differences in the students-to-computer ratios in U.S. schools based on their minority enrollments have largely disappeared at the high school level, are very small at the elementary level, and are modest at the middle school level.25 Nevertheless, other data continue to depict substantial disparities in access for schools with student populations that are substantially minority or poor.26 For example, according to one source, in 1995–1996, schools with enrollment that was less than 25% minority had a students-to-computer ratio of approximately 10 to 1; schools with 90% or more minority enrollment had a ratio of 17.4 to 1.27 Of concern to some analysts is the substantially lower access to computers that black and Hispanic students have at home than do white students.28 According to U.S. Census data, in 1997, 54% of white students in grades 1 through 8 used a computer at home, while only 21% of black students and 19% of Hispanic students did so. For students in grades 9 through 12, 61% of whites, 21% of blacks, and 22% of Hispanics used computers in their homes.

Access to the Internet
A central theme of recent analyses of educational technology is the importance of providing elementary and secondary schools and classrooms with access to telecommunications networks, particularly the Internet (see Figure 3.1 and Figure 3.2). Advocates of such access argue that students and teachers need to go “online” because of expected educational benefits from exploring the wealth of information now being made available on electronic networks, sharing information, and communicating with students, teachers, and experts in various fields. Although it is too early to identify the overall educational effects that access to telecommunications networks will have, there is a growing literature describing students’ and teachers’ reportedly positive experiences with particular online applications or activities.29 Importantly, there is a recognition that access to the Internet is unevenly shared by schools across the country. Surveys by ED have assessed public schools’ access to telecommunications technology, particularly connections to the Internet.30 As shown in the figures below, schools in these surveys were differentiated by instructional level, enrollment size, metropolitan status,31 geographic region, minority enrollment, and income of students.32 The figures below show that elementary schools, city schools, schools in the Central region of the country, and schools with
[FIGURE: Percent of schools without Internet access, 1998 and 1999, by school characteristic (elementary vs. secondary, among others).]
FIGURE 8.3 Seismic signals from China’s nuclear test at Lop Nor (26 July 1996). Source: Prototype Comprehensive Test Ban Treaty International Data Center.
PART C: TECHNICAL FILE
Definitions
Surveillance is the systematic investigation or monitoring of the actions or communications of one or more persons. Its basic form, physical surveillance, comprises watching (visual surveillance) and listening (aural surveillance). In addition to physical surveillance, several kinds of communications surveillance are practiced, including mail covers and telephone interception. The popular term electronic surveillance refers both to augmentations of physical surveillance (such as directional microphones and audio bugs) and to communications surveillance, particularly telephone taps. Data surveillance, or dataveillance, is the systematic use of personal data systems in the investigation or monitoring of the actions or communications of one or more persons. Dataveillance is of two kinds: “personal dataveillance,” where a particular person has been previously identified as being of interest, and “mass dataveillance,” where a group or large population is monitored in order to detect individuals of interest and/or to deter people from stepping out of line. Surveillance technology systems are mechanisms which can identify, monitor and track movements and data. Privacy is the interest that individuals have in sustaining a “personal space” free from interference by other people and organizations. Information privacy, or data privacy, is the interest an individual has in controlling, or at least significantly influencing, the handling of data about themselves. Confidentiality is the legal duty of individuals who come into the possession of information about others, especially in the course of particular kinds of relationships with them.
SURVEILLANCE: TOOLS AND TECHNIQUES—THE STATE OF THE ART
Physical Surveillance
Electronic devices have been developed to augment physical surveillance and offer new possibilities, such as:
† Closed-circuit TV (CCTV)
† Video cassette recorders (VCR)
† Telephone bugging
† Proximity smart cards
† Transmitter location
† E-mail at the workplace
† Electronic databases, etc.
Communications Surveillance
Communications Intelligence (COMINT), involving the covert interception of foreign communications, has been practiced by almost every advanced nation since international communications became available. COMINT is defined by the NSA (National Security Agency, U.S.A.), the largest agency conducting such operations, as “technical and intelligence information derived from foreign communications by other than their intended recipient.” COMINT is a large-scale industrial activity providing consumers with intelligence on diplomatic, economic and scientific developments. The major English-speaking nations of the UKUSA alliance support the largest COMINT organization. Besides UKUSA, there are at least 30 other nations operating major COMINT organizations. The largest is the Russian FAPSI, with 54,000 employees. China maintains a substantial Signals Intelligence (Sigint) system, two stations of which are directed at Russia and operate in collaboration with the U.S.A. Most Middle Eastern and Asian nations have invested substantially in Sigint, in particular Israel, India and Pakistan. COMINT organizations use the term International Leased Carrier (ILC) to describe the interception of international communications.
ILC communication collection (COMINT collection) cannot take place unless the collecting agency obtains access to the communications channels it wishes to examine. Information about the means used to gain access is, like data about code-breaking methods, the most highly protected information within any COMINT organization. Access is gained both with and without the complicity or cooperation of network operators. Different activities have been developed for this purpose, such as:
† Operation SHAMROCK
† High-frequency radio interception
† Space interception
† Sigint satellites
† COMSAT ILC collection
† Submarine cable interception
† Intercepting the Internet
† Covert collection of high-capacity signals
† New satellite networks
Apart from global surveillance technology systems, additional tools have been developed for surveillance. One such tool, used for information transferred via the Internet or via digital global telecommunication systems, is the capture of data with Taiga software. Taiga software can capture, process and analyze multilingual information in a very short period of time (1 billion characters per second), using keywords.

The Use of Surveillance Technology Systems for the Transmission and Collection of Economic Information
As the Internet and other communication systems reach further into everyday life, national security, law enforcement and individual privacy have become perilously intertwined. Governments want to restrict the free flow of information, and software producers are seeking ways to ensure that consumers are not bugged from the moment of purchase. All developing communication technologies, digital telephone switches, and cellular and satellite phones have surveillance capabilities. On the other hand, the development of software that contains encryption (telephones which allow people to scramble their communications and files to prevent others from reading them) has gained ground.

CALEA System
The first effort to heighten surveillance opportunities (made by the U.S.A.) was to force telecommunication companies to use equipment designed to include enhanced wiretapping capabilities.

ECHELON Connection
The highly automated UKUSA system for processing COMINT, often known as the ECHELON system, was brought to light by the author Nicky Hager in his 1996 book, “Secret Power: New Zealand’s Role in the International Spy Network.” For this, he interviewed more than 50 people who work or have worked in intelligence and who are concerned about the uses of ECHELON. It is said, “The ECHELON system is not designed to eavesdrop on a particular individual’s e-mail or fax link. Rather the system works by indiscriminately intercepting very large quantities of communications and using computers to identify and extract messages from the mass of unwanted ones.”
ECHELON became well known following the previous STOA interim study (PE 166.499), entitled “An appraisal of technologies of political control.” There it was reported to be a worldwide surveillance system, designed and coordinated by the NSA (U.S.A.), that intercepts e-mail, fax, telex and international telephone communications carried via satellites, and that has been operating since the early 1980s; it is part of post-Cold War developments based on the UKUSA agreement signed between the U.K., U.S.A., Canada, Australia, and New Zealand in 1948. According to the interim study (PE 166.499) of 1998, there are reported to be three components to ECHELON:
1. The monitoring of Intelsats, international telecommunications satellites used by phone companies in most countries. A key ECHELON station is at Morwenstow in Cornwall, monitoring Europe, the Atlantic and the Indian Ocean.
2. ECHELON interception of non-Intelsat regional communication satellites. Key monitoring stations are Menwith Hill in Yorkshire and Bad Aibling in Germany.
3. The surveillance of land-based or under-sea systems which use cables or microwave tower networks.
Each of the five centers supplies the other four with “Dictionaries” of keywords, phrases, people and places to “tag,” and a tagged intercept is forwarded straight to the requesting country. The 1999 STOA report prepared as a contribution to this study, entitled “The state of the art in communications intelligence (COMINT) of automated processing for intelligence purposes of intercepted broadband multi-language leased or common carrier systems, and its applicability to COMINT targeting and selection, including speech recognition” (PE 168.184/part3/4), provides new documentary evidence about ECHELON. It reports that:
† In the mid 1980s, extensive further automation of ECHELON COMINT processing was planned by the NSA as project P-415.
† The key components of the new system are “Local Dictionary computers,” which store an extensive database on specific targets. An important point about the new system is that, before ECHELON, different countries and different stations knew what was being intercepted and to whom it was sent. Now, all but a fraction of the messages selected by Dictionary computers at remote sites are forwarded to the NSA or other customers without being read locally (a minimal sketch of this keyword-tagging flow follows this list).
† A Dictionary computer is operating at the Westminster, London office of GCHQ (Government Communications Headquarters, the Sigint agency of the U.K.). The system intercepts thousands of diplomatic, business and personal messages every day. The presence of Dictionary computers has also been confirmed at Kojarena, Australia, and at GCHQ’s Cheltenham, U.K. office.
† There are satellite receiving stations in Sugar Grove/Virginia, Sabana Seca/Puerto Rico and Leitrim/Canada working as ECHELON interception sites.
† New Zealand’s Sigint agency operates two satellite interception terminals at Waihopai, covering the Pacific Ocean, which work as ECHELON interception sites as well.
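To make the Dictionary mechanism described above concrete, here is a minimal sketch of keyword-based tagging and routing: each intercepted message is matched against the keyword sets tasked by different agencies, and matching messages are forwarded, tagged, to the requesting agency. All keywords, agency names and messages below are hypothetical; this illustrates the described flow, not the actual ECHELON software.

```python
# Hypothetical sketch of "Dictionary"-style keyword tagging and routing.
# Every name and keyword here is invented for illustration.

from collections import defaultdict

# requesting agency -> keywords it has tasked (hypothetical)
dictionaries = {
    "agency_a": {"satellite", "launch"},
    "agency_b": {"pipeline", "tender"},
}

def tag_and_route(messages):
    """Map each agency to the intercepted messages containing its keywords."""
    routed = defaultdict(list)
    for msg in messages:
        words = set(msg.lower().split())
        for agency, keywords in dictionaries.items():
            if words & keywords:            # any tasked keyword present?
                routed[agency].append(msg)  # forward the tagged intercept
    return routed

intercepts = [
    "Commercial satellite launch scheduled for March",
    "Pipeline tender documents attached",
    "Routine administrative message",
]
print(dict(tag_and_route(intercepts)))
# only the first two messages are routed; the third matches no dictionary
```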
Inhabitant Identification Schemes
Inhabitant identification schemes are schemes which provide all, or most, people in a country with a unique code and a token (generally a card) containing the code. Such schemes are used in many European countries for a defined set of purposes, typically the administration of taxation, national superannuation and health insurance. In some countries they are used for multiple additional purposes.
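Unique inhabitant codes of this kind typically embed check digits so that transcription errors are caught before any database lookup. The following is a minimal sketch using a mod-97 check; the algorithm and the example value are assumptions for illustration, not any country's actual scheme.

```python
# Hypothetical mod-97 check digits for a unique inhabitant code.
# The scheme is an assumption for illustration only.

def add_check_digits(base: str) -> str:
    """Append two check digits to a numeric identifier."""
    check = 97 - (int(base) % 97)
    return f"{base}{check:02d}"

def is_valid(code: str) -> bool:
    """Verify the final two digits against the rest of the code."""
    base, check = code[:-2], int(code[-2:])
    return 97 - (int(base) % 97) == check

code = add_check_digits("2345678901")
print(code, is_valid(code))        # '234567890128' True
print(is_valid(code[:-1] + "9"))   # a single-digit error fails: False
```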
The Nature of Economic Information Selected by Surveillance Technology Systems
Advances in information and communication technologies have fostered the development of complex national and international networks which enable thousands of geographically dispersed users to distribute, transmit, gather and exchange all kinds of data. Transborder electronic exchanges—private, professional, industrial and commercial—have proliferated on a global scale and are bound to intensify among businesses, and between businesses and consumers, as electronic commerce develops. At the same time, developments in digital computing have increased the capacity for accessing, gathering, recording, processing, sorting, comparing and linking alphanumeric, voice and image data. This substantial growth in international networks and the increase in economic data processing have given rise to the need to secure privacy protection in transborder data flows. There is wide-ranging evidence indicating that governments of the UKUSA alliance countries are using global surveillance systems to provide commercial advantage to companies and trade. Each UKUSA country authorizes national-level intelligence assessment organizations and relevant individual ministries to task and receive economic intelligence from COMINT. Such information may be collected for many purposes, such as: estimating future essential commodity prices, determining other nations’ private positions in trade negotiations, tracking sensitive technology, or evaluating the political stability and/or economic strength of a target country. Any of these targets, and many others, may produce intelligence of direct commercial relevance. The decision as to whether it should be disseminated or exploited is taken not by COMINT organizations but by national government organizations. On the other hand, there is no evidence that companies in any of the UKUSA countries are able to task COMINT collection to suit their private purposes. The growth in international networks and the increase in economic data processing have also raised the need to secure privacy protection in transborder data flows, especially through the use of contractual solutions. Global E-commerce has changed the nature of retailing. There are great cultural and legal differences between countries affecting attitudes to the use of sensitive data (economic or personal), and the issue of the applicable law in global transactions has to be resolved. Contracts might bridge the gap between those with legislation and the others. Since the Internet symbolizes global commerce, and is faced with a rapid expansion in the number of transactions, there is a need to define a stable, lasting framework for business. The Internet is changing markets profoundly, and adjusting contracts to that reality is a complex problem. The Internet is a “golden highway” for those interested in the processing of information. On the other hand, since the Internet symbolizes global commerce, it can also be a tool of misleading information and a platform for deceitful advertisement.
EXAMPLES OF ABUSE OF ECONOMIC INFORMATION
Various examples can be cited of the abuse of privacy via global surveillance telecommunication systems (such as ECHELON). Many accounts have been published by reputable journalists citing frequent occasions on which the U.S. government has utilized COMINT for national purposes. The examples given below are among the most representative.

Example 1
On January 15, 1990, the telephone company AT&T suffered serious network failures throughout the northeastern United States. Around the same period, the group NuPrometheus illegally obtained and distributed source code from the Apple Macintosh operating system. [J. P. Barlow, "A not terribly brief history of the Electronic Frontier Foundation," November 8, 1990.]

Example 2
On January 24, 1990, the Electronic Frontier Foundation (EFF) in the United States accused the police of conducting a huge operation under the code name "Sun Devil," in which 40 computers and
23,000 diskettes were seized from teenagers in 15 towns across the United States. The EFF supported teenager Craig Neidorf, who faced up to 60 years in prison and a $120,000 fine for having published in Phrack (a hacker magazine) part of the internal files of a telephone company. [M. Godwin, "The EFF and virtual communities," 1991.]

Example 3
On June 26, 1988, at Habsheim, an A-320 aircraft of the European company Airbus Industrie crashed during a demonstration flight. The accident followed dangerous manoeuvres of the aircraft. Three people died and some twenty were injured. Very soon, and before the announcement of the official report, aggressive messages against Airbus and the French company Aérospatiale, which had close links with Airbus, were posted to aerospace and transport Internet newsgroups. The messages declared that the accident was to be expected because European engineers are not as highly qualified as American engineers, and clearly stated that similar accidents should be anticipated in future. Aérospatiale's agents investigated the origin of these messages. They attempted to trace the source and finally revealed that the senders' identification data, addresses, and nodes were false. The messages originated in the United States, from computers with misleading identification data, and were then relayed through anonymous servers in Finland. In this instance, Aérospatiale had grounds for believing that the American Boeing corporation had instigated one of the biggest misinformation campaigns ever conducted over the Internet. [Martinet, B. and Marti, Y. M., L'intelligence économique: les yeux et les oreilles de l'entreprise, Editions d'Organisation, Paris, 1995.]

Example 4
On October 31, 1994, an ATR aircraft (built by the Aeritalia and Aérospatiale European consortium) was involved in an accident in the United States. Owing to this accident, a two-month ban on ATR flights was imposed. This decision was commercially catastrophic for the company, because ATR was obliged to carry out test flights in foggy conditions. During this period, Internet newsgroups (especially the AVSIG forum, hosted by CompuServe) exchanged messages of vital significance to ATR. The arguments supporting the European company were few; the arguments against ATR were many. At the beginning of January 1995, a message from a journalist appeared in this forum asking: "I have heard that ATR flights will begin soon. Can anybody confirm this information?" Confirmation from the newsgroup arrived quickly. Three days later, official permission was granted for ATR flights to resume. The company had not expected this news. Had ATR actively participated in the newsgroups, it would have gained valuable days in which its directors could have informed ATR's offices and clients. ["Des langages pour analyser la poussière d'info," Libération, 9 June 1995.]

Example 5
In 1994 the government of Brazil announced its intention to award an international contract (Amazonios). This procurement attracted great interest because the total value of the contract was U.S. $1.4 billion. From Europe, the French companies Thomson and Alcatel submitted bids, while a bid was also received from the huge U.S. weapons company Raytheon. Although the offer from the French companies was technically superior and better documented, the contract was eventually awarded to the U.S. company. This appears to have been achieved through the use of a new offensive strategy by the United States.
When the government of Brazil was about to award the contract to the French companies, American officials (with the personal involvement of President Bill Clinton) intervened and the U.S. offer was readjusted in line with that of the European companies; the Americans asserted that the French companies had influenced the bid committee, an accusation that was never proven. The European companies argued that
the intention of the Brazilian government was to award the contract to them, but that this became known to the Americans through FBI surveillance (with the ECHELON system). ["La nouvelle machine de guerre américaine," Le Monde du renseignement, number 158, 16 February 1995.]

Example 6
In January 1994, Edouard Balladur went to Riyadh (Saudi Arabia) certain of bringing back a historic contract worth more than 30 billion francs in sales of weapons and, especially, Airbus aircraft. The contract went to the American McDonnell Douglas company, rival to Airbus. The French were convinced that the ECHELON electronic listening system had given the Americans the financial terms authorized by Airbus. This information was collected and analyzed by batteries of supercomputers at Fort Meade (Maryland), headquarters of the National Security Agency.

The National Security Agency is the most secret and most significant of the thirteen agencies of the United States government. It received about a third, i.e., $8 billion of the $26.6 billion (160 billion francs), of the appropriations allocated to espionage in the 1997 federal budget. With its 20,000 employees in the United States and some thousands of agents throughout the world, the NSA (which has been part of the Defense Department since its creation in 1952) is more important than the CIA, though much less known. Fort Meade allegedly contains the greatest concentration of data-processing power and of mathematicians in the world, employed to sort and analyze the flood of data captured by ECHELON from the international telecommunications networks. "There is not one diplomatic or military event concerning the United States in which the NSA is not directly implicated," acknowledged the director of the agency, John McConnell, in 1996. "The NSA plays a very significant role as regards economic espionage," affirmed John Pike, an intelligence expert at the Federation of American Scientists, who specified that "ECHELON is at the heart of its operations." In 1993, a former director of the agency, Admiral William Studeman, stated in a confidential document that "the requests for total access to information do not cease growing." Since the end of the Cold War, economic espionage has justified the maintenance of an otherwise oversized surveillance apparatus: "Evidence now exists that non-military uses of ECHELON (terrorism, proliferation of armaments, as well as economic espionage) have become priorities for the NSA." ["Echelon est au service des intérêts américains," Libération, 21 April 1998.]
PROTECTION FROM ELECTRONIC SURVEILLANCE
Electronically managed information touches almost every aspect of daily life in modern society. This rising tide of important yet unsecured electronic data leaves our society increasingly vulnerable to curious neighbors, industrial spies, rogue nations, organized crime, and terrorist organizations. Encryption is an essential tool in providing security in the information age. Encryption is based on the use of mathematical procedures to scramble data so that it is extremely difficult—if not virtually impossible—for anyone other than the authorized recipients to recover the original "plain text." Properly implemented encryption allows sensitive information to be stored on insecure computers or transmitted across insecure networks; only parties with the correct decryption "key" (or keys) are able to recover the plain-text information. Put another way, encryption is the practice of encoding data so that even if a computer or network is compromised, the data's content will remain secret.

Security and encryption issues are important because they are central to public confidence in networks and to the use of those systems for sensitive or secret data, such as the processing of information touching on national security. These issues are exceedingly controversial because of governments' interest in preventing digital information from being impervious to official interception and decoding for law enforcement and other purposes. Cryptography is a complex area, with scientific, technical, political, social, business, and economic dimensions. For the purpose of this report, "key recovery" systems are characterized
by the presence of some mechanism for obtaining exceptional access to the plain text of encrypted traffic. Key recovery might serve a wide spectrum of access requirements, from a backup mechanism that ensures a business’ continued access to its own encrypted archive in the event keys are lost, to providing covert law enforcement access to wiretapped encrypted telephone conversations. Many of the costs, risks, and complexities inherent in the design, implementation, and operation of key recovery systems depend on the access requirements around which the system is designed. The Global Information Infrastructure promises to revolutionize electronic commerce, reinvigorate government, and provide new and open access to the information society. Yet this promise cannot be achieved without information security and privacy. Without a secure and trusted infrastructure, companies and individuals will become increasingly reluctant to move their private business or personal information online.
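The spectrum of "key recovery" arrangements just described can be made concrete with a small sketch. The Python below assumes the third-party cryptography package (pip install cryptography), and names such as session_key and escrow_key are invented for the example; it illustrates the common core of such schemes, in which the traffic key is itself stored encrypted under an escrow agent's key so that an authorized party can recover the plain text without prior possession of the traffic key. It is an illustration of the idea, not a model of any fielded system.

    from cryptography.fernet import Fernet

    # Session key protects the actual traffic.
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(b"sensitive business record")

    # Escrowed copy: the session key itself is encrypted under the escrow
    # agent's long-term key and stored alongside the ciphertext.
    escrow_key = Fernet.generate_key()   # held only by the escrow agent
    escrowed_session_key = Fernet(escrow_key).encrypt(session_key)

    # Normal path: the intended recipient, holding session_key, decrypts directly.
    assert Fernet(session_key).decrypt(ciphertext) == b"sensitive business record"

    # Exceptional-access path: the escrow agent first recovers the session key,
    # then the plain text, without having held session_key in advance.
    recovered = Fernet(escrow_key).decrypt(escrowed_session_key)
    assert Fernet(recovered).decrypt(ciphertext) == b"sensitive business record"

Whether the escrow agent is a corporate backup service or a law enforcement access point changes none of the mechanics above; it changes only who holds escrow_key and under what controls, which is exactly where the costs and risks discussed in this report concentrate.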
SURVEILLANCE TECHNOLOGY SYSTEMS IN LEGAL AND REGULATORY CONTEXT

Europe is the site of the first privacy legislation, the earliest national privacy statute, and now the most comprehensive protection for information privacy in the world. That protection reflects an apparent consensus within Europe that privacy is a fundamental human right which few if any other rights equal. In the context of European history and civil-law culture, that consensus makes possible extensive, detailed regulation of virtually all activities concerning "any information relating to an identified or identifiable natural person." It is difficult to imagine a regulatory regime offering any greater protection to information privacy, or any greater contrast to U.S. law.

As a result of the variation and uneven application among national laws permitted by both the guidelines and the convention, in July 1990 the Commission of the then-European Community (EC) published a draft Council Directive on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data. The draft directive was part of the ambitious program by the countries of the European Union to create not merely the "common market" and "economic and monetary union" contemplated by the Treaty of Rome, but also the political union embodied in the Treaty on European Union signed in 1992 in Maastricht. Directive 97/66/EC of the European Parliament and the Council of 15 December 1997 concerns the processing of personal data and the protection of privacy in the telecommunications sector. This directive provides for the harmonization of the provisions of the member states required to ensure an equivalent level of protection of fundamental rights and freedoms, and in particular the right to privacy, with respect to the processing of personal data in the telecommunications sector, and to ensure the free movement of such data and of telecommunications equipment and services in the Community.

Protection of information privacy in the United States, by contrast, is disjointed, inconsistent, and limited by conflicting interests. There is no explicit constitutional guarantee of a right to privacy in the United States. Although the Supreme Court has fashioned a variety of privacy rights, "information privacy" has received little protection. Outside of the constitutional arena, protection for information privacy relies on hundreds of federal and state laws and regulations, each of which applies only to a specific category of information user (such as the government or retailers of videotapes), context (applying for credit or subscribing to cable television), type of information (criminal records or financial information), or use of that information (computer matching or impermissible discrimination). Privacy laws in the United States most often prohibit certain disclosures, rather than the collection, use, or storage, of personal information. When those protections extend to the use of personal information, it is often as a by-product of a legislative commitment to another goal, such as eliminating discrimination. And the role provided for the government in most U.S. privacy laws is often limited to providing a judicial forum for resolving disputes.

Privacy of communications is one of the fundamental human rights. The UN Declaration, the International Covenant, and the European Convention all provide that natural persons should not be
subject to unlawful interference with their privacy. The European Convention is legally binding and has caused signatories to change their national laws to comply. Most countries, including most EU member states, have a procedure to permit and regulate lawful interception of communications, in furtherance of law enforcement or to protect national security. The European Council has proposed a set of technical requirements to be imposed on telecommunications operators to allow lawful interception. The United States has defined similar requirements (now enacted as federal law) and Australia has proposed to do the same. Most countries legally recognize the right to privacy of personal data, and many require telecommunications network operators to protect the privacy of their users. All EU countries permit the use of encryption for data transmitted via public telecommunications networks (except France, where this will shortly be permitted). Electronic commerce requires secure and trusted communications, and may not be able to benefit from privacy law designed only to protect natural persons.

The legal regimes reflect a balance among three interests: privacy, law enforcement, and electronic commerce. Legal processes are emerging to satisfy the second and third interests by granting more power to governments to authorize interception (under legal controls) and by allowing strong encryption with secret keys. There do not appear to be adequate legal processes to protect privacy against unlawful interception, whether by foreign governments or by non-governmental bodies.
LAW ENFORCEMENT DATA INTERCEPTION—POLICY DEVELOPMENT

As the Internet and other communications systems reach further into everyday lives, national security, law enforcement, and individual privacy have become perilously intertwined. Governments want to restrict the free flow of information; software producers are seeking ways to ensure consumers are not bugged from the very moment of purchase. The United States is behind a world-wide effort to limit individual privacy and enhance the capability of its intelligence services to eavesdrop on personal conversations. The campaign has had two legal strategies: the first made it mandatory for all digital telephone switches, cellular and satellite phones, and all developing communication technologies to build in surveillance capabilities; the second sought to limit the dissemination of software that contains encryption, a technique which allows people to scramble their communications and files to prevent others from reading them.

The first effort to heighten surveillance opportunities was to force telecommunications companies to use equipment designed to include enhanced wiretapping capabilities. The end goal was to ensure that the United States and its allied intelligence services could easily eavesdrop on telephone networks anywhere in the world. In the late 1980s, in a programme known internally as "Operation Root Canal," U.S. law enforcement officials demanded that telephone companies alter their equipment to facilitate the interception of messages. The companies refused but, after several years of lobbying, Congress enacted the Communications Assistance for Law Enforcement Act (CALEA) in 1994. CALEA requires that terrestrial carriers, cellular phone services, and other entities ensure that all their "equipment, facilities or services" are capable of "expeditiously ... enabling the government ... to intercept ... all wire and oral communications carried by the carrier ... concurrently with their transmission." Communications must be interceptable in such a form that they can be transmitted to a remote government facility. Manufacturers must work with industry and law enforcement officials to ensure that their equipment meets federal standards. A court can fine a company U.S. $10,000 per day for each product that does not comply.

The passage of CALEA has been controversial, but its provisions have yet to be enforced owing to FBI efforts to include even more rigorous regulations under the law. These include the requirement that cellular phones allow for location tracking on demand and that telephone companies provide capacity for up to 50,000 simultaneous wiretaps. While the FBI lobbied Congress and pressured U.S. companies into accepting a tougher CALEA, it also leaned on U.S. allies to adopt it as an international standard. In 1991, the FBI held a series of secret meetings with EU member states to persuade them to incorporate CALEA
into European law. The plan, according to an EU report, was to "call for the Western World (EU, U.S. and allies) to agree to norms and procedures and then sell their products to Third World countries. Even if they do not agree to interception orders, they will find their telecommunications monitored by the UKUSA signals intelligence network the minute they use the equipment." The FBI's efforts resulted in an EU Council of Ministers resolution that was quietly adopted in January 1995, but not publicly released until 20 months later. The resolution's text is almost word-for-word identical to the FBI's demands at home. The U.S. government is now pressuring the International Telecommunications Union (ITU) to adopt the standards globally.

Since 1993, unknown to European parliamentary bodies and their electors, law enforcement officials from many EU countries and most of the U.K.U.S.A. nations have been meeting annually in a separate forum to discuss their requirements for intercepting communications. These officials met under the auspices of a hitherto unknown organization, ILETS (International Law Enforcement Telecommunications Seminar). ILETS was initiated and founded by the FBI. At their 1993 and 1994 meetings, ILETS participants specified law enforcement user requirements for communications interception. These appear in a 1994 ILETS document called "IUR 1.0." This document was based on an earlier FBI report, "Law Enforcement Requirements for the Surveillance of Electronic Communications," first issued in July 1992 and revised in June 1994. The IUR differed little in substance from the FBI's requirements but was enlarged, containing ten requirements rather than nine. The IUR did not specify any law enforcement need for "key escrow" or "key recovery"; cryptography was mentioned solely in the context of network security arrangements.

Between 1993 and 1997, police representatives from ILETS were not involved in the NSA-led policy-making process for "key recovery," nor did ILETS advance any such proposal, even as late as 1997. Despite this, during the same period the U.S. government repeatedly presented its policy as being motivated by the stated needs of law enforcement agencies. At their 1997 meeting in Dublin, ILETS did not alter the IUR. It was not until 1998 that a revised IUR was prepared containing requirements in respect of cryptography. It follows that the U.S. government misled EU and OECD states about the true intention of its policy. This U.S. deception was, however, clear to the senior Commission official responsible for information security. In September 1996, David Herson, head of the EU Senior Officers' Group on Information Security, stated his assessment of the U.S. "key recovery" project: "'Law Enforcement' is a protective shield for all the other governmental activities ... We're talking about foreign intelligence, that's what all this is about. There is no question [that] 'law enforcement' is a smoke screen."

It should be noted that technically, legally, and organizationally, law enforcement requirements for communications interception differ fundamentally from those of communications intelligence. Law enforcement agencies (LEAs) will normally wish to intercept a specific line or group of lines, and must normally justify their requests to a judicial or administrative authority before proceeding. In contrast, COMINT agencies conduct broad international communications "trawling" activities, and operate under general warrants.
Such operations do not require or even suppose that the parties they intercept are criminals. Such distinctions are vital to civil liberty, but they risk being eroded if the boundaries between law enforcement and communications intelligence interception become blurred in future.

Following the second ILETS meeting in Bonn in 1994, IUR 1.0 was presented to the Council of Ministers and was passed, without a single word being altered, on 17 January 1995 (57). During 1995, several non-EU members of the ILETS group wrote to the Council to endorse the (unpublished) Council resolution. The resolution was not published in the Official Journal for nearly two years, until 4 November 1996. Following the third ILETS meeting in Canberra in 1995, the Australian government was asked to present the IUR to the ITU. Noting that "law enforcement and national security agencies of a significant number of ITU member states have agreed on a generic set of requirements for legal interception," the Australian government asked the ITU to advise its standards bodies to incorporate the IUR requirements into future
telecommunications systems, on the basis that the "costs of providing legal interception capability and associated disruptions can be lessened by providing for that capability at the design stage." ILETS appears to have met again in 1998, when it revised and extended its terms to cover the Internet and satellite personal communications systems such as Iridium. The new IUR also specified "additional security requirements for network operators and service providers," extensive new requirements for personal information about subscribers, and provisions to deal with cryptography. On 3 September 1998, the revised IUR was presented to the Police Co-operation Working Group as ENFOPOL 98. The Austrian Presidency proposed that, as in 1994, the new IUR be adopted verbatim as a Council Resolution on interception "in respect of new technology" (59). The group did not agree. After repeated redrafting, a fresh paper has been prepared by the German Presidency for the eventual consideration of Council Home and Justice ministers.

The second part of the strategy was to ensure that intelligence and police agencies could understand every communication they intercepted. They attempted to impede the development of cryptography and other security measures, fearing that these technologies would reduce their ability to monitor the communications of foreign governments and to investigate crime. These latter efforts have not been successful. A survey by the Global Internet Liberty Campaign (GILC) found that most countries have either rejected domestic controls or not addressed the issue at all. The GILC found that "many countries, large and small, industrialized and developing, seem to be ambivalent about the need to control encryption technology."

The FBI and the National Security Agency (NSA) have instigated efforts to restrict the availability of encryption world-wide. In the early 1970s, the NSA's pretext was that encryption technology was "born classified" and that its dissemination therefore fell into the same category as the diffusion of A-bomb materials. The debate went underground until 1993, when the U.S. launched the Clipper Chip, an encryption device designed for inclusion in consumer products. The Clipper Chip offered the required privacy, but the government would retain a "pass-key": anything encrypted with the chip could be read by government agencies. Behind the scenes, law enforcement and intelligence agencies were pushing hard for a ban on other forms of encryption. In a February 1993 document obtained by the Electronic Privacy Information Center (EPIC), they recommended: "Technical solutions, such as they are, will only work if they are incorporated into all encryption products. To ensure that this occurs, legislation mandating the use of government-approved encryption products, or adherence to government encryption criteria, is required." The Clipper Chip was widely criticized by industry, public interest groups, scientific societies, and the public and, though it was officially adopted, only a few were ever sold or used.

From 1994 onwards, Washington began to woo private companies to develop an encryption system that would provide access to keys by government agencies. Under the proposals—variously known as "key escrow," "key recovery," or "trusted third parties"—the keys would be held by a corporation, not a government agency, and would be designed by the private sector, not the NSA.
The systems, however, still entailed guaranteed access for the intelligence community and so proved as controversial as the Clipper Chip. The government used export incentives to encourage companies to adopt key escrow products: they could export stronger encryption, but only if they ensured that intelligence agencies had access to the keys. Under U.S. law, computer software and hardware cannot be exported if they contain encryption that the NSA cannot break. The regulations stymie the availability of encryption in the United States because companies are reluctant to develop two separate product lines—one, with strong encryption, for domestic use and another, with weak encryption, for the international market. Several cases are pending in the U.S. courts on the constitutionality of export controls; a federal court recently ruled that they violate free speech rights under the First Amendment.

The FBI has not let up on efforts to ban products on which it cannot eavesdrop. In mid-1997, it introduced legislation to mandate that key-recovery systems be built into all computer systems. The amendment was adopted by several congressional committees, but the Senate preferred a weaker variant. A concerted campaign by computer, telephone, and privacy groups finally stopped the proposal; it now appears that no legislation will be enacted in the current Congress.
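The "strong domestic, weak export" split described above is ultimately a statement about brute-force work factor. The toy Python calculation below makes the gap concrete; the 40-bit figure reflects the key strength typically permitted for export in that period, while the trial rate of one billion keys per second is purely an assumed figure for illustration.

    # Rough brute-force work factor for the "strong vs. weak encryption" split.
    def brute_force_years(key_bits: int, keys_per_second: float = 1e9) -> float:
        """Expected years to search half the keyspace at the given trial rate."""
        trials = 2 ** key_bits / 2          # on average, half the keys are tried
        return trials / keys_per_second / (3600 * 24 * 365)

    for bits in (40, 56, 128):
        print(f"{bits:3d}-bit key: about {brute_force_years(bits):.2e} years")
    # 40-bit: minutes of work; 56-bit: about a year; 128-bit: ~5e21 years

The asymmetry explains both sides' behavior in the account above: a 40-bit export product poses no real obstacle to an intelligence agency, while a 128-bit product is, for practical purposes, unbreakable without the key.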
While the key escrow approach was being pushed in the United States, Washington also approached foreign organizations and states. The lynchpin of the campaign was David Aaron, U.S. ambassador to the Organization for Economic Co-operation and Development (OECD), who visited dozens of countries in what one analyst derided as a programme of "laundering failed U.S. policy through international bodies to give it greater acceptance."

Led by Germany and the Scandinavian countries, the EU has been generally distrustful of key escrow technology. In October 1997, the European Commission released a report which advised that "restricting the use of encryption could well prevent law-abiding companies and citizens from protecting themselves against criminal attacks. It would not, however, totally prevent criminals from using these technologies." The report also noted that privacy considerations argue against limiting the use of cryptography as a means to ensure data security and confidentiality. Some European countries have, or are contemplating, independent restrictions. France had a long-standing ban on the use of any cryptography to which the government does not have access; a 1996 law modified the existing system to allow a system of "tiers de confiance" (trusted third parties), although it has not been implemented because of EU opposition. In 1997, the Conservative government in the U.K. introduced a proposal creating a system of trusted third parties. It was severely criticized at the time and by the new Labour government, which has not yet acted upon its predecessor's recommendations.

The debate over encryption and the conflicting demands of security and privacy is bound to continue. The commercial future of the Internet depends on a universally accepted and foolproof method of on-line identification; as of now, the only means of providing it is through strong encryption. That puts the U.S. government and some of the world's largest corporations, notably Microsoft, on a collision course. (Report of David Banisar, Deputy Director of Privacy International, and Simon Davies, Director General of Privacy International.)

The issue of encryption divides the member states of the European Union. Last October the European Commission published a report entitled "Ensuring Security and Trust in Electronic Commerce," which argued that the advantages of allowing law enforcement agencies access to encrypted messages are not clear, and that such access could cause considerable damage to the emerging electronic industry. It says that if citizens and companies "fear that their communications and transactions are being monitored with the help of key access or similar schemes unduly enlarging the general surveillance possibility of government agencies, they may prefer to remain in the anonymous off-line world and electronic commerce will just not happen." However, Mr. Straw said in Birmingham (at the JHA Informal Ministers meeting) that "it would not be in the public interest to allow the improper use of encryption by criminals to be totally immune from the attention of law enforcement agencies." The U.K., along with France (which already has a law obliging individuals to use "crackable" software) and the United States, is out on a limb in the EU. "The U.K. presidency has a particular view and they are one of the access hard-liners. They want access: them and the French," commented one encryption expert. They are particularly concerned with "confidential services," which ensure that a message can be read only by the person for whom it is intended, who holds a "key" to access it.
The Commission’s report proposes “monitoring” Member States’ laws on “confidential services” to ensure they do not contravene the rules of the single market.
REFERENCES

1. STOA, PE 166.499: An appraisal of technologies of political control, 1998.
2. STOA, PE 168.184/Int.St/part 1/4: The perception of economic risks arising from the potential vulnerability of electronic commercial media to interception, 1999.
3. STOA, PE 168.184/Int.St/part 2/4: The legality of the interception of electronic communications: a concise survey of the principal legal issues and instruments under international, European and national law, 1999.
4. STOA, PE 168.184/Int.St/part 3/4: Encryption and cryptosystems in electronic surveillance: a survey of the technology assessment issues, 1999.
5. STOA, PE 168.184/Int.St/part 4/4: The state of the art in communications intelligence (COMINT) of automated processing for intelligence purposes of intercepted broadband multi-language leased or common carrier systems, and its applicability to COMINT targeting and selection, including speech recognition, 1999.
6. Clarke, R., Dataveillance: Delivering "1984," Xamax Consultancy Pty Ltd, February 1993.
7. Clarke, R., Introduction to Dataveillance and Information Privacy and Definitions of Terms, Xamax Consultancy Pty Ltd, October 1998.
8. Clarke, R., A Future Trace on Dataveillance: Trends in the Anti-Utopia/Science Fiction Genre, Xamax Consultancy Pty Ltd, March 1993.
9. Dixon, T., Workplace video surveillance—controls sought, Privacy Law and Policy Reporter, 2 PLPR 141, 1995.
10. Dixon, T., Privacy charter sets new benchmark in privacy protection, Privacy Law and Policy Reporter, 2 PLPR 41, 1995.
11. Banisar, D. and Davies, S., The code war, Index online, News Analysis, issue, 1998.
12. Lesce, T., They're Watching You! The Age of Surveillance, Breakout Productions, 1998.
13. Staples, W. G., The Culture of Surveillance, St. Martin's Press, 1997.
14. Lyon, D. and Zureik, E., Computers, Surveillance and Privacy, University of Minnesota Press, 1996.
15. Lyon, D., The Electronic Eye—The Rise of Surveillance Society, University of Minnesota Press, 1994.
16. Cate, F. H., Privacy in the Information Age, Brookings Institution Press, 1997.
17. Brookes, P., Electronic Surveillance Devices, Newnes, 1998.
18. O.E.C.D., Privacy Protection in a Global Networked Society, DSTI/ICCP/REG(98)5/FINAL, July 1998.
19. O.E.C.D., Implementing the OECD "Privacy Guidelines" in the Electronic Environment: Focus on the Internet, DSTI/ICCP/REG(97)6/FINAL, September 1998.
20. O.E.C.D., Cryptography Policy: The Guidelines and the Issues, OCDE/GD(97)204, 1997.
21. Report by an Ad Hoc Group of Cryptographers and Computer Scientists: The Risks of Key Recovery, Key Escrow, and Trusted Third Party Encryption, 1998.
22. COM(98) 586 final: Legal Framework for the Development of Electronic Commerce.
23. COM(98) 297 final: Proposal for a European Parliament and Council Directive on a Common Framework for Electronic Signatures, OJ C325, 23/10/98.
24. Troye-Walker, A., European Commission: Electronic Commerce: EU policies and SMEs, August 1998.
25. COM(97) 503 final: Ensuring Security and Trust in Electronic Communications—Towards a European Framework for Digital Signatures and Encryption.
26. Directive 97/7/EC of the European Parliament and the Council of May 1997 on the Protection of Consumers in Respect of Distance Contracts, OJ L 144, 14/6/1997.
27. ISPO, Electronic Commerce—Legal Aspects, http://www.ispo.cec.be
28. Privacy International, http://www.privacy.org
29. Newton, M., Picturing the future of CCTV, Security Management, November 1994.
30. Gips, M. A., Tie Spy, Security Management, November 1996.
31. Clarke, B., Get Carded with Confidence, Security Management, November 1994.
32. Horowitz, R., The Low Down on Dirty Money, Security Management, October 1997.
33. Cellular E-911 Technology Gets Passing Grade in NJ Tests, Law Enforcement News, July–August 1997.
34. Shannon, E., Reach Out and Waste Someone, Time Digital, July–August 1997.
35. Thompson, A. and Harowitz, S., Taking a Reading on E-mail Policy, Security Management, November 1996.
36. Trickey, F. L., E-mail Policy by the Letter, Security Management, April 1996.
37. Net Proceeds, Law Enforcement News, January 1997.
38. Burrell, C., Lawmen Seek Key to Computer Criminals, Associated Press, July 10, Albuquerque Journal, 1997.
39. Gips, M. A., Security Anchors CNN, Security Management, September 1996.
40. Bowman, E. J., Security Tools up for the Future, Security Management, January 1996.
41. Alderman, E. and Kennedy, C., The Right to Privacy, Knopf, 1995.
42. Bennett, C. J., Regulating Privacy—Data Protection and Public Policy in Europe and the United States, Cornell University Press, 1992.
43. BeVier, L. R., Information About Individuals in the Hands of Government—Some Reflections on Mechanisms for Privacy Protection, William and Mary Bill of Rights Journal, 4, Winter 1995.
44. Branscomb, A. W., Who Owns Information? From Privacy to Public Access, Basic Books, 1994.
45. Branscomb, A. W., Global Governance of Global Networks, Indiana Journal of Global Legal Studies, Spring 1994.
46. Network Wizards, Internet Domain Survey, January 1997, http://www.nw.com/zone/WWW/report.html
47. Network Wizards, Internet Domain Survey, January 1997, http://nw.com/zone/WWW/lisybynum.html
48. Davies, S., Report, December 1997, http://www.telegraph.co.uk
49. Chlapowski, F. S., The Constitutional Protection of Information Privacy, Boston University Law Review, January 1991.
50. Guisnel, J., Guerres dans le cyberespace, Editions La Découverte, 1995.
51. http://www.dis.org
52. http://www.telegraph.co.uk
SUPERCOMPUTERS, TEST BAN TREATIES, AND THE VIRTUAL BOMB*

No matter what the Russian expectations were, or their cause, the acquisition of supercomputing power has now become a proxy for the nuclear weapons race. It's basically who has the virtual bomb. We're going to have a cold war on virtual weapons.1
Chelyabinsk-70 and Arzamas-16 are names unfamiliar to most outside of the intelligence or nuclear weapons design communities. Yet these two Russian facilities, long considered hidden or closed installations by the Soviet Union, designed and manufactured several hundred nuclear warheads per year and had a hand in the creation of most of the tens of thousands of nuclear weapons developed during the Soviet era. Soviet state secrecy practices formulated such odd hyphenated names to describe approximately thirty-five municipalities dedicated to the military-industrial complex, ten of which were huge nuclear design facilities controlled by the Ministry of Atomic Energy (MINATOM) and guarded by special regiments of the Ministry of Internal Affairs.2 In addition to their primary names, these closed sites bear code names from cities 50 to 100 km away followed by a postal zone number (for example, the All-Russian Scientific Research Institute of Experimental Physics, “Arzamas-16,” or the All-Russian Scientific Research Institute of Technical Physics, “Chelyabinsk-70”). Since 1989, Russia has opened a number of these sites to limited visits by foreigners, but many details of their specific missions and locations still have not been declassified. Several of the facilities are described in Table 8.1. Chelyabinsk-70 and Arzamas-16 have recently become more widely known by virtue of published news accounts of the apparently illegal sale of high-performance supercomputers to these nuclear weapons design and manufacturing bureaus. The illegal transfer occurred after several failed Russian attempts to purchase similar supercomputers legally. Gary Milhollin of the Wisconsin Project on Nuclear Arms Control first broke the story of the illegal export of Silicon
*By Peter M. Leitner. Peter M. Leitner is a senior strategic trade advisor at the Department of Defense. The opinions expressed herein are the author's alone and do not represent the views of the Department of Defense, the government of the United States, or any organization. Copyright © 1998 World Affairs.
TABLE 8.1 Selected Russian Nuclear Weapon Design Facilities

Code Name | Formal Name | Closed City | Region | Specialty
Arzamas-16 | All-Russian Institute of Experimental Physics | Sarova | Urals | Nuclear warhead design and land-based ICBM re-entry vehicle fabrication
Chelyabinsk-70 | All-Russian Institute of Technical Physics | Snezhinsk | Urals | Nuclear warhead design, high-explosive testing, nuclear bombs, and submarine-launched ballistic re-entry vehicles
Sverdlovsk-45 | Elektrokhimpribor | Lesnoy | Urals | Warhead assembly/disassembly
Zlatoust-36 | Zlatoust | Trekhgornyy | Urals | Warhead assembly/disassembly
Penza-19 | (not listed) | Zarechnyy | Kuznetsk | Warhead assembly/disassembly
Tomsk-7 | Siberian Chemical Combine | Seversk | Siberia | Fissile material component fabrication
Chelyabinsk-65 | Mayak Production Association | Ozersk | Southern Urals | Plutonium and tritium production for nuclear weapons
Krasnoyarsk-26 | Krasnoyarsk Mining and Chemical Combine | Krasnoyarsk/Atomgrad | Siberia | Plutonium for nuclear warheads
Krasnoyarsk-45 | Electro-Chemical Plant (Zheleznogorsk reactor) | Krasnoyarsk/Atomgrad | Siberia | Enriched uranium production
Source: Carnegie Endowment for International Peace, Nuclear Successor States of the Soviet Union 4 (May 1996); Wisconsin Project on Nuclear Arms Control, Risk Report (various issues).
Graphics and IBM computers,3 and their role in warhead design and simulation, after it was publicly revealed by Viktor Mikhaylov, the head of MINATOM. News of the illegal export spurred angry statements from Congress and an ongoing investigation by law enforcement agencies. In an interview, MINATOM spokesmen "could not hide their astonishment at the American side's hints to the effect that the furor broke because of Minister Viktor Mikhaylov's excessive frankness—speaking in Moscow, he referred to the processor models by their number. But the whole point is that the minister had nothing to hide. Everyone knows that the U.S. supercomputers will be used to solve tasks connected with the safe operation of the Russian nuclear arsenal, confirming its reliability, and ensuring its safekeeping."4 Izvestia also reported, "The U.S. Commerce Department says that the whole problem has arisen because of the undue candor of ... Mikhaylov, [who] also said that these computers are to be used for the modeling of nuclear explosions."5

A year ago Silicon Graphics sold eight high-speed R-1000 computers to a Russian scientific research institute known at home and in the rest of the world by its former code name, Chelyabinsk-70. U.S. laws do not forbid the sale of such equipment to Russia. Machines with speeds of up to 2 billion operations a second do not need export licenses. For computers with speeds of between 2 billion and 7 billion operations a second, the rules are different: manufacturers are obliged to consult Commerce Department experts before shipment commences. And finally, the best computers, with speeds over 7 billion operations a second, cannot be sold to countries like Russia without a mandatory license.
A legal investigation has now been launched against Silicon Graphics Inc. The Commerce Department says that a parallel system of several R-1000 computers amounts to a supercomputer of the class that is strictly forbidden for sale to Russia. [In fact, these computers can easily be upgraded from 480 MTOPS to at least 4,500 MTOPS simply by adding additional CPU boards and memory.] However, this is not an old prohibition dating back to Cold War times; it came into effect fairly recently, after the disbandment of the infamous COCOM—the committee controlling the sale of strategic materials to Communist countries.6
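The licensing tiers and the upgrade point in the bracketed note lend themselves to a quick illustration. The Python sketch below encodes the three MTOPS thresholds described in the passage and shows how a single-board 480-MTOPS configuration crosses them as boards are added; the assumption of linear scaling with board count is a simplification for illustration, not a claim about the actual R-1000 hardware.

    # Export-control tiers for Russia as described in the text (mid-1990s rules).
    def export_tier(mtops: float) -> str:
        if mtops <= 2_000:
            return "no export license required"
        if mtops <= 7_000:
            return "consult Commerce Department before shipment"
        return "mandatory export license"

    BOARD_MTOPS = 480  # single-board figure cited for the R-1000 system

    for boards in (1, 4, 10, 16):
        total = boards * BOARD_MTOPS   # assumes performance scales linearly
        print(f"{boards:2d} boards: {total:5d} MTOPS -> {export_tier(total)}")

On these assumptions, the same product line moves from the license-free tier (480 MTOPS) through the consultation band to the mandatory-license class at roughly fifteen boards, which is the loophole the Commerce Department's "parallel system" argument addresses.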
Mikhaylov’s announcement came in the wake of a controversial U.S. denial of Convex and IBM supercomputer equipment, in fall 1996, to the same two “closed” facilities.7 “Convex originally applied to sell three supercomputers . The SPP 1,200 model (Exemplar X-Class) operated at 4,564 million theoretical operations per second (MTOPS) but upgradeable to 34,500, while two others, also upgradeable, ran at 1,630 MTOPS and 1,870 MTOPS. The IBM SP 2 model that was intended for sale operates at 780 MTOPS. Another IBM machine was also bound for the Moscow lab, a company official indicated.”8 These RS6000 series computers are upgradeable to 250,000 MTOPS.9 In 1995, “President Clinton [unilaterally] decontrolled computers up to 2,000 MTOPS [from the previous CoCom ceiling of 260 MTOPS] for all users and up to 7,000 MTOPS for civilian use in Russia but reserved the authority to block exports that raise proliferation concerns.”10 Ostensibly, the powerful IBM and Convex computers withheld from the Russians were to be used to model the migration of radioactive material in ground water in the vicinity of nuclear weapons plants. However, this was considered a highly improbable end use given what was known about the two facilities and their interest in the simulation of nuclear weapons effects. In an exchange of letters between Mikhaylov and U.S. Energy Secretary Hazel O’Leary, Mikhaylov first indicated that he wanted the supercomputers to maintain the safety and security of nuclear stockpiles under the test ban. In a second letter on September 9, he denied the computers would be used to improve Moscow’s nuclear weapons. But he conceded that at least one machine, the Convex SPP 2,000, would be used in confirmation of the reliability of, and the preservation of Russia’s nuclear stockpile. Those words meant that the Russians planned simulated tests to verify the yields of their nuclear bombs. It would be difficult to separate reliability testing of old weapons from development of new ones.11
In a 29 November 1996 letter to Mikhaylov from Assistant Secretary of State for Political-Military Affairs Thomas E. McNamara, the United States officially rejected the export request, stating: "I am writing to inform you of our government's recent decision with respect to the Ministry of Atomic Energy's license requests for advanced computers for use by the nuclear research institutes Arzamas-16 and Chelyabinsk-70. We have informed the U.S. manufacturers [company names were expurgated in the version of the letter released by the State Department] that we are not prepared to approve their license applications. While we consider the promotion of scientific and technical cooperation between the U.S. and Russia one of our most important goals, we must balance such considerations with national security concerns in evaluating sensitive dual-use export cases."12
SUPERCOMPUTERS: A QUID FOR RUSSIAN CTBT SIGNATURE

The U.S. rejection of the Convex and IBM sales prompted Mikhaylov to state publicly that Russia had been promised access to U.S. supercomputer technology by Vice President Gore as a quid for Russian accession to the Comprehensive Test Ban Treaty. Vladislav Petrov, head of MINATOM's Information Department, stated that the Clinton administration promised Russia the computers during the test ban treaty negotiations to allow Russia to engage in virtual testing of warhead designs.13 Indeed, Mikhaylov told reporters in January 1997,14 that the Silicon Graphics and IBM supercomputers illegally shipped would be used to simulate nuclear explosions. Boris Litvinov, the chief of design at Arzamas-16, stated in December 1996 that these computers were needed for "constantly perfecting nuclear warheads."15 He added, "It is simply impossible
to improve our knowledge of nuclear processes today without modern computers. We retain our nuclear power status; it is recognized, and no one in the world has the right to demand that we scale down our research." On 24 February 1997, MINATOM's Information Department issued a press release stating:

The 1996 signature of the Comprehensive Test Ban Treaty (CTBT) has become an undoubted success in the struggle for nuclear disarmament. At the expert meetings in London in December 1995 and Vienna in May 1996, which preceded the CTBT signature, special attention was paid to the issue of maintaining the security of the nuclear powers' respective arsenals under conditions of discontinued on-site testing. Nuclear arsenal security maintenance is impossible without the simulation of physical processes and mathematical algorithms on high-performance parallel computers, which are currently produced in the United States and Japan. In the interests of signing the CTBT in the shortest possible time, the U.S. and Russian experts mutually agreed on the necessity of selling modern high-performance computers to Russia.16
According to a February 1997 report, "The possibility of the theoretical modeling or, in scientific parlance, 'simulation' of nuclear explosions was a crucial part of the Comprehensive Nuclear Test Ban Treaty. When pressing for the conclusion of this treaty the Russians and Americans worked together on problems of the computer simulation of controlled explosions."17 Nikolay Voloshin, chief of the Russian Federation Ministry of Atomic Energy Department for Designing and Testing Nuclear Warheads, revealed, "During the purchase it was stated and guaranteed that Russia is buying the computers for fundamental scientific research in the sphere of ecology and medicine, and this includes the safety of the remaining nuclear arsenal."18

The rejection immediately provoked charges in Moscow that the United States was reneging on promises allegedly made during Gore–Chernomyrdin commission meetings, particularly as the long-delayed decision not to approve the licenses came just days after Russia signed the CTBT.19 One MINATOM official expressed his concern over American intentions:

If one takes into account the fact that nuclear parity between the two states has in many respects been maintained not only through testing, but also with the help of theoretical studies, one can imagine what is behind such a refusal. In many traditional branches of science and technology the creation of an experimental model is preceded by laboratory modeling, but in the atomic branch mathematical computations are a substitute for this stage. In a real-life blast nothing is left of the elements of the nuclear device except vaporized material, and that is why mathematical computation actually becomes the only way to obtain information on the processes that occur. The special significance of these theoretical studies has become obvious in the course of the fulfillment of the terms of the comprehensive nuclear test ban treaty. The United States has made much better provisions than Russia for giving up nuclear testing. Supercomputers used for virtual-reality modeling of the processes of nuclear explosions have played a decisive role in that. The Americans rightly figured that since they had such equipment, they would be able to compensate for nuclear explosions by obtaining the necessary data with the aid of supercomputers. This practice of bans, smacking of the cold war, can push Russia, deprived, by contrast with the United States, of the possibility of improving its nuclear weapons with the help of supercomputers, into breaking the moratorium on nuclear tests.20
GOING VIRTUAL—WHAT DOES IT MEAN?

Virtual testing, modeling, and simulation are essential for clandestinely maintaining or advancing nuclear weapons technology. As the planet shows no sign of nearing the point where nuclear weapons are banned, it is reasonable to assume that current or aspiring nuclear weapons states will vigorously attempt to acquire high-performance computers to advance their nuclear programs with a degree of covertness hitherto impossible to achieve.
There is considerable conjecture within the scientific community as to whether a state would be able to design and deploy a nuclear device without first engaging in a full-scale test of the physics package. The arguments boil down to the confidence of designers and government officials that an untested device would behave in the intended manner. Many engineering purists in the United States declare unequivocally that virtual testing alone is insufficient to determine whether a weapon design is predictable or even functional. However, they often ignore one of the most compelling lessons drawn from the Iraqi nuclear weapons program: the necessity for a clandestine program not to expose itself by venturing beyond hydrodynamic testing. Proof of concept was all that the Iraqis could safely achieve without provoking a devastating pre-emptive response from the Israelis. A similar pattern was evident in the Israeli and Swedish weapons programs. In fact, the only publicly known full-scale weapons test by a clandestine program was carried out by South Africa, reportedly with Israeli assistance.

The development of supercomputers has been relentlessly driven and underwritten by the weapons program because of the high cost of physical testing and the severity of the test environment. "The technical limitations are enormous: extreme temperatures (10 million degrees) and material velocities (4 million miles per hour), short time scales (millionths of a second) and complicated physical processes make direct measurement impossible. Computers provide the necessary tools to simulate these processes."21

Perhaps the best way to understand the importance of virtual testing in facilitating weapons maintenance and development is to analyze by analogy. DoE's National Ignition Facility (NIF) embodies what many fear will be the worst-case application of U.S. supercomputer technology to Russian nuclear weapons development. The NIF represents the marriage of high-energy lasers and massively parallel supercomputers in support of an inertial confinement fusion program advertised as supporting pure, applied, and weapons sciences. This facility will seek—using lasers, x-rays, and electrical pulses—to measure how bomb components behave in conditions similar to those in a nuclear explosion. The Department of Energy intends, following a concept called Science Based Stockpile Stewardship, "to use the fastest supercomputers yet devised to simulate nuclear explosions along with all the important changes that occur to weapons as they age. The plan has stirred vigorous debate among arms-control advocates, military strategists, and, most recently, university researchers, over whether the approach is cost-effective, feasible and wise."22

The weapons-related research envisioned for the NIF would rely on high-performance computers and test equipment to explore a range of topics, including these:

† Radiation flow
† Properties of matter
† Mix and hydrodynamics
† X-ray laser research
† Computer codes
† Weapons effects
The Department of Energy is promoting each of these as an important potential NIF activity.23 The following descriptions are paraphrased from publicly available materials:

Radiation flow: in most thermonuclear devices, X-radiation emitted by the primary supplies the energy to implode the secondary. Understanding the flow of this radiation is important for predicting the effects on weapon performance of changes that might arise over time.

Properties of matter: two properties of matter that are important at the high energy densities of a nuclear explosion are equation of state and opacity. The equation of state is the relationship among a material's pressure, density, and temperature, expressed over wide ranges of these variables. Opacity is a fundamental property of how radiation is absorbed
and emitted by a material. The correct equation of state is required to solve any compressible hydrodynamics problem accurately, including weapons design. Radiation opacities of very hot matter are critical to understanding the radiation flow in a nuclear weapon.

Mix and hydrodynamics: these experiments involve the actual testing of extremely low-yield fission devices (as low as the equivalent of several pounds of TNT) within a confined environment to study the physics of the primary component of thermonuclear warheads by simulating, often with high explosives, the intense pressures and heat on weapons materials. (The behavior of weapons materials under these extreme conditions is termed "hydrodynamic" because they seem to flow like incompressible liquids.) Hydrodynamic experiments are intended to closely simulate, using non-nuclear substitutes, the operation of the primary component of a nuclear weapon, which normally consists of high explosive and fissionable material (the plutonium "pit"). In hydrodynamic experiments, the properties of surrogate pits can be studied up to the point where an actual weapon would release fission energy. High explosives are used to implode a surrogate non-fissile material while special x-ray devices ("dynamic radiography") monitor the behavior of the surrogate material under these hydrodynamic conditions.24

X-ray laser research: supercomputer-based experiments could provide data for comparison with codes and could be used to further interpret the results of past underground experiments on nuclear-pumped x-ray lasers.

Computer codes: the development of nuclear weapons has depended heavily on the use of complex computer codes and supercomputers. The codes encompass a broad range of physics, including motion of material; transport of electromagnetic radiation, neutrons, and charged particles; interaction of radiation and particles with matter; properties of materials; nuclear reactions; atomic and plasma physics; and more. In general, these processes are coupled together in complex ways applicable to the extreme conditions of temperature, pressure, and density in a nuclear weapon and to the very short time scales that characterize a nuclear explosion.

Weapons effects: nuclear weapons effects used to be investigated by exposing various kinds of military and commercial hardware to the radiation from actual nuclear explosions. These tests were generally conducted in tunnels and were designed so that the hardware was exposed only to the radiation from the explosion and not to the blast. The data were used to "harden" the equipment to reduce its vulnerability during nuclear conflict. Without nuclear testing, radiation must be simulated in above-ground facilities and by numerical calculations.

"NIF ... will cost approximately U.S. $4.5 billion to construct and operate, [and] will be the world's largest laser, intended to bring about thermonuclear fusion within small confined targets [and] represents the closest laboratory approach to a number of critical parameters in the weapons environment ...
by using 192 laser beams to produce 500 trillion watts of energy for 3 billionths of a second.”25 This capability will be combined with DoE’s Accelerated Strategic Computing Initiative (ASCI), aimed at developing “the computer simulation capabilities to establish safety, reliability, and performance of weapons in the stockpile, virtual prototyping for maintaining the current and future stockpile, and connecting output from complex experiments to system behavior.” A reported goal of ASCI is to create a “virtual testing and prototyping capability for nuclear weapons.”26 The parallel between the development of the NIF and ASCI programs and the transfer of U.S. supercomputing power to Russian nuclear weapons labs is striking, particularly considering that Arzamas-16 is home to the Iskra-5 advanced x-ray laser facility. This 67-mega-joule, 12-synchronous-channel pulse laser is designed for experiments in thermonuclear target heating in support of nuclear fusion research activities.27 In addition, the United States has reportedly offered to provide China with computers that could aid in nuclear explosion simulations, in order to persuade Chinese military leaders to halt underground testing.28
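As a quick arithmetic check on the NIF laser figures quoted above (192 beams delivering 500 trillion watts for 3 billionths of a second), the implied pulse energy can be computed directly; the result follows only from the numbers in the quotation, not from any additional source:

```python
# Back-of-the-envelope check of the NIF figures quoted above:
# 192 beams, 500 trillion watts peak power, 3 billionths of a second.
peak_power_w = 500e12        # 500 trillion watts (5e14 W)
pulse_duration_s = 3e-9      # 3 billionths of a second
num_beams = 192

total_energy_j = peak_power_w * pulse_duration_s
# 1.5e6 J = 1.5 MJ, i.e. roughly 0.4 kWh: enormous power, but modest energy,
# because the pulse is so short.
print(f"Total pulse energy: {total_energy_j / 1e6:.1f} MJ")
print(f"Energy per beam:    {total_energy_j / num_beams / 1e3:.1f} kJ")
```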
As the joint report from Los Alamos, Lawrence Livermore, and Sandia National Labs pointed out:

Computers are more important to nuclear weapons design when agreements limit testing. In support of the atmospheric test ban treaty, we perform[ed] our nuclear tests underground. A weapon's performance in the mode for which it was designed, perhaps an above ground burst, must be inferred from test data by expensive computer calculations. Such calculations take account of the "down hole" environment, such as reflection from test-cavity walls which do not exist in the atmosphere. A second agreement, the threshold test ban, limit[ed] testing to weapons with yields of 150 kt or less. To design beyond this limit, computer extrapolations [were] relied upon to verify the performance of the weapon.29
VERIFICATION TECHNOLOGIES MADE IRRELEVANT

On a prima facie level, most would instinctively argue that eliminating nuclear chain-reaction explosions from the planet is highly desirable and would help make the world a safer place. However, the reverse may actually be the case; that is, the elimination of physical tests and their migration to cyberspace may make the world a more dangerous place. Can such a counterintuitive proposition be true? Consider the trillions of dollars' worth of detection, monitoring, and early-warning infrastructure designed to identify and measure foreign nuclear weapons programs that would be rendered useless by virtual testing. As the availability of data indicating the strength and direction of foreign nuclear weapons activities decreases, the likelihood that the United States or its allies will fall victim to tactical or strategic surprise will increase. No longer will analysts have access to tangible seismic data of the type shown in Figure 8.4. High-explosive or hydrodynamic tests simply do not have the energy potential to be identified against background clutter such as natural seismic activity (see Figure 8.4), mining and construction blasting, detonations from oil and gas development, and conventional weapons testing, training, or ordnance disposal. In the United States, for example, there are each year several thousand chemical explosions of 50 t or more, and a couple of hundred larger than 200 t.30
FIGURE 8.4 Annual earthquake activity and magnitude. Average number of earthquakes per year/per day by magnitude level: 4.5, 2,700/7; 4.0, 7,500/20; 3.5, 21,000/57; 3.0, 59,000/161; 2.5, 170,000/466. Source: International Data Center, Ad Hoc Group of Scientific Experts.
FIGURE 8.5 Relative opacity of nuclear development programs. Timelines for the United States and the USSR/Russia run from the 1963 Limited Test Ban Treaty (opaque), through the 1974 Threshold Test Ban Treaty and the 1987 deployment of CORRTEX (greatest transparency), to the Comprehensive Test Ban Treaty era approaching 2000 (opaque again).
Figure 8.4 is a graphic example of the type of empirical data currently collected by seismic stations; such data can indicate the precise time, duration, and magnitude of a specific nuclear weapons test. Under a CTBT regime, characterized by clandestine computer-based modeling and simulation techniques, such hard data would be unattainable. The term "national technical means of verification" (NTM) is often used to describe satellite-borne sensors, but it is more generally accepted as covering all (long-range) sensors with which the inspected country does not interfere or interact. Ships, submarines, aircraft, and satellites can all carry monitoring equipment employed without the cooperation of the monitored country. Ground-based systems include over-the-horizon (OTH) radar and seismic monitors. Acoustic sensors will continue to provide the main underwater NTM for monitoring treaty compliance. The first of the high-technology methods of treaty monitoring were the U.S. VELA satellites, designed in the 1960s to monitor the Limited Test Ban Treaty. Their task was to detect nuclear explosions in space and the atmosphere.31

At precisely 0100 GMT on Sept. 22, 1979, an American satellite recorded an image that made intelligence analysts' blood run cold. Looking down over the Indian Ocean, sensors aboard a Vela satellite were momentarily overwhelmed by two closely spaced flashes of light. There was only one known explanation for this bizarre phenomenon. Someone had detonated a nuclear explosion.
The list of suspects quickly narrowed to the only two countries at the time that had the materials, expertise, and motivation to build a nuclear weapon: South Africa and Israel. Both denied responsibility.32 This event was not confirmed until 1997, when Aziz Pahad, South African deputy foreign minister, stated “that his nation detonated a nuclear weapon in the atmosphere vindicating data from a then-aging Vela satellite.”33 Pahad’s statements were confirmed by the U.S. embassy in Pretoria, South Africa. VELA’s modern counterparts include the global positioning system (GPS) satellites. While these also have the function of providing navigational and positional data, their alternate role is to detect nuclear explosions, and to this end they mount both x-ray and optical sensors. However, “as nuclear detectors in orbit on Global Positioning System satellites age, the credibility of their data again could be challenged, and have subsequent adverse policy impacts.”
Without strong evidence of a nuclear test no Administration official is going to charge another nation with violating a test ban treaty, for example. Los Alamos and the U.S. Energy Dept. have expended approximately $50 million to develop a new generation of space-based nuclear detection sensors, but they may never get into orbit. Pentagon budget woes could preclude inclusion of EMP sensors on next generation satellites, according to Los Alamos officials. Researchers who developed the new sensors said it is ironic that funding constraints could force a decision to keep the detectors grounded. After all, had the old Vela satellite been equipped with a functioning EMP detector, it would have confirmed that the optical flash in September 1979 was a nuclear blast. The White House panel subsequently stated that, because nuclear detonations had such critical ramifications and possible consequences, it was imperative that systems capable of providing timely, reliable corroboration of an explosion be developed and deployed.34
However, detection does not constitute identification. There are thousands of earthquakes each year in Russia with magnitudes comparable to decoupled kiloton-scale nuclear explosions. Many seismic events are detected that cannot be identified. There are also hundreds of chemical explosions each year that have seismic signals in this range and thus cannot be discriminated from nuclear explosions. Thus, it is obvious that there will be many unidentified seismic events each year that could be decoupled nuclear explosions with militarily significant yields much greater than 1 kt.35 A summary of annual seismic activity appears in Figure 8.4. The following types of useful verification technologies, among others, would be rendered ineffective or irrelevant by the migration of nuclear weapons testing to supercomputer-based simulation and modeling:

Space-based optics and sensors: several satellites have telescopes and an array of detectors that are sensitive to various regions of the electromagnetic spectrum.

Radar: lightweight space-based radars aboard satellites are capable of penetrating heavy cloud layers and monitoring surface disturbances at suspected nuclear test sites.

Listening posts: hydroacoustic stations located on Ascension, Wake, and Moresby Islands and off the western coasts of the United States and Canada, together with infrasound arrays in the United States and Australia, detect underwater and suboceanic events and distinguish between explosions in the water and earthquakes under the oceans. Some seismic stations located on islands or continental coastlines may be particularly useful, since they will be able to detect the T phase—an underwater acoustic wave converted to a seismic wave at the edge of the land mass.

Radionuclide monitoring network: a new effort is under way to detect xenon-133 and argon-37 seepage into the atmosphere days or weeks after a nuclear weapons test.36 The inadvertent release of noble gases during clandestine nuclear tests, both above and below ground, represents an important verification technique. Nuclear explosions produce xenon isotopes, and xenon can be detected in the atmosphere, so concentrations determined by noble-gas monitoring are very useful.37

Seismic detectors: the United States has set up a worldwide network of seismic detectors, like those used to measure earthquakes, that can gauge the explosive force of large underground nuclear tests. Research programs funded by the Department of Defense have improved monitoring methods for detecting and locating seismic events, for discriminating the seismic signals of explosions from those of earthquakes, and for estimating explosive yield based on seismic magnitude determinations. A 1-kt nuclear explosion creates a seismic signal of magnitude 4.0. There are about 7,500 seismic events worldwide each year with magnitudes greater than 4.0. At this magnitude, all such events in continental regions could be detected and identified with current or planned networks. If, however, a country
were able to decouple successfully a 1 kt explosion in a large underground cavity, the muffled seismic signal generated by the explosion might be equivalent to that of a 0.015 kt fully coupled shot and have a seismic magnitude of 2.5. Although a detection threshold of magnitude 2.5 could be achieved, there are over 100,000 events worldwide each year with magnitudes greater than 2.5. Even if event discrimination were 99% successful, many events would still not be identified by seismic means alone. Furthermore, at this level, one must distinguish possible nuclear tests not only from earthquakes but also from chemical explosions used for legitimate industrial purposes.38
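The yields and magnitudes quoted above imply a roughly log-linear relation between explosive yield and seismic magnitude. The sketch below simply fits such a relation to the two points given in the text (a fully coupled 1 kt shot at magnitude 4.0; a decoupled 1 kt shot radiating like 0.015 kt at magnitude 2.5); the log-linear form is an illustrative assumption, not the seismological model actually used by monitoring networks:

```python
import math

# Two (apparent yield, magnitude) pairs quoted in the text:
#   a fully coupled 1 kt explosion          -> seismic magnitude 4.0
#   a decoupled 1 kt shot looking like 0.015 kt -> seismic magnitude 2.5
y1, m1 = 1.0, 4.0      # kt, magnitude
y2, m2 = 0.015, 2.5

# Assume m = a + b * log10(yield) and solve for a, b from the two points.
b = (m1 - m2) / (math.log10(y1) - math.log10(y2))
a = m1 - b * math.log10(y1)
print(f"implied relation: m = {a:.2f} + {b:.2f} * log10(yield in kt)")

# Muffling factor implied by the text: a 1 kt shot that radiates like
# 0.015 kt has been attenuated by a factor of about 67.
print(f"implied decoupling factor: {y1 / y2:.0f}x")

# Magnitude a decoupled 5 kt test would produce under these assumptions.
apparent_yield = 5.0 / (y1 / y2)
print(f"decoupled 5 kt test ~ magnitude {a + b * math.log10(apparent_yield):.1f}")
```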
CONCLUSION

The proliferation of high-performance computers, made possible by the drastic liberalization of supercomputer export restraints unilaterally undertaken by the United States in 1995, as well as by illegal shipments by manufacturers, will serve to undercut the intent and spirit of the Comprehensive Test Ban Treaty. Providing access to advanced modeling and simulation platforms will facilitate the migration of nuclear testing to the world of virtual reality and draw down a curtain of opacity on nuclear weapons development activity where monitoring, verification, and inspections do not apply. Many may argue that high-performance supercomputers are not a complete substitute for physical nuclear testing. Indeed, they are not. However, they will provide the analytical platform for operators of clandestine programs to acquire a much higher level of confidence in their design, assembly, and detonation processes. This higher level of confidence is absolutely critical to the deployment decisions of even the most radical forces. The growing opacity of weapons development activities between the superpowers marks a historic shift. Prior to the CTBT and the rise of the NIF and ASCI, the trend was clearly in the direction of transparency. Transparency was sought under the terms of the 1974 Threshold Test Ban Treaty, which provided for physical presence and real-time down-hole monitoring by officials from the two countries. In fact, the 1987 U.S. installation of CORRTEX (Continuous Reflectometry for Radius Versus Time Experiments) equipment at Semipalatinsk represented the zenith of transparency. Unfortunately, permitting nuclear weapons design and testing organizations to acquire high-performance computers capable of simulating nuclear explosion scenarios will mark the end of transparency. Figure 8.5 depicts the trend back toward opacity.

One of the lessons learned from the destruction of Saddam Hussein's nuclear weapons program was that a proliferant may be quite willing to settle for hydrodynamic testing of its prototype nuclear weapons as a sufficient basis for an uneasy certification for inclusion in its arsenal.

The Iraqis were designing exclusively implosion-type nuclear weapons. Their apparent exclusive focus on U235 as a fuel is, therefore, puzzling because plutonium is the preferred fuel for an implosion weapon [as] … the mass of high explosives required to initiate the nuclear detonation can be far smaller. On the other hand, given enough U235 it is virtually impossible to design a nuclear device which will not detonate with a significant nuclear yield.39 The Iraqi nuclear weapon design, which appeared to consist of a solid sphere of uranium, incorporated sufficient HEU to be very nearly one full critical mass in its normal state. The more nearly critical the mass in the pit, or core, the more likely the weapon will explode with a significant nuclear yield, even if the design of the explosive set is relatively unsophisticated. Furthermore, the majority of the weight involved in an early-design implosion-type nuclear weapon is consumed by the large quantity of high explosives needed to compress the metal of the pit; the more closely the pit approaches criticality, the less explosive is needed to compress the pit to supercritical densities and trigger the nuclear detonation, and thus the lighter, smaller, and more deliverable the weapon will be.40
Given the problem of limited access to fissile materials facing most potential proliferants and the potential for a preemptive strike by a wary neighbor, as in the case of the 1981 Israeli
destruction of the Iraqi Osirak reactor, physical testing along the lines of the superpower model cannot be readily engaged in. U.S. actions to promote the availability of high-performance supercomputers will likely contribute to the proliferation problem by facilitating access to modeling and simulation that will give clandestine bomb makers greater confidence in the functionality of their designs. This increased level of confidence may be all that a belligerent requires to make the decision to deploy a weapon. Under such constraints, sophisticated modeling and simulation will enable clandestine programs to advance closer to the design and development of true thermonuclear weapons. It is worth noting, in this context, that the vintage-1965 Swedish designs were very sophisticated and that at least one appeared to have been designed for use as an artillery shell. The Swedish developers, according to journalist Christer Larsson, who broke the story, were fully confident in the performance of their weapons even with no test program planned.41
From a historical perspective, it is interesting to note that the concept of a comprehensive test ban was repeatedly forwarded by the Russians throughout the 1980s and consistently rejected by
the United States. In the 1990s, a strange juxtaposition occurred, with the United States advocating a CTBT and the Russians ever more reluctant to go along. This shift parallels the explosion in high-speed computing potential emanating from the United States and the relatively stagnant progress of Russian indigenous capabilities. There may be much truth in the MINATOM official's statement quoted earlier: "The United States has made much better provisions than Russia for giving up nuclear testing. Supercomputers used for virtual-reality modeling of the processes of nuclear explosions have played a decisive role in that." If the Russian claim that the United States reneged on a promise of supercomputer technology in exchange for accession to the CTBT is accurate, then the very value of this treaty must be questioned. If, as a price for Russia's signature, the Clinton administration was willing to provide the signatories the means of circumventing both its spirit and its explicit goals, then the treaty should be regarded as little more than a sham to be rejected by the U.S. Senate. If high-performance computers were made available to the Russian nuclear weapons design bureaus, the historical database accumulated from their previous nuclear tests would be the most significant factor in maintaining their stockpiles. In the absence of physical testing, they would be able to simulate a wide range of nuclear weapons design alternatives, including a variety of unboosted and boosted primaries, secondaries, and nuclear directed-energy designs.42 In addition, the modeling and simulation efforts would help them to maintain a knowledgeable scientific cadre and to continue to verify the validity of calculational methods and databases. Under a test ban, only computer calculations will be able to approximate the operation of an entire nuclear weapon. Other states would also recognize the value of advanced simulation research in helping to develop or maintain nuclear weapon programs. In addition, high-performance computers may make it possible for the microphysics regimes of directed-energy nuclear weapon concepts to be investigated as well.43
There is increasing speculation that the Clinton administration's furious push to decontrol supercomputers, widely seen as a payoff for generous campaign support and contributions,44 was also intended to underwrite CTBT treaty signatures by providing an avenue for weapons testing, stockpile stewardship, and ongoing weapons development without the need for the physical initiation of a nuclear chain reaction. Few were happy when the United States helped the United Kingdom become a nuclear power. Even fewer were pleased when the United States helped the French develop an independent nuclear capability. Assisting the Russians in maintaining and further developing their nuclear arsenal is outrageous. Unfortunately, U.S. nuclear proliferation activities do not end there. If the persistent rumors are true that the United States is even considering providing aid to China to sustain its nuclear weapons modernization program in a CTBT environment, then alarm bells should be sounding on Capitol Hill about the unintended consequences of reckless disarmament. Will the synergistic effect of the CTBT and the decontrol of supercomputers make the world a safer place or a more dangerous place? The predictable outcome of the events described argues that the uncertainty in our ability to anticipate the nuclear intentions of potential adversaries will increase as the result of an increasingly opaque window into their programs. As to whether this will translate into a quantifiable increase in the risk of nuclear war or terrorism, intuitively the answer appears to be yes. U.S. willingness to trade supercomputer technology for treaty signatures and its own rush toward virtual testing make a farce of pretensions to high moral ground in criticizing others for rejecting the CTBT. "Pakistan or India … could be forgiven for suspecting that the five major nuclear powers, which asserted for years that testing was critical to maintaining deterrence, have now advanced beyond the need for nuclear tests. All the more reason, perhaps, for them to oppose the treaty."45
Amid the numerous statements by Russian officials of a secret deal to provide U.S. supercomputer technology as an inducement to sign the CTBT is the noticeable absence of official denials from the U.S. side. This may be one of those times when silence speaks louder than words.
NOTES

1. White House Rejects Pending Sale of U.S. Supercomputers to Russia, Journal of Commerce, 1A, 25 November 1996.
2. From Nuclear War to the War of the Markets, El Pais, 18, 7 November 1995.
3. Gary Milhollin, Exporting an Arms Race, New York Times, A19, 20 February 1996; Weekend All Things Considered, National Public Radio, 1 December 1996; John J. Fialka, U.S. Investigates Silicon Graphics' Sale of Computers to Russian Weapons Lab, Wall Street Journal, 18 February 1997; David E. Sanger, U.S. Company Says it Erred in Sales to Russian Arms Lab, New York Times, 19 February 1997; Supercomputer Deal May Violate Export Rules, Washington Post, 19 February 1997.
4. Ministry's 'Astonishment' at Furor Over Supercomputers, Izvestiya, 4 March 1997; Researchers to Buy Supercomputers to Study Nuclear Blasts, ITAR-TASS, 13 January 1997.
5. Supercomputers for a Former Superpower, Izvestiya, 1–2, 22 February 1997.
6. Ibid.
7. John J. Fialka, Clinton Weighs Russian Bid to Buy 3 Supercomputers, Wall Street Journal, A2, 11 October 1996; Gary Milhollin, U.S. Says 'No' to Supercomputers for Russia's Nuclear Weapon Labs, Risk Report 2 (November–December 1996); U.S. General Accounting Office, Nuclear Weapons: Russia's Request for the Export of U.S. Computers for Stockpile Maintenance, GAO/T-NSIAD-96-245, 30 September 1996.
8. White House Rejects Pending Sale of U.S. Supercomputers to Russia, Journal of Commerce, 1A, 25 November 1996.
9. MTOP figure provided by the Wisconsin Project on Nuclear Arms Control.
10. Journal of Commerce, 1A, 25 November 1996.
11. Ibid.; Moscow Times, 5 December 1996.
12. Expurgated copy of letter provided by the Wisconsin Project on Nuclear Arms Control.
13. Minister: Computer Block Threatens Arsenal, Moscow Times, 5 December 1996.
14. Researchers to Buy Supercomputer to Study Nuclear Blasts, ITAR-TASS, 13 January 1997.
15. Forecasting a Nuclear Explosion, Komsomolskaya Pravda, 3, 10 December 1996.
16. ITAR-TASS 26 February 1997 press release, Information Department, Ministry of Atomic Energy of Russia, presented by G. A. Kaurov, department head, 24 February 1997.
17. Computer Furor Contradicts Albright Signals, Izvestiya, 1, 22 February 1997.
18. Supercomputers for Arzamas and Chelyabinsk, Izvestiya, 3, 4 March 1997.
19. White House Still Appears Confused Over Supercomputer Sales to Russia; Possible Promise Tied to Test Ban Treaty, Journal of Commerce, 3A, 6 December 1996.
20. Anything Goes? Virtual Nuclear Reality, Krasnaya Zvezda, 5, 25 January 1997.
21. U.S. Department of Energy, The Need for Supercomputers in Nuclear Weapons Design, 5, 1986.
22. Ibid., 14.
23. U.S. Department of Energy, Office of Arms Control and Nonproliferation, The National Ignition Facility and the Issue of Nonproliferation, 1996, http://198.124.130.244/news/docs/nif/front.htm
24. Michael Veiluva, John Burroughs, Jacqueline Cabasso, and Andrew Lichterman, Laboratory Testing in a Test Ban/Nonproliferation Regime, Western States Legal Foundation, April 1995, http://www.chemistry.ucsc.edu/anderso/UC_CORP/testban.html
25. Ibid.
26. Ibid.
27. Ibid.; A. C. Gascheev, I. V. Galachov, V. G. Bezuglov, and V. M. A. Murugov, Charge Control System of the Energy Capacitor Storage of Laser Device Iskra-5, http://adwww.fnal.gov/www/icalepcs/abstracts/Abstracts_I/Mozin.html [As written, though apparently not accessible].
28. Veiluva et al., Laboratory Testing in a Test Ban.
29. U.S. Department of Energy, The Need for Supercomputers in Nuclear Weapons Design, 5.
30. Prototype International Data Center, Contributing to Societal Needs, http://earth.agu.org/revgeophys/va.4.html [As written, though apparently not accessible].
31. Means to an End, International Defense Review, 24, 413, 1 May 1981.
32. Jim Wilson, Finding Hidden Nukes, Popular Mechanics, 48, May 1997.
33. William B. Scott, Admission of 1979 Nuclear Test Finally Validates Vela Data, Aviation Week & Space Technology, 147, 33, 21 July 1997.
34. Ibid.
35. Facing Nuclear Reality, Science, 455, 23 October 1987.
36. Wilson, Finding Hidden Nukes, 50.
37. Prototype International Data Center, Report of the Radionuclide Expert Group, www.cdidc.org:65120/librarybox/ExpertGroup/8dec95radio.html [As written, though apparently not accessible].
38. Prototype International Data Center, Contributing to Societal Needs, http://earth.agu.org/revgeophys/va.4.html [As written, though apparently not accessible].
39. Peter D. Zimmerman, Iraq's Nuclear Achievements: Components, Sources, and Stature, U.S. Congressional Research Service Report #93-323F, 18 February 1993.
40. Ibid.
41. Ibid.
42. U.S. Department of Energy, The National Ignition Facility and the Issue of Nonproliferation.
43. Ibid.
44. Michael Waller, vice president of the American Foreign Policy Council, Testimony before the House National Security Committee, Subcommittee on Military Research and Development, 13 March 1997.
45. W. Wayt Gibbs, Computer Bombs: Scientists Debate U.S. Plans for 'Virtual Testing' of Nuclear Weapons, Scientific American, 16, March 1997.
GEOGRAPHY AND INTERNATIONAL INEQUALITIES: THE IMPACT OF NEW TECHNOLOGIES*

*Anthony J. Venables. Published September 2001 by the Centre for Economic Performance, London School of Economics and Political Science, Houghton Street, London WC2A 2AE, U.K. This paper was produced as part of the Centre's Globalisation Programme. The Centre for Economic Performance is financed by the Economic and Social Research Council. This paper was prepared for the World Bank Annual Conference on Development Economics held in Washington in May 2001. Anthony J. Venables is the Research Director at the Centre for Economic Performance, LSE, Professor of International Economics at the London School of Economics, and Programme Director at the Centre for Economic Policy Research, London.

ABSTRACT

Some writers have predicted that new technologies mean the "death of distance," allowing suitably skilled economies to converge with high income countries. This paper evaluates this claim. It
argues that geography matters for international income inequalities, and that new technologies will change, but not abolish, this dependence. Some activities may become more entrenched in high income countries than they are at present. Others—where information can be readily codified and digitized—will relocate, but typically only to a subset of lower income countries. These countries will benefit, but other countries will continue to experience the costs of remoteness.
INTRODUCTION

New communications and information technologies (ICT) offer many benefits to developing countries. Costs of establishing communications networks have been slashed, and with that comes the prospect of better provision of education, health care, and a host of other services. Some writers go further, arguing that ICT offers the "death of distance." In the words of Frances Cairncross (2001, p. 16):

To allow communications to work their magic, poor countries will need sound regulations, open markets, and, above all, widely available education. Where these are available, countries with good communications will be indistinguishable. They will all have access to services of world class quality. They will be able to join a world club of traders, electronically linked, and to operate as though geography has no meaning. This equality of access will be one of the great prizes of the death of distance.
The objective of this paper is to evaluate this claim. At present, we shall argue, geography matters a great deal for economic interaction and for the spatial distribution of income. How will new technologies change this, and what will they do for the location of economic activity and for international inequalities? What are the prospects that ICT will lead to the death of distance? The conceptual framework for addressing these questions is based on the profitability of production in different countries, knowing that a change that increases profitability will tend to attract firms and bid up wage rates. The profitability of a location is determined by many forces: labour costs and efficiencies, the social infrastructure of the economy, and also geography—location relative to sources of supply and relative to markets. The fact that firms tend to locate close to their markets creates a force for international inequality. Established economic centres offer large markets, attracting firms and hence supporting high wages—which in turn support the large market size. Pulling in the opposite direction are international wage differentials (or primary factor costs more generally). Obviously, the lower are primary factor prices, other things being equal, the more profitable is production in the country, a force for international equality. The trade-off between these forces provides a simple relationship between the costs of distance and international inequalities. We will show in Section 2 that there are international wage gradients, with wages falling as a function of remoteness from markets. In so far as new technologies reduce the costs of distance, they might be expected to flatten these gradients and reduce international inequalities. If trade were to become perfectly free—the limiting case of textbook international economics—distance would be dead, goods markets perfectly integrated, and factor price equalization would hold. Perfectly free international trade means that similar factors get paid the same price regardless of their location, although per capita income levels may differ as individuals own different amounts of human and physical capital. This view of the effects of ICT is misleading, for at least two reasons. First, new technologies will have a mixed and complex effect on the costs of distance. Some activities can be digitized and supplied from a distance, but most cannot. Second, geography determines firms' profitability not only via ease of access to markets, but also via access to a cluster of related activities. The propensity of economic activity to cluster is widely documented (for example, Porter 1990) and attributed to a range of different forces. One is the development of dense local networks of suppliers
of specialised goods and services for industry. A second is the development of local labour markets with specialist skills, probably arising because of the training activities of other firms in the industry. A third is the benefit of being close to research centres and to the knowledge spillovers that firms derive from proximity to other firms: "the mysteries of the trade become no mystery; but are, as it were, in the air" (Marshall 1890). Finally, it may simply be easier to manage and monitor activities in an established centre where firms have local knowledge and can benchmark their performance on that of other firms in the same location. How does new technology change these clustering forces? Some are likely to be weakened by ICT. For example, proximity may come to matter less for the flow of knowledge between firms and for the supply of business services (at least, to the extent that the relevant knowledge can be codified and digitized). But other clustering forces—such as those arising from labour market skills—are likely to be unaffected. The overall effects of ICT on location and international inequalities must therefore take into account the fact that distance may die for some of the functions involved in some industries, while remaining important for many other functions and activities. Thus, some activities will no longer need to be close to consumers and will go in search of lower cost locations, but low costs depend on wages, social infrastructure, and access to the benefits of a cluster of related activities. Consequently, some activities may tend to move to lower wage countries, while others become more deeply entrenched in high wage economies. These effects are illustrated by the experience of previous communications revolutions. The transport revolutions of the nineteenth century did not lead to the dispersion of economic activity, but instead to its concentration—in relatively few countries, and within those countries in large and often highly specialized cities. Lower transport costs reduced the value of being close to consumers, who could instead be supplied from cities in which production exploited the advantages of increasing returns to scale and agglomeration externalities. So too with new technologies: we might expect to see changes in the economic geography of the world economy, but not necessarily changes towards the "integrated equilibrium" view of the death of distance. The remainder of the paper develops the argument in three main stages. First, we show that geography matters greatly for many economic interactions; these interactions—be they trade, investment, or knowledge transfers—are overwhelmingly local, falling off sharply with distance. We also argue that the costs that cause interactions to fall off across space have major implications for the world income distribution. Using measures of distance based on the intensity of economic interaction between countries, we show that distance can account for a large part of international inequalities. Poor countries are poor, in part, because distance inhibits their access to the markets and suppliers of established economic centres. We then turn to the effects of information and communications technologies (ICT) on the costs of international transactions. To do this requires that we look more deeply at why distance is costly, and we divide these costs into four main elements: search costs (the costs of identifying a potential trading partner); direct shipping costs; control and management costs; and, finally, the cost of time involved in shipping to and communicating with distant locations. ICT reduces some of these costs for some activities, but we argue that its effects are ambiguous, and can in some cases increase the value of proximity rather than reduce it. Finally, we turn to the likely effects of these cost changes on the location of activity and hence on wages and income levels. Will existing centres of economic activity deconcentrate, with activities relocating to lower wage economies? This will occur for some activities, but for others the concentration in central regions may well be reinforced. Furthermore, activities that do relocate will tend to cluster in relatively few new locations. Thus, new technologies may change the pattern of inequalities in the world economy, but not necessarily reduce them. In this way it may be like previous rounds of infrastructure development, such as canals, railways, and road networks, that permitted deagglomeration of some industrial activities but probably reinforced rather than diminished centralising tendencies (Leamer and Storper 2000).
DOES DISTANCE MATTER?

Distance and Economic Interactions

Almost all economic interactions fall off very rapidly with distance. We look at some of the reasons for this later, but first simply outline the facts. The standard framework for quantifying the effect of distance on economic interactions is the gravity model, which relates interactions between a pair of countries to their economic mass and to a measure of the cost of the interaction between them. This framework has been applied in a number of different contexts, most of all to trade flows. Thus, if y_ij is the value of exports from country i to country j, then the gravity relationship takes the form

\[ y_{ij} = s_i m_j t_{ij}^{\theta} \tag{8.1} \]

where s_i denotes exporter (supplier) country characteristics, m_j denotes importer country characteristics, and t_ij is a set of "between-country" factors measuring the costs of trade between the countries. This between-country term is typically proxied by distance, and perhaps also by further between-country characteristics such as sharing a common border, a common language, history, or a treaty relationship. Exporter and importer country characteristics can be modelled in detail, including income, area, population, and geographical features such as being landlocked. However, if the researcher's main interest is the between-country term t_ij, then s_i and m_j can simply take the form of dummy variables whose values are estimated for each country. Extensive data permit the gravity trade model to be estimated on the bilateral trade flows of one hundred or more countries. Studies find that the elasticity of trade flows with respect to distance is around −0.9 to −1.5. It is important to realise quite how steep the decline in trade volumes implied by this relationship is. Table 8.2 expresses trade volumes at different distances relative to their value at 1,000 km; if θ = −1.25, then by 4,000 km volumes are down by 82% and by 8,000 km down by 93%. Similar methodologies have been used to study other sorts of economic interactions. Portes and Rey (1999) study cross-border equity transactions (using data for 14 countries accounting for around 87% of global equity market capitalization, 1989–1996). Their main measure of country mass is stock market capitalization, and their baseline specification gives an elasticity of transactions with respect to distance of −0.85. This indicates again how—controlling for the characteristics of the countries—distance matters. Other authors have studied foreign direct investment flows. Data limitations mean that the set of countries is once again quite small, and the estimated gravity coefficient is smaller, although still highly significant; for example, Di Mauro (2000) finds an elasticity of FDI flows with respect to distance of −0.42. The effect of distance on technology flows has been studied by Keller (2001), who looks at the dependence of total factor productivity (TFP) on R&D stocks (i.e., cumulated R&D expenditures) for 12 industries in the G-7 countries, 1971–1995.
TABLE 8.2 Economic Interactions and Distance

Distance   Trade (θ = −1.25)   Equity flows (θ = −0.85)   FDI (θ = −0.42)   Technology
1,000 km   1                   1                          1                 1
2,000 km   0.42                0.55                       0.75              0.65
4,000 km   0.18                0.31                       0.56              0.28
8,000 km   0.07                0.17                       0.42              0.05

Note: Flows relative to their magnitude at 1,000 km.
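The first three columns of Table 8.2 follow mechanically from the constant-elasticity form of Equation 8.1: holding country characteristics fixed, the flow at distance d relative to the flow at 1,000 km is simply (d/1,000)^θ. A minimal sketch reproducing those columns (the technology column reflects Keller's estimated attenuation rather than a single elasticity, so it is omitted):

```python
# Relative interaction volumes implied by Equation 8.1: flow ~ distance**theta,
# expressed relative to the flow at 1,000 km, as in Table 8.2.
thetas = {"Trade": -1.25, "Equity flows": -0.85, "FDI": -0.42}
distances_km = [1000, 2000, 4000, 8000]

print("Distance".ljust(10) + "".join(name.rjust(16) for name in thetas))
for d in distances_km:
    row = f"{d:,} km".ljust(10)
    for theta in thetas.values():
        row += f"{(d / 1000) ** theta:16.2f}"  # (d / 1,000 km) ** theta
    print(row)
```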
The R&D stocks include both the own-country stock and foreign-country stocks weighted by distance.1 Both own and foreign country stocks are significant determinants of each country's TFP, and so too is the distance effect, with R&D stocks in distant economies having much weaker effects on TFP than R&D stocks in closer economies. The final column in Table 8.2 illustrates his results by computing the spillover effects of R&D in more distant economies relative to an economy 1,000 km away; the attenuation due to distance is once again dramatic.2

Distance and Real Income

The previous sub-section made the point that distance matters greatly for economic interactions. How does this feed into the distribution of income across countries? A number of mechanisms might be at work, including the effects of investment flows and technology transfers. Here, to illustrate effects, we concentrate just on the way in which trade flows—and the implicit trade costs demonstrated by the gravity model—can generate international income gradients. The effect of distance on factor prices is easily seen through a simple example. Suppose that country 1 represents the high income countries, from which country 2, a developing country, imports intermediate goods and to which it exports manufactures. The cost of producing manufactures in country 1 is given by c(w1, r1, q), where w1 and r1 are the unit costs of labour and capital and q is the price of intermediate goods.3 The developing country has to import the intermediate good, and imports are subject to "trade costs" at proportionate rate t.4 These "trade costs" consist of a number of different elements that we discuss in detail in the following section. Trade costs at rate t mean that the price of intermediates in country 2 is tq, so country 2 unit costs are c(w2, r2, tq), given its factor prices w2 and r2.5 It sells in the developed country market, but faces the trade cost factor t in shipping to this market. In order to compete with production in country 1, the following equation must therefore hold:

\[ c(w_1, r_1, q) = t\,c(w_2, r_2, tq) \tag{8.2} \]
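To make Equation 8.2 operational one needs a functional form for costs. The sketch below assumes a Cobb-Douglas cost function c(w, r, q) = w^α r^β q^μ with α + β + μ = 1; this form is our assumption for illustration (the text does not spell out the form behind Figure 8.7, discussed next). With r2 = r1 and country 1 prices normalized to one, Equation 8.2 then solves in closed form for the relative wage, w2/w1 = t^−(1+μ)/α:

```python
# Relative wage implied by Equation 8.2 under an assumed Cobb-Douglas cost
# function c(w, r, q) = w**alpha * r**beta * q**mu (alpha + beta + mu = 1).
# With r2 = r1 and country 1 prices normalized to 1, Equation 8.2 reduces to
#   1 = t**(1 + mu) * w2**alpha,   so   w2 = t**(-(1 + mu) / alpha).
def relative_wage(t: float, mu: float) -> float:
    """Country 2 wage relative to country 1 at trade cost factor t, with
    intermediates a share mu of costs and labour two-thirds of value added
    (so the labour cost share is alpha = (2/3) * (1 - mu))."""
    alpha = (2.0 / 3.0) * (1.0 - mu)
    return t ** (-(1.0 + mu) / alpha)

# The three cases of Figure 8.7: no intermediates, 25% and 50% cost shares.
for mu in (0.0, 0.25, 0.50):
    row = ", ".join(f"t={t:.1f}: {relative_wage(t, mu):.2f}"
                    for t in (1.1, 1.2, 1.3, 1.4, 1.5))
    print(f"intermediate share {mu:.0%} -> {row}")
```

Because α = (2/3)(1 − μ), a higher intermediate share both raises the exponent's numerator and shrinks its denominator, which is why the wage gradient steepens so sharply across the three curves in Figure 8.7.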
Figure 8.7 illustrates country 2 wages (expressed as a proportion of country 1 wages) as a function of trade costs, computed from this relationship under the assumption that r2 = r1. It can be thought of as illustrating the wage gradient for different countries at increasing distances (increasing trade costs) from the centre. In all cases illustrated, two-thirds of value added is labour and one-third capital. In the upper line there are no intermediate goods, in the middle line intermediates account for 25% of country 1 costs, and in the bottom line for 50% of country 1 costs.

FIGURE 8.7 Trade costs and wages. Wages relative to central wages (vertical axis, 0.0 to 1.0) plotted against the trade cost factor t (horizontal axis, 1.0 to 1.5), for three cases: no intermediates; intermediates 25% of country 1 costs; and intermediates 50% of country 1 costs.

The point to note from the figure is how rapidly wages get squeezed at more remote locations with higher trade costs. Thus, if trade costs are 30% of the value of output (t = 1.3) and intermediate inputs are 50% of costs (the bottom curve), then wages drop to around one tenth of their level in the centre. Trade costs of 30% are not that high (the median cif/fob ratio for all countries reporting bilateral trade is 1.28). Furthermore, if the price of capital were higher in more remote locations (r2 > r1), then wages would be depressed still further. Figure 8.7 suggests the theoretical importance of distance for international inequalities. To establish the importance of this relationship in fact, we must generalize it to many countries and to the full set of trade relationships between them. Instead of simple measures of transport costs, we define the "market-access" of country i,

\[ MA_i = \sum_j m_j t_{ij}^{\theta} \]
Recall that m_j measures the economic mass of an importer country, and \(t_{ij}^{\theta}\) the rate at which its effect falls off with distance. MA_i is therefore a measure of country i's access to demand from all countries. It provides a generalization of the old idea of "market potential" (Harris 1954), which takes GDP as economic mass and the reciprocal of distance as the measure of spatial decay. Analogously, we define the "supplier-access" of country i as

\[ SA_i = \sum_j s_j t_{ij}^{\theta} \]
where s_j represents economic characteristics of exporting countries, such as manufacturing output, and we can use SA_i to measure country i's access to suppliers of intermediate goods. Thus, a high value of SA_i means that country i is close to exporting countries and so has relatively cheap access to intermediate goods. Using these concepts, we can now express the rate of return to production in country i as a function of the wage in the country and its market and supplier access:

\[ r_i = R\left( w_i,\; \sum_j m_j t_{ij}^{\theta},\; \sum_j s_j t_{ij}^{\theta} \right) = R(w_i, MA_i, SA_i) \tag{8.3} \]

Suppose that economic activity locates in a manner that equalizes the rate of return across countries. Equation 8.3 then forms a set of equations linking each country's wage to its market and supplier access, and so generates an estimating equation of the form

\[ w_i = \alpha + \phi_1 MA_i + \phi_2 SA_i + u_i \tag{8.4} \]

The final term, u_i, is an error term to which we assign, for the moment, all other influences on wages. Redding and Venables (2000) estimate this relationship using a cross-section of data on 101 developed and developing countries.6 A two-stage procedure is followed. At the first stage, a gravity trade model (Equation 8.1) is estimated to give estimates of m_j, s_i, and θ, from which measures of market-access and supplier-access can be constructed for each country. The full specification of market-access and supplier-access requires that each country's own market (and supply) is included, as well as the effect of all foreign markets (suppliers). In this paper we discuss only the foreign market (foreign supplier) effects, so work with foreign market-access and foreign supplier-access, defined as

\[ FMA_i = \sum_{j \neq i} m_j t_{ij}^{\theta} \]
FIGURE 8.8 GDP per capita and FMA. Scatter plot of ln GDP per capita (U.S. dollars, vertical axis, range 6.1569 to 10.2581) against ln FMA (horizontal axis, range 12.4915 to 17.9726), with each country marked by its three-letter code.
and

\[ FSA_i = \sum_{j \neq i} s_j t_{ij}^{\theta} \]
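Constructing these access measures is mechanical once the gravity estimates are in hand: each is a distance-weighted sum over partner countries, with trade costs proxied by distance as in the text. A minimal sketch with invented masses and distances (all numbers below are hypothetical, for illustration only):

```python
import numpy as np

# Foreign market-access and supplier-access as distance-weighted sums:
#   FMA_i = sum_{j != i} m_j * d_ij**theta   (FSA_i likewise with s_j).
masses_m = np.array([10.0, 8.0, 3.0, 1.0])   # importer masses m_j (hypothetical)
masses_s = np.array([9.0, 7.0, 2.5, 0.8])    # supplier masses s_j (hypothetical)
dist_km = np.array([
    [0.0,    1000.0, 4000.0, 8000.0],
    [1000.0, 0.0,    3000.0, 7000.0],
    [4000.0, 3000.0, 0.0,    5000.0],
    [8000.0, 7000.0, 5000.0, 0.0],
])
theta = -1.25   # distance elasticity from the trade gravity model

decay = np.zeros_like(dist_km)
off_diagonal = dist_km > 0            # excludes the own-country (j = i) terms
decay[off_diagonal] = dist_km[off_diagonal] ** theta

fma = decay @ masses_m                # foreign market-access of each country
fsa = decay @ masses_s                # foreign supplier-access of each country
for i, (ma, sa) in enumerate(zip(fma, fsa)):
    print(f"country {i}: FMA = {ma:.2e}, FSA = {sa:.2e}")
```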
Redding and Venables deal with the full case. At the second stage, Equation 8.4 is econometrically estimated. Before looking at regression results it is instructive to look at the scatter plot given in Figure 8.8. The horizontal axis is the log of foreign market access (FMA), and the vertical axis gives the log of GDP per capita, used as a proxy for manufacturing wages.7 The figure presents evidence of the importance of market-access in determining wages—the empirical analogue of Figure 8.7. Clearly, there is a strong positive association between FMA and per capita income. There are outliers, such as Australia, New Zealand, Japan, the U.S.A., Singapore, and Hong Kong. For two of these, this is explicable in terms of their own size: the sheer population mass of the U.S.A. and Japan means that domestic market and supplier access are extremely important relative to foreign access. Looking at the rest of the sample, the relationship holds within regions as well as between them. Thus, there is a European wage gradient lying from the core countries down through Spain and Portugal (ESP and PRT) to Greece (GRC). There is an East European gradient, lying below the West European, indicating that these countries have lower per capita income than their location alone would justify. Similar gradients can be pulled out for other regions. The results of using these data to estimate Equation 8.4 are given in Table 8.3. Column (1) presents the results using foreign market-access alone. The estimated coefficient is positive and highly statistically significant, and the variable explains about 35% of the cross-country variation in income per capita. Column (2) uses foreign supplier-access alone, with similar effect. The theoretical specification says that we should include both market-access and supplier-access, and column (3) does this, although separately identifying the coefficients on these two variables is difficult given the high degree of correlation between them.
TABLE 8.3 GDP Per Capita, Market and Supplier Access

Dependent variable: ln(GDP per capita)

                                  (1)a       (2)a       (3)a       (5)a
Obs                               101        101        101        99
Year                              1996       1996       1996       1996
ln(FMAi)                          0.319                 0.182      0.277
                                                        (0.040)    (0.063)
ln(FSAi)                                     0.532      0.476
                                             (0.114)    (0.076)
ln(hydrocarbons per capita)                                        0.026
                                                                   (0.016)
Fraction land in geog. tropics                                     −0.139
                                                                   (0.253)
Prevalence of malaria                                              −1.496
                                                                   (0.268)
Socialist rule 1950–1995                                           −0.743
                                                                   (0.156)
External war 1960–1985                                             −0.344
                                                                   (0.170)
Estimation                        OLS        OLS        OLS        OLS
R2                                0.346      0.377      0.361      0.671
F                                 52.76      57.05      54.60      55.63
Prob > F                          0.000      0.000      0.000      0.000

Note: First-stage estimation of the trade equation uses Tobit (column (3) in Table 8.2).
a Bootstrapped standard errors in parentheses (200 replications).
However, the theory suggests a restriction across the two coefficients based on the relative shares of labour and intermediates in costs, and column (3) presents estimates based on the assumption that the intermediate share of costs is 50% higher than the labour share. Once again, results are highly significant, with the measures explaining 36% of the variation in the cross-country income distribution. Of course, we do not claim that geography is the only cause of cross-country variations in income, and the final column of Table 8.3 includes other variables, particularly those used by Sachs and his coauthors.8 Endowments of hydrocarbons per capita have a positive and significant effect, as would be expected, while the proportion of land in the tropics is negative although insignificant. Former socialist rule and involvement in external wars have negative and significant effects. Sachs has argued that malaria can have a pervasive productivity-reducing effect, and the variable measuring the presence of malaria (a dummy variable taking value one in countries where malaria is endemic) has a significant, negative, and quantitatively important effect. Together with the foreign market-access measure, these variables explain around two-thirds of the cross-country variation in per capita income. From the current perspective, the main point is that the foreign market-access measure remains highly significant, confirming that distance matters for per capita income, as suggested by the theory.
WHAT DETERMINES DISTANCE COSTS AND HOW ARE THEY CHANGING?
We argued above that geography is an important determinant of per capita income. Despite the presence of large cross-country wage differences it is not profitable for firms to relocate, moving away from markets and suppliers. We now look in more detail at the determinants of the costs of distance and at the effects of new technologies on these costs. This can best be addressed through the following thought experiment. A firm is considering where to source its supplies from, or where to locate its own production. How is the decision to outsource to a low wage economy deterred by distance, and how might ICT mitigate this deterrent effect?
We divide the distance effects into four main elements. First, making any sort of trade involves finding a trading partner, a process of search and matching which turns on the availability of information. Second, inputs and outputs have to be transported. We show how these costs depend on country and commodity characteristics and present some evidence on how they are changing; in "weightless" activities new technologies set these costs essentially at zero, but we argue that such activities amount to only a few percent of total expenditure. Third, the supply chain has to be managed; for outsourced supply this involves a process of information exchange and monitoring, and for own investment it involves management of the entire project. The final component of the costs of distance is time. New technologies often speed up aspects of the production and management process, but we argue that this might either increase or decrease the benefits of proximity and the costs of distance.

Searching and Matching

A major reason why transactions fall off with distance is that we simply know less about what potential trades can be made with people on the other side of the earth than we do about potential trades with our neighbours. Relatively little is known about the magnitude of these information barriers, although attempts to establish their existence have been made by a number of researchers. For example, Rauch and Trindade (1999) use a gravity trade model to show how ethnic Chinese networks seem to increase trade volumes. It seems likely that new technologies—the Internet in particular—significantly reduce search and matching costs. The Internet means that distance ceases to be important in advertising (by either suppliers or purchasers), and business-to-business exchanges facilitate search and matching across space. From my desktop a search engine will produce "about 10,300" matches for the search string garment+export+china+ltd, at least the first ten of which are trading houses or Chinese firms offering supply. The most heavily researched examples of searching and matching through the Internet have a national rather than international focus. For example, in the U.S. automobile market in 1999 more than 40% of buyers used the internet to seek out price and model information—although only 3% of sales were made on the internet.9 This example makes a point which many dotcom companies have discovered, and which is surely even more true in an international context: the internet is excellent for acquiring information, but information is a necessary but by no means sufficient condition for completing a trade.

Moving Inputs and Outputs

An international transaction requires that outputs and traded inputs be moved across space. This can be done by different modes—surface, air, or, for some activities, digitally. How large are these costs, and in what ways—and for how large a share of trade—do we expect new technologies to reduce them? Data on shipping costs indicate that there is a very wide dispersion of transport costs across commodities and across countries. Thus, for the United States in 1994, freight expenditure was only 3.8% of the value of imports, but the equivalent numbers for Brazil and Paraguay are 7.3 and 13.3% (Hummels 1999a, from customs data). These values incorporate the fact that most trade is with countries that are close, and in goods that have relatively low transport costs.
Looking at transport costs unweighted by trade volumes gives much higher numbers; thus, the median cif/fob ratio, across all country pairs for which data are available, is 1.28 (implying 28% transport and insurance costs). Looking across commodities, an unweighted average of freight rates is typically 2–3 times higher than the trade-weighted average rate. Estimates of the determinants of transport costs are given in Hummels (1999b) and Limao and Venables (2001). These studies typically find elasticities of transport costs with respect to distance of between 0.2 and 0.3. Limao and Venables find that sharing a common border substantially reduces transport costs, overland distance is around seven times more expensive than sea distance, and being landlocked increases transport costs by approximately 50%. Infrastructure quality (as measured by a composite index of transport and communications networks) is important; for example, while the median cif/fob ratio is 1.28, the predicted value of this ratio for a pair of countries with infrastructure quality at the 75th percentile rises to 1.40.
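The estimates just cited can be combined into a rough illustrative cif/fob calculator. The sketch below uses a distance elasticity of 0.25 (the midpoint of the reported 0.2 to 0.3 range) and the approximately 50% landlocked penalty; the constant is calibrated so that a 5,000 km coastal pair matches the median ratio of 1.28, which is our own normalization rather than anything estimated by Hummels or by Limao and Venables:

```python
# A rough cif/fob predictor built from the elasticities quoted above:
# transport cost shares rise with distance with elasticity ~0.25, and being
# landlocked raises them by ~50% (Limao and Venables 2001). The constant k
# is calibrated so a 5,000 km coastal pair hits the median cif/fob of 1.28;
# that calibration is illustrative only.
DIST_ELASTICITY = 0.25
LANDLOCKED_MARKUP = 1.5
k = 0.28 / 5000 ** DIST_ELASTICITY

def cif_fob_ratio(distance_km: float, landlocked: bool = False) -> float:
    """Predicted cif/fob ratio for a country pair (illustrative only)."""
    cost_share = k * distance_km ** DIST_ELASTICITY
    if landlocked:
        cost_share *= LANDLOCKED_MARKUP
    return 1.0 + cost_share

for d in (1000, 5000, 10000):
    print(f"{d:>6,} km: coastal {cif_fob_ratio(d):.2f}, "
          f"landlocked {cif_fob_ratio(d, landlocked=True):.2f}")
```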
reduces transport costs, overland distance is around seven times more expensive than sea distance, and being landlocked increases transport costs by approximately 50%. Infrastructure quality (as measured by a composite index of transport and communications networks) is also important; for example, while the median cif/fob ratio is 1.28, the predicted value of this ratio for a pair of countries with infrastructure quality at the 75th percentile rises to 1.40.

How are transport costs changing through time? Figure 8.9 documents the evolution of the costs of ocean shipping, air freight, and transmission of digitized information. There are three main points to notice. First, the costs of sea transport declined during the 1940s and 1950s, but since then there has been no trend decline, although there have been substantial fluctuations driven largely by oil prices. This seems superficially surprising, but less so when one sees that the variable reported is the shipping cost relative to a goods price index: there has been technical progress in shipping, but it has been no faster than the average in the rest of the economy. Second, the cost of air freight fell further and continued to fall for a longer period, but this too has essentially bottomed out from the 1980s onwards. The third series is a measure of the cost of transmitting digitized information. Evidently, this has experienced the most dramatic fall, and can now be regarded as being close to zero.

FIGURE 8.9 Transportation versus communication costs, 1940–1990. [Figure: cost indices for ocean freight, air freight, satellite charges, and transatlantic telephone calls, 1940–1990.] Source: From Baldwin, R. E. and Martin, P., Two Waves of Globalisation: Superficial Similarity and Fundamental Differences, NBER working paper no. 6904, 1999.

From the standpoint of investigating international inequalities the important question is: what share of world expenditure is now "weightless," able to be digitized and transmitted at close to zero cost? This question is very hard to answer, because it is typically particular economic functions that can be digitized, rather than the whole production sectors that are the basis for data collection. There are numerous examples of activities that have been digitized and relocated. Airline ticketing services and the back-room operations of banks are standard ones; call centres, transcription of medical notes, architectural drawings, and cartoons and computer graphics for the film industry are further possibilities.

One way to try to get a quantitative estimate is to look sectorally, in which case the numbers look rather small. Figures are available for U.S. household consumption of ICT-based products and services. By 1998, 50% of Americans already had a personal computer and 30% were regular Internet users. But total consumption of ICT-based products and services, including voice telephony, was only 2.4% of consumer expenditure, of which a very large part is ultimately devoted to upkeep of the network, a largely non-tradeable activity (Turner 2001). On the supply side, the U.S. Bureau of Labor Statistics foresees ICT industry employment growing from 3.7% of the U.S. total in 1998 to 4.9% in 2008, with the increase concentrated almost entirely in computer processing and software services (Turner 2001).
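As an aside on magnitudes, the transport-cost relationships reported at the start of this section can be collected into a short computational sketch. The log-linear form below is written in the spirit of the Hummels (1999b) and Limao and Venables (2001) estimates, but every coefficient is an illustrative stand-in calibrated only to the rough figures quoted above (a distance elasticity of 0.25, overland kilometres weighted seven times sea kilometres, a 50% landlocked penalty, and an infrastructure markup that moves the median cif/fob ratio of 1.28 towards 1.40); it is not the estimated equation from either study.

```python
def cif_fob_ratio(sea_km, land_km=0.0, landlocked=False,
                  common_border=False, infra_markup=0.0):
    """Stylized cif/fob ratio: 1 plus the ad valorem transport-cost share.

    A sketch in the spirit of Limao and Venables (2001); all coefficients
    are illustrative stand-ins, not published estimates.
    """
    ELASTICITY = 0.25                    # within the 0.2-0.3 range cited in the text
    distance = sea_km + 7.0 * land_km    # overland distance ~7 times as costly as sea
    k = 0.28 / 8000 ** ELASTICITY        # scale: a "typical" 8,000 km pair gives the median 28% share
    share = k * distance ** ELASTICITY   # transport and insurance as a share of goods value
    if landlocked:
        share *= 1.5                     # being landlocked raises costs by ~50%
    if common_border:
        share *= 0.8                     # a shared border substantially lowers costs (20% assumed here)
    share *= 1.0 + infra_markup          # poor infrastructure raises costs
    return 1.0 + share

print(cif_fob_ratio(8000))                      # ~1.28, the median cif/fob ratio
print(cif_fob_ratio(8000, landlocked=True))     # ~1.42
print(cif_fob_ratio(8000, infra_markup=0.43))   # ~1.40, cf. the infrastructure example above
```

The concave distance term is what the low estimated elasticities imply: doubling distance raises the cost share by a factor of only 2^0.25, about 19%.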
The OECD estimates that all software and computer-related services accounted for 2.7% of U.S. GDP in 1996, and half that in the other OECD countries studied. Software products and computer services combined accounted for just 0.8% of U.S. exports in 1996 (OECD 1998). Other sectors contain functions that are "IT enabled." In banking it is estimated that some 17–24% of the cost base of banks can be outsourced (Economist, May 5th 2001), a share that seems quite low for an activity that is fundamentally weightless.

Another way to get a feel for the magnitude of these activities is to look at the recent experience of the highly successful Indian software and IT-enabled services sectors. The total output of software and related services in 2000 was around $8 billion, with exports of $4 billion. IT-enabled services—call centres ("customer interaction centres"), medical transcriptions, finance and accounting services—had exports to the U.S.A. of $0.26 billion, predicted to grow to $4 billion by 2005 (Economist, May 5th 2001). These are activities of substantial size relative to total Indian exports of $45 billion in 2000, but they amount to less than 1% of total U.S. imports of around $950 billion.

Although it is difficult to quantify the share of the economy that is, or is likely to become, weightless, one fundamental point can be made. As activities are codified and digitized, not only can they be moved costlessly through space, but they are also typically subject to very large productivity increases and price reductions. Thus, the effect of ICT on, say, airline ticketing has been primarily to replace labor by computer equipment, and only secondarily to allow the remaining workers to be employed in India rather than the United States or Europe. (Technology that can capture voice or handwriting will make Indian medical transcription obsolete.) This suggests that even if more activities become weightless, the share of world expenditure and employment attributable to these activities will remain small—perhaps as little as a few percent of world GDP.

Monitoring and Management

Recent years have seen rapid growth of both outsourcing and foreign direct investment (FDI), with the associated development of production networks or production chains.10 FDI has grown faster than either income or trade. The growth of production networks has been studied by a number of researchers. One way to measure it is by looking at trade in components, and Yeats (1998) estimates that 30% of world trade in manufactures is trade in components rather than final products. Hummels, Ishii, and Yi (2001) chart trade flows that cross borders multiple times, as when a country imports a component and then re-exports it embodied in some downstream product. They find that, for 10 OECD countries, the share of imported value added in exports rose by one third between 1970 and 1990, reaching 21% of export value.

Both FDI and outsourcing involve, in somewhat different ways, a fragmentation of the structure of the firm, as production is split into geographically and/or organizationally distinct units. From the international perspective this fragmentation offers the benefit of being able to move particular stages of the production process to the lowest-cost locations—labour-intensive parts to low-wage economies, and so on. However, as well as involving potentially costly shipping of parts and components, it also creates formidable management challenges.
Product specifications and other information have to be transferred, and production schedules and quality standards have to be monitored. Do new technologies reduce the costs of doing this? To the extent that the pertinent information is "codifiable," the answer is likely to be yes. The use of ICT for business-to-business trade is well documented, although it is reported often to reduce the number of suppliers a firm uses, rather than to increase it.11 In mass production of standardized products, designs can be codified relatively easily; the production process is routine, daily or hourly production runs can be reported, and quality data can be monitored. Dell Computers offers the classic example of the use of new technologies to outsource to order, getting components from suppliers at short notice. However, it is instructive that Dell's business practices, while held up
as a model, have not been widely emulated (Economist, April 12th 2001). The model works because PCs are made almost entirely from standard parts, available from many sources; there is no need to order special components in advance, and consumer customization of PCs stays within very narrow limits—speed and memory, but not colour or trim. The product range and set of options are vastly less complex than those of a motor car.

In many activities, then, the pertinent information cannot be codified so easily. There are two sorts of reasons for this. One is the inherent complexity of the activity. For example, frequent design changes and a process of ongoing product design and improvement (involving both marketing and production engineering) may require a level of interaction that—to date—can only be achieved by face-to-face contact. The second reason is that contracts are incomplete, and people on either side of a contract (or in different positions within a single firm) have their own objectives. It is typically expensive or impossible to ensure that their incentives are compatible with meeting the objectives of the firm. This issue has been the subject of a large economics literature. Part of the literature has its origins in analysis of the boundaries of the firm (Coase 1937), asking which transactions are best done within the firm and which by the market. Following Williamson (1975, 1985), this is typically modelled as a trade-off between the efficiency gains of using specialist suppliers (or suppliers in locations with a comparative advantage or low labor costs) and the problems encountered in writing (enforceable) contracts with them. Another part of the literature looks at the problems of incentives in organizations, asking how employees can be induced to meet their firm's objectives.12 While new technologies may reduce the costs of monitoring, it seems unlikely that these problems of incomplete contracts are amenable to a technological fix.

What evidence is there? On the one hand, there has in recent years been a dramatic increase in the outsourcing of activities to specialist suppliers, suggesting that difficulties in writing contracts and monitoring performance have been reduced. On the other hand, a number of empirical studies point to the continuing importance, despite new technologies, of regular face-to-face contact. Thus, Gaspar and Glaeser (1998) argue that telephones are likely to be complements for, not substitutes for, face-to-face contact, as they increase the overall amount of business interaction. They suggest that, as a consequence, telephones have historically promoted the development of cities. The evidence on business travel suggests that as electronic communications have increased, so too has travel, again indicating the importance of face-to-face contact. Leamer and Storper (2000) draw the distinction between "conversational" transactions (which can be done at a distance by ICT) and "handshake" transactions that require face-to-face contact. New technologies allow dispersion of activities that require only "conversational" transactions, but they might also increase the complexity of production and design processes, and hence increase the proportion of activities that require "handshake" communication.

Overall, then, there are some relatively straightforward activities where knowledge can be codified; for these, new technologies will make management from a distance easier, and relocation of the activity to lower-wage regions might be expected.
But monitoring, control, and information exchange in more complex activities still require a degree of contact that involves proximity and face-to-face meetings. Perhaps nowhere is this more evident than in the design and development of the new technologies themselves.

The Costs of Time in Transit

We now turn to the final element of shipping costs—the cost of time in transit. New technologies provide radical opportunities for speeding up parts of the overall supply process. There are several ways this can occur. One is simply that basic information—product specifications, orders, and invoices—can be transmitted and processed more rapidly. Another is that information about uncertain aspects of the supply process can be discovered and transmitted sooner. For example,
retailers' electronic stock control can provide manufacturers with real-time information about sales, and hence about changes in fashion and overall expenditure levels. For intermediate goods, improved stock controls and lean production techniques allow manufacturers to detect and identify defects in supplies more rapidly.

These changes pose an interesting question: if some elements of the supply process become quicker, what does this do to the marginal value of time saved (or the marginal cost of delay) in other parts of the process? In particular, if one part of the process that takes time is the physical shipment of goods, will time-saving technical changes encourage firms to move production closer to markets, or allow them to move further away?

The importance of the costs of time in transit is highlighted by recent work by Hummels (2000), who analyses some 25 million observations of shipments into the United States, some by air and some by sea (imports classified at the 10-digit commodity level by exporter country and district of entry to the United States, over 25 years). Given data on the costs of each mode and the shipping times from different countries, he is able to estimate the implicit value of time saved by using air transport. The numbers are quite large. The cost of an extra day's travel is (from estimates on imports as a whole) around 0.3% of the value shipped. For manufacturing sectors the number rises to 0.5%, around 30 times larger than the interest charge on the value of the goods. (At an annual interest rate of, say, 6%, a day's interest on goods in transit is roughly 0.06/365 ≈ 0.016% of their value, so 0.5% per day is indeed about 30 times larger.) One implication of these figures is that transport costs have fallen much more through time than is suggested by looking at freight charges alone. The share of U.S. imports going by air freight rose from 0 to 30% between 1950 and 1998, and containerization approximately doubled the speed of ocean shipping; together these give a reduction in shipping time of 26 days which, at these per-day costs, is equivalent to a shipping cost reduction worth 12–13% of the value of goods traded.

Given the magnitude of these costs, how might a time-saving technology influence the location of production? To answer this question it is worth writing down a very simple economic model. Production of a good can take place in one of many locations, and the distance of each of these locations from the place where the product is sold is d. Production requires one unit of labor, so unit cost equals the wage. Wages are lower in more remote locations—for the reasons outlined in the "Distance and Real Income" section above—and we write this relationship w(d), with w′(d) < 0. The full supply process and delivery to market takes time T(d, z), which is increasing both in distance to market and in a technology parameter z, so T_d(d, z) > 0 and T_z(d, z) > 0, where a subscript denotes a partial derivative. The proportion of earnings lost due to delay is φ(T), with φ′(T) > 0 (as in Figure 8.10). Thus, if the price is p, profit per unit is

π = p[1 − φ(T(d, z))] − w(d)    (8.5)
FIGURE 8.10 The cost of delay. [Figure: the proportion of earnings lost due to delay, φ(T), plotted against delay T. The solid concave curve is the discounting case 1 − exp(−rT); the dashed line rises linearly to the flat level j at the season length T0.]

Firms choose where to produce, trading off the loss of earnings due to delay against the lower wages they have to pay in more remote regions. The profit-maximizing choice of d is characterized by the first-order condition
π_d = −pφ′(T(d, z)) T_d(d, z) − w′(d) = 0    (8.6)
The final term is the lower wage cost from moving to a more remote location, and the first term is the effect of this extra distance on time, T_d(d, z), times the marginal cost of delay, pφ′. Suppose that there is a technological change, dz, that directly reduces the time taken in the supply process. We want to know whether this technical change induces firms to move closer to the centre or further away. Totally differentiating the first-order condition for location choice gives

[π_dd / (pφ′ T_d T_z)] (dd/dz) = φ″/φ′ + T_dz / (T_d T_z)    (8.7)
The term in square brackets on the left-hand side is negative (π_dd must be negative by the second-order condition for profit maximization), so a time-saving technical change causes production to move towards the centre (dd/dz positive) if the right-hand side is negative. We look at the two terms on the right-hand side in turn.

The first term, φ″/φ′, measures how the marginal cost of time changes as T increases. The case where this is negative is illustrated by the solid (concave) curve in Figure 8.10. In this case a technical improvement which reduces T increases the marginal value of a further reduction in T, so it will encourage firms to move production closer to the centre. This is in fact the normal case, arising because of discounting at rate r, so that φ = 1 − exp(−rT) (and hence φ″/φ′ = −r < 0).

In addition to discounting, there are other reasons to believe that φ″/φ′ is negative. For example, suppose that the firm produces a fashion-sensitive product, and that under the old retail stock-control technology it was impossible to detect consumer response to this season's fashion until after it was too late to change production for the season. The firm produced all its stock in advance but expected to have to discount it by factor j; thus, the cost of delay is that instead of receiving price p per unit, it receives only p[1 − j] (the dashed horizontal line in Figure 8.10). Under the new retail stock-control technology the firm can learn about fashion instantaneously, redesign, and sell without discounting. However, if production and shipping take T and the length of the season is T0 (with sales occurring at a constant rate during the season), then the cost of time is φ(T) = jT/T0, given by the dashed line between 0 and T0. The shorter is T, the higher the proportion of the season in which the firm does not have to discount. (For example, if T = T0/2 then one-half of the sales are discounted, and average receipts are p[1 − j/2].) The dashed line corresponds to a case where φ″/φ′ < 0, so the firm moves production closer to the market to exploit the advantage of the more rapid market information.13

An example of this is the highly successful Spanish clothing chain Zara (Economist, May 19th 2001). It uses real-time sales data, can make a new product line in three weeks (compared to the industry average of nine months), and commits only 15% of production at the start of the season (industry average 60%). It also does almost all its manufacturing (from basic fabric dyeing through the full manufacturing process) in house in Spain, with most of the sewing done by 400 local cooperatives (in contrast to the extensive outsourcing of other firms in the industry). Other examples could arise in intermediate goods supply, where instead of making it quicker to detect new fashions, new technology might make it easier to detect faults; the supplier would then want to move production closer and cut delivery times so that fewer faulty items were in the delivery chain.

Returning to the model, the second term in Equation 8.7, T_dz/(T_d T_z), gives a quite different reason why firms may relocate their production, arising from direct complementarity between technology and distance in the journey time. This is best understood through a few examples. Suppose that T depends on activities that happen in sequence—say transmitting information, followed by production and shipping—and that the technical change only affects the first of these.
FIGURE 8.11 Iso-time lines. [Figure: an iso-time line T(d, z) = T̄ in (distance d, technology z) space, lying between the limiting "series" and "parallel" cases.]
Since activities are in sequence, the total time is the sum of the parts, T(d, z) = T^z(z) + T^d(d), where the first term is the time of information transmission and the second the time in shipping. In this case there is no interaction between the technical change and the time taken in shipping, so T_dz is zero. Conversely, suppose that the processes are in parallel, so that the time is set by the slowest part of the process, i.e., T(d, z) = max[T^z(z), T^d(d)]. Generally, we might imagine the situation to lie between these cases, as illustrated by the curved iso-time line in Figure 8.11. Increasing the time taken in information transmission reduces the effect of moving further out on total time taken, so T_dz < 0. In this case, then, we once again expect the technical improvement to encourage activity to move closer to the centre, rather than further away.

Evidence on the phenomena outlined here comes from the study of just-in-time technologies, where new technologies have allowed much improved stock control and ordering, and a consequent movement of suppliers towards their customers. In a study of the location of suppliers to the U.S. automobile industry, Klier (1999) finds that 70–80% of suppliers are located within one day's drive of the assembly plant, although even closer location is limited by the fact that many suppliers serve several assembly plants. He also finds evidence that the concentration of supplier plants around assembly plants has increased since 1980, a timing that he points out is consistent with the introduction of just-in-time production methods. The leader in the application of just-in-time techniques is Toyota, whose independent suppliers are on average only 59 miles away from its assembly plants, to which they make eight deliveries a day. By contrast, General Motors' suppliers in North America are an average of 427 miles away from the plants they serve and make fewer than two deliveries a day. As a result, Toyota and its suppliers maintain inventories that are one-fourth of General Motors', measured as a percentage of sales (Fortune, December 8th 1997).
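The comparative statics of Equations 8.5 through 8.7 can also be checked numerically. The sketch below is a minimal implementation under assumed functional forms: wages falling exponentially with distance, the discounting case φ(T) = 1 − exp(−rT), and a journey time built from an information stage T_z = z and a shipping stage T_d = d, combined either in series (T = z + d) or in parallel (T = max[z, d]). Every parameter value is an illustrative assumption chosen so that an interior optimum exists; none comes from the chapter.

```python
import math

# Illustrative parameters (assumptions, not values from the text):
P = 1.0                # price at the market
R = 0.02               # discount rate: phi(T) = 1 - exp(-R*T), so phi''/phi' = -R < 0
A, ALPHA = 0.9, 0.05   # wages w(d) = A*exp(-ALPHA*d) fall with distance from the centre

def profit(d, z, mode):
    """Per-unit profit pi = P*[1 - phi(T(d, z))] - w(d)  (Equation 8.5)."""
    T = z + d if mode == "series" else max(z, d)   # sequential vs. parallel stages
    phi = 1.0 - math.exp(-R * T)                   # proportion of earnings lost to delay
    return P * (1.0 - phi) - A * math.exp(-ALPHA * d)

def best_distance(z, mode):
    """Grid search for the profit-maximizing distance (Equation 8.6)."""
    grid = [i * 0.1 for i in range(1, 600)]
    return max(grid, key=lambda d: profit(d, z, mode))

for mode in ("series", "parallel"):
    d_slow = best_distance(z=10.0, mode=mode)   # slow information stage
    d_fast = best_distance(z=2.0, mode=mode)    # after a time-saving improvement
    print(f"{mode:8s}: optimal distance {d_slow:5.1f} -> {d_fast:5.1f}")
```

With these numbers the series case behaves exactly as the discounting argument predicts: speeding up the information stage (z falling from 10 to 2) pulls the optimal location in from roughly d = 34 to d = 28. In the pure parallel case shipping is the binding stage throughout, so the same improvement leaves the chosen location unchanged at roughly d = 27.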
WHERE WILL ACTIVITIES MOVE?

The previous section suggests that ICT will change the costs of distance in quite different ways for different types of activity. For many activities both face-to-face contact and proximity to markets or
a cluster of related activities will remain important. These are activities where complexity makes it difficult to codify information and write complete contracts, where uncertainty makes rapid response to changing circumstances important, or where transport costs remain high. Other sorts of activities can be fully digitized (the "weightless" activities) or may be sufficiently simple that the information flows required in production control and monitoring can be codified and implemented remotely.

Activities in the former group are likely to remain spatially concentrated, and at least two reasons suggest that their concentration might increase. One is the existence of complementarities in the value of time, as outlined above. The other derives from the possibility of spatially separating these activities from more routine parts of the supply process. For example, suppose that financial services require both "front-room" operations (which tend to cluster together) and "back-room" operations (which are intensive in medium-skilled labour and office space). If the front- and back-room operations have to be located together, then the overall clustering force might be quite weak: firms that are not in London, Tokyo or New York lose out on the benefits of being in a cluster, but at least have the benefit of cheaper labour and office space. But once the back-room operations can be separated from the front-room, the agglomeration forces on the front-room operations become overwhelming. All these activities will therefore be further concentrated by new technologies. It is therefore perhaps to be expected that financial services—in some ways a prime example of a weightless activity—are in fact enormously concentrated in a few centres, with no prospect of technology causing the dissolution of these centres.

What about the more routine and codifiable activities? These now have the possibility of moving out of established centres, but where will they go? One possibility is that they spread rather evenly through many locations, bringing modest increases in labour demand in many countries. An alternative is that relocation takes these industries to rather few countries, and this is what we expect to see if there is some propensity for these activities to cluster. The propensity may be quite weak; the point is simply that, as activities leave established centres in search of lower-wage locations, a location that already has some similar activities will look more attractive than one that has none.

The effects of trade cost reductions in a world where manufacturing is internationally mobile but subject to some clustering forces can be illustrated by developing a variant of the "new economic geography" models of Fujita, Krugman, and Venables (1999). Suppose that there are many countries, arranged in a linear world with a well-defined centre and a pair of peripheries. Each country is identical (apart from its location), being endowed with the same quantity of two factors of production (labour and land). There are two production activities. One we call "agriculture," although it can be interpreted as a wider aggregate of all the perfectly competitive sectors of the economy; this sector uses labour, land and manufactures to produce a perfectly tradeable output. The other sector is manufacturing, in which firms operating with increasing returns to scale produce in a monopolistically competitive market structure; these firms use labour and manufactures to produce manufactures.
This structure has within it forward and backward linkages, as manufacturing firms use inputs from other manufacturing firms and supply outputs to them. These linkages encourage agglomeration, so that typically manufacturing operates only in the central locations, while peripheral locations specialize in agriculture. The wage implications of this are illustrated in Figure 8.12. At an initial position with high trade costs the low-wage countries have agriculture only, as do a corresponding set of countries on the other side of the centre (concealed in the diagram). Wages in these countries are much lower than those in industrialized countries, and wages peak in the central region, which has the best market access and best supplier access.

The effects of trade cost reduction can be seen by moving to the right along the figure. At lower trade costs it becomes profitable for some firms to relocate to lower-wage economies, but (a) these are the countries that are relatively close to the centre, and (b) as these countries attract industry, a process of cumulative causation commences.
FIGURE 8.12 Trade costs and real wages. [Figure: real wages plotted over locations running from one periphery through the centre to the other periphery (locations 1–36), at trade costs ranging from high to low; the bold line AA traces the wage path of a country midway between centre and edge.]
Forward and backward linkages between firms in the country mean that there is a rapid "take-off," as indicated by the steepness of the wage gradient. The bold line AA illustrates the wage path of a country located midway between centre and edge as transport costs fall. This country is initially in the periphery, with no manufacturing and low wages, but lower trade costs cause manufacturing to spread out of the centre, industrializing the country and causing the rapid wage growth illustrated.

The point of this example, then, is that even for activities that can relocate from established centres, the presence of (weak) agglomeration forces means that they will move to just a subset of possible new locations. As a consequence some countries will experience a rapid increase in labour demand and wages, while others remain in the periphery, essentially untouched by the process. New technologies change the pattern of inequalities in the world economy, but do not uniformly decrease them.

The predictions of this theoretical model seem to be broadly in line with what we know about recent sectoral relocations. Much software production has left the United States—but to concentrate largely in Ireland and Bangalore. At a broader level, there has been growth of production networks, with components production outsourced to lower-wage countries; but this growth of vertical specialization and parts-and-components trade is concentrated in a few countries neighbouring existing centres—in Asia, Europe and America. The growth of trade in production networks and its geographical concentration are illustrated in Table 8.4, which looks at countries' exports of telecommunications equipment (both final equipment and parts and components), a set of commodities for which there has been rapid growth of outsourcing to lower-wage countries. The 68 countries in the sample are divided according to their initial (1983–1985) per capita incomes, and we see (bottom row) that the share of low-income countries in world trade in telecoms equipment rose from 5% in the early 1980s to 19% in the late 1990s.
TABLE 8.4 Exports of Telecommunications Equipment, Final and Parts: Number of Countries Classified by Per Capita Income and Share of Telecoms in Exports

                                      1983–1985                 1995–1997
Telecoms as % of                Low     Mid     High      Low     Mid     High
Country's Exports             Income  Income  Income    Income  Income  Income
  <3.3%                           36       9      14        32       7      11
  3.3%–6.6%                        1       1       3         3       4       5
  6.6%–10%                         0       2       1         1       1       2
  >10%                             0       0       1         1       0       1
Share of countries in all
  telecoms exports             0.051   0.117    0.83     0.191   0.112   0.697
The body of the table gives the number of countries in each income group classified according to the share of telecoms in their exports. The point to note is the skewness of this distribution: telecoms equipment production and trade has become very important for just a few low-income countries (for one country it accounts for more than 10% of total exports, for another between 6.6% and 10%), while for the vast majority it remains unimportant. This pattern is repeated in other sectors, generally with the same set of countries being the main exporters.
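The concentration that Table 8.4 documents can be recomputed directly from the counts in the table. The short sketch below, with the table transcribed as data, prints for each income group and period the number of countries that depend on telecoms for more than 3.3% of their exports, alongside the group's share of world telecoms exports, which makes both the rise from 5% to 19% and the skewness of the distribution explicit.

```python
# Table 8.4 transcribed: counts of countries by telecoms share of their
# exports (rows: <3.3%, 3.3-6.6%, 6.6-10%, >10%), plus each income
# group's share of world telecoms exports.
table = {
    ("1983-1985", "low"):  ([36, 1, 0, 0], 0.051),
    ("1983-1985", "mid"):  ([9, 1, 2, 0], 0.117),
    ("1983-1985", "high"): ([14, 3, 1, 1], 0.830),
    ("1995-1997", "low"):  ([32, 3, 1, 1], 0.191),
    ("1995-1997", "mid"):  ([7, 4, 1, 0], 0.112),
    ("1995-1997", "high"): ([11, 5, 2, 1], 0.697),
}

for (period, group), (counts, world_share) in table.items():
    dependent = sum(counts[1:])   # countries with telecoms above 3.3% of exports
    print(f"{period} {group:4s}: {sum(counts):2d} countries, "
          f"{dependent} with telecoms > 3.3% of exports, "
          f"{world_share:.1%} of world telecoms exports")
```

The output shows the skewness directly: the low-income group's share of world telecoms exports rises from 5.1% to 19.1%, yet even in 1995–1997 only 5 of its 37 countries have telecoms above 3.3% of their own exports.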
CONCLUSIONS

Speculating about the implications of new technology is a notoriously risky activity. The analysis above nevertheless suggests several main conclusions. Some activities will become more deeply entrenched in high-income countries—and typically in cities in those countries. These activities will generally be complex: knowledge-intensive, rapidly changing, and requiring face-to-face communication. But they will also include the supply of non-tradeables and of produced goods where shipping is costly or time-consuming. Other activities, which are more readily transportable and less dependent on face-to-face communications, may relocate to lower-wage countries, and this will be an important force for development. However, since these activities may cluster together, development is likely to take the form of rapid development by a small number of countries (or regions) rather than a more uniform process of convergence.

Although new technologies facilitate the relocation of these activities, the proportion of world GDP that can "operate as though geography has no meaning" (Cairncross 2001) is likely to be small. New technologies will not mean the death of distance, but their contribution to economic development will nevertheless be important. It will come primarily from allowing individuals greater access to knowledge, education and basic services, not from rewriting the rules of economic geography.
APPENDIX

Table 8.5 classifies the countries illustrated by Figure 8.8 and Table 8.3.
TABLE 8.5 Countries in Figure 8.8 and Table 8.3 1. Albania (ALB) 2. Argentina (ARG) 3. Armenia (ARM) 4. Australia (AUS) 5. Austria (AUT) 6. Bangladesh (BGD) 7. Bulgaria (BGR) 8. Belg./Lux(BLX) 9. Bolivia (BOL) 10. Brazil (BRA) 11. C. Afr. Rp. (CAF) 12. Canada (CAN) 13. Switzerl. (CHE) 14. Chile (CHL) 15. China (CHN) 16. Cote d’Ivoire (CIV) 17. Cameroon (CMR) 18. Congo Rep. (COG) 19. Colombia (COL) 20. Costa Rica (CRI) 21. Czech Rep. (CZE) 22. Germany (DEU) 23. Denmark (DNK) 24. Algeria (DZA) 25. Ecuador (ECU) 26. Egypt (EGY) 27. Spain (ESP) 28. Estonia (EST) 29. Ethiopia (ETH) 30. Finland (FIN) 31. France (FRA) 32. Gabon (GAB) 33. U.K. (GBR) 34. Greece (GRC) 35. Guatemala (GTM) 36. Hong Kong (HKG) 37. Honduras (HND) 38. Croatia (HRV) 39. Hungary (HUN) 40. Indonesia (IDN) 41. India (IND) 42. Ireland (IRL) 43. Israel (ISR) 44. Italy (ITA) 45. Jamaica (JAM) 46. Jordan (JOR) 47. Japan (JPN) 48. Kazakhstan (KAZ) 49. Kenya (KEN) 50. Kyrgyz Rp. (KGZ)
51. Korea, Rp. (KOR) 52. Sri Lanka (LKA) 53. Lithuania (LTU) 54. Latvia (LVA) 55. Morocco (MAR) 56. Moldova (MDA) 57. Madagasc. (MDG) 58. Mexico (MEX) 59. Macedonia (MKD) 60. Mongolia (MNG) 61. Mozambiq. (MOZ) 62. Mauritius (MUS) 63. Malawi (MWI) 64. Malaysia (MYS) 65. Nicaragua (NIC) 66. Netherlands (NLD) 67. Norway (NOR) 68. Nepal (NPL) 69. New Zeal. (NZL) 70. Pakistan (PAK) 71. Panama (PAN) 72. Peru (PER) 73. Philippines (PHL) 74. Poland (POL) 75. Portugal (PRT) 76. Paraguay (PRY) 77. Romania (ROM) 78. Russia (RUS) 79. Saudi Arab. (SAU) 80. Sudan (SDN) 81. Senegal (SEN) 82. Singapore (SGP) 83. EL Salvador (SLV) 84. Slovak Rep. (SVK) 85. Slovenia (SVN) 86. Sweden (SWE) 87. Syria (SYR) 88. Chad (TCD) 89. Thailand (THA) 90. Trinidad/T. (TTO) 91. Tunisia (TUN) 92. Turkey (TUR) 93. Taiwan (TWN) 94. Tanzania (TZA) 95. Uruguay (URY) 96. U.S.A. (USA) 97. Venezuela (VEN) 98. Yemen (YEM) 99. South Afr. (ZAF) 100. Zambia (ZMB) 101. Zimbabwe (ZWE)
NOTES

1. Distance weighting according to exp(−q · distance_ij).
2. To try to identify the channels through which technical knowledge is transmitted, Keller investigates not just distance between countries, but also the volume of trade between them, their bilateral FDI holdings, and their language skills (the share of the population in country i that speaks the language of country j). Adding these variables renders simple geographical distance insignificant; around two-thirds of the difference in bilateral technology diffusion is accounted for by trade patterns, and one-sixth each through FDI and language. However, all these variables are themselves declining with distance.
3. Of course, there are many intermediate goods, but here we summarize their prices in a single price index, q.
4. This is a trade cost factor; thus t = 1.2 means that trade costs are 20% of the value of goods shipped.
5. We assume that technologies are the same in all countries—geography is the only source of difference between countries.
6. They also derive the wage equation and the market access and supplier access from economic fundamentals, based on Fujita, Krugman, and Venables (1999).
7. The list of countries is given in the Appendix. A similar pattern is observed using data on manufacturing wages per worker. See Redding and Venables (2000) for further details.
8. For example, Gallup and Sachs (1999). We only use variables that can reasonably be regarded as exogenous, so do not include, for example, measures of countries' human or physical capital stocks.
9. Cairncross (2001, p. 113).
10. A good example of outsourcing is Nortel Networks, a Canadian company that specialises in high-performance communications networks. In 1998 it sold off its production plants to separate companies with whom it now has long-term contracts, in order to concentrate on production of the most sophisticated components and on network installation (Cairncross 2001, p. 150).
11. British Airways expects to reduce the number of suppliers from 14,000 to around 2,000 as it implements on-line procurement (Cairncross 2001, p. 138).
12. See Holmstrom and Roberts (1998) and Gibbons (1998) for surveys of these two areas.
13. The curve is concave, although not strictly concave everywhere.
REFERENCES

Baldwin, R. E. and Martin, P., Two waves of globalisation: superficial similarity and fundamental differences, NBER working paper no. 6904, 1999.
Cairncross, F., The Death of Distance 2.0: How the Communications Revolution Will Change Our Lives, Harvard Business School Press, Cambridge, MA, 2001.
Coase, R. H., The nature of the firm, Economica, 4, 386–405, 1937.
Di Mauro, F., The impact of economic integration on FDI and exports: a gravity approach, CEPS working document no. 156, Brussels, 2000.
Dicken, P., Global Shift: Transforming the World Economy, Chapmans, London, 1998.
Economist (April 12, 2001), A revolution of one.
Economist (May 5, 2001), Outsourcing to India.
Economist (May 19, 2001), Floating on air.
Feenstra, R. C., Integration of trade and disintegration of production in the global economy, Journal of Economic Perspectives, 12, 31–50, 1998.
Fortune (December 8, 1997), How Toyota defies gravity.
Fujita, M., Krugman, P., and Venables, A. J., The Spatial Economy: Cities, Regions and International Trade, MIT Press, Cambridge, MA, 1999.
Gallup, J. L. and Sachs, J. (with Mellinger, A. D.), Geography and economic development, In Annual World Bank Conference on Development Economics, 1998, Pleskovic, B. and Stiglitz, J. E., Eds., World Bank, Washington, DC, 1999.
Gaspar, J. and Glaeser, E. L., Information technology and the future of cities, Journal of Urban Economics, 43, 136–156, 1998.
Gibbons, R., Incentives in organisations, Journal of Economic Perspectives, 12, 115–132, 1998.
Grossman, G. M. and Helpman, E., Incomplete contracts and industrial organisation, NBER working paper no. 7303, 1999.
Harris, C., The market as a factor in the localization of industry in the United States, Annals of the Association of American Geographers, 44, 315–348, 1954.
Holmstrom, B. and Roberts, J., The boundaries of the firm revisited, Journal of Economic Perspectives, 12, 73–94, 1998.
Hummels, D., Have international transportation costs declined?, mimeo, Chicago, 1999a.
Hummels, D., Towards a geography of trade costs, mimeo, Chicago, 1999b.
Hummels, D., Time as a trade barrier, mimeo, Purdue University, 2000.
Hummels, D., Ishii, J., and Yi, K-M., The nature and growth of vertical specialization in world trade, Journal of International Economics, 75–96, 2001.
Keller, W., The geography and channels of diffusion at the world's technology frontier, processed, University of Texas, 2001.
Klier, T. H., Agglomeration in the U.S. auto supply industry, Economic Perspectives, Federal Reserve Bank of Chicago, 1999:1, 18–34 (at http://www.chicagofed.org/publications/economicperspectives/1999/ep1Q99_2.pdf), 1999.
Leamer, E. and Storper, M., The economic geography of the internet age, processed, UCLA, 2000.
Limao, N. and Venables, A. J., Infrastructure, geographical disadvantage, transport costs and trade, World Bank Economic Review, forthcoming, 2001.
Marshall, A., Principles of Economics (8th edition, 1920), Macmillan, London, 1890.
OECD, The software sector: a statistical profile for selected OECD countries, OECD, Paris, 1999.
Porter, M. E., The Competitive Advantage of Nations, Macmillan, London, 1990.
Portes, R. and Rey, H., The determinants of cross-border equity flows, CEPR discussion paper no. 2225, 1999.
Rauch, J. E. and Trindade, V., Ethnic Chinese networks in international trade, NBER working paper no. 7189, 1999.
Redding, S. and Venables, A. J., Economic geography and international inequality, CEPR discussion paper no. 2568 (revised version at http://econ.lse.ac.uk/staff/ajv/winc.pdf), 2000.
Turner, A., Just Capital: The Liberal Economy, Macmillan, London, 2001.
Williamson, O. E., Markets and Hierarchies: Analysis and Antitrust Implications, Free Press, New York, 1975.
Williamson, O. E., The Economic Institutions of Capitalism, Free Press, New York, 1985.
Yeats, A., Just how big is global production sharing?, World Bank Policy Research working paper no. 1871, 1998.
WEAPONS OF MASS DESTRUCTION IN INDIA AND PAKISTAN*

U.S. CIA ESTIMATE OF INDIAN FORCE DEVELOPMENTS AS OF SEPTEMBER 2001
India continues its nuclear weapons development program, for which its underground nuclear tests in May 1998 were a significant milestone. The acquisition of foreign equipment will benefit New Delhi in its efforts to develop and produce more sophisticated nuclear weapons. During this reporting period, India continued to obtain foreign assistance for its civilian nuclear power program, primarily from Russia.

India continues to rely on foreign assistance for key missile technologies, where it still lacks engineering or production expertise. Entities in Russia and Western Europe remained the primary conduits of missile-related and dual-use technology transfers during the latter half of 2000.

*Anthony H. Cordesman, Arleigh A. Burke Chair for Strategy, Center for Strategic and International Studies. Copyright © Anthony H. Cordesman, all rights reserved.
India continues an across-the-board modernization of its armed forces through acquisitions of advanced conventional weapons (ACW), mostly from Russia, although many of its key programs have been plagued by delays. During the reporting period, New Delhi concluded a $3 billion contract with Russia to produce under license 140 Su-30 multi-role fighters, and continued negotiations with Moscow for 310 T-90S main battle tanks, A-50 Airborne Early Warning and Control (AWACS) aircraft, Tu-22M Backfire maritime strike bombers, and an aircraft carrier. India also continues to explore options for leasing or purchasing several AWACS systems from other entities. In addition, India signed a contract with France for 10 additional Mirage 2000H multirole fighters and is considering offers for jet trainer aircraft from France and the United Kingdom. Besides helping India with the development of its indigenous nuclear-powered submarine, Russia is negotiating with India the possible lease of a Russian nuclear-powered attack submarine.

Russian entities continue to supply a variety of ballistic missile-related goods and technical know-how to countries such as Iran, India, China, and Libya. Iran's earlier success in gaining technology and materials from Russian entities has helped to accelerate Iranian development of the Shahab-3 MRBM, and continuing Russian assistance likely supports Iranian efforts to develop new missiles and to increase Tehran's self-sufficiency in missile production.

Russia continues to be a major supplier of conventional arms. It is the primary source of ACW for China and India, it continues to supply ACW to Iran and Syria, and it has negotiated new contracts with Libya and North Korea. Russia is also the main supplier of technology and equipment to the Indian and Chinese naval nuclear propulsion programs and, as noted, has discussed leasing nuclear-powered attack submarines to India.

The Russian Government's commitment, willingness, and ability to curb proliferation-related transfers remain uncertain. The export control bureaucracy was reorganized again as part of President Putin's broader government reorganization in May 2000: the Federal Service for Currency and Export Controls (VEK) was abolished and its functions assumed by a new department in the Ministry of Economic Development and Trade. VEK had been tasked with drafting the implementing decrees for Russia's July 1999 export control law; the status of these decrees is not known. Export enforcement continues to need improvement. In February 2000, Sergey Ivanov, then Secretary of Russia's Security Council, said that during 1998–1999 the government had obtained convictions for unauthorized technology transfers in three cases. The Russian press has reported on cases where advanced equipment is simply described as something else in the export documentation and exported; enterprises sometimes falsely declare goods to avoid government taxes.
U.S. DEPARTMENT OF DEFENSE ESTIMATE OF INDIAN ACTIONS AND INTENTIONS INVOLVING NUCLEAR, BIOLOGICAL, AND CHEMICAL WEAPONS

Objectives, Strategies, and Resources

In his speech to the UN General Assembly on 24 September 1998, Indian Prime Minister Vajpayee noted that while India hoped to participate fully in international arms-control negotiations, it had no intention of scaling back its nuclear weapons program. He stated that, "Mindful of its deteriorating security environment which has obliged us to stand apart from the CTBT in 1996, India undertook a limited series of five underground tests. These tests were essential for ensuring a credible nuclear deterrent for India's national security in the foreseeable future." He also declared that "in announcing a moratorium (on further nuclear tests), India has already accepted the basic obligation of the CTBT. In 1996, India could not have accepted the obligation, as such a restraint would have eroded our capability and compromised our national security."

India's goal of indigenous production for all its programs is another element of New Delhi's strategy to demonstrate its technological and military achievements and to help it establish independence from foreign suppliers and outside political influence. The Indian economy will continue to grow moderately, with the real
GDP expected to grow at an average annual rate of 5–6% for the next few years, assuming India avoids major conflicts, pursues economic reforms, and has reasonable weather. Despite the announced 28% nominal increase in the 2000 defense budget, some of which reflects inflation and definitional differences, military spending is expected to increase by about 2–3% annually in real terms over the next 10 years. Future defense budgets likely will include a focus on investments for long-term military production self-sufficiency, including those for nuclear and missile forces, in keeping with India's overall goal of achieving independence from foreign suppliers.

Nuclear Program

On 11 and 13 May 1998, India conducted what it claimed were five nuclear explosive tests. According to Indian officials, the 11 May tests included a fission device with a yield of about 12 kt, a thermonuclear device with a yield of about 43 kt, and a third test with a yield of about 0.2 kt. An Indian spokesman stated that the first set of tests was intended "to establish that India has a proven capability for a weaponized nuclear program." India claimed that its 13 May tests had yields of about 0.5 and 0.2 kt, and were carried out to generate additional data for computer simulations. According to the Chairman of India's Atomic Energy Commission, the tests enabled India to build "an adequate scientific database for designing the types of devices that [India] needs for a credible nuclear deterrent." The tests triggered international condemnation, and the United States imposed wide-ranging sanctions against India. The tests were India's first since 1974, and reversed a previously ambiguous nuclear posture in which Indian officials denied possession of nuclear weapons. Indian officials cited a perceived deterioration of India's security environment, including increasing Pakistani nuclear and missile capabilities and perceived threats from China, to justify the tests.

India has a capable cadre of scientific personnel and a nuclear infrastructure consisting of numerous research and development centers, 11 nuclear power reactors, uranium mines and processing plants, and facilities to extract plutonium from spent fuel. With this large nuclear infrastructure, India is capable of manufacturing complete sets of components for plutonium-based nuclear weapons, although the acquisition of foreign nuclear-related equipment could benefit New Delhi in its efforts to develop and produce more sophisticated nuclear weapons. India probably has a small stockpile of nuclear weapon components and could assemble and deploy a few nuclear weapons within a few days to a week. The most likely delivery platforms are fighter-bomber aircraft. New Delhi also is developing ballistic missiles that will be capable of delivering a nuclear payload in the future.

India is in the beginning stages of developing a nuclear doctrine. In August 1999, the Indian government released a proposed nuclear doctrine prepared by a private advisory group appointed by the government. It stated that India will pursue a doctrine of credible minimum deterrence.
The document states that the role of nuclear weapons is to deter the use or the threat of use of nuclear weapons against India, and asserts that India will pursue a policy of "retaliation only." The draft doctrine maintains that India "will not be the first to initiate a nuclear strike, but will respond with punitive retaliation should deterrence fail." It also reaffirms India's pledge not to use or threaten to use nuclear weapons against states that do not possess nuclear weapons, and states that India's nuclear posture will be based on a triad of aircraft, mobile land-based systems, and sea-based platforms to provide a redundant, widely dispersed, and flexible nuclear force. Decisions to authorize the use of nuclear weapons would be made by the Prime Minister or his "designated successor(s)." The draft doctrine has no official standing in India, and the United States has urged Indian officials to distance themselves from the draft, which is not consistent with India's stated goal of a minimum nuclear deterrent.

India has expressed interest in signing the CTBT but has not done so. It has pledged not to conduct further nuclear tests pending entry into force of the CTBT; Indian officials have tied signature and ratification of the CTBT to developing a domestic consensus on the issue. India strongly opposed the NPT as discriminatory, but it is a member of the IAEA.
Only four of India's 13 operational nuclear reactors currently are subject to IAEA safeguards. In June 1998, New Delhi signed a deal with Russia to purchase two light-water reactors to be built in southern India; the reactors will be under facility-specific IAEA safeguards. However, the United States has raised concerns that Russia is circumventing the 1992 NSG guidelines by providing NSG trigger-list technology to India, which does not allow safeguards on all of its nuclear facilities. India has taken no steps to restrain its nuclear or missile programs. In addition, while India has agreed to enter into negotiations to complete a fissile material cutoff treaty, it has not agreed to refrain from producing fissile material before such a treaty would enter into force.

Biological and Chemical Programs

India has many well-qualified scientists, numerous biological and pharmaceutical production facilities, and biocontainment facilities suitable for research and development of dangerous pathogens. At least some of these facilities are being used to support research and development for biological warfare defense work. India has ratified the BWC.

India: NBC Weapons and Missile Programs

1. Nuclear: conducted nuclear explosive tests on 11 and 13 May 1998; claimed a total of five tests.
2. Conducted a peaceful nuclear explosive (PNE) in 1974. Capable of manufacturing complete sets of components for plutonium-based nuclear weapons.
3. Has a small stockpile of nuclear weapons components and probably can deploy a few nuclear weapons within a few days to a week. It can deliver these weapons with fighter aircraft.
4. Announced a draft nuclear doctrine of no-first-use in August 1999; stated intent to create a triad of air-, land-, and sea-based missile delivery systems.
5. Has signed neither the NPT nor the CTBT.
6. Biological: has substantial biotechnical infrastructure and expertise, some of which is being used for biological warfare defense research.
7. Ratified the Biological and Toxin Weapons Convention.
8. Chemical: acknowledged its chemical warfare program in 1997 and stated that related facilities would be open for inspection.
9. Has a sizeable chemical industry, which could be a source of dual-use chemicals for countries of proliferation concern.
10. Ratified the CWC.
11. Ballistic missiles: has development and production facilities for solid- and liquid-propellant missiles.
12. Three versions of the liquid-propellant Prithvi SRBM:
a. Prithvi I (Army)—150 km range (produced)
b. Prithvi II (Air Force)—250 km range (tested)
c. Dhanush (Navy)—250 km range (unsuccessfully tested)
13. Solid-propellant Agni MRBM: Agni tested in 1994 (estimated range 2,000 km); Agni II tested in April 1999 (estimated range 2,000 km).
14. SLBM and IRBM also under development. Is not a member of the MTCR.
15. Other means of delivery:
a. Has ship-borne and airborne anti-ship cruise missiles; none have NBC warheads.
b. Aircraft: fighter-bombers.
c. Ground systems: artillery and rockets.
India is an original signatory to the CWC. In June 1997, it acknowledged that it had a dedicated chemical warfare production program. This was the first time India had publicly admitted that it had a chemical warfare effort. India also stated that all related facilities would be open for inspection, as called for in the CWC, and it has subsequently hosted all required CWC inspections. While India has made a commitment to destroy its chemical weapons, its extensive and well-developed chemical industry will remain capable of producing a wide variety of chemical agent precursors should the government change its policy. In the past, Indian firms have exported a wide array of chemical products, including Australia Group-controlled items, to several countries of proliferation concern in the Middle East. (Australia Group-controlled items include specific chemical agent precursors, microorganisms with biological warfare applications, and dual-use equipment that can be used in chemical or biological warfare programs.) Indian companies could continue to be a source of dual-use chemicals to countries of proliferation concern.

Ballistic Missiles

The development of Indian and Pakistani ballistic missile capabilities has raised concerns about destabilizing efforts to develop and deploy nuclear-armed missiles. India has an extensive, largely indigenous ballistic missile program involving both SRBMs and MRBMs, and has made considerable progress with this program in the past several years. For example, India now has the Prithvi SRBM in production and successfully tested the Agni II MRBM in April 1999. India has development and production infrastructures for both solid- and liquid-propellant missiles. By striving to achieve independence from foreign suppliers, India may be able to avoid restrictions imposed by the MTCR. Nevertheless, India's ballistic missile programs have benefited from the acquisition of foreign equipment and technology, which India has continued to seek, primarily from Russia.

India's Prithvi SRBM is a single-stage, liquid-fuel, road-mobile ballistic missile that has been developed in three versions. The Prithvi I has been produced for the Indian Army and has a payload of 1,000 kg and a range of 150 km. The Prithvi II has a 500 kg payload and a range of 250 km and was designed for use by the Indian Air Force. Another variant, called the Dhanush, is under development for the Navy; it is similar to the Air Force version but designed to be launched from a surface vessel. The Indians conducted a flight test of the Dhanush in April 2000, which failed.

India's MRBM program consists of the Agni missile, with an estimated range of about 2,000 km with a 1,000 kg payload. An early version was tested in 1994, and India successfully tested the follow-on version, the rail-mobile Agni II, in April 1999. This missile will allow India to strike all of Pakistan as well as many key areas of China. Development also is underway for an intermediate-range ballistic missile (IRBM), which would allow India to target Beijing. Lastly, an Indian submarine-launched missile, called the Sagarika, also is under development with Russian assistance. Its intended launch platform is the "Advanced Technology Vessel" nuclear submarine.

Cruise Missiles and Other Means of Delivery

India has ship-launched and airborne short-range anti-ship cruise missiles and a variety of short-range air-launched tactical missiles, which are potential means of delivery for NBC weapons.
All were purchased from foreign sources including Russia and the United Kingdom. In the future, India may try to purchase more modern anti-ship cruise missiles, or try to develop such missiles itself. However, funding priorities for such efforts will be well below those for ballistic missiles. India also has a variety of fighter aircraft, artillery, and rockets available.
Source: Department of Defense, Proliferation and Response, January 2001, India section.
INDIA'S SEARCH FOR WEAPONS OF MASS DESTRUCTION
INDIA AND WEAPONS OF MASS DESTRUCTION
Delivery Systems
1. Despite the announced 28-percent nominal increase in the 2000 defense budget, some of which reflects inflation and definitional differences, military spending is expected to increase by about 2–3% annually in real terms over the next 10 years. Future defense budgets likely will include a focus on investments for long-term military production self-sufficiency, including those for nuclear and missile forces, in keeping with India's overall goal of achieving independence from foreign suppliers.
2. The CIA reported in September 2001 that India continues an across-the-board modernization of its armed forces through ACW acquisitions, mostly from Russia, although many of its key programs have been plagued by delays. During the reporting period, New Delhi concluded a $3 billion contract with Russia to produce under license 140 Su-30 multi-role fighters and continued negotiations with Moscow for 310 T-90S main battle tanks, A-50 Airborne Early Warning and Control (AWACS) aircraft, Tu-22M Backfire maritime strike bombers, and an aircraft carrier. India also continues to explore options for leasing or purchasing several AWACS systems from other entities. India also signed a contract with France for 10 additional Mirage 2000H multirole fighters and is considering offers for jet trainer aircraft from France and the United Kingdom. In addition to helping India with the development of its indigenous nuclear-powered submarine, Russia is negotiating with India the possible lease of a Russian nuclear-powered attack submarine.
3. The CIA reported on January 30, 2002 that India continues an across-the-board modernization of its armed forces through ACW acquisitions, mostly from Russia, although many of its key programs have been plagued by delays. New Delhi received the first two MiG-21-93 fighter aircraft, and Hindustan Aeronautics Limited will now begin the licensed upgrade of 123 more aircraft. During the reporting period, New Delhi concluded an $800 million contract with Russia for 310 T-90S main battle tanks, as well as a smaller contract for KA-31 helicopters. India is in negotiations with Russia for nuclear submarines and an aircraft carrier, and it also continues to explore options for leasing or purchasing several AEW systems. The Indian air force has reopened the competition for jet trainer aircraft and is considering bids from the Czech Republic, France, Italy, Russia, and the United Kingdom.
4. India has two main delivery options: aircraft and missiles.
5. India possesses several different aircraft capable of nuclear delivery, including the Jaguar, Mirage 2000, MiG-27, and MiG-29.
a. India is upgrading 150 MiG-21bis fighters. It has 88 Jaguars, 147 MiG-27s, and 53 MiG-23 BN/UM configured in the strike/attack mode.
b. India has 36–38 Mirage-2000H strike aircraft with a significant nuclear strike capability, and is considering buying and deploying 18 Mirage 2000Ds. It has 64 MiG-29s.
c. India is acquiring 40 long-range Su-30 strike aircraft; 8 have been delivered. The Su-30 has a strike range of 5,000 km with in-flight refueling.
d. The MiG-27 and the Jaguar are strike/attack aircraft and require little or no modification to deliver nuclear weapons. The MiG-29, Su-30, and Mirage 2000 were designed for air-to-air combat but could be modified to deliver air-dropped nuclear weapons using external racks.
6. India can also mount a weapon on a ballistic missile.
The Carnegie Endowment estimates that India has developed nuclear warheads for this purpose, but is not known to have tested such a warhead.
7. India has two major families of missile systems: the Prithvi and the Agni. Reporting on these systems differs sharply by source. Estimates based on NGO sources indicate that:
a. The Prithvi is a relatively short-range missile that was tested extensively during 1995–1997, with publicly announced tests on January 27, 1996 and February 23, 1997.
b. The Indian army has one Prithvi regiment with 3–5 launchers.
c. There seem to be three variants:
† The Prithvi SS-150 is a liquid-fueled missile with a 150-km range and a 1,000-kg payload. It was ordered in 1983 and became operational in 1996. It is in low-rate production. A total of 150 seem to have been produced.
† The Prithvi SS-250 is a liquid-fueled missile with a 250-km range and a 500–750-kg payload. It was ordered in 1983 and became operational in 2001. It is in low-rate production. A total of 50 seem to have been produced.
† The Prithvi SS-350 is a liquid-fueled missile with a 350-km range and a 700–1,000-kg payload.
d. Reports in 1997 indicated that India had possibly deployed, or at least was storing, conventionally armed Prithvi missiles in Punjab, very near the Pakistani border. India began test-firing the Prithvi (250) II, the Air Force version capable of targeting nearly all of Pakistan, in early 1996. In June 1997, Prithvi (150) I mobile missile systems were moved from factories in the south into Punjab, bringing many Pakistani cities within direct range of the missile.
e. India has claimed the Prithvi only has a conventional warhead. This claim seems unlikely to be true.
8. Estimates based on NGO sources indicate that the Agni is:
a. A two-stage medium-range missile.
b. It has been tested several times.
c. The original Agni I was a liquid- and solid-fueled missile with a 1,500-km range and a 1,000-kg warhead.
d. In July 1997, the Indian defense ministry announced the revival of the Agni medium-range missile program.
e. Testing of the Agni II resumed on April 11, 1999 and reached a range near 2,000 km. The maximum range of the missile is stated to be 2,500 km, but a nominal range of 2,000 km seems more likely. It is a solid-fueled missile and can be launched quickly without waiting for arming or fueling. India stated in August 1999 that it was deploying the Agni II. It was first ordered in 1983, and seems to have entered production in 2000. Indian sources have said that 20 will be deployed by the end of 2001.
f. India is believed to be developing the Agni III with a range of 3,700 km, and possibly an Agni IV with a range of 4,000–5,000 km.
9. India is reported to have an ICBM called the Surya under development with a range of 5,000 km.
10. The CIA reported in February 1999 that India's ballistic missile programs still benefited from the acquisition of foreign equipment and technology. India sought items for these programs during the reporting period from a variety of sources worldwide, including many countries in Europe and the former Soviet Union.
11. The Department of Defense reported in January 2001 that:
a. India has an extensive, largely indigenous ballistic missile program involving both SRBMs and MRBMs, and has made considerable progress with this program in the past several years. For example, India now has the Prithvi SRBM in production and successfully tested the Agni II MRBM in April 1999. India has development
and production infrastructures for both solid- and liquid-propelled missiles. By striving to achieve independence from foreign suppliers, India may be able to avoid restrictions imposed by the MTCR. Nevertheless, India's ballistic missile programs have benefited from the acquisition of foreign equipment and technology, which India has continued to seek, primarily from Russia.
† India's Prithvi SRBM is a single-stage, liquid-fuel, road-mobile ballistic missile, and it has been developed in three different versions.
† The Prithvi I has been produced for the Indian Army and has a payload of 1,000 kg and a range of 150 km.
† The Prithvi II has a 500-kg payload and a range of 250 km and was designed for use by the Indian Air Force.
† Another variant, called the Dhanush, is under development for the Navy and is similar to the Air Force version; it is designed to be launched from a surface vessel. The Indians conducted a flight test of the Dhanush in April 2000, which failed.
b. India's MRBM program consists of the Agni missile, with an estimated range of about 2,000 km with a 1,000-kg payload. An early version was tested in 1994 and India successfully tested the follow-on version, the rail-mobile Agni II, in April 1999.
† This missile will allow India to strike all of Pakistan as well as many key areas of China.
† Development also is underway for an Intermediate Range Ballistic Missile (IRBM), which would allow India to target Beijing.
† Lastly, an Indian submarine-launched missile, called the Sagarika, also is under development with Russian assistance. Its intended launch platform is the "Advanced Technology Vessel" nuclear submarine.
c. India has ship-launched and airborne short-range anti-ship cruise missiles and a variety of short-range air-launched tactical missiles, which are potential means of delivery for NBC weapons. All were purchased from foreign sources including Russia and the United Kingdom. In the future, India may try to purchase more modern anti-ship cruise missiles, or try to develop such missiles itself. However, funding priorities for such efforts will be well below those for ballistic missiles.
12. The CIA summarized India's missile development programs in January 2002 by stating that:
a. New Delhi believes that a nuclear-capable missile delivery option is necessary to deter Pakistani first use of nuclear weapons and thereby preserve the option to wage limited conventional war in response to Pakistani provocations in Kashmir or elsewhere. Nuclear weapons also serve as a hedge against a confrontation with China. New Delhi views the development, not just the possession, of nuclear-capable ballistic missiles as symbols of a world power and an important component of self-reliance.
b. Growing experience and an expanding infrastructure are providing India the means to accelerate both development and production of new systems. New Delhi is making progress toward its aim of achieving self-sufficiency for its missile programs, but it continues to rely on foreign assistance.
c. Converting the Indian SLV into an ICBM? Rumors persist concerning Indian plans for an ICBM program, referred to in open sources as the Surya. Some Indian defense writers argue that possession of an ICBM is a key symbol in India's quest for recognition as a world power and useful in preventing diplomatic bullying by the United States. Most components needed for an ICBM are available
from India's indigenous space program. India could convert its polar space launch vehicle into an ICBM within a year or two of a decision to do so.
d. The 150-km-range Prithvi I SRBM continues to be India's only deployed ballistic missile.
e. The Prithvi II SRBM is a modified Prithvi I with an increased range of 250 km.
f. The Agni series, which probably will be deployed during this decade, will be the mainstay of India's nuclear-armed missile force.
g. The Sagarika SLBM probably will not be deployed until 2010 or later.
h. India continues to push toward self-sufficiency, especially in regard to its missile programs. Nevertheless, New Delhi still relies heavily on foreign assistance.
i. The DCI Nonproliferation Center (NPC) reported in February 2000, and again in August 2000, that "While striving to achieve independence from foreign suppliers, India's ballistic missile programs still benefited from the acquisition of foreign equipment and technology. India sought items for these programs during the reporting period primarily from Russia. New Delhi successfully flight-tested its newest MRBM, the Agni 2, in April 1999 after months of preparations." It also reported that Russian entities continued to supply a variety of ballistic missile-related goods and technical know-how to Iran and were expanding missile-related assistance to Syria and India.
13. India seems to be considering nuclear submarines and cruise missiles as a possible future basing mode.
a. The Indian fleet has 15 submarines, although their operational readiness and performance are low to mediocre.
b. They include a total of ten diesel-powered "Project 877" Kilo-class submarines, known in India as the EKM or Sindhu class, which have been built with Russian cooperation under a contract between Rosvooruzhenie and the Indian Defense Ministry, with the tenth unit delivered to India in 2000. At least one is equipped with the SS-N-27 anti-ship cruise missile, which has a range of 220 km.
c. The FAS reports that India has a number of foreign-produced cruise missile systems in its arsenal, to include the Exocet, Styx, Starbright, and Sea Eagle. It also has some indigenous cruise missile systems under development, such as the Sagarika and the Lakshya variant. The Sagarika is an SLCM with a potential range of 300–1,000 km. Its IOC is estimated to be in 2005.
d. India leased a Charlie-class Soviet nuclear-powered attack submarine for 3 years beginning in 1988. It was manned by a Russian crew training Indian seamen to operate it. India then returned it to Russia in 1991, and it was decommissioned.
e. India has been working since 1985 to develop and build its own nuclear-powered submarine. It obtained plans and drawings for the Charlie II class from the FSU in 1989. The FAS reports that the project illustrates India's industrial capabilities and weaknesses.
f. "The secretive Advanced Technology Vessel (ATV) project to provide nuclear propulsion for Indian submarines has been one of the more ill-managed projects of India. Although India has the capability of building the hull and developing or acquiring the necessary sensors, its industry has been stymied by several system integration and fabrication problems in trying to downsize a 190 MW pressurized water reactor (PWR) to fit into the space available within the submarine's hull."
g. The Prototype Testing Centre (PTC) at the Indira Gandhi Centre for Atomic Research, Kalpakkam, will be used to test the submarine's turbines and propellers.
A similar facility is operational at Vishakapatnam to test the main turbines and gear box.
h. Once the vessel is completed, it will be equipped with Sagarika cruise missiles and an advanced sonar system.
i. India has a sea-launched cruise missile under development called the Sagarika. It has an estimated range of 300 km. According to some experts, it may be a ballistic missile.
14. The CIA reported in September 2001 that India continues to rely on foreign assistance for key missile technologies, where it still lacks engineering or production expertise. Entities in Russia and Western Europe remained the primary conduits of missile-related and dual-use technology transfers during the latter half of 2000.
Chemical Weapons
1. India has a well-developed chemical industry which produces the bulk of the chemicals India consumes domestically.
2. India has long been involved in the development of chemical weapons, possibly since the early 1980s.
3. The FAS reports that the Indian government has set up Nuclear, Biological, and Chemical (NBC) warfare directorates in each of its military services, and an inter-Services coordination committee to monitor the program. The Indian Army established an NBC cell at Army HQ to study the effects of NBC warfare.
a. The Defence Research and Development Organisation (DRDO) is also participating in the program. Research on chemical weapons has continued in various establishments of the military and DRDO research labs. In addition, work is carried out by DRDO to design and fabricate protective clothing and equipment for troops on the battlefield in case of a chemical weapons attack.
b. The Defence Research and Development Establishment (DRDE) at Gwalior is the primary establishment for studies in toxicology and biochemical pharmacology and development of antibodies against several bacterial and viral agents. In addition, research is carried out on antibodies against chemical agent poisoning and heavy metal toxicology. Chemical agents such as the nerve agent sarin are produced in small quantities to test protective equipment.
c. Protective clothing and equipment are designed and manufactured, amongst other places, at the Defence Materials and Stores Research and Development Establishment at Kanpur. India has developed five types of protective systems and equipment for its troops as a safeguard against NBC hazards. The development of all five types of protective systems and equipment has been completed and their induction into the service has been formally approved. The five types of protective systems and equipment are: NBC individual protective equipment, the NBC collective protection system, NBC medical protection equipment, NBC detection equipment, and the NBC decontamination system.
4. India probably reached the point of final development and weaponization for a number of agents no later than the mid-1980s.
a. Work by the Federation of American Scientists (FAS) shows that India has a mixed history of compliance with the Chemical Weapons Convention (CWC).
b. India became one of the original signatories of the CWC in 1993, and ratified it on 2 September 1996. The treaty came into force on April 29, 1997. India denied that it had chemical weapons during the negotiation of the CWC and when it signed it. It stated formally that it did not have chemical weapons or the capacity to manufacture them. India did so, however, knowing that the full destruction of the weapons-grade chemicals would take place only at the end of a 10-year
period, and that India's large chemical industry would benefit from the unrestricted trade and technology access which would be denied to non-members of the treaty.
c. India claimed again at the Third UN Disarmament Conference, held in 1988, that India had no chemical weapons. Foreign Minister K. Natwar Singh repeated this claim in 1989 at the Paris Conference of the State Parties to the Geneva Protocol of 1925, as did Minister of State Eduardo Faleiro at the January 1993 Paris Conference CWC signing ceremony.
d. However, when India declared its stockpile of chemical weapons to the Chemical Weapons Convention in Geneva on 26 June 1997—the deadline for all signatories to the pact—India filed initial declarations on "testing and development of chemical weapons and their related facilities which were developed only to deal with the situation arising out of possible use of chemical warfare against India."
e. In its required declarations under the CWC, India acknowledged the existence of a chemical warfare program and disclosed the details of its stockpiles and the availability of manufacturing facilities on a very small scale. India pledged that all facilities related to its CW program would be open for inspection, but this declaration kept India's chemical armory classified, since the CWC Secretariat maintains the confidentiality of such declarations.
f. Some reports indicate that Indian efforts continued for manufacturing and stockpiling chemical weapons for use against Pakistan. On 25 June 1997, however, the Indian government stated that "India will disclose to Pakistan stocks of its chemical weapons."
g. In June 1999, the FAS reported that Pakistan published allegations that India had used or was planning to use chemical weapons against the Mujahideen and Pakistani army elements fighting at the Kashmir border. Former Pakistani Inter-Services Intelligence chief Gen. (retd) Hamid Gul [who had opposed Pakistani ratification of the Chemical Weapons Convention] claimed that Mujahideen had captured very sensitive posts at Kargil and that there were clear chances that India would use chemical weapons against the Mujahideen.
5. The U.S. Department of Defense reported in January 2001 that:
a. India is an original signatory to the CWC. In June 1997, it acknowledged that it had a dedicated chemical warfare production program. This was the first time India had publicly admitted that it had a chemical warfare effort. India also stated that all related facilities would be open for inspection, as called for in the CWC, and subsequently, it has hosted all required CWC inspections. While India has made a commitment to destroy its chemical weapons, its extensive and well-developed chemical industry will continue to be capable of producing a wide variety of chemical agent precursors should the government change its policy.
b. In the past, Indian firms have exported a wide array of chemical products, including Australia Group-controlled items, to several countries of proliferation concern in the Middle East. (Australia Group-controlled items include specific chemical agent precursors, microorganisms with biological warfare applications, and dual-use equipment that can be used in chemical or biological warfare programs.) Indian companies could continue to be a source of dual-use chemicals to countries of proliferation concern.
Biological Weapons
1. India is a signatory to the BWC of 1972.
2. India has long been involved in the development of biological weapons, possibly since the early 1980s.
3. India has a well-developed biotechnology research base, and its production facilities include numerous pharmaceutical production facilities and bio-containment laboratories (including BL-3) for working with lethal pathogens. It also has qualified scientists with expertise in infectious diseases.
4. The FAS estimates that some of India's facilities are being used to support research and development for BW defense purposes. These facilities constitute a substantial potential capability for offensive purposes as well.
5. The FAS reports that the DRDE at Gwalior is the primary establishment for studies in toxicology and biochemical pharmacology and development of antibodies against several bacterial and viral agents. Work is in progress to prepare responses to bacterial threats like anthrax, brucellosis, cholera, and plague, viral threats like smallpox and viral haemorrhagic fever, and bio-toxic threats like botulism. Researchers have developed chemical/biological protective gear, including masks, suits, detectors, and suitable drugs.
6. India has probably reached the point of final development and weaponization for a number of agents.
7. U.S. experts feel there is no evidence of production capability, stockpiling, or deployment.
8. The U.S. Department of Defense reported in January 2001 that India has many well-qualified scientists, numerous biological and pharmaceutical production facilities, and bio-containment facilities suitable for research and development of dangerous pathogens. At least some of these facilities are being used to support research and development for biological warfare defense work.
Nuclear Weapons
1. India exploited the Atoms for Peace program the United States began in 1953, and bought a heavy-water reactor from Canada in 1956 that it later used to provide the plutonium for a nuclear test in 1974. It has since developed a massive indigenous civil and military nuclear program, most of which is free from IAEA safeguards.
a. The Bhabha Atomic Research Center is the key nuclear weapons facility.
b. India has three plutonium reprocessing facilities, at Tarapur, Trombay, and Kalpakkam. These can use output from the Madras 1 & 2, Kakrapar 1 & 2, and Narora 1 & 2 reactors.
c. India has two unsafeguarded heavy-water reactors, Cirus (40 MW) and Dhruva (100 MW), at the Bhabha Atomic Research Center.
d. India mines uranium in the area around Jaduguda.
e. India's nuclear test site is at Pokhran.
2. India has had a clear interest in nuclear weapons since its 1962 border clash with China and China's first test of nuclear weapons in 1964.
a. India first demonstrated its nuclear capability when it conducted a "peaceful nuclear experiment" in May 1974.
b. India probably began work on a thermonuclear weapon prior to 1980. By 1989 it was publicly known that India was making efforts to isolate and purify the lithium-6 isotope, a key requirement in the production of a thermonuclear device.
c. India relies largely on plutonium weapons, but is experimenting with systems that could be used to produce U-235, which is also useful in producing thermonuclear weapons. A pilot-scale uranium enrichment plant is located at Rattehalli in southern India, and a laser enrichment center at the Center for Advanced Technology near Indore.
d. India is experimenting with fast breeder reactors at the Indira Gandhi Atomic Research Center south of Madras.
3. Views differ over the reasons for the timing of India's first major series of tests. The FAS estimates that "The nuclearisation of India has been an article of faith for the BJP." One of the few concrete steps taken by Vajpayee in his brief 13-day term as Prime Minister in 1996 was approval for the DRDO and DAE to begin preparations for a nuclear test. However, the Government fell two days before the tests could begin, and the succeeding United Front government of H.D. Deve Gowda declined to proceed. Operation Shakti was authorised two days after the Ghauri missile test-firing in Pakistan. On 8 April 1998 Prime Minister Vajpayee met with Department of Atomic Energy (DAE) chief R. Chidambaram and head of the Defence Research and Development Organisation (DRDO) A.P.J. Abdul Kalam and gave the go-ahead for nuclear weapons tests.
4. India conducted its second series of tests 24 years later, on May 11, 1998.
a. India exploded five nuclear devices in underground tests between May 11 and May 13, 1998. According to Indian Prime Minister Vajpayee, the weapons included a fission device, a low-yield device, and a thermonuclear device.
b. It emplaced the devices on May 8, when scientists from the DRDO and DAE arrived at the Pokhran test site.
c. On 11 May 1998 India carried out three underground nuclear tests at the Pokhran range. The three tests, carried out at 1545 h, involved three different devices—a fission device with a yield of about 12 kt, a "thermonuclear device?" with a yield of about 43 kt, and a sub-kiloton device of around 0.2 kt. All three devices were detonated simultaneously.
d. The two tests carried out at 1221 h on 13 May were also detonated simultaneously. The yields of the sub-kiloton devices were in the range of 0.2–0.6 kt. The Indian government then announced the completion of the planned series of tests.
e. These tests broke an international moratorium on nuclear tests; China had conducted its last test in 1996. India deliberately scheduled activity around the test site to avoid coverage by U.S. surveillance satellites.
5. The Carnegie Endowment estimates that India has built steadily larger-scale plutonium production reactors, and facilities to separate the material for weapons use, and has approximately 400 kg of weapons-usable plutonium today. It takes about 6 kg of plutonium to construct a basic plutonium bomb, so this amount would be sufficient for about 65 bombs. With more sophisticated designs, it is possible that this estimate could go as high as 90 bombs. (See the worked arithmetic following this list.)
6. Indian officials stated in May 1998, however, that India had enough material for 125 nuclear weapons.
7. The CIA reported in February 1999 that India continued to seek nuclear-related equipment, materials, and technology during the first half of 1998, some of which could be used in nuclear weapons applications. The most sought-after goods were of Russian and U.K. origin. India continues to pursue the development of advanced nuclear weapons, as evidenced by the underground nuclear tests that it conducted in May 1998. The acquisition of foreign equipment could benefit India in its efforts to develop and produce more sophisticated nuclear weapons.
8. The DCI Nonproliferation Center (NPC) reported in February 2000 that India continues to pursue the development of nuclear weapons, and its underground nuclear tests in May 1998 were a significant milestone. (The United States imposed sanctions against India as a result of these tests.)
The acquisition of foreign equipment could benefit New Delhi in its efforts to develop and produce more sophisticated nuclear weapons. India obtained some foreign nuclear-related assistance during the first half of 1999 from a variety of sources worldwide, including sources in Russia and Western Europe.
9. George Tenet, the Director of the CIA, testified before the Senate Foreign Relations Committee on March 20, 2000, and stated that "India and Pakistan are developing more advanced nuclear weapons and are moving toward deployment of significant nuclear arsenals. Both sides are postured in a way that could lead to more intense engagements later this year. Our concern persists that antagonisms in South Asia could still produce a more dangerous conflict on the subcontinent."
10. The FAS reports as of June 2000 that India is generally estimated as having approximately 60 nuclear weapons. Estimates as high as 200 nuclear devices are based on estimates of the plutonium that could be extracted from India's six unsafeguarded heavy-water nuclear power plants. In 1994, K. Subrahmanyam suggested that a force of 60 warheads carried on 20 Agnis, 20 Prithvis, and the rest on aircraft would cost about Rs 1,000 crore over 10 years. In 1996, Sundarji suggested a cost of some Rs 2,760 crore—Rs 600 crore for 150 warheads, Rs 360 crore for 45 Prithvis, and Rs 1,800 crore for 90 Agni missiles.
11. The CIA reported in August 2000 that India continues to pursue the development of nuclear weapons, and its underground nuclear tests in May 1998 were a significant milestone. The acquisition of foreign equipment could benefit New Delhi in its efforts to develop and produce more sophisticated nuclear weapons. India obtained some foreign nuclear-related assistance during the second half of 1999 from a variety of sources worldwide, including sources in Russia and Western Europe.
12. The Department of Defense summarized developments as follows in January 2001:
a. On 11 and 13 May 1998, India conducted what it claimed were five nuclear explosive tests. According to Indian officials, the 11 May tests included a fission device with a yield of about 12 kt, a thermonuclear device with a yield of about 43 kt, and a third test with a yield of about 0.2 kt. An Indian spokesman stated that the first set of tests was intended "to establish that India has a proven capability for a weaponized nuclear program."
b. India claimed that its 13 May tests had yields of about 0.5 and 0.2 kt, which were carried out to generate additional data for computer simulations. According to the Chairman of India's Atomic Energy Commission, the tests enabled India to build "an adequate scientific database for designing the types of devices that [India] needs for a credible nuclear deterrent."
c. The tests triggered international condemnation, and the United States imposed wide-ranging sanctions against India. The tests were India's first since 1974, and reversed the previously ambiguous nuclear posture under which Indian officials denied possession of nuclear weapons. Indian officials cited a perceived deterioration of India's security environment, including increasing Pakistani nuclear and missile capabilities and perceived threats from China, to justify the tests.
d. India has a capable cadre of scientific personnel and a nuclear infrastructure, consisting of numerous research and development centers, 11 nuclear power reactors, uranium mines and processing plants, and facilities to extract plutonium from spent fuel. With this large nuclear infrastructure, India is capable of manufacturing complete sets of components for plutonium-based nuclear weapons, although the acquisition of foreign nuclear-related equipment could benefit New Delhi in its efforts to develop and produce more sophisticated nuclear weapons.
e.
India probably has a small stockpile of nuclear weapon components and could assemble and deploy a few nuclear weapons within a few days to a week. The most likely delivery platforms are fighter-bomber aircraft. f. New Delhi also is developing ballistic missiles that will be capable of delivering a nuclear payload in the future. India is in the beginning stages of developing a nuclear
doctrine. In August 1999, the Indian government released a proposed nuclear doctrine prepared by a private advisory group appointed by the government. It stated that India will pursue a doctrine of credible minimum deterrence. The document states that the role of nuclear weapons is to deter the use or the threat of use of nuclear weapons against India, and asserts that India will pursue a policy of "retaliation only." The draft doctrine maintains that India "will not be the first to initiate a nuclear strike, but will respond with punitive retaliation should deterrence fail."
g. The doctrine also reaffirms India's pledge not to use or threaten to use nuclear weapons against states that do not possess nuclear weapons. It further states that India's nuclear posture will be based on a triad of aircraft, mobile land-based systems, and sea-based platforms to provide a redundant, widely dispersed, and flexible nuclear force. Decisions to authorize the use of nuclear weapons would be made by the Prime Minister or his "designated successor(s)." The draft doctrine has no official standing in India, and the United States has urged Indian officials to distance themselves from the draft, which is not consistent with India's stated goal of a minimum nuclear deterrent.
h. India expressed interest in signing the CTBT, but has not done so. It has pledged not to conduct further nuclear tests pending entry into force of the CTBT. Indian officials have tied signature and ratification of the CTBT to developing a domestic consensus on the issue.
i. Similarly, India strongly opposed the NPT as discriminatory, but it is a member of the IAEA. Only four of India's 13 operational nuclear reactors currently are subject to IAEA safeguards. In June 1998, New Delhi signed a deal with Russia to purchase two light-water reactors to be built in southern India; the reactors will be under facility-specific IAEA safeguards. However, the United States has raised concerns that Russia is circumventing the 1992 NSG guidelines by providing NSG trigger list technology to India, which does not allow safeguards on all of its nuclear facilities. India has taken no steps to restrain its nuclear or missile programs. In addition, while India has agreed to enter into negotiations to complete a fissile material cutoff treaty, it has not agreed to refrain from producing fissile material before such a treaty would enter into force.
j. The CIA reported in September 2001 that India continues its nuclear weapons development program, for which its underground nuclear tests in May 1998 were a significant milestone. The acquisition of foreign equipment will benefit New Delhi in its efforts to develop and produce more sophisticated nuclear weapons. During this reporting period, India continued to obtain foreign assistance for its civilian nuclear power program, primarily from Russia.
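The warhead arithmetic behind the Carnegie Endowment estimate in item 5 above is worth making explicit. The figures below are a rough check only; the roughly 4.4 kg of plutonium per weapon implied for the more sophisticated designs is an inference from the 90-bomb figure, not a number given in the source:

\[
\frac{400\ \text{kg Pu}}{6\ \text{kg per basic weapon}} \approx 66\ \text{weapons (the "about 65 bombs")},
\qquad
\frac{400\ \text{kg Pu}}{90\ \text{weapons}} \approx 4.4\ \text{kg per weapon}.
\]

By the same arithmetic, the Indian claim of material for 125 weapons (item 6) would imply either a stockpile larger than 400 kg or designs using roughly 3.2 kg of plutonium per weapon.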
Missile Defenses
† The CIA reported on January 30, 2002 that India signed a $270 million contract with Israel for Barak-1 missile defense systems.
Source: Prepared by Anthony H. Cordesman, Arleigh A. Burke Chair in Strategy, CSIS.
U.S. CIA ESTIMATE OF PAKISTANI FORCE DEVELOPMENTS AS OF SEPTEMBER 2001
Chinese entities continued to provide significant assistance to Pakistan’s ballistic missile program during the reporting period. With Chinese assistance, Pakistan is moving toward serial production of solid-propellant SRBMs, such as the Shaheen-I and Haider-I. Pakistan flight-tested the Shaheen-I in
1999 and plans to flight-test the Haider-I in 2001. Successful development of the two-stage Shaheen-II MRBM will require continued Chinese assistance or assistance from other potential sources.
Pakistan continued to acquire nuclear-related and dual-use equipment and materials from various sources—principally in Western Europe. Islamabad has a well-developed nuclear weapons program, as evidenced by its first nuclear weapons tests in late May 1998. Acquisition of nuclear-related goods from foreign sources will remain important if Pakistan chooses to develop more advanced nuclear weapons. China, which has provided extensive support in the past to Islamabad's nuclear weapons and ballistic missile programs, in May 1996 pledged that it would not provide assistance to unsafeguarded nuclear facilities in any state, including Pakistan. We cannot rule out, however, some unspecified contacts between Chinese entities and entities involved in Pakistan's nuclear weapons development.
Pakistan continues to rely on China and France for its ACW requirements and negotiated to purchase an additional 40 F-7 fighters from China. Beijing continues to take a very narrow interpretation of its bilateral nonproliferation commitments with the United States. In the case of missile-related transfers, Beijing has on several occasions pledged not to sell Missile Technology Control Regime (MTCR) Category I systems but has not recognized the regime's key technology annex. China is not a member of the MTCR. In November 2000, China committed not to assist, in any way, any country in the development of ballistic missiles that can be used to deliver nuclear weapons, and to enact at an early date a comprehensive missile-related export control system. During the reporting period, however, Chinese entities provided Pakistan with missile-related technical assistance. Pakistan has been moving toward domestic serial production of solid-propellant SRBMs with Chinese help. Pakistan also needs continued Chinese assistance to support development of the two-stage Shaheen-II MRBM. In addition, firms in China have provided dual-use missile-related items, raw materials, and/or assistance to several other countries of proliferation concern, such as Iran, North Korea, and Libya.
In the nuclear area, China has made bilateral pledges to the United States that go beyond its 1992 NPT commitment not to assist any country in the acquisition or development of nuclear weapons. For example, in May 1996 Beijing pledged that it would not provide assistance to unsafeguarded nuclear facilities. With respect to Pakistan, Chinese entities in the past provided extensive support to unsafeguarded as well as safeguarded nuclear facilities, which enhanced substantially Pakistan's nuclear weapons capability. We cannot rule out some continued contacts between Chinese entities and entities associated with Pakistan's nuclear weapons program subsequent to Beijing's 1996 pledge and during this reporting period. China is a primary supplier of advanced conventional weapons to Pakistan, Iran, and Sudan, among others. Sudan received military vehicles, naval equipment, guns, ammunition, and tanks from Chinese suppliers in the latter half of 2000.
U.S. DEPARTMENT OF DEFENSE ESTIMATE OF PAKISTANI ACTIONS AND INTENTIONS INVOLVING NUCLEAR, BIOLOGICAL, AND CHEMICAL WEAPONS
Objectives, Strategies, and Resources
Pakistan's nuclear and missile programs are part of Islamabad's effort to preserve its territorial integrity against its principal external threat and rival, India. Pakistan attaches a certain immediacy and intensity to its effort and likely will continue to improve its nuclear and missile forces. Pakistan is driven by its perceived need to counter India's conventional superiority and nuclear capability, remains fearful of India's regional and global power aspirations, and continues to seek close security ties with China as a balance. Pakistan's 1998 nuclear weapon tests and its missile tests
in 1998 and 1999 likely were seen by Islamabad as necessary responses to India's tests, and as a means of bolstering its own deterrent. Pakistan, like India, is putting emphasis on becoming self-sufficient for the production of its nuclear weapons and missiles. During the last several years Pakistan has received assistance from both China and North Korea, which will help it to achieve that goal. It has continued to seek a variety of nuclear-related and dual-use items for weapons development. However, Pakistan has less of a military production infrastructure than rival India, and thus will be forced to rely on outside support for its efforts for several years.
Pakistan's economy will recover gradually from its recent fiscal crisis and the real GDP is expected to grow at an annual rate of about 3–5% for the next several years. This growth assumes no major war, adequate financial assistance from lenders to meet foreign debt obligations, and progress on economic reforms aimed at controlling the government deficit. Pakistan's defense budget will proceed on a generally upward track, with an average annual real increase of 1–2% expected over the next 10 years. As part of its overall national security strategy, Pakistan likely will continue to attach budget priorities to the further development of nuclear warheads and ballistic missiles. However, part of this effort will depend on continuing support from China and North Korea, or on alternative sources of financial or technical aid.
Nuclear Program
As a response to India's tests, Pakistan conducted its own series of nuclear tests in May 1998. Pakistan claimed to have tested six devices, five on 28 May and one on 30 May. Dr. A. Q. Khan, a key figure in Pakistan's nuclear program, claimed the five devices tested on 28 May were boosted fission devices: a "big bomb" and four tactical weapons of low yield that could be used on small missiles. He also claimed that Pakistan could conduct a fusion or thermonuclear blast if it so desired. The United States imposed additional sanctions against Pakistan as a result of these tests.
Pakistan has a well-developed nuclear infrastructure, including facilities for uranium conversion and enrichment and the infrastructure to produce nuclear weapons. Unlike the Indian nuclear program, which uses plutonium for its weapons, Pakistan's program currently is based on highly enriched uranium. However, Pakistan also is developing the capability to produce plutonium for potential weapons use. An unsafeguarded heavy-water research reactor built at Khushab will produce plutonium that could be reprocessed for weapons use at facilities under construction. In the past, China supplied Pakistan with nuclear materials and expertise and has provided critical assistance in the production of Pakistan's nuclear facilities. Pakistan also acquired a significant amount of nuclear-related and dual-use equipment and materials from various sources, principally in the FSU and Western Europe. Acquisition of nuclear-related goods from foreign sources will remain important if Pakistan chooses to continue to develop and produce more advanced nuclear weapons, although we expect that, with the passage of time, Pakistan will become increasingly self-sufficient. Islamabad likely will increase its nuclear and ballistic missile stockpiles over the next 5 years. Islamabad's nuclear weapons are probably stored in component form.
Pakistan probably could assemble the weapons fairly quickly and has aircraft and possibly ballistic missiles available for delivery. Pakistan’s nuclear weapons program has long been dominated by the military, a dominance that likely has continued under the new military government and under Pakistan’s new National Command Authority (NCA), announced in February 2000. While Pakistan has yet to divulge publicly its nuclear doctrine, the new NCA is believed to be responsible for such doctrine, as well as nuclear research and development and wartime command and control. The NCA also includes two committees that advise Pakistan’s Chief Executive, General Musharraf, about the development and employment of nuclear weapons. Pakistan remains steadfast in its refusal to sign the NPT, stating that it would do so only after India joined the Treaty. Consequently, not all of Pakistan’s nuclear facilities are under IAEA safeguards. Pakistani officials have stated that signature of the CTBT is in Pakistan’s best interest,
but that Pakistan will do so only after developing a domestic consensus on the issue, and have disavowed any connection with India's decision. Like India, Pakistan expressed its intention to sign the CTBT, but, so far, has failed to do so. While Pakistan has provided assurances that it will not assemble or deploy its nuclear warheads, nor will it resume testing unless India does so first, it has taken no additional steps. Pakistan has agreed to enter into negotiations to complete a fissile material cutoff agreement, but has not agreed to refrain from producing fissile material before a cutoff treaty would enter into force.
Biological and Chemical Programs
Pakistan is believed to have the resources and capabilities to support a limited biological warfare research and development effort. Pakistan may continue to seek foreign equipment and technology to expand its bio-technical infrastructure. Pakistan has ratified the BWC and actively participates in compliance protocol negotiations for the treaty. Pakistan ratified the CWC in October 1997 and did not declare any chemical agent production or development. Pakistan has imported a number of dual-use chemicals that can be used to make chemical agents. These chemicals also have commercial uses, and Pakistan is working towards establishing a viable commercial chemical industry capable of producing a variety of chemicals, some of which could be used to make chemical agents. Chemical agent delivery methods available to Pakistan include missiles, artillery, and aerial bombs.
1. Nuclear. Conducted nuclear weapon tests on 28 and 30 May 1998 in response to India's tests; claimed a total of six tests.
2. Capable of manufacturing complete sets of components for highly enriched uranium-based nuclear weapons; developing capability to produce plutonium.
3. Has small stockpile of nuclear weapons components and can probably assemble some weapons fairly quickly.
4. It can deliver them with fighter aircraft and possibly missiles.
5. Has signed neither the NPT nor the CTBT.
6. Biological. Believed to have capabilities to support a limited biological warfare research effort.
7. Ratified the BWC.
8. Chemical. Improving commercial chemical industry, which would be able to support precursor chemical production.
9. Ratified the CWC but did not declare any chemical agent production. Opened facilities for inspection.
10. Ballistic Missiles. Has development and production facilities for solid- and liquid-propellant missiles.
11. Solid-propellant program:
a. Hatf I rocket—80-km range (produced)
b. Hatf III—300-km range; based on M-11 (being developed)
c. Shaheen I—750-km range claimed (tested)
d. Shaheen II/Ghaznavi—2,000-km range claimed (in design)
12. Liquid-propellant program:
a. Ghauri—1,300-km range; based on No Dong (tested)
13. Is not a member of the MTCR.
14. Other Means of Delivery
a. Has ship-borne, submarine-launched, and airborne anti-ship cruise missiles; none has NBC warheads.
b. Aircraft: fighter-bombers.
c. Ground systems: artillery and rockets.
Ballistic Missiles
Pakistan has placed a high priority on developing ballistic missiles as part of its strategy to counter India's conventional and nuclear capabilities. Pakistan has both solid- and liquid-propellant ballistic missile programs and, during the last several years, has received considerable assistance from China and North Korea for these efforts. Pakistan's goal is to produce increasingly longer-range missiles. However, Pakistan likely will continue to require significant foreign assistance in key technologies for several years.
In its solid-propellant program, Pakistan has developed and produced the 80-km range Hatf-1 that is now deployed with the Army. Pakistan also has developed the solid-fueled Shaheen-1 SRBM, which it tested in April 1999. According to Pakistani officials, the Shaheen-1 has a range of 750 km and is capable of carrying a nuclear warhead. Pakistan also received M-11 SRBMs from China, upon which it will base its Hatf III. Pakistan has developed and tested the liquid-propellant Ghauri medium-range ballistic missile, which is based on North Korea's No Dong MRBM. The Ghauri was successfully tested in April 1998 and 1999. Pakistani officials claimed that the Ghauri has a range of 1,500 km and is capable of carrying a payload of 700 kg, although its range likely is the same as the No Dong, 1,300 km. Also, in April 1998, the United States imposed sanctions against a Pakistani research institute and a North Korean company for transferring technology controlled under Category I of the MTCR Annex.
Following the April 1999 tests of the Ghauri and Shaheen-1, Pakistani officials announced the conclusion "for now" of "the series of flight tests involving solid- and liquid-fuel rocket motor technologies," and called on India to join Pakistan in a "strategic restraint regime" to limit the development of missile and nuclear weapons technology and deployment. Pakistani officials also have stated that they are developing missiles called the Ghaznavi and Shaheen-II, both with an intended range of 2,000 km, which would be able to reach any target in India.
Cruise Missiles and Other Means of Delivery
Pakistan has sea- and submarine-launched short-range anti-ship cruise missiles and a variety of short-range air-launched tactical missiles, which are potential means of delivery for NBC weapons. All were purchased from foreign sources, including China, France, and the United States. Pakistan may have an interest in acquiring additional anti-ship cruise missiles, as well as land attack cruise missiles, in the future but may be slowed in any such efforts by financial constraints. Pakistan also has a variety of fighter aircraft, artillery, and rockets available as potential means of delivery for NBC weapons.
Source: Department of Defense, Proliferation and Response, January 2001, Pakistan section.
CIA ESTIMATE OF PAKISTANI MISSILE FORCE TRENDS—JANUARY 2002
Pakistan sees missile-delivered nuclear weapons as a vital deterrent to India's much larger conventional forces, and as a necessary counter to India's nuclear program. Pakistan pursued a nuclear capability more for strategic reasons than for international prestige.
Ballistic Missile Programs
Since the 1980s, Pakistan has pursued development of an indigenous ballistic missile capacity in an attempt to avoid reliance on any foreign entity for this key capability. Islamabad will continue with its present ballistic missile production goals until it has achieved a survivable, flexible force capable of striking a large number of targets throughout most of India.
1. Pakistan's missiles include:
a. The short-range Hatf I, which Pakistan also is attempting to market, as it is relatively inexpensive and easy to operate.
b. M-11 missiles that Pakistan acquired from China in the 1990s. (The M-11 SRBM—called the Hatf III in Pakistan—is a single-stage, solid-propellant missile capable of carrying a payload at least 300 km.)
c. Ghauri/No Dong MRBMs that Pakistan acquired from North Korea.
d. The Shaheen I, a Pakistani-produced single-stage, solid-propellant SRBM.
e. The Shaheen II, a road-mobile two-stage solid-propellant MRBM that Pakistan is developing. (Based on several mockups publicly displayed in Pakistan, the Shaheen II probably would be able to carry a 1,000-kg payload to a range of about 2,500 km.)
2. Foreign Assistance: Foreign support for Pakistan's ambitious solid-propellant ballistic missile acquisition and development program has been critical.
3. During 2001, Chinese entities provided Pakistan with missile-related technical assistance. Pakistan has been moving toward domestic serial production of solid-propellant SRBMs with Chinese help. Pakistan also needs continued Chinese assistance to support development of the two-stage Shaheen-II MRBM. In addition, firms in China have provided dual-use missile-related items, raw materials, and/or assistance to several other countries of proliferation concern—such as Iran, North Korea, and Libya.
PAKISTAN AND NUCLEAR WEAPONS
Delivery Systems
1. Pakistan can deliver weapons with strike aircraft or ballistic missiles.
2. Pakistan has several nuclear-capable aircraft, including the F-16 and Mirage.
3. Pakistan has 32 F-16A/B and 56 Mirage 5s.
4. The FAS reports that there are open-source reports suggesting that several of the A-5 Fantan have been equipped to deliver air-dropped atomic weapons. Other reports have suggested that F-16 aircraft have practiced the "toss-bombing" technique that would be used to deliver nuclear weapons.
5. Its other aircraft are 15 aging Mirage IIIEPs with a nominal strike range of 500 km, 30 Mirage IIIOs, and low-grade Chinese-made fighters.
6. It is developing several different ballistic missile systems:
a. The Chinese M-11 (CSS-7), with a range of 280 km.
b. China exported 30 M-11 missiles to Pakistan in 1992.
c. The Carnegie Endowment reports that in 1996, a U.S. National Intelligence Estimate (NIE) estimated that Pakistan had roughly three dozen M-11 missiles. The NIE reportedly stated that these were stored in canisters at the Sargodha Air Force Base, along with maintenance facilities and missile launchers; that the missiles could be launched in as little as 48 h, even though the missiles had not been used in actual training exercises; and that two teams of Chinese technicians had been sent to Pakistan to provide training and to help unpack and assemble the missiles. In addition, the document reportedly surmised that Pakistan probably had designed a nuclear warhead for the system, based on evidence that Pakistan had been working on such an effort for a number of years. As noted earlier, however, Pakistan had not conducted a full-scale test of any nuclear explosive device, nor had it flight-tested a prototype nuclear warhead with the M-11.
d. The Carnegie Endowment reports that in late August 1996, a U.S. intelligence finding was leaked to the press: using blueprints and equipment supplied by China, Pakistan reportedly had in late 1995 begun construction of a factory to produce short-range missiles based on the Chinese-designed M-11.
e. The factory, located near Rawalpindi, was expected to be operational in one or two years. It was not clear whether the facility would be able to build complete missiles, or
whether it would manufacture some components and use imported parts to produce complete systems.
f. The missile uses a solid propellant and has a 700-kg payload.
7. The Haft 1A is a 100-km range missile which was tested on February 7, 2000. It is a development of the Haft 1, which had a range of 80 km with a 500-kg payload.
8. The Hatf 2 is a solid-propellant missile with a range of 350 km with a 500-kg payload.
a. It seems to be a development based on the Chinese M-11.
b. It was ordered in 1994 and began low-rate production in 1996.
9. The Hatf 3 is a solid-propellant missile with a range of 550 km, although some sources put its range at 600–800 km.
a. It was ordered in 1994 and is still developmental.
b. Some experts believe it is based on the Chinese M-9 design.
c. Others believe it is an indigenous two-stage missile similar to the earlier Haft 2, but with a large first-stage solid-fuel assembly.
d. In July 1997, Pakistan reportedly tested the Hatf-3 ballistic missile, as a riposte to India's semi-deployment of the Prithvi missile in Punjab. The launch location showed it could strike Lahore.
10. The Haft-4, or Shaheen I, is believed to be a solid-propellant missile with a 750-km range based upon the Chinese M-9. It has a 1,000-kg payload.
a. Ground tests of the Haft-4 were made in 1997 and 1998. It was flight-tested on April 15, 1999.
b. It was ordered in 1994, and some reports claim low-rate production started in 1999.
c. It was flight-tested again in February 2000, and was displayed during the march at the Pakistan Day celebration on March 23, 2000.
d. The Shaheen I and Haft 4 are identical.
11. Shaheen II is also known as the Haft 7.
a. It is supposed to have a range of 2,500 km.
b. It was displayed during the march at the Pakistan Day celebration on March 23, 2000.
c. The Pakistani government claims it has a range of 2,500 km and a payload of 1,000 kg.
d. It is built by Pakistan's Atomic Energy Commission's National Development Complex, which is under the direction of Dr. Samar Mubarak Mund.
e. It uses a transporter-erector-launcher vehicle similar to the Russian MAZ-547V, which was once used to transport the SS-20.
f. Pakistan's Space and Upper Atmosphere Research Company may also be involved in its manufacture.
g. Pakistan said the missile would be tested shortly.
12. The Ghauri I and II missiles are built by A.Q. Khan Research Laboratories at Kahuta.
a. The Ghauri I (Haft 5) is a medium-range missile with a range of 1,300–1,500 km and a 500–700-kg payload. It is capable of reaching most cities in India.
b. Development began in 1993, with North Korean assistance.
c. The initial test version of the missile was the Ghauri I (Haft V) with a maximum range of 1,500 km and a 500–750-kg payload.
d. Various statements indicate that it is similar to the North Korean No Dong and Iranian Shahab 3. Some analysts feel it is similar to the Chinese M-9, but the Ghauri is a 16,000-kg missile and the M-9 is only a 6,000-kg system.
e. It had its first test flight on April 6, 1998, and flew 1,100 km (about 680 miles). It was fired from a site near Jhelum in the northeast to an area near Quetta in the southwest. It uses a TEL launcher—a system Pakistan had not previously demonstrated.
f. Delivery is believed to have begun in 1998. It is believed to have been deployed in May 1998, with 5–10 missiles in the 47th artillery brigade.
g. It is believed to have both “conventional” (BCW?) warheads and a 30–40-kt nuclear payload.
h. A version for a satellite booster may be in development.
i. Pakistan stated in late May 1998 that it was ready to equip the Ghauri with nuclear weapons.
j. The Ghauri was tested again on April 14, 1999. Territorial limits mean that Pakistan can only test to a maximum range of 1,165 km on its own soil. This time, Pakistan seems to have tested the Ghauri II with a range of 2,000–2,300 km and a 750–1,000-kg payload.
13. The Ghauri II (Hatf 6) is sometimes credited with a range of up to 3,000 km.
a. Some U.S. experts believe it has a maximum range of 2,300 km, but can only go 2,000 km with its present nuclear warhead.
b. The missile was ordered in 1993 and limited production began in 1999.
c. It is a liquid-fueled missile and takes some time to prepare, possibly making it vulnerable to Indian strikes.
d. The Carnegie Endowment reports that China is reported to be constructing a factory to build similar missiles.
14. The Ghauri III (Hatf 7) is sometimes credited with a range of up to 3,000 km.
a. The missile was ordered in 1993 and is still developmental.
b. Pakistan recovered a U.S. cruise missile that went astray during the U.S. attack on Afghanistan in late August 1998.
c. The CIA reported in February 1999 that Chinese and North Korean entities continued to provide assistance to Pakistan’s ballistic missile program. Such assistance is critical for Islamabad’s efforts to produce ballistic missiles.
d. In April 1998, the United States imposed sanctions against Pakistani and North Korean entities for their role in transferring Missile Technology Control Regime Category I ballistic missile-related technology.
15. The DCI Nonproliferation Center (NPC) reported in February 2000 that Chinese and North Korean entities continued to provide assistance to Pakistan’s ballistic missile program during the first half of 1999. Such assistance is critical for Islamabad’s efforts to produce ballistic missiles. In April 1998, Pakistan flight-tested the Ghauri MRBM, which is based on North Korea’s No Dong missile. Also in April 1998, the United States imposed sanctions against Pakistani and North Korean entities for their role in transferring Missile Technology Control Regime Category I ballistic missile-related technology. In April 1999, Islamabad flight-tested another Ghauri MRBM and the Shaheen-1 SRBM.
a. The U.S. intelligence community reported on July 1, 2000 that China continued to aid Pakistan in building long-range missiles, and had stepped up its shipments of specialty steels, guidance systems, and technical expertise. It also stated that Pakistan’s newest missile factory seemed to follow Chinese designs.
16. The CIA reported in August 2000 that Chinese entities provided increased assistance to Pakistan’s ballistic missile program. North Korea continued to provide important assistance as well. Such assistance is critical for Islamabad’s efforts to produce ballistic missiles. In April 1998, for example, Pakistan flight-tested the Ghauri MRBM, which is based on North Korea’s No Dong missile. As a result, the United States imposed sanctions against Pakistani and North Korean entities in April 1998 for their role in transferring Missile Technology Control Regime Category I ballistic missile-related technology.
In April 1999, Islamabad flight-tested another Ghauri MRBM and the Shaheen-1 SRBM and can be expected to respond to another successful Indian missile test (e.g., Agni-II or Prithvi-II) with a new test flight of a Ghauri or Shaheen missile.
17. The CIA reported on January 30, 2002 that Chinese entities continued to provide significant assistance to Pakistan’s ballistic missile program during the reporting period. With Chinese assistance, Pakistan is moving toward serial production of solid-propellant SRBMs, such as the Shaheen-I and Haider-I. Pakistan flight-tested the Shaheen-I in 1999 and plans to flight-test the Haider-I in 2001. Successful development of the two-stage Shaheen-II MRBM will require continued Chinese assistance or assistance from other potential sources. In addition, firms in China have provided dual-use missile-related items, raw materials, and/or assistance to several other countries of proliferation concern—such as Iran, North Korea, and Libya.

Chemical Weapons
1. Pakistan has long been involved in the development of chemical weapons—possibly since the early 1980s.
2. It has probably reached the point of final development and weaponization for a number of agents.
3. There is no evidence of production capability, but Pakistan’s market for industrial chemicals is expanding gradually, with production of chemicals largely confined to soda ash, caustic soda, sulfuric and hydrochloric acid, sodium bicarbonate, liquid chlorine, aluminum sulfate, carbon black, acetone, and acetic acid. Although imports account for most of the market, local production is expected to increase as new plants come on stream. There are over 400 licensed pharmaceutical companies in Pakistan, including 35 multinationals that hold over 60% of the market share. Approximately one-third of Pakistan’s total consumption of pharmaceuticals is imported. Major suppliers include the United States, the U.K., Germany, Switzerland, Japan, the Netherlands, and France.
4. Pakistan ratified the CWC on 28 October 1997. The CWC was neither discussed in the parliament nor brought before the Federal Cabinet. Pakistan claimed that it did not have chemical weapons capabilities to declare under the Convention. Although Pakistan did not admit to the manufacture of chemical weapons, it uses and consumes chemicals that can be utilized for producing chemical weapons, and would have been denied access to such dual-use chemicals had it not joined the CWC.
5. The Federation of American Scientists reports that, according to Indian intelligence estimates, Pakistan has manufactured weapons for blister, blood, choking, and nerve agents. China may be a supplier of technology and equipment to Pakistan. India claims that Pakistan used chemical weapons against Indian soldiers in Siachen in 1987.
6. In 1992 India declared to Pakistan that it did not possess chemical weapons, and India and Pakistan issued a declaration that neither side possessed or intended to acquire or use chemical weapons.
7. Pakistan is now obligated under the CWC to open all its installations for inspection. At the first stage, a team of UN inspectors visited the Wah Ordnance Factory on 19 February 1999 to assess whether Pakistan was producing chemical weapons. The FAS states that according to one published report, “the Pakistani government had dismantled the chemical plant in the factory, the earth was dug up quite deeply after the plant was dismantled, and it was followed by a leveling of the land.”

Biological Weapons
1. Pakistan has long been involved in the development of biological weapons—possibly since the early 1980s.
2. It has probably reached the point of final development and weaponization for a number of agents.
3. There is no evidence of production capability, but Pakistan has a well-developed biological and biotechnical R&D and production base by the standards of a developing nation.
4. Pakistan has signed the BWC, and is participating in the negotiations to develop a verification protocol. It has opposed artificial deadlines and an emphasis on creating a comprehensive verification regime that could not be based on consensus.

Nuclear Weapons
1. According to the Carnegie Endowment, Pakistan began its nuclear weapons program in 1972, in the aftermath of the 1971 war with India. The program accelerated after India’s nuclear test in May 1974, and made substantial progress by the early 1980s.
2. Carnegie reports that the program was expedited by the return to Pakistan in 1975 of Dr. Abdul Qadeer Khan, a German-trained metallurgist, who was employed at the classified URENCO uranium-enrichment plant at Almelo in the Netherlands in the early 1970s. Dr. Khan brought to Pakistan personal knowledge of gas-centrifuge equipment and industrial suppliers, especially in Europe, and was put in charge of building, equipping, and operating Pakistan’s Kahuta enrichment facility.
3. Pakistan halted further production of weapons-grade uranium in 1991, temporarily placing a ceiling on the size of its stockpile of highly enriched uranium (HEU). It has made efforts to expand other elements of its nuclear weapons program, however, including work on weapons design, on unsafeguarded facilities to produce plutonium and, possibly, on facilities to increase the production capacity for weapons-grade uranium.
4. The United States terminated economic and military aid to Pakistan in 1977 and 1979 in an effort to force it to halt its nuclear weapons program.
5. According to work by the Federation of American Scientists:
a. President Ayub Khan took initial steps in 1965, but Pakistan’s Atomic Energy Commission was founded some 15 years after the Indian program. Zulfiqar Ali Bhutto was the founder of Pakistan’s nuclear program, initially as Minister for Fuel, Power and Natural Resources, and later as President and Prime Minister.
b. Pakistan’s nuclear program was launched in earnest shortly after the loss of East Pakistan in the 1971 war with India, when Bhutto initiated a program to develop nuclear weapons with a meeting of physicists and engineers at Multan in January 1972.
6. Bhutto reacted strongly to India’s successful test of a nuclear “device” in 1974, and insisted Pakistan must develop its own “Islamic bomb.” Pakistan’s activities were initially centered in a few facilities. A.Q. Khan founded the Engineering Research Laboratories at Kahuta in 1976, which later became the Dr. A.Q. Khan Research Laboratories (KRL).
a. Almost all of Pakistan’s nuclear program was and remains focused on weapons applications.
b. Initially, Pakistan focused on plutonium. In October 1974 Pakistan signed a contract with France for the design of a reprocessing facility for the fuel from its power plant at Karachi and other planned facilities. However, France withdrew at the end of 1976, after sustained pressure by the United States.
7. In 1975, Dr. A.Q. Khan provided plans for uranium-enrichment centrifuges stolen from URENCO, along with lists of sources of the necessary technology. Pakistan initially focused its development efforts on HEU, and exploited an extensive clandestine procurement network to support these efforts. Plutonium involves more arduous and hazardous procedures and cumbersome and expensive processes.
8. In 1981, a U.S. State Department cable was leaked that stated that “We have strong reason to believe that Pakistan is seeking to develop a nuclear explosives capability ... Pakistan is conducting a program for the design and development of a triggering package for nuclear explosive devices.” In 1983, the United States declassified an assessment that concluded that “There is unambiguous evidence that Pakistan is actively pursuing a nuclear weapons development program ... We believe the ultimate application of the enriched uranium produced at Kahuta, which is unsafeguarded, is clearly nuclear weapons.”
a. Chinese assistance in the development of gas centrifuges at Kahuta was indicated by the presence of Chinese technicians at the facility in the early 1980s. The uranium-enrichment facility began operating in the early 1980s, but suffered serious start-up problems. In early 1996 it was reported that the A.Q. Khan Research Laboratory had received 5,000 ring magnets, which can be used in gas centrifuges, from a subsidiary of the China National Nuclear Corporation.
b. Pakistan became increasingly dependent on China as Western export controls and enforcement mechanisms became more stringent. This Chinese assistance predated the 1986 Sino-Pakistani atomic cooperation agreement, with some critical transfers occurring from 1980 to 1985. Pakistan Foreign Minister Yakub Khan was present at the Chinese Lop Nor test site to witness the test of a small nuclear device in May 1983, giving rise to speculation that a Pakistani-assembled device was detonated in this test.
c. At some point near the signing of the 1986 Sino-Pakistani atomic cooperation agreement, Pakistan seems to have embarked on a parallel plutonium program. A heavy-water reactor at Khushab was built with Chinese assistance and is the central element of Pakistan’s program for production of plutonium and tritium for advanced compact warheads. The Khushab facility, like that at Kahuta, is not subject to IAEA inspections. Khushab, with a capacity variously reported at between 40 and 70 MWt, was completed in the mid-1990s, with the start of construction dating to the mid-1980s.
9. China has played a major role in many aspects of Pakistan’s nuclear program:
a. China is reported to have provided Pakistan with the design of one of its warheads, as well as sufficient HEU for a few weapons. The 25-kt design was the one used in China’s fourth nuclear test, which was an atmospheric test using a ballistic missile launch. This configuration is said to be a fairly sophisticated design, with each warhead weighing considerably less than the unwieldy, first-generation U.S. and Soviet weapons, which weighed several thousand kilograms.
b. Pakistan purchased 5,000 custom-made ring magnets from China, a key component of the bearings that support high-speed rotation of centrifuges. Shipments of the magnets, which were sized to fit the specific type of centrifuge used at the Kahuta plant, were apparently made between December 1994 and mid-1995. It was not clear whether the ring magnets were intended for Kahuta as a “future reserve supply,” or whether they were intended to permit Pakistan to increase the number of uranium-enrichment centrifuges, either at Kahuta or at another location.
c. As of the mid-1990s it was widely reported that Pakistan’s stockpile consisted of as many as 10 nuclear warheads based on a Chinese design.
10. Pakistan now has extensive nuclear facilities:
a. There is a 50–70-MW research and plutonium production reactor at Khushab.
b. The main plutonium extraction plant is at Chasma, and is not under IAEA inspection. The Pakistani Institute of Nuclear Science and Technology has pilot plants for plutonium extraction that are not under IAEA control.
c. The Khan Research Laboratory at Kahuta is a large-scale uranium-enrichment plant not under IAEA control.
11. The Carnegie Endowment reports that Pakistan has continued work, with Chinese assistance, on its 40-MW heavy-water research reactor at Khushab. Pakistan reportedly completed the Khushab reactor in 1996, but it has not been fueled, apparently because of Pakistan’s inability to procure (or produce) a sufficient supply of unsafeguarded heavy water.
12. Khushab has not been placed under IAEA controls. It is estimated to be capable of generating enough plutonium for between one and two nuclear weapons annually. Once operational, it could provide Pakistan with the country’s first source of plutonium-bearing spent fuel free from IAEA controls. Not only would this increase Pakistan’s overall weapons production capabilities by perhaps 20–30% (assuming that the Kahuta enrichment plant can produce enough weapons-grade uranium for three to four weapons per year), but the availability of plutonium would permit Pakistan to develop smaller and lighter nuclear warheads. This in turn might facilitate Pakistan’s development of warheads for ballistic missiles. In addition, Pakistan might employ the Khushab reactor to irradiate lithium-6 to produce tritium, a material used to “boost” nuclear weapons so as to improve their yield-to-weight efficiency.
13. Weapons-grade plutonium from the Khushab reactor’s spent fuel could be extracted at the nearby Chasma reprocessing plant, if that facility becomes operational, or at the pilot-scale New Labs reprocessing facility at the Pakistani Institute of Nuclear Science and Technology (PINSTECH) in Rawalpindi—both facilities being outside IAEA purview.
14. China is reported to be assisting Pakistan with completing a facility linked to the Khushab reactor and thought to be either a fuel fabrication plant or a plutonium separation (reprocessing) plant. Pakistan previously was not thought to have a fuel fabrication facility to manufacture fuel for the new reactor.
15. The status of Pakistan’s reprocessing capabilities at New Labs in Rawalpindi and at the Chasma site has not been clear from published sources. A classified U.S. State Department analysis prepared in 1983 said that the New Labs facility was “nearing completion” at that time; thus the facility could well be available for use today. Reports on the Chasma reprocessing facility in the early 1990s suggested that it was progressing, but probably still several years from completion. According to an analysis by the CIA quoted in the press, as of April 1996, China was providing technicians and equipment to help finish the facility. According to reports of August 1997, however, U.S. officials believe that, while some Chinese assistance and equipment may have trickled into the Chasma reprocessing project, the reprocessing complex at Chasma “is an empty shell.” If this description is correct, Pakistan may have only the laboratory-scale reprocessing capability at New Labs and may be further from major plutonium reprocessing activities than once thought.
16. Pakistani specialists also pursued efforts to improve the Kahuta enrichment plant and, possibly, to expand the country’s capacity to enrich uranium. A uranium weapon needs roughly 15 kg of U-235 with 93% enrichment.
17. On 28 May 1998 Pakistan announced that it had successfully conducted five nuclear tests. These tests came slightly more than 2 weeks after India carried out five nuclear tests of its own, and after many warnings by Pakistani officials that they would respond to India (the two countries have fought three wars). In addition, Pakistan’s President Rafiq Tarar declared a state of emergency, citing “threat by external aggression to the security of Pakistan.”
18. According to the announcement, the results were as expected, and there was no release of radioactivity. The Pakistan Atomic Energy Commission claimed that the five nuclear tests conducted on Thursday measured up to 5.0 on the Richter scale, with a reported yield of up to 40 kt (equivalent TNT). According to some reports the detonations took place over a two-hour period. One device was said to be a boosted uranium device, with
the four other tests being low-yield sub-kiloton devices.
19. On 30 May 1998 Pakistan tested one more nuclear warhead with a yield of 12 kt. The tests were conducted at Balochistan, bringing the total number of claimed tests to six.
20. It has also been claimed by Pakistani sources that at least one additional device, initially planned for detonation on 30 May 1998, remained emplaced underground ready for detonation. These claims cannot be independently confirmed by seismic means.
21. Indian sources have said that as few as two weapons were actually detonated, each with yields considerably lower than claimed by Pakistan. Three of the tests on May 28, however, may have been sub-kiloton. The two larger tests indicate one may have been a test of a boosted weapon of 25–36 kt. The second has a claimed yield of 12 kt, and a seismic signature of 7–8 kt.
22. The FAS indicates that seismic data showed at least two and possibly a third, much smaller, test in the initial round of tests at the Ras Koh range. The single test on 30 May provided a clear seismic signal; although Pakistan claimed a 12-kt yield, the data indicate 1–3 kt.
a. Pakistan’s Foreign Minister announced on May 29, 1998 that Pakistan was a nuclear power.
b. He stated that “Our nuclear weapons capability is solely meant for national self defense. It will never be used for offensive purposes.” He also stated, however, that “We have nuclear weapons, we are a nuclear power ... we have an advanced missile program” and that Pakistan would retaliate “with vengeance and devastating effect” against any attack by India.
c. He claimed that Pakistan had tested five nuclear devices in the Chagai Hills in western Pakistan on May 28, 1998. It is not clear that Pakistan tested this many, and it may simply have claimed to have tested as many as India had earlier.
d. Pakistani scientists (Dr. Abdul Qadeer Khan and Dr. Samar Mubarak Mund) said at the time that Pakistan would need 60–70 warheads to have a credible deterrent.
e. Pakistan announced in February 2000 that it was creating a new National Command Authority (NCA) to control its long-range missiles and nuclear program. It is responsible for policy and strategy, and “will exercise employment and development control over all the strategic forces and strategic organizations.”
f. It is co-located with the Joint Strategic Headquarters.
g. A new Strategic Plans Division has been created under a Lt. General, and acts as a secretariat for the NCA. The NCA has two committees.
h. The Employment Control Council determines the shape and use of the nuclear arsenal. It is chaired by the head of state with the Foreign Minister as Deputy Chairman. It includes the Chairman of the Joint Chiefs, the service chiefs, the Director General of the Strategic Plans Division, and other scientific, technical, and political representatives as are required by the committee.
i. The Development Council supervises the development of nuclear and missile forces and related C4I systems. It is chaired by the head of government, with the Chairman of the Joint Chiefs as a Deputy; the service chiefs, the Director General of the Strategic Plans Division, and scientific and technical representatives are members.
23. The Carnegie Endowment estimates that Pakistan has over 200 kg of weapons-grade highly enriched uranium—enough to construct fifteen to twenty-five nuclear weapons (India could build about seventy).
24. Pakistan is thought to have received a workable nuclear bomb design from China in the early 1980s, and to have conducted a “cold test”—a full test, but without a core of weapons-grade material—of this design in 1986.
25. The CIA reported in February 1999 that Pakistan sought a wide variety of dual-use nuclear-related equipment and materials from sources throughout the world during the first half of 1998. Islamabad has a well-developed nuclear weapons program, as
evidenced by its first nuclear weapons tests in late May 1998. (The United States imposed sanctions against Pakistan as a result of these tests.) Acquisition of nuclear-related goods from foreign sources will be important for the development and production of more advanced nuclear weapons. The same report noted that China had provided extensive support in the past to Pakistan’s WMD programs, and that some assistance continues.
26. The DCI Nonproliferation Center (NPC) reported in February 2000 that Pakistan acquired a considerable amount of nuclear-related and dual-use equipment and materials from various sources—principally in the FSU and Western Europe—during the first half of 1999. Islamabad has a well-developed nuclear weapons program, as evidenced by its first nuclear weapons tests in late May 1998. (The U.S. imposed sanctions against Pakistan as a result of these tests.) Acquisition of nuclear-related goods from foreign sources will be important if Pakistan chooses to develop more advanced nuclear weapons. China, which has provided extensive support in the past to Islamabad’s WMD programs, in May 1996 promised to stop assistance to unsafeguarded nuclear facilities—but we cannot rule out ongoing contacts. George Tenet, the Director of the CIA, testified before the Senate Foreign Relations Committee on March 20, 2000 and stated that, “India and Pakistan are developing more advanced nuclear weapons and are moving toward deployment of significant nuclear arsenals. Both sides are postured in a way that could lead to more intense engagements later this year. Our concern persists that antagonisms in South Asia could still produce a more dangerous conflict on the subcontinent.”
27. The CIA reported in August 2000 that Pakistan continued to acquire nuclear-related and dual-use equipment and materials from various sources—principally in Western Europe—during the second half of 1999. Islamabad has a well-developed nuclear weapons program, as evidenced by its first nuclear weapons tests in late May 1998. Acquisition of nuclear-related goods from foreign sources will be important if Pakistan chooses to develop more advanced nuclear weapons. China, which has provided extensive support in the past to Islamabad’s WMD programs, in May 1996 promised to stop assistance to unsafeguarded nuclear facilities—but we cannot rule out ongoing contacts.
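A few of the quantitative claims in this appendix can be sanity-checked with simple arithmetic. The sketch below, in Python, is illustrative only: the stockpile and per-weapon HEU figures come from items 16 and 23 above and the production rates from item 12, while the magnitude-yield coefficients are generic hard-rock values (an assumption; such coefficients vary considerably with test-site geology and are not drawn from this appendix's sources).

# Illustrative arithmetic checks of figures quoted in this appendix.
# Inputs are the appendix's own numbers plus clearly marked assumptions.

# --- Weapons from the HEU stockpile (items 16 and 23) ---
stockpile_kg = 200.0        # "over 200 kg" of weapons-grade HEU (item 23)
heu_per_weapon_kg = 15.0    # rough per-weapon requirement (item 16)
print(f"200 kg / 15 kg -> about {stockpile_kg / heu_per_weapon_kg:.0f} weapons")
# Assuming more efficient designs need only 8-13 kg each (an assumption)
# brackets the "fifteen to twenty-five" range quoted in item 23.
print(f"Range: {stockpile_kg / 13:.0f} to {stockpile_kg / 8:.0f} weapons")

# --- Increment from the Khushab reactor (item 12) ---
for heu_weapons_per_year in (3, 4):       # Kahuta output assumed in item 12
    pct = 100 * 1 / heu_weapons_per_year  # one Pu weapon/yr, the low end
    print(f"+1 Pu weapon/yr on {heu_weapons_per_year}/yr -> +{pct:.0f}%")
# Roughly +25% to +33%, consistent with the 20-30% quoted in item 12.

# --- Seismic magnitude vs. claimed yield (items 18 and 22) ---
# A common relation for underground tests is mb = a + b * log10(yield_kt);
# a and b depend strongly on geology. These hard-rock values are assumed.
a, b = 4.45, 0.75

def yield_kt(mb: float) -> float:
    """Invert mb = a + b * log10(Y) to estimate yield in kilotons."""
    return 10 ** ((mb - a) / b)

for mb in (4.6, 5.0):
    print(f"mb {mb:.1f} -> roughly {yield_kt(mb):.1f} kt")
# mb ~4.6 implies ~1.6 kt (cf. the 1-3 kt estimate for the 30 May test);
# mb ~5.0 implies only ~5 kt, well under the 40 kt claimed for 28 May.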
Source: Prepared by Anthony H. Cordesman, Arleigh A. Burke Chair in Strategy, CSIS.
9
Negotiating Technology Issues
CONTENTS
Chapter Highlights ..... 667
Reference ..... 668
“We’re in It to Win It”—Negotiating Successful Outsourcing Transactions ..... 669
Introduction ..... 669
Step One: Establish the Requirement ..... 669
Step Two: Identify the Project Sponsor ..... 669
Step Three: Set Up the Teams ..... 669
Step Four: Develop the ITT ..... 670
Step Five: Run the Procurement Process ..... 670
Step Six: Award the Contract ..... 670
Conclusion ..... 671
Technology and International Negotiations ..... 671
Introduction ..... 671
The Period Prior to Focused Negotiations ..... 673
Setting the Stage for the Law of the Sea Conference ..... 673
The Prenegotiation Period in Perspective ..... 674
Pressure for Early International Negotiations ..... 675
Nonbinding Forums ..... 678
Pace of Prenegotiation Activity ..... 679
The Negotiation Period ..... 681
Options ..... 681
Management of Issues Involving Advanced Technology ..... 682
The Deep Seabed Mining Negotiations ..... 682
Economic Implications of Deep Seabed Mining ..... 682
Financial Arrangements Between Exploiters and the Authority ..... 684
Other Law of the Sea Conference Issues ..... 686
Limits of the Continental Shelf ..... 686
Archipelagoes ..... 687
Marine Living Resources and the Marine Environment ..... 687
Conclusion ..... 688
Role of Technical Information at Multilateral Negotiations ..... 688
Timing of the Negotiations ..... 688
Scope of the Negotiating Objectives and Solutions ..... 689
Management of the Technical Questions ..... 689
Information Generation ..... 690
Introducing Information into the Negotiation ..... 691
Evaluating and Integrating Technical Data into the Negotiation ..... 691
Conclusion ..... 692
Notes ..... 693
Member Trust in Teams: A Synthesized Analysis of Contract Negotiation in Outsourcing IT Work ..... 706
Abstract ..... 706
General Background ..... 707
A Synthesis of Definitions ..... 707
The Importance of Novel Environments and Trust ..... 708
The Social Context of Negotiating IT Work ..... 708
A Proposed Framework ..... 709
Task-Oriented Effects ..... 710
Communication of Management Support ..... 710
Strategic Importance of the Negotiation Effort ..... 711
Communication of Organizational Objectives ..... 712
Team-Interaction Effects ..... 712
Improved Knowledge Sharing ..... 713
Diffusion Effects in Requirements Development ..... 713
Process-Oriented Team Effects ..... 714
The Use of Power in Teams ..... 714
Linear Communication Practices in Contract Negotiations ..... 715
Scouting Activities Between Teams in the Larger Social Context ..... 715
Requisite Team Member Effects ..... 716
Team Member Perceptions of Individual Opportunities ..... 716
Team Member Perceptions of External IT Expertise ..... 716
Conclusion ..... 717
References ..... 718
Methodologies for the Design of Negotiation Protocols on E-Markets ..... 722
Abstract ..... 722
Introduction ..... 723
Design Methodologies ..... 723
Game-Theoretic Analysis of Negotiations ..... 724
Mechanism Design Theory ..... 725
Computational Economics and Simulation ..... 726
Experimental Economics ..... 727
Design of a Matching Mechanism for OTC Derivatives ..... 728
Trading Financial Derivatives ..... 728
Multi-Attribute Auctions ..... 729
An Internet-Based Marketplace for OTC Derivatives ..... 730
Research Questions ..... 731
Game-Theoretic Analyses ..... 732
Computational Exploration and Simulation ..... 733
Laboratory Experimentation ..... 735
Experimental Design ..... 735
Comparison of Equilibrium Values ..... 735
Efficiency of Multi-Attribute Auctions ..... 736
Conclusions ..... 736
References ..... 737
Using Computers to Realize Joint Gains in Negotiations: Toward an “Electronic Bargaining Table” ..... 738
Abstract ..... 738
Introduction ..... 739
Background ..... 739
A Framework for System Development ..... 739
Existing Negotiation Support Systems ..... 740
Evaluation of NSS ..... 741
Negotiation Assistant: Design and Operation ..... 742
Design Criteria ..... 742
Operation ..... 743
Experiment and Results ..... 745
Hypotheses ..... 745
The Negotiation Scenario ..... 746
Experimental Setup ..... 746
Results ..... 748
Discussion and Conclusions ..... 752
Notes ..... 754
References ..... 756
A Conceptual Framework on the Adoption of Negotiation Support Systems ..... 757
Abstract ..... 757
Introduction ..... 758
Negotiation Support Systems and Adoption Models ..... 758
Data Analysis ..... 759
Implications of Findings ..... 762
Conceptual Framework ..... 763
Organizational Culture ..... 764
Industry Characteristics ..... 765
Other Implications ..... 766
Concluding Remarks ..... 766
Notes ..... 767
References ..... 767
The organizational culture must be unfrozen so that new cultural norms can be produced.
Edgar Schein
CHAPTER HIGHLIGHTS
† Succeeding at the global negotiation table requires more than the traditional deal-making skills. It also requires the ability to deal with the unexpected politically, ideologically, culturally, and environmentally.
† There is an important distinction between distributive negotiation, in which parties bargain over a fixed pie, and integrative bargaining, in which parties may expand the pie through problem solving, creativity, identification of differences in priorities, and/or compatibility of interests.
† Four factors affecting member trust within contract negotiation teams merit attention. They include: (1) task-oriented effects; (2) team-interaction effects; (3) process-oriented effects; and (4) requisite team member effects.
† Projected future computer-aided negotiation includes such things as the virtual handshake and meeting (short term); computer-assisted information gathering and analysis
(short/medium term); dialogue recognition and Pareto improvements (medium term); and bio-function monitoring and brainwave recognition (long/distant term).
† The viability of pertinent international law is open to question in areas that are subject to the dual pressures of rapid technological advances and efforts to have the entire community of nations actively participate in the development of that law. As a result, the international community is faced with a difficult management problem.
† The design of negotiation protocols is a challenging research direction and involves a number of disciplines, including information systems development, game theory, mechanism design theory, simulation, and laboratory experimentation.

This chapter focuses on three seminal aspects of technology negotiation: (1) suggestions on issues to be included in bargaining as one seeks to acquire technology; (2) matters to be managed as one proceeds through and subsequently lives with the results of negotiations; and (3) salient issues about which to be cognizant as one utilizes technology to negotiate technology acquisition.

To the first point regarding the conceptual approach to negotiation, the bargaining game today is different, as years of globalization and the proliferation of technology in all aspects of commerce and industry have left their mark. To avoid costly blunders, today’s negotiators need to be simultaneously resilient, steadfast, innovative, and credible. Throughout the process, negotiators must remain focused on the fact that an information technology infrastructure is the complex set of IT resources that provide a technological foundation for a firm’s present and future business applications. High tech (the acquisition of sophisticated hardware and software systems) is not the pursuit ... rather, good tech (the application of sophisticated hardware systems) is the passion (Stupak).

To the second point regarding the management of the negotiation process, both ends and means must be carefully considered. The effects of acquiring and implementing new technologies and production techniques are best understood not only in terms of the capabilities and characteristics of the technologies/techniques themselves, but also as an outcome of the political bargaining leading to selection, development, deployment, and use. As one prepares for negotiation, the checkerboard must be supplanted by the chess table, with prenegotiation techniques, in-process techniques, and post-process techniques rigorously pursued, Grunbacher suggests.1 Collaborative engineering, an approach to the design and deployment of technologies to support mission-critical tasks (Grunbacher), is a methodology with promise in means-end leadership/management.

The final point concerns the use of technology to negotiate additional technology acquisition. An appreciation is developed for the fact that there has been a progression in negotiation over the years. From purely face-to-face negotiation (typically resulting in inefficient, suboptimal outcomes), to computer-assisted negotiation (negotiation support systems), to computer support and simulation models, and now computer-mediated negotiations, parties are increasingly locating and executing tradeoffs that maximize gains in multi-issue negotiations (Rangaswamy). This progression, in concert with the proliferation of virtual organizations in tech-laden markets, adds a degree of complexity (and excitement!) to technology acquisition heretofore not experienced.
The potential conundrum represented by “virtuality” is that technology infrastructure is vital to virtual corporations, yet there is reticence to capitalize such infrastructure when organizational participants are autonomous, nimble, and diverse. Pondering the content of this chapter on technology negotiation will result in insights that bolster one’s adroitness as a negotiator.
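To make the idea of locating joint gains concrete, the following minimal sketch (in Python, with invented issues, options, and utility numbers) enumerates the packages in a toy two-party, multi-issue negotiation and flags those that improve on a tentative deal for both sides; this is the kind of search a negotiation support system automates with far more care.

from itertools import product

# Toy multi-issue negotiation: each issue has discrete options, and each
# party assigns a utility to every option (all numbers invented).
issues = {
    "price":    ["high", "medium", "low"],
    "warranty": ["1yr", "3yr"],
    "delivery": ["fast", "standard"],
}

# Additive utilities per party (assumed; a real NSS would elicit these).
buyer = {
    "price":    {"high": 0, "medium": 5, "low": 10},
    "warranty": {"1yr": 1, "3yr": 6},
    "delivery": {"fast": 4, "standard": 2},
}
seller = {
    "price":    {"high": 10, "medium": 6, "low": 1},
    "warranty": {"1yr": 5, "3yr": 2},
    "delivery": {"fast": 2, "standard": 4},
}

def utility(party, package):
    """Additive utility of a package (issue -> chosen option) for one party."""
    return sum(party[issue][choice] for issue, choice in package.items())

def packages():
    """Generate every possible package of options across all issues."""
    names = list(issues)
    for combo in product(*(issues[n] for n in names)):
        yield dict(zip(names, combo))

# A tentative deal on the table:
current = {"price": "medium", "warranty": "1yr", "delivery": "standard"}
u_b, u_s = utility(buyer, current), utility(seller, current)

# Joint gains: packages at least as good for both parties and strictly
# better in total -- the integrative "expand the pie" moves.
improvements = [
    p for p in packages()
    if utility(buyer, p) >= u_b and utility(seller, p) >= u_s
    and utility(buyer, p) + utility(seller, p) > u_b + u_s
]
for p in improvements:
    print(p, "buyer:", utility(buyer, p), "seller:", utility(seller, p))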
REFERENCE
1. Grunbacher, P., Integrating Collaborative Processes and Quality Assurance Techniques: Experiences from Requirements Negotiation, Journal of Management Information Systems, Spring 2004.
“WE’RE IN IT TO WIN IT”—NEGOTIATING SUCCESSFUL OUTSOURCING TRANSACTIONS*

* Kit Burden, Barlow Lyde & Gilbert.

INTRODUCTION
Much emphasis has been placed on the continuing growth of the outsourcing sector in the recent recessionary times. However, whilst much of this growth has undeniably been due to the budget-driven imperative to lower costs, the long-term advantages of lower cost, guaranteed service levels, and access to best practice/latest technology mark out the outsourcing sector as one which is set for sustained growth. In this context, the importance of establishing sound foundations for what are invariably long-term deals is clear. This paper will accordingly establish the key stages involved in establishing the foundations for a successful outsource project, and the pitfalls to avoid.
STEP ONE: ESTABLISH THE REQUIREMENT
In today’s economic environment, this is usually straightforward, i.e., to achieve tangible cost savings. What is necessary in this regard is to have a clear picture of the real cost of providing the relevant services “in house” (factoring in the guarantee of particular service levels, which is unlikely to be on offer from an in-house function) so as to be able to compare and contrast it with the figures put forward by the potential outsource suppliers. It is essential that like is compared with like; there is little point in comparing the pricing of an apparently deficient internal function with the level of service which could be supplied by one of the leading outsource providers, such as IBM, EDS, Fujitsu, or CSC.
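As a rough illustration of the like-for-like comparison described above, the sketch below (Python; all figures and the uplift percentage are invented) adjusts an in-house baseline upward for the cost of actually matching a guaranteed service level before setting it against hypothetical supplier bids.

# Hypothetical like-for-like cost comparison (all figures invented).

in_house_annual_cost = 4_200_000   # current internal run cost, per year
uplift_for_service_levels = 0.15   # assumed extra cost for the in-house
                                   # function to match the guaranteed
                                   # service levels a supplier would commit
                                   # to contractually

# The fair in-house comparator includes the cost of matching service levels.
in_house_comparator = in_house_annual_cost * (1 + uplift_for_service_levels)

supplier_bids = {                  # hypothetical annual bid figures
    "Supplier A": 4_500_000,
    "Supplier B": 4_100_000,
    "Supplier C": 4_800_000,
}

print(f"In-house comparator: {in_house_comparator:,.0f}")
for name, bid in sorted(supplier_bids.items(), key=lambda kv: kv[1]):
    saving = in_house_comparator - bid
    print(f"{name}: {bid:,.0f}  (saving vs in-house: {saving:,.0f})")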
STEP TWO: IDENTIFY THE PROJECT SPONSOR
One of the most common causes of failure for outsourcing projects (both in terms of aborted procurement processes and problems arising during service provision) is the lack of an appropriate project sponsor, i.e., someone suitably senior within the client organization who assumes “ownership” of the project and who has the ability to make things happen, e.g., by freeing up resources, resolving internal disputes, or liaising with other affected parts of the business. The larger the proposed outsourcing, the more senior this person would need to be, right up to the C.I.O./IT director level, or even the Chief Executive. Suppliers will equally need to be reassured that such an individual exists and is serious about pushing ahead with the proposed outsourcing project, before committing potentially significant amounts of effort to bidding for the work.
STEP THREE: SET UP THE TEAMS
Running an outsource procurement process is a project in itself, and needs to be staffed and planned accordingly. From the client side, key constituents of the team will be representatives of the affected business areas, in-house procurement/sourcing specialists, HR, and of course legal (invariably both in-house and from external outsourcing specialists such as Barlow Lyde & Gilbert). Increasingly, the support of external sourcing consultants is also engaged, not simply for their experience with the process but also, as with the lawyers, for their experience of negotiating similar deals with the same types of suppliers, and their knowledge of what kind of prior deals are being offered elsewhere.

The Suppliers will also need to set up their own bidding teams, which can be quite sizeable. Aside from subject matter specialists to deal with the assessment of the requirement, legal advisors will need to be engaged throughout and a specialist commercial “negotiator” will usually be identified to lead the overall discussions on behalf of the Supplier. This should usually not be
the same person who is going to lead the delivery team, as the skill sets for the two roles are very different, and it would be preferable for the delivery team to be kept “clean” from any rancor which may develop during the bid process.
STEP FOUR: DEVELOP THE ITT
All too often, the ITT (invitation to tender) is addressed in a haphazard or half-hearted way, to the ultimate detriment of both the client and the outsource supplier. The ITT should ideally not be a broadly-worded “wish list,” but instead a clear and easily auditable statement of requirements, capable of being referred to in the ultimate contract as a document against which the performance of the supplier can be judged. Outsource suppliers will usually also see the benefit of a clearly drafted ITT, as it enables them to identify clearly where their proposals are either compliant or noncompliant, and where they may add value which the client may not otherwise immediately have appreciated.

One topic for debate is whether the ITT should incorporate a full draft contract, or instead a list of “principles” to which the supplier can be required to respond. My own preference when acting for the client in such transactions is to have a proper draft contract, on the basis that the actual implications of positions adopted by a bidder will only become clear once the actual drafting of their proposed changes to the contract has been provided.
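One way to keep an ITT “clear and easily auditable” is to express each requirement as a uniquely identified, measurable record that the eventual contract can reference. The sketch below is a minimal illustration; the field names and entries are invented, not drawn from any actual ITT.

# Minimal sketch of an auditable ITT requirements register (entries invented).
requirements = [
    {
        "id": "REQ-001",
        "statement": "Service desk shall resolve priority-1 incidents "
                     "within 4 elapsed hours.",
        "measure": "monthly % of P1 incidents resolved within 4 hours",
        "target": "98%",
        "mandatory": True,
    },
    {
        "id": "REQ-002",
        "statement": "Supplier shall provide monthly service reports.",
        "measure": "report delivered by 5th working day of each month",
        "target": "100%",
        "mandatory": True,
    },
]

# A bidder's response can then be assessed requirement by requirement,
# e.g. marking each as compliant, partially compliant, or non-compliant.
response = {"REQ-001": "compliant", "REQ-002": "partially compliant"}
for req in requirements:
    print(req["id"], "->", response.get(req["id"], "no response"))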
STEP FIVE: RUN THE PROCUREMENT PROCESS
Once the ITT has been issued and a flood (hopefully!) of responses has been received, the process of assessment begins. Whilst much concentration will inevitably and correctly be applied to considerations of cost and perceived quality, it is essential to remember that, at the end of the day, it is the contract which represents the deal which has been struck, and accordingly the supplier’s response to the proposed contract terms should be given equal prominence in the evaluation process. To this end, the client should beware the ITT responses which duck direct questions about “difficult” contract provisions regarding such issues as service levels and credits, limits of liability, and ownership of IPRs, e.g., by professing a willingness to “negotiate” these terms in the future, but without giving any indication as to the supplier’s starting position.

Suppliers can themselves use the contract negotiation process as a positive differentiator, rather than an exercise in guerrilla warfare to be either “won” or “lost.” An emphasis on openness in the way in which the contract is being approached will often be much appreciated by the client’s advisers who have to assess and compare the responses from the various bidders, and, given the right “spin,” can be presented as evidence of the supplier’s professionalism in its overall response (a characteristic of great value to a client, given the likely sensitivity of the proposed outsourcing and the impact which a failed project could have upon its business).

Current best practice appears to be to reduce the number of bidders down to an initial short list for the purpose of any further due diligence, and to conduct detailed negotiations with no more than two (or possibly three) of them. This involves a trade-off between the time and expense of parallel negotiations and the benefit to be gained from maintaining competition between bidding suppliers right up to the point of contract award, on contract terms as well as issues of pricing.
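Giving contract terms “equal prominence” can be done mechanically by scoring them alongside cost and quality with explicit weights. The sketch below (Python; weights and scores invented for illustration) ranks hypothetical bidders this way; note how a weak contract-terms score drags down an otherwise attractive bid.

# Hypothetical weighted evaluation of ITT responses (all numbers invented).
weights = {"cost": 0.4, "quality": 0.3, "contract_terms": 0.3}

# Scores out of 10 per criterion; contract terms score low where a bidder
# "ducks" provisions such as service credits or limits of liability.
bids = {
    "Supplier A": {"cost": 8, "quality": 7, "contract_terms": 4},
    "Supplier B": {"cost": 7, "quality": 8, "contract_terms": 8},
    "Supplier C": {"cost": 9, "quality": 6, "contract_terms": 5},
}

def weighted_score(scores):
    """Weighted sum of a bidder's criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(bids.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores):.2f}")
# The top two would go forward to detailed parallel negotiation, per the text.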
STEP SIX: AWARD THE CONTRACT
Contract signature is invariably accompanied by the popping of champagne corks and a period of mutual self-congratulation amongst the negotiation teams of the client and the successful bidder. However, in reality the procurement process is not yet finished. There will invariably be post-signature issues to be resolved (e.g., in respect of the transfer of staff or the valuation of assets to be passed across to the supplier), albeit that post-signature due diligence should always be minimized to the greatest extent possible.
In any event, the final contract should contain a detailed process for ongoing contract governance and service reporting. From the client side, this helps ensure that it retains visibility and ultimate control of the outsourced services. The supplier can also use the meetings and reports required by such a process as a means of highlighting issues which require the client’s action, e.g., in relation to the lack of required input on conflicting demands made by different parts of the client’s business.
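Service reporting of this kind typically feeds a mechanical service-credit regime agreed in the contract. The sketch below shows one common shape, assuming a tiered credit expressed as a percentage of the monthly charge; the target, bands, and percentages are invented for illustration.

# Hypothetical service-credit calculation (target, bands, figures invented).

def service_credit(availability_pct: float, monthly_charge: float) -> float:
    """Credit due for missing an assumed 99.5% availability target."""
    target = 99.5
    if availability_pct >= target:
        return 0.0
    shortfall = target - availability_pct
    # Tiered credits: deeper misses earn a larger % of the monthly charge.
    if shortfall <= 0.5:
        rate = 0.05
    elif shortfall <= 1.0:
        rate = 0.10
    else:
        rate = 0.20
    return monthly_charge * rate

for availability in (99.7, 99.2, 98.0):
    credit = service_credit(availability, monthly_charge=350_000)
    print(f"{availability}% availability -> credit {credit:,.0f}")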
CONCLUSION
Outsourcing projects are amongst the most complex commercial contracts around, and cannot be approached on anything other than a dedicated basis, using the input of people who are expert in the area. Whilst the larger outsource providers quite literally do this for a living and so have well-developed toolkits to guide them through the process, the same can rarely be said on the client side. Careful planning and team selection can accordingly help ensure that both parties come out of the procurement process with a deal they can live with.
TECHNOLOGY AND INTERNATIONAL NEGOTIATIONS*

* Jonathan I. Charney, Professor of Law, Vanderbilt University School of Law. The research for this article was supported by grants from the Vanderbilt University Research Council and the Vanderbilt University School of Law. Reproduced, with permission, from the American Journal of International Law, Volume 76, Issue 1 (January 1982), 78–118. Copyright © 1982 American Society of International Law. The American Journal of International Law is published by the American Society of International Law. Please contact the publisher for further permissions regarding the use of this work. Publisher contact information may be obtained at http://www.jstor.org.journals.asil.html.

INTRODUCTION
International law needs to respond to the dual pressure for change brought on by rapid technological development and the promulgation of the “New International Economic Order.” While a number of major international negotiations have been mounted in recent years to respond to these pressures, all of them have failed to meet expectations. The value of international law in shaping and stabilizing international behavior will certainly diminish unless the performance of these international negotiations is improved or other methods for effecting change are found. It is the purpose of this article to explore some of the procedures that the international community has used to respond to these pressures and to suggest areas for improvement or further exploration.1

Two of the major developments in international law since the end of World War II have been the attention that has been given to questions of international economic and industrial development and the increased use of multilateral treaty negotiations.2 The rapid pace of significant scientific and technological advances has increased pressure on the international system to accommodate these developments. Technological advances have historically been an important factor in the development of international law and relations.3 World War II saw the dawn of the nuclear age, which brought forth significant legal activity relating to the military and peaceful uses of nuclear energy, and the international legal system received an additional jolt in the late 1950s when the space age began with artificial earth satellites, which were followed shortly thereafter by space travel. The military and economic implications of this technology have only begun to be explored. Less dramatic but equally significant developments have taken place in the oceans, where petroleum exploitation has moved rapidly seaward and the mining of the deep seabed has become possible, albeit not yet commercially viable. Telecommunications, weather modification, and biological engineering are other areas where new technologies have or are about to have significant impacts on international law and relations.
In recent years, these advances often have given rise to multilateral negotiations aimed at producing international agreements to govern the new activities. While traditionally the international law relevant to new technological developments has slowly evolved out of the customs and practices of nations before being codified in international agreements, this gestation period has been considerably shortened in the last half of the 20th century. In the past the world and its resources have been large enough to assure that only a few national activities had an impact on the interests of other nations. This was particularly true in the world's commons, such as the oceans, where the concept of "res communis" has held sway for many years. Increased competition for resources and the global implications of some of the new technological developments have forced nations to realize that today individual nations cannot act in those areas without producing impacts on other nations. Consequently, the need for the early development of legal norms of behavior has increased. Contributing to this situation is a parallel political development. International relations could traditionally be characterized by an analogy to free market competition, in which nations jockey for positions of power, wealth, and influence by acting independently or in concert with other nations with similar interests. While this continues to be true, the promulgation of the New International Economic Order (NIEO) by the United Nations General Assembly (UNGA) and the continued activities of the Group of 77 (representing more than 120 developing countries) in the United Nations and other multilateral forums appear to have had a significant impact on the conduct of international relations.4 The NIEO is premised on the assumption that there should be a shift of political and economic power from the developed to the developing countries. While few developed countries have embraced the substance of this doctrine, they have all accepted its procedural point that the developing countries, usually acting as a group, have a right to negotiate with the developed countries on the substantive objectives of the NIEO, i.e., the shift of wealth and power. In no sector of international relations is this more apparent than that pertaining to new technological developments. In that area, where some important rights and benefits are not yet fully vested, the NIEO provides the basis for the developing countries' claim to have the right to participate in the earliest stages of legal and commercial development. Their objective is to assure themselves a "fair share" of the benefits from those activities and to prevent the developed countries from extending their "hegemony" to the new areas. For either or both of these reasons, interdependence and the NIEO, there is increased pressure on nations to respond by negotiating international agreements at an early date, before custom and practice have had an opportunity to define the legal options. Unfortunately, the record of the international community in these efforts is rather poor. The quality of activities at the UNGA and its related bodies has deteriorated perceptibly in recent years. They often provide an arena for political grandstanding rather than responsible forums for negotiation and compromise.
While one of the premier exercises of this kind, the Third United Nations Conference on the Law of the Sea (UNCLOS III), may be headed for eventual agreement, the negotiation has been very difficult and costly; no one knows whether the convention it may produce will ever come into force. The commodity negotiations under the auspices of the United Nations Conference on Trade and Development (UNCTAD), the International Telecommunication Union's World Administrative Radio Conferences (ITU WARC), the negotiations at the Conference on International Economic Cooperation (CIEC), and the conference of the United Nations Educational, Scientific and Cultural Organization (UNESCO) on the new world information and communications order, to name a few, have been arduous and relatively unproductive. The question naturally arises, "What, if anything, should individual nations or the international community as a whole do about this problem?" At the macro level, the current international situation has been characterized as "functional eclecticism" or "incrementalism," which means that a relatively disorganized international community reacts in an ad hoc manner to direct needs and demands.5 Modifications in the system are undertaken by trial and error in order to respond to direct and specific pressures. There exists no grand institutional design. Consequently, specially
identified short-term interests are more influential, and those concerned with second and third levels of interest are in weaker positions.6 Some suggest that alternative organizational approaches might present better opportunities for success. The functionalist school of thought would seek to expand a web of nonpolitical international institutional arrangements so that ultimately all international questions would be encompassed within a world community managed by an international bureaucracy.7 Others argue that the international system, and particularly the United Nations, should be reorganized to bring into focus the interdependence of international issues. Broad objectives could then be stated, and related interests would be addressed and reconciled within a single forum.8 Another alternative might be to terminate the ties of all international organizations to these subjects. While there may be theoretical merits to these proposals, no serious thought at the political level has been given to substantially changing the international system in this regard; nor can it be expected that any such changes will take place in the foreseeable future without a third world war, a global economic collapse, or an unforeseen development of tremendous significance.9 If the macro system is presumed to remain in place, the only feasible adjustments are interstitial. This article will address certain procedural adjustments that might be made to facilitate the orderly and sound development of legal rules demanded by the new technological and political developments discussed above. It will focus on the response that nations should make to the dual pressure to develop an international law adequate to the needs brought about by new technological developments and the NIEO. The stages of legal development in these areas can be divided into two time segments: the period prior to the introduction of direct negotiations aimed at creating an international agreement and the period after the beginning of those negotiations. The discussion below will focus on those time periods and will use the recent negotiations on the law of the sea, particularly the issues relating to deep seabed mining, as a basis for illustrating the problems that need to be considered.10
THE PERIOD PRIOR TO FOCUSED NEGOTIATIONS
The period prior to the holding of negotiations on an international agreement begins when serious thought is given to the need for new international law. In the course of such a period the subject receives increased attention, which will produce one of three results: the law will remain static, the law will change through the development of general international law, or the negotiation of a new international agreement will be undertaken. There appear to be four critical aspects of this period that warrant particular attention. First, there is the pressure to commence international negotiations on a new international agreement. For many, the conduct of such negotiations and the production of law is required. Others would exercise more discretion. A second important concern is the flow of information about the subject that appears to demand new international law. In the case of new technological developments or issues requiring fairly sophisticated technical analyses, the management of information at the early stages of this process can have a critical impact on subsequent progress. Third, it is important to give serious consideration to the use of nonbinding forums. While they may not serve as a source of law, such forums can play a critical role in the development of the legal issues. Finally, if a formal negotiation is to be undertaken, the participating nations must be committed to negotiating an agreement. One cannot assume that participation in formal negotiations is equivalent to a commitment to negotiate in good faith towards an agreement.

Setting the Stage for the Law of the Sea Conference
To determine when the negotiation period for the Third United Nations Conference on the Law of the Sea began, one might have to go as far back as the period prior to the 1930 Hague Codification Conference, which considered the law of the sea.11 While that conference produced no law of the sea convention, the 1958 United Nations Law of the Sea Conference did, and its preparatory work
started years before.12 The Second United Nations Conference on the Law of the Sea in 1960 did not produce an international agreement, although it was clearly called for that purpose.13 In part, the Third United Nations Conference on the Law of the Sea was born out of the failures of the prior conferences, but it also resulted from new technological and political demands that took on added significance in the mid-1960s and the 1970s. Thus, focus on the most recent effort began shortly after the failure of the second conference. It is commonly believed that the prospects for commercial exploitation of deep seabed manganese nodules by the developed Western countries prompted the rapid decision by the international community to seek a new convention on the law of the sea. Clearly, Arvid Pardo's 1967 speech in the UNGA spurred movement towards the negotiations, even though the manganese nodule issue may not have been the original motivation for the calling of a new conference.14 Many believed that the untold riches of the deep seabed were about to be appropriated by certain Western countries, and that other nations would be excluded from these benefits. Similar fears arose about the ever-improving technology to exploit hydrocarbon deposits on the continental shelf and in deeper waters. Consequently, the General Assembly passed a series of resolutions seeking to freeze the status quo, and the international community moved rapidly towards preparatory meetings and then the commencement of formal negotiations in 1973. Because many nations believed that the early entry into force of a multilateral treaty would be the only way to prevent a few nations from dominating deep seabed mining, little thought was given to alternatives or to the conduct of alternative prenegotiation activities that would maximize the opportunity for success at the negotiations.15 Seabed mining was certainly not the only reason for holding a new conference. Prior to the Pardo speech, some of the global powers had considered ways to curb the encroachments on their military mobility (especially in straits) caused by the expansion of zones of national jurisdiction.16 This concern arose from the failure of previous law of the sea negotiations to fix the seaward limits of the territorial sea and other resource zones. By linking the demand of the global powers for navigational freedoms with the pressure to avoid a grab of the deep seabed by developed countries, it was believed that a satisfactory compromise of both interests could be reached. During the period after Pardo's speech, the international community focused primarily on setting the stage for an early universal negotiation. As the agenda was considered, each interest group suggested additional subjects that it wanted to see addressed. Since it was generally accepted that a universal negotiation was the only feasible route, it was difficult to reject agenda items that might deter participation by some nations. Consequently, the final agenda consisted of a compendium of all law of the sea subjects; the conference was committed to reconsidering the entire law of the sea.17 For a similar reason, the preparation of national negotiating positions was the focus of domestic attention, and inadequate consideration was given to managing the preconference period so as to maximize the opportunity for success at the negotiations.
Because attention was centered on the opening of the conference and the General Assembly had dedicated itself to freezing all developments outside the negotiations, there was little use of other forums and events to refine the issues that would be considered.18 Consequently, the participating nations were far apart when the negotiation period began. This contributed to the slow and frustrating pace of the preparatory meetings and of the conference itself.

The Prenegotiation Period in Perspective
While the Law of the Sea Conference is unique in many ways, the procedural problems that arose during the prenegotiation period have been faced in other settings. These experiences suggest possible ways to improve the prospects for success in other areas in the future.
Pressure for Early International Negotiations
It may be true that "[n]o nation goes out of its way to propose the creation of international arrangements for jointly managing a resource: unilateral or bilateral methods are always preferred."19 Current international pressures, however, make the conduct of international negotiations and the contemplation of international institutional arrangements difficult to avoid for the subjects under consideration in this article. Interdependence and the NIEO doctrine are two of the major reasons for this situation.20 Debates over the appropriateness and timing of international negotiations have a long history. At the close of World War II the United States pressed for early negotiations to avoid the spread of nuclear weapons. The timing and conduct of the ill-fated Baruch Plan remain a subject of continuing debate.21 Considerable controversy also arose in connection with efforts to conduct early negotiations on outer space law.22 In an incisive critique, McDougal and Lipson listed six arguments that they believed were stimulating calls for early negotiations on the subject:

1. Paper agreements solve problems whether or not effective sanctions exist to assure compliance;
2. The necessary objective of negotiations is agreement;
3. Almost any agreement is better than no agreement;
4. Overall solutions that fail are superior to particular adjustments that succeed;
5. International control of appropriation or ownership is somehow demanded by the supposed supranationality of extraterrestrial activity; and
6. The tidiness of a treaty is so preferable to an uncertain legal situation that it may be worth a substantial price.23

In their opinion, these assumptions are debatable, if not altogether wrong. Little adjustment would be required to apply this list to other subjects, and the negative assessment of rushing into early agreements would be equally persuasive. In part, the difficulties faced over the law of the sea stem from the problem highlighted by McDougal and Lipson. The elementary nature of international law makes it unrealistic to expect to resolve major political issues simply through the generation of a new international agreement. The participants at the conference were wise enough to realize that a paper agreement without real political agreement would be futile. Consequently, the negotiations have proceeded at a snail's pace in order to permit the political issues to be resolved, primarily outside the conference (e.g., the 200-mile zone). The early calling of the conference might have accelerated this process, but it is problematical whether it was necessary to the outcome. Other pragmatic considerations should also be taken into account in the course of deciding when and whether to start a negotiation. Perhaps the greatest difficulty faced at early negotiations called to respond to new technological developments relates to foreseeability. Unfortunately, the impacts that new technological developments will have on international society are almost impossible to foresee at an early date. A classic example of this difficulty was the attempt to define the boundary between airspace and outer space.24 Reputable students of the subject saw their early proposals quickly discarded as new developments changed the situation.25 Thus, premature international agreements can produce "solutions" that may not only be technically incorrect but unworkable.
Later efforts to correct them may be difficult, if not fruitless.26 A similar problem applies to the deep seabed. Deep seabed mining of large volumes of manganese nodules has not yet taken place; in the late 1960s and early 1970s, the technology was barely off the drawing board. Many of the facts on which the early stages of the negotiations were predicated are now outdated. Even now, the nature of the activity and its requirements and consequences are barely known. Thus, a sense of uncertainty has pervaded the entire process and there is a substantial risk that any detailed regime established now might rapidly become obsolete.
On the other hand, deferred resolution of legal questions may also present difficulties. In the absence of a relatively mature legal regime the commercial development of new technologies may be deterred. The history of the deep seabed mining industry presents a graphic illustration of such a case: the absence of a clear legal regime has deterred investors and corporate decision makers from moving forward. Arguments for the early negotiation of new legal regimes are also supported by those nations that view multilateral negotiations as an opportunity to exercise leverage over international decision making. They believe that there would be no such leverage if the decisions were made through the development of customary international law. The developing nations holding this view may be joined by developed nations that do not have the new technology at hand: they, too, may lack the power to directly influence the development of the pertinent customary international law and would expect their leverage to increase at an international negotiation. This confluence of interests was often demonstrated at the deep seabed mining negotiations. While some will argue that the power relationships at multilateral conferences parallel those found in the outside world, it would be more accurate to recognize that the dynamics of such a forum moderate differences in power.27 Finally, the timing of the negotiation may have an impact on the difficulties faced by the participants. It could be argued that a very early negotiation might be easier to conduct because it might precede the identification of particular national interests. Thus, the negotiation could proceed at a more theoretical level and enable the participants to develop an "ideal" system that would guide and moderate future pressures.28 The contrary arguments may be more persuasive. It would appear that the mere call for negotiations stimulates national demands that fail to take full account of real national interests in the subject.29 In the absence of substantial knowledge about the subject, those demands might very well be extremely unrealistic; resolving those demands can require additional negotiating time and effort. The deep seabed negotiations are a case in point. Serious discussion of a legal regime for deep seabed mining began in 1967, when few knew very much about the potential industry. At that time it was generally believed that commercial exploitation would begin at an early date. There were equally unrealistic expectations that manganese nodule exploitation would produce a tremendous volume of metals, which, in turn, would generate substantial income for the mining companies and the international community.30 The competing producers of these metals from land-based sources therefore decided that it was important to deter, if not stop, deep seabed mining. Deep seabed mining is still not commercially viable and is likely to remain so for at least another decade. Furthermore, the scale and economic significance of the industry will be much more modest than projected in the early 1970s. While some of the industry's problems were caused by the international political issues it inspired, its health has been greatly influenced by continuing economic and technological difficulties. A further obstacle that appears to be symptomatic of early international negotiations relating to new technological developments stems from the information gap likely to exist at those negotiations.
New technologies are usually produced and held by a limited number of countries or their nationals. Consequently, many delegations at multilateral negotiations on these subjects will be dependent upon their adversaries for vital information. This creates suspicions, which further complicate the negotiations. It has been pointed out that the Soviet suspicions of the United States at the Baruch Plan negotiations may have been enhanced by this circumstance.31 This problem was particularly severe in the deep seabed negotiations. The technology developed for the exploitation of the deep seabed in the late 1960s and the 1970s was the product of a very few companies located in selected Western developed countries.32 Those companies maintained strict control over the flow of information about their activities and the potential of the industry. The limited information generally available to the conference came directly or indirectly from the industry. In addition, many delegations believed that the delegations from the countries whose companies were engaged in deep seabed development had been fully informed
so that they could accurately identify their national interests and appropriate negotiating positions.33 It took a long time to moderate those suspicions and to fill the information vacuum. Finally, in an early negotiation the participants' positions may be too far apart to be effectively reconciled at formal negotiations. The process of international law development consists of a continuous narrowing of differences among nations. Early in that process national opinions on the appropriate law might be very diverse. It is clear that the more diverse the national objectives are when a formal negotiation begins, the more difficult the negotiation will be.34 Thus, during a period of gestation prior to negotiations the gap is narrowed through interstate communications and experimentation. Arguably, the efficiency of the lawmaking process and the quality of results may be improved if prior to formal negotiations these differences are narrowed through the use of other methods of communication, including the development of customary law. There is little doubt that the wide disparity of positions that nations brought to UNCLOS III contributed to the negotiating difficulties. Precious little effort was made to narrow these differences prior to the negotiations. Where customary international law developed in concert with the negotiations, substantive issues were resolved readily. Such was the case with the 200-mile zone. Although it was billed as the big issue to be negotiated, developments outside the conference made it apparent that there would be a 200-mile zone of national jurisdiction, and the issue rapidly receded at the negotiations.35 No independent progress was made on the deep seabed issues, and that question continued to fester throughout. This discussion of the factors relevant to a decision to conduct an early negotiation makes it clear that conflicting judgments are involved. There are a number of factors unique to the immaturity of the international legal system that argue against the early development of binding international agreements. McDougal presented them well in his discussions of the outer space debate. The conduct of negotiations in response to new technologies presents additional arguments against early negotiations. Rapid changes in information about a new technology and its societal implications are not easily accommodated by international institutions. Since the rate of change will slow over the period subsequent to an initial technological breakthrough, deferral to a time of slower change and greater knowledge may be advisable. On the other hand, the potential for orderly relations expected from early agreement has appeal, particularly for those interested in harnessing the new technology for commercial use and others interested in the political leverage to be derived from early negotiation. It is questionable whether an early agreement can actually produce the desired order and whether early negotiation is the only way for those not holding the technology to obtain political leverage. Absent these two objectives, the arguments in favor of deferred negotiations are rather persuasive. Thus, it may be better to focus on ways of accommodating the contrary interests. For example, commercialization might be handled better at the earlier stages by domestic activities or limited international arrangements.
Global power struggles, such as that over the NIEO, might be resolved better at the general political level first or at those negotiations that are truly ripe for substantive agreement.36 While the task of deflecting these pressures will not be easy, resistance to premature negotiations may be advisable. It is important that all of these interests be considered before the international community becomes committed to a negotiation to produce a new international agreement. Decisions to negotiate legal regimes at an early date should therefore not be made hastily. During the period prior to negotiations an important factor to consider is how much knowledge of the subject under consideration is held by the participating nations and diplomats. Clearly, in 1967 few knew anything about deep seabed mining, and only a limited group of diplomats knew much about other evolving commercial, scientific, and military uses of the oceans. Thus, at the early stages, it was necessary to educate foreign offices and diplomats so that they would be prepared to conduct the necessary consultations and negotiations.37 There has been considerable discussion in the literature of the need to assure that the international community, especially the diplomatic corps, is well informed about new technological, scientific, and business developments.38 It is argued that armed with the necessary knowledge, these
officials would better understand when there is an actual need for new arrangements. The increased lead time that would often result would give them a greater opportunity for planning and preparation. Various international organizations sponsor research and issue reports on these developments, as do national governments. By studying such information before entering into negotiations, it is maintained, foreign offices, diplomatic corps, and international organizations would improve the consideration of the new demands. Attention has been given to this objective within the United States. Suggestions include assuring that State Department personnel are technologically literate by exposing them to the fields of science and technology. Others suggest that experts be included within the State Department staff, that better linkages be forged with specialized government departments that can feed the information into the State Department, that more productive contacts be created between the State Department and the private sector of business and academia, and that more future-oriented studies be commissioned.39 It may very well be valuable to provide more and better information to the diplomatic corps, but whether the facilitation of the flow of information and its assimilation would actually change the way nations behave in these areas can be questioned. While futurology is in vogue,40 it is not sufficiently reliable over long time spans to serve as a basis for national action. It has been pointed out that during each twenty-five year period in the course of the last hundred years, there has been at least one major change in international alignments, one major change in the institutions and politics of one major power, and one major change in technology; and none of those changes would have been easy to predict from the facts known twenty-five years before.41 Even improved accuracy may not be sufficient to change national and international behavior. Foreign offices are usually so pressed by immediate needs that they are rarely able or willing to make the necessary resource and political commitments to developments likely to take place far in the future. Furthermore, even if successful, the enormous effort required to maintain expertise in all areas of development would entail the acquisition of information and personnel that would so overload the system that the benefits sought would be lost.42 Finally, direct political, military, and economic pressures are more likely to dominate critical decisions to initiate negotiations and to identify national objectives than is technical information, which tends to play only a supporting role. In sum, while a well-informed international community and diplomatic corps may be better able to address new and pressing needs and might even be able to perceive a new need at a marginally earlier time, it seems unlikely that the time frame for useful and productive responses will be substantially changed by better information flow and its assimilation. A more feasible, albeit modest, goal might be to assure the rapid acquisition of information as needed. This approach would make the best use of the limited resources and assimilative capacities available.
NONBINDING FORUMS
Another aspect of international law development is the role played by multilateral forums that do not negotiate international agreements. Many have suggested that resolutions by the United Nations General Assembly present a valuable vehicle for law development.43 While nations discuss the subject on the agenda and often negotiate the text of the resolutions, their nonbinding nature arguably provides necessary flexibility and may eliminate some of the problems of timing and national claims posed by the negotiation of binding agreements. General Assembly resolutions have played a beneficial role in the formative stages of some international law development, particularly in human rights law and the law of outer space. The mere placement of a new item on the agenda of the General Assembly necessarily provokes national attention and international discussion. Debate on the subject will also ensue. Because the resulting resolutions are not automatically binding as law, nations retain the flexibility to make the necessary adjustments. On the other hand, the very forum and the nonbinding nature of the resolutions may create counterproductive forces: the high visibility of the General Assembly, the absence of real political and
economic stakes, and the recent use of the forum as a premier staging ground for the high politics of the NIEO argue against using UNGA resolutions as devices for establishing needed and useful legal norms. For example, UNGA discussion of the law of the sea tended to freeze positions rather than to encourage a process of accommodation and exploration. A more promising vehicle for law development may be provided by ad hoc international conferences such as the Stockholm Conference on the World Environment and the Rome World Food Conference. Their resolutions have even less juridical significance than UNGA resolutions, but they can serve a valuable function.44 As separate ad hoc conferences, they force nations to address a specific subject both over the course of the conference and during the preparatory work. Since they are diplomatic conferences attended by governmental representatives, the realities of international politics are not totally absent. The delegations that attend these conferences, however, tend to have more expertise in the subject area than those at the UNGA. Consequently, communication tends to be focused more on the particular agenda item. Since the early development of law in these areas is more likely to require low-level activities than the high politics found at the General Assembly, these ad hoc forums present better opportunities for narrowing national differences and developing an international consensus. The negotiation of the final resolution and its issuance can play the valuable role of focusing attention on methods of resolving the outstanding legal and political issues.
PACE OF PRENEGOTIATION ACTIVITY
The use of international diplomatic forums for purposes other than negotiating international agreements is a significant vehicle for law development that can facilitate movement towards negotiated agreements. The pace of that movement, however, must be carefully adjusted. Arguably, in the case of the law of the sea negotiations the matter moved too quickly at first. Within three years of Arvid Pardo's 1967 speech to the General Assembly calling attention to the need for deep seabed law, a series of UNGA resolutions were passed that committed the international community to negotiating an international agreement.45 With hindsight, it would appear that little would have been lost had more time been devoted to gathering the necessary information and narrowing national differences. If the subject had been mature for negotiation, the conference agenda might have been more manageable, the negotiating obstacles less formidable, and the atmosphere more conducive to the settlement of the issues. Instead, the difficulties and hostilities generated in the early stages of the negotiations probably increased the obstacles to success. During the more than a decade of negotiations, high seas freedoms have continued to erode, deep seabed mining has not begun, and the entry into force of a law of the sea convention remains a long way off.46 Even if an issue is considered to be ripe for negotiation, it is not certain that nations will be prepared to make the political commitment to hammer out an agreement. Arguably, this was the real problem faced at the Law of the Sea Conference; while some nations seriously sought an agreement, others may not have had that objective. As a consequence of the failure to produce an agreement over many years of negotiation, the optimal time for agreement may have passed. The discussion of this problem can be divided into two parts. First, how does the first nation or group of nations become committed to the development of an international agreement? Second, how do the interested nations obtain the commitment of other nations whose participation in the negotiation is desired? A nation will find it more difficult to undertake affirmative foreign policy initiatives than to react to initiatives taken by others. In fact, it has been argued recently that United States foreign policy has been primarily reactive in nature and that there is a need for more initiatives.47 The same can be said for most nations' foreign policy, but accomplishing such a shift will not be easy. Two of the most important factors in determining whether this shift can be accomplished are the flow of information into the foreign policy sector and the domestic political environment. It was pointed out above that many suggestions have been made to augment the flow of information to the diplomatic
corps and to assure its integration into the decision-making system. To the extent that this information is successfully absorbed, it will equip the diplomatic corps with foresight on potential problems and needs that might require new international agreements. If such sensitivity became worldwide, nations might be better prepared to take initiatives on law development as needs arise. The second factor is less mechanical and more political. Even assuming perfect knowledge and an established need for law development, no move is likely to be made unless the initiator has a clearly defined policy objective that can be translated into specific initiatives. In part, the recent inability of the United States to take international initiatives is due to its ambivalence on a number of broad international political issues. It has tended to wait until domestic demands have become strong enough to motivate action or until it has become necessary to react to initiatives taken by other nations. If the broad issues were resolved, the specific objectives might be clearly identified soon enough to stimulate foreign policy initiatives before decisions are forced by other pressures. Unfortunately, it is doubtful whether any major nation, particularly the United States, could so reconcile its multiple and often conflicting interests that a forward-thinking foreign policy could be developed for many issues. Thus, more pragmatic proposals would focus on the ability to identify at the earliest possible stage the actual pressures for law development and the relevant specific foreign policy objectives. The effective flow of information into the foreign policy system thus becomes the most promising method for improving national reactions, while the clarification of general policy objectives would merit continuous, albeit somewhat fruitless, efforts. Once a government decides to seek new international law through the negotiation of an international agreement, the task before it is to stimulate the interest of other states that it wants to join in the agreement. The nature of the subject under discussion here makes it probable that the ultimate objective will be to establish new law susceptible of global acceptance. The most direct approach is to generate worldwide interest in the conduct of a global negotiation, as was done for UNCLOS III. Thus, the interested nations would seek to spread the necessary publicity about the subject and the need for agreement. UNGA discussions and resolutions, and those of ad hoc diplomatic conferences, can contribute significantly to this process. The calling of a negotiating conference itself would more often than not stimulate a commitment by many nations. Informed foreign offices and international civil servants would also be part of the strategy. If steps were taken to assure the availability of relevant information to foreign offices, they could more easily determine their national interests and plan a course of action. A closely related role would be played by experts in the pertinent technical fields. International professional conferences and publications as well as discussions in the popular press serve to bring the technological and legal requirements to the attention of concerned parties.48 This approach assumes that open encouragement to negotiate will lead to a commitment to do so, once the potential participants see the need for an agreement and the interest of other nations.
Further support can often be obtained by linking various nations' related objectives or even through the normal course of international diplomatic efforts to encourage participation. These efforts would normally work well when the need for a negotiation is directly felt by all participants because of the nature of the subject matter, e.g., the environment. The need for agreement on new technology may not be felt equally. Those holding the technology may be under more pressure to obtain an agreement in order to facilitate the use of the technology; those lacking the technology may only be interested in obtaining access to it or in gaining leverage over the activity. One response to this situation has been the more aggressive approach taken in the Antarctic Treaty Consultative Meetings and other specialized international groups. Rather than generally stimulating global interest in the development of a new agreement, the most interested states have directly taken the initiative by negotiating international agreements within the small group.49 By initiating the agreements and making it clear that they intend to go forward regardless of the participation of others, they create pressure on other nations either to become involved if they want to participate in the new system or to acquiesce in the initiative. This approach certainly permits the more developed nations to exercise considerable leverage over the course of events and to
stimulate the conduct of negotiations, but it also poses the risk of provoking adverse reactions if the agreement is not acceptable to a significant number of nonparticipants. Nevertheless, it may facilitate the rapid creation of new international law.50 There are many other ways to encourage the development of new legal norms. While methods differ and the degree of control exercised by the most interested nations will vary, the main objective should be to identify and, if necessary, create a situation that will lead to the development of new law either by direct negotiation or by obtaining general acquiescence in the actions of some states. What is critical is that once there is a need for new norms, the international community should be able to develop those norms in a timely fashion. Significant attention should be given to the method used to develop interest; a negotiation in which a considerable number of participants are not committed to producing an agreement may be worse than no negotiation at all.
THE NEGOTIATION PERIOD
It would be unduly formalistic to assume that negotiations commence only with the opening of a formal negotiating conference. In fact, negotiations take place throughout both periods under discussion. More significantly, the tenor of law development changes when it becomes clear that it will be formalized by means of a new international agreement. At that stage focus shifts from the more general methods of law development to the orchestration of diplomatic negotiations. Thus, for UNCLOS III the shift took place in 1970, when the General Assembly's Declaration of Principles Governing the Sea-Bed was passed and the preparatory meetings began.51

Options
Even after negotiations have been initiated, there is a considerable range of negotiating objectives possible. The potential objectives of the legal systems under negotiation have been divided by Ruggie into three categories: the purpose of the international regime, its instrumentalities, and its functions.52 Its purpose might be: (1) to acquire a capability, (2) to make effective use of a capability, or (3) to cope with the consequences of a capability. The instrumentalities that might carry out the objective include: (1) a common framework for national behavior, (2) a joint facility for national behavior, (3) a common policy for integrating national behavior, and (4) a common policy that substitutes for independent national behavior. Finally, an international regime might function in any of three ways: (1) informational, (2) managerial, or (3) executive.53 The selection of any option will necessarily affect the nature of the negotiation, its timing, and the difficulties faced by the participants. Each option has its appropriate place in a response to new technological developments. Unfortunately, there is no international consensus on the appropriateness of any objective as applied to specific international issues. In fact, the objectives sought by nations are often closely related to their interest in the subject matter. Thus, a nation with technology that is seeking a hospitable environment for its use will be likely to desire a loosely structured regime allowing national freedom of action. In contrast, a nation seeking to use the negotiation to obtain the benefits of others' technological developments or additional political leverage will be likely to prefer a more comprehensive regime that circumscribes the behavior of the nations holding the technology. In part, much of the negotiation on the deep seabed regime at UNCLOS III turned on that issue. The solution attempted was to direct the negotiations towards the more pragmatic questions of deep seabed mining and to construct a regime by avoiding the structural issue. As a consequence, the deep seabed negotiations combined objectives, which only compounded the negotiating difficulties.54 Perhaps it was the collective fault of the participants that the search for compromise forced the integration of these various approaches. No nation appeared to have sought the collage of purposes, instrumentalities, and functions. Rather, it resulted from a series of pragmatic attempts to compromise often inconsistent positions. The combination of forces brought to bear on these
negotiations thus encouraged this complex result.55 Unfortunately, it is the very complexity of the result that may doom the agreement. The expenditure of greater efforts to resolve the issue of objectives might have produced more viable results.

Management of Issues Involving Advanced Technology
Beyond the selection of negotiating objectives is the even more complex series of issues raised by the substantive questions. In addition to the normal difficulties faced at international negotiations, the technological foundations of the subject matter raise certain problems. The management of such questions often involves consideration of the advanced technology that gave rise to the negotiations and technical analyses of its impacts. Technical matters have regularly been considered at the UNCTAD commodity negotiations, the WARC, the International Atomic Energy Agency, the Antarctic Consultative Meetings, the Outer Space Committee, and UNCLOS III, among others. The integration of this material into political negotiations requires considerable thought, which unfortunately it has not often received. There are a number of ways that technical issues can be managed. Several approaches were used at the law of the sea negotiations with varying degrees of success.
THE DEEP SEABED MINING NEGOTIATIONS
The deep seabed negotiations at UNCLOS III concerned the establishment of a legal regime to govern the commercial exploitation of deep seabed minerals, particularly manganese nodules. The task facing the negotiators was to fashion an entirely new legal system for a technology that had never been put to commercial use, while debate had raged over the question of what law would be applicable to those activities in the absence of an international organization to oversee the exploitation of seabed resources.

Economic Implications of Deep Seabed Mining
At the outset it was decided that the negotiation of a regime of deep seabed mining would require the delegates to focus on the likely economic and industrial consequences of deep seabed mining. Even before the formal negotiations began, the secretariats of the United Nations and of UNCTAD had been asked to undertake relevant technical studies. They produced a number of well-distributed reports on manganese nodule mining.56 Nevertheless, these reports did not induce a shift in the focus of the debate from the broad political issues of the North–South confrontation to the mining regime ostensibly under negotiation. In great part this was due to the fact that the adversaries still believed that their opponents might capitulate on the general political issues at the conference and that a more pragmatic compromise solution would therefore not be required to bring the negotiations to a successful conclusion. The resolution of this basic issue took a considerable amount of time and effort; some would say that it has never been fully resolved. But the reports did provide information on the subject of deep seabed mining and its potential importance, and thus may have contributed in a general way to the shift in the discussions towards the specifics of the mining regime, which took place at a later date. While these studies did contain valuable information that had not previously been available to the negotiators in a useful form, their utility was limited because questions were raised about their reliability. Financial and technical limitations precluded the independent verification of a large part of the data used in those reports. In fact, almost all of the basic data used came directly or indirectly from the deep seabed mining companies, which exercised considerable control over the flow of information. Similarly, the objectivity and credibility of the analyses made in the reports were controversial. Some of the reports were produced by UNCTAD, which was known to have a political bias in favor of the developing countries. Studies by the countries most interested in deep seabed mining conflicted with these reports. Since the reports by the UN Secretariat and UNCTAD
were unsigned and no vehicle for otherwise supporting their credibility was initially provided, their usefulness appeared to be limited to the provision of general background information. Consequently, the question of the economic implications of deep seabed mining remained an open matter when the Caracas session of the conference opened in 1974. The significance of the economic questions made it necessary to bring more information to the attention of the delegates. By the middle of the Caracas session the Chairman of the First Committee, where the deep seabed regime was to be negotiated, invited the experts from the UN Secretariat and UNCTAD to attend a committee meeting to discuss their reports and to answer questions.57 That exercise did permit the delegates to focus on the economic issues. Little observable progress was made, however, owing in part to the combative approaches taken by the participants. The experts sought to defend their written products in the face of attempts by some developed countries to discredit them for both substantive and procedural reasons. A further effort to address the economic implications was undertaken at off-the-record "seminars" of the committee devoted to elucidating the economic issues. In preparation for the seminars, the chairman asked each interested state to bring its own expert to the committee. These experts were to enter into a discussion of the economic issues after presenting any economic data and analyses they believed would be helpful. In the short lead time that the delegations had, "experts" were brought in for this purpose. Initially, they were to speak in their individual capacity. Fearing inappropriate statements, however, the committee decided that the "experts" should speak from the seats of their sponsoring delegations. Some delegations did not even bother to replace their political representatives with new "experts." The seminars were held in the committee's regular meeting room and a very large number of delegations attended, many expecting to become better informed on the issues. It soon became apparent that the "experts" were under a tight rein and little was to be learned from the seminars. Some delegations also distributed reports that provided written arguments in support of their previously established positions.58 No substantial progress on the issues could be observed after the seminars; even the Chairman's tentative conclusions on the results of the exercise were strongly challenged by a number of especially interested delegations.59 As a consequence, there was little perceptible movement from the general debate, which had taken its cue from the more global North–South debate. Subsequent to the economic seminars, there were no further significant efforts within the formal framework of the conference to inform the participating delegates on the details of the economic issues. The negotiations focused more attention on the noneconomic legal issues of structuring the deep seabed regime, particularly on the international organization that would conduct or manage deep seabed mining, the International Sea-Bed Authority. Nevertheless, information on the subject did continue to flow to the delegates from published studies, informal discussions, and conferences initiated by governments, industry, and nongovernmental organizations.60 Over the years, the general level of knowledge on these economic issues increased substantially.
The issue of the economic implications of deep seabed mining remained unresolved over many sessions. As matters evolved, discussions turned toward limitations on the volume of seabed production in an attempt to assure the land-based producers that the competition would not get too intense. The computation of the future world nickel demand and the potential consequences of any limitation on the deep seabed mining industry were hotly debated items. Analysis of these issues entailed the use of highly sophisticated computational skills and prognostications about the future nickel market as well as an understanding of the growth needs of a viable deep seabed mining industry. Canada was particularly involved in this negotiation and had a significant interest in its outcome because it is a major nickel producer and would have to compete with seabed production. It sought a formula that would severely limit the amount of seabed mining. At one point in the discussions at a well-attended meeting of a First Committee working group, Canada made a notable attempt
to strengthen its argument by presenting a series of computer projections analyzing the impact of various production control formulas under discussion.61 While it did attract the attention of delegates, it does not appear that the analysis helped to move the negotiations forward. Immediately after the presentation, delegates began to question the many assumptions upon which the Canadian projections were based and the general validity of the analysis. There was neither sufficient time nor an appropriate procedure available for the analysis to be fully explained or tested. Nevertheless, this initiative may have stimulated serious efforts to resolve the land-based producers' concerns. Some time after Canada's presentation it did become clear that a production control provision would be politically acceptable to the conference negotiators at that time.62 That provision had to take account of both the uncertainties of the metals market and the aspirations of the deep seabed mining industry. Thus, it was generally agreed to develop a floating production limit, applicable for a fixed time period, that would be tied to the world consumption of nickel (a stylized sketch of such a limit appears below).63 Before the actual limits could be negotiated, it was decided that technical questions had to be resolved. First, it was necessary to reach a general understanding on the future of the world nickel market and, second, a technically viable production control formula had to be devised that would permit the political negotiation to focus on the identification of a specific number that, when plugged into the production control formula, would actually determine the size of the limit on production. Recognizing the impossibility of negotiating the entire text of the production control article in the First Committee, the participants agreed to charge a small group of "technical experts" with responding to these needs. The Sub-Group of Technical Experts, often called the Archer group in recognition of its chairman, Alan Archer of the United Kingdom delegation, was established.64 While no delegations were named to the working group and participation was open to all interested delegations, about 30 representatives from all states with substantial interests in the subject regularly participated in the off-the-record work of the group. It met regularly during the course of the Geneva session in 1978 and less so at a subsequent session. Although the issues under consideration had significant political and economic implications, important aspects of production control were addressed and resolved to the satisfaction of the participants and, ultimately, of the committee.65 The use of this group of instructed experts was one of the few clearly successful procedures for the consideration of technical questions attempted at the negotiations. There are at least two reasons why this approach succeeded. In the first place, the issue was finally ripe for substantial progress. The charge to the group centered on fulfilling technical requirements that grew out of political agreement on the framework for resolving the issue. Equally important was the approach used to fashion the group and its activities. While its size was not predetermined, the number of active participants was small enough to permit effective communication but large enough to permit all interested parties to be represented. The Archer group was not secret, nor were its results predetermined or predesigned. It was clearly a functioning arm of the negotiations, established and designed to meet specific needs.
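To make the shape of such a provision concrete, the following is a stylized sketch of a floating, nickel-indexed production ceiling. It is an illustration only, not the formula the Archer group actually produced; the symbols, the fifteen-year estimation window, and the share parameter \(\alpha\) are assumptions introduced here for exposition.

\[
\ln \hat{C}_t = a + b\,t, \qquad a,\, b \ \text{fitted by least squares to} \ \ln C_{t-15}, \ldots, \ln C_{t-1},
\]
\[
P_t^{\max} = \alpha \left( \hat{C}_t - \hat{C}_{t_0} \right),
\]

where \(C_t\) is actual world nickel consumption in year \(t\), \(\hat{C}_t\) is its trend value, \(t_0\) is a fixed base year (for example, the year preceding the first commercial seabed production), and \(\alpha\) is a negotiated fraction of the projected growth in consumption reserved for seabed producers. A scheme of this shape separates the technical work (fitting and defining the trend line) from the political work (choosing the single number \(\alpha\)), which is precisely the division of labor described above.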
Interested delegations assigned individuals who had the necessary expertise or were able to become informed. The able Chairman facilitated the discussions and saw to it that they concentrated on the technical questions needing answers for the political negotiations. As its work progressed, the professionalism that characterized the group became known to the conference. In the end, the group was able to resolve many difficult technical issues and it produced a production control formula that simplified and focused the outstanding political issues.66 Financial Arrangements Between Exploiters and the Authority Another subject on which technical information became important to the progress of the deep seabed negotiations was the determination of the charges to be imposed on the exploiters of the deep seabed nodules. Obviously, the potential exploiters and their supporting countries wanted the charges to be small and certainly not so large that they would deter deep seabed mining. Other participants sough to maximize the charges. They wanted to distribute the income to
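The floating limit tied to nickel consumption lends itself to a simple numerical illustration. The sketch below projects world nickel consumption along a trend line and caps seabed production at a fixed share of the projected growth segment. It is a minimal sketch of the general approach described here, not the formula the Archer group actually produced: the base consumption, trend growth rate, and seabed share of growth are all invented for illustration.

    # Illustrative "floating" production ceiling tied to projected world nickel
    # consumption. The base tonnage, growth rate, and share of projected growth
    # allocated to seabed producers are hypothetical, not the negotiated values.

    def trend_consumption(base_tonnes: float, annual_growth: float, years: int) -> float:
        """Project nickel consumption along a fixed exponential trend line."""
        return base_tonnes * (1.0 + annual_growth) ** years

    def production_ceiling(base_tonnes: float, annual_growth: float,
                           years_elapsed: int, seabed_share: float) -> float:
        """Ceiling for a given year: a fixed share of the cumulative growth
        segment of the consumption trend line since the base year."""
        projected = trend_consumption(base_tonnes, annual_growth, years_elapsed)
        return seabed_share * (projected - base_tonnes)

    if __name__ == "__main__":
        # Hypothetical inputs: 700,000 t base consumption, 3% trend growth,
        # 60% of the growth segment reserved for seabed producers.
        for year in (1, 5, 10, 15):
            ceiling = production_ceiling(700_000, 0.03, year, seabed_share=0.6)
            print(f"year {year:2d}: ceiling = {ceiling:9,.0f} t of nickel")

On a scheme of this kind, the single number left to political negotiation reduces to the share parameter, which is precisely the simplification a production control formula was meant to accomplish.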
Financial Arrangements Between Exploiters and the Authority

Another subject on which technical information became important to the progress of the deep seabed negotiations was the determination of the charges to be imposed on the exploiters of the deep seabed nodules. Obviously, the potential exploiters and their supporting countries wanted the charges to be small and certainly not so large that they would deter deep seabed mining. Other participants sought to maximize the charges. They wanted to distribute the income to injured land-based producers; some even sought a tax structure that would deter deep seabed mining altogether. Thus, the financial arrangements developed into a critical substantive issue on which there was a wide range of positions.

The first effort to address the problem of taxation directly took place at the sixth session in New York in 1977. First Committee Chairman Engo (Cameroon) personally established the Chairman's Expert Advisory Group on Financial Arrangements to consider these questions. It was chaired by a member of the Australian delegation. Certain interested delegations were invited to present their views and to bring along their experts on the financial aspects of deep seabed mining or related subjects. Some representatives of industry also attended these meetings and presented their views. Discussions took place in private, but all of the taxation schemes that were received by the Chairman were incorporated into a paper that was distributed to one of the larger informal working groups of the First Committee.67 Nevertheless, this work does not appear to have resulted in any marked progress on the issue. There were a number of procedural deficiencies that contributed to the group's failure,68 but the effort may have been doomed from the start since there had been no political agreement on either the substance of the taxation issue or a framework for resolving it.

In the year that followed, the range of disagreement narrowed substantially. Although the matter was far from settled, most delegates decided that the financial arrangements were not the appropriate vehicle for limiting deep seabed mining. Schemes that would make deep seabed mining financially prohibitive would not be negotiable. At this point it became clear that the financial impact on the industry of various approaches would have to be considered. Unfortunately, there was no agreement on the financial picture of the industry then or in the future. Since there had never been any commercial exploitation of the deep seabed and the activity had few parallels to current industrial activity, the conference participants were dependent upon expert opinions. The studies issued early in the negotiations not only were controversial but also were woefully out of date and lacked the detail necessary to devise a taxation scheme. Opinions expressed by various delegates enjoyed equally low credibility.

The United States decided to try to fill this information gap. The vehicle used was a study prepared by the Massachusetts Institute of Technology (MIT) on the financial aspects of deep seabed mining.69 While its origins lay in student seminar work under the direction of Professor J. D. Nyhart of MIT, the document distributed to the conference was produced under contract to the United States government. The United States made a substantial effort to establish the credibility of the study, and this effort paid off when the Chairman of the negotiating group responsible for resolving the financial arrangements issue selected the study as a primary basis for his analyses.

In April 1978, Ambassador Tommy Koh of Singapore had been appointed Chairman of Negotiating Group 2, which had jurisdiction over the financial arrangements issue.70 Shortly after assuming that position, he established a smaller Working Group of Technical Experts. It was understood that representatives of specific delegations would participate in the group but that attendance was open to any interested delegation. While the delegates who did attend were not generally as technically qualified as the participants in the Archer group, Koh quickly turned the discussion into a highly technical seminar, and the group as a whole slowly educated itself on the financial issues and potential solutions. Although the discussions periodically lapsed into political rhetoric and some delegations sought to stand on previously taken political positions, most of the time and effort was devoted to a slow and careful analysis of the financial issues. This took place in a relatively collegial atmosphere, which permitted the group to make tangible progress.71

It soon became apparent to the participants in the working group that they needed to reach a working understanding of the financial aspects of the deep seabed mining industry. Koh sought information from all sources, but it was the MIT study proffered by the United States that attracted serious attention and developed into the yardstick against which all proposals were measured.
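The value of such a study lies in giving every delegation the same cash-flow picture of a hypothetical venture, against which competing proposals can be tested. The minimal discounted-cash-flow sketch below suggests the kind of analysis involved; every figure in it is invented for illustration, and none is drawn from the MIT study itself.

    # Minimal discounted-cash-flow model of a hypothetical nodule mining venture.
    # All dollar figures and the discount rate are invented for illustration.

    def npv(cash_flows, discount_rate):
        """Net present value of a sequence of annual cash flows (year 0 first)."""
        return sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(cash_flows))

    capex = [-120.0] * 5       # five construction years of outlays ($ millions)
    operating = [95.0] * 20    # twenty operating years of surplus ($ millions)

    # A proposed annual charge can then be tested for its effect on the venture.
    for charge in (0.0, 10.0, 25.0, 40.0):
        taxed = capex + [cf - charge for cf in operating]
        print(f"charge ${charge:5.1f}M/yr -> NPV ${npv(taxed, 0.10):7.1f}M")

A shared model of this kind converts an argument about fairness into an argument about a small number of inputs, which is far easier to negotiate.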
One might question why the MIT study was accepted when the reports to the economic seminars in Caracas and the computer analysis submitted by Canada were not. Certainly, the observation that the MIT study presented the "truth" would beg the question. It appears that the success of the MIT study can be more appropriately attributed to its perceived credibility. The U.S. efforts in this regard were most effective.72 In the end, the group proved again that the participants in this highly political multilateral negotiation could successfully tackle and resolve the most difficult issues involving a mixture of political and technical elements.73

The proposals produced by the Koh group contained detailed provisions for the transfer of an appropriate portion of revenue from the exploiters to the Authority. The amount would vary depending upon a number of factors and specific choices to be made by the exploiters. Some of these variables were stage of development, amount of return on investment realized, duration of the activities, and value of resources produced. The terms and concepts used in the proposals were as sophisticated as any provision found in resource development agreements made in the course of international business transactions. No significant difficulty was encountered when the working group's recommendations were incorporated into the new working text.74
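The general shape of such provisions can be suggested in a short, hedged sketch: a charge on the value of production combined with a profit share that rises with the realized return on investment. The rates, brackets, and two-stage structure below are hypothetical stand-ins, not the negotiated terms.

    # Hypothetical variable payment schedule of the general shape described above.
    # The 4% ad valorem rate and the ROI-indexed profit-share brackets are invented.

    def annual_payment(production_value: float, profit: float,
                       return_on_investment: float, commercial_stage: bool) -> float:
        """Payment owed to the Authority for one accounting year ($ millions)."""
        if not commercial_stage:
            return 0.0                     # assume no charge before commercial production
        royalty = 0.04 * production_value  # charge on the value of resources produced
        if return_on_investment < 0.10:    # profit share rises with realized ROI
            share = 0.20
        elif return_on_investment < 0.20:
            share = 0.35
        else:
            share = 0.50
        return royalty + share * max(profit, 0.0)

    # Example: $300M of metal produced, $80M profit, 18% realized ROI -> 40.0 ($M).
    print(annual_payment(300.0, 80.0, 0.18, commercial_stage=True))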
OTHER LAW OF THE SEA CONFERENCE ISSUES

Technical questions were not unique to the deep seabed negotiations at the Law of the Sea Conference. They were clearly relevant to the negotiations over the limits of the continental shelf, the delimitation of archipelagic boundaries, the regime for marine living resources, and the regime for the marine environment. Of those four issues only the first, the limits of the continental shelf, proved difficult to resolve. This was due in part to the way relevant technical information was handled.

Limits of the Continental Shelf

One of the major unresolved issues left over from the first United Nations Conference on the Law of the Sea of 1958 was the definition of the exact seaward limit of coastal states' jurisdiction over the adjacent continental shelf.75 With the rapid development of technology for exploiting hydrocarbons in the oceans, pressure mounted for the resolution of this issue. Thus, it was high on the agenda of UNCLOS III. While the negotiations were largely confined to the search for language to define the geographical limits of the continental shelf regime, at stake were conflicting national interests in the redistribution of wealth, access to hydrocarbons, and territorial control, as well as unspoken questions of military mobility and flexibility.76 This conflict assured that the issue would be difficult to resolve.

Early in the negotiations it appeared that a fixed-distance limit for the regime of the continental shelf would be strongly resisted by a large number of delegations. Attention then turned to delimitation formulas that took into account the hydrographic contours of the seabed and its geologic characteristics.77 As a consequence, it became difficult for many delegations to appreciate the impact that specific boundary formulations would have on the actual limits of the continental shelf. While all could measure a fixed distance from the coastline, few had confidence in their ability to apply formulas that relied on seabed topography and geology. In fact, there was precious little public information on which the necessary calculations could be based.
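The difficulty is easy to demonstrate. The sketch below applies two formulations of the kinds then under discussion to a synthetic seabed profile: a fixed-distance rule, which any delegation could apply, and a topographic rule keyed to the foot of the continental slope, which cannot be applied at all without bathymetric data. Both the profile and the "foot of slope plus 60 nautical miles" rule are invented for illustration and are not the proposals actually tabled.

    # Contrast between a fixed-distance shelf limit and a topography-based rule.
    # The profile and the "+60 nm" rule are hypothetical stand-ins.

    # Synthetic bathymetric profile: (distance from coast in nm, depth in meters).
    profile = [(0, 0), (50, 150), (100, 200), (150, 900), (200, 2600),
               (250, 4000), (300, 4300), (350, 4400)]

    FIXED_LIMIT_NM = 200  # trivially applied by any delegation

    def foot_of_slope(profile):
        """Distance of the maximum change in gradient, a crude proxy for the
        base of the continental slope; it requires survey data to compute."""
        gradients = []
        for (d0, z0), (d1, z1) in zip(profile, profile[1:]):
            gradients.append((d1, (z1 - z0) / (d1 - d0)))
        changes = [(abs(g1 - g0), d1)
                   for (_, g0), (d1, g1) in zip(gradients, gradients[1:])]
        return max(changes)[1]

    print(f"fixed-distance limit: {FIXED_LIMIT_NM} nm")
    print(f"topographic limit:    {foot_of_slope(profile) + 60} nm")

A delegation without a survey of its own margin could run the first calculation but not the second, which is the information gap described in the paragraphs that follow.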
The lack of adequate data was particularly serious for areas adjacent to developing countries where ocean floor surveys had not been conducted. This uncertainty led many delegations to seek information about the actual geographic and resource implications for their countries of the various formulations under consideration. In the course of the negotiations there were four public attempts to develop maps of the ocean floor that would assist the participants in their understanding of the various boundary formulas under consideration. The United States issued a small-scale map early in the discussions.78 An attempt to illustrate various proposed boundary formulations was prepared by the UN Secretariat at the request of the conference in 1977.79 In May of 1978, Bulgaria and the USSR requested that the conference authorize the Intergovernmental Oceanographic Commission to prepare updated and more detailed maps; this request was reluctantly granted by the Plenary, but no maps were produced.80 Finally, the Soviet Union made a presentation on the boundary issue to all interested delegates at informal meetings that it held for that purpose during the 1979 Geneva session of the conference.81

The treatment of this issue varied substantially from that given to technical questions at the deep seabed negotiations. In the instant case no conference committee, formal or informal, directly considered the technical information. Rather, it was treated as information for participating delegations to use in the development of their own positions and in private negotiations. No significant effort was made to explain the bases upon which the maps were prepared or their accuracy. The uncertainty thus permitted to fester added to the slow and thorny course of these negotiations. Recently, a political accommodation on the boundary issue was apparently reached.82 It is not clear, however, what impact was produced by the shift from an approach that could readily be understood by all participants to one that few, if any, fully understood. The presentation of technical formulations created an information gap that produced suspicion. The efforts to fill that gap became a political issue and thus presented further obstacles to negotiation. On the other hand, the increased sophistication of the new formulations may have given the negotiators the tools for creating a compromise solution that would not have been possible under the more simplified approach.

Archipelagoes

One aspect of the archipelagic deliberations at UNCLOS III was the negotiation of rules permitting the archipelagic states to establish extended jurisdictional lines adjacent to their various islands. While the archipelagic states wanted the right to establish zones as they desired, various other states interested in high seas freedoms sought to limit those zones and to limit their impact on those freedoms. Early in the conference the negotiators fastened on an approach that would protect the high seas freedoms sought by the maritime states and would limit the scope of the archipelagic zones by the use of a mathematical formula that took into account the length of the boundary lines and the ratio of water to land within the lines.83 Various alternative formulations were considered in negotiations among the interested parties, and the matter was settled at an early date.84

The success of this mathematical formulation was in part due to the willingness of the interested delegations to bring their technical experts to the political negotiations. These experts helped to develop the formulas and then applied them to specific geographic areas, which demonstrated to the negotiators the true impact of the proposals under consideration. By facilitating the communication between the technical analysis and the political negotiation, an agreement was reached and the issue was settled, despite the fact that territorial and resource interests were directly at stake. The early resolution of this issue at the conference presents an informative contrast to the continental shelf issue just discussed.85
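The kind of test these experts could run for the negotiators is straightforward to illustrate. The sketch below checks a proposed closed system of baselines against a water-to-land ratio window and a cap on segment length. Constraints of this general shape were what the formula approach made negotiable, but the thresholds, the sample coordinates, and the planar geometry here are simplifications invented for illustration.

    # Check a proposed archipelagic baseline system against formula constraints.
    # The ratio window and segment cap follow the shape of the rules described
    # above; the specific numbers and planar geometry are illustrative only.
    from math import dist

    def check_baselines(points, land_area, enclosed_area,
                        max_segment_nm=100.0, ratio_window=(1.0, 9.0)):
        """Return objections to a closed baseline polygon (coordinates in nm)."""
        objections = []
        for i, (a, b) in enumerate(zip(points, points[1:] + points[:1])):
            if dist(a, b) > max_segment_nm:
                objections.append(f"segment {i} exceeds {max_segment_nm} nm")
        ratio = (enclosed_area - land_area) / land_area  # water-to-land ratio
        if not ratio_window[0] <= ratio <= ratio_window[1]:
            objections.append(f"water-to-land ratio {ratio:.2f} outside window")
        return objections

    # Hypothetical archipelago: four turning points, areas in square nm.
    print(check_baselines([(0, 0), (90, 10), (120, 80), (20, 95)],
                          land_area=8_000, enclosed_area=9_500))

Applied to real coordinates, calculations of this sort showed each delegation exactly which zones a given formulation would allow, and that transparency helped move the issue to early settlement.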
Marine Living Resources and the Marine Environment

The negotiations on marine living resources and the marine environment also involved questions requiring some technical information. While both of these issues met with their share of difficulty at the conference, their technically related aspects do not appear to have significantly contributed to that difficulty. This conclusion can be reached even though little effort was made to deliver such information to the conference. There appear to be at least three reasons for this situation. First, while a full grasp of the subjects in question requires considerable technical knowledge, the general parameters of the relevant data were known by most, if not all, delegations. The necessary information was not closely held. Since there were many knowledgeable persons, there was little possibility that counterproductive suspicions between opposing delegations would be aroused. Second, many international organizations commanded interest and expertise in these subjects,
and it was known that they would provide information to any interested delegation. Third, the negotiation of these subjects was kept at a fairly general level, and the task of fashioning more detailed provisions was left to more specialized agencies. In comparison to the issues discussed above, there was less need for the conference to address the technical questions. These three factors combined made it possible to find acceptable formulations without the problems created when technical questions became an issue.

Conclusion

Although technical questions did become central to the deep seabed negotiations, they did not obstruct agreement once the scope of the political settlement was established. In fact, the use of technical information and analysis facilitated agreement. No sources of information used were accepted as unbiased by all participants, and little information was uncontroversial. Nonetheless, the credibility of the sources, the techniques of distribution, and the availability of experts as explicators of the technical information had a strong influence on its utility. The consideration of technical questions by political representatives and instructed experts appeared to be most successful when moderate-size informal groups were asked to produce technical information needed either to fill a gap in an already agreed political structure or to help build support for a political solution that was on the verge of being accepted by the participants. A similar close integration of technical analyses and political negotiations was successful in the archipelagic line negotiations. In contrast, it can be argued that the failure to integrate the technical analyses into the continental shelf negotiations contributed to the difficulties encountered there. The alternative to addressing the technical issues is to avoid the negotiation of details that require their consideration, which was successfully done in the negotiations on marine living resources and the marine environment—but only by deferring those issues to other forums. Consequently, the agreement is general and avoids most of the tough issues; nevertheless, it provides a basic framework for the development of the necessary details.
ROLE OF TECHNICAL INFORMATION AT MULTILATERAL NEGOTIATIONS

While the Law of the Sea Conference is unique in many respects, the history discussed above provides an insight into the conduct of modern international negotiations involving technical issues. There appears to be a close relationship between the management of technical issues and substantive results. If the subject is highly technical and a significant number of delegates believe they are underinformed, while others with conflicting interests appear to be better informed, the suspicions thus generated may inhibit progress. Although suspicions may be avoided if the absolute unavailability of the information is established, such a situation may encourage the settlement of issues by forming political coalitions that are artificial and fail to reflect real national and international interests. If the information becomes known at a subsequent date, agreements reached earlier may be destabilized.86 Similarly, the misuse of technical information may create suspicions that cannot easily be overcome. In short, the management of technical questions requires planning and coordination with the political decisions. While any number of factors may be relevant to the use of technical information, some may be expected to arise in most negotiations.

Timing of the Negotiations

As discussed above, the timing of the negotiations in relation to the availability and distribution of the essential knowledge is an important factor. Insufficient consideration was given to this factor at the deep seabed negotiations of the Law of the Sea Conference. They took place, perhaps, at the worst possible time for their successful resolution. In the late 1960s and early 1970s, the industry sought to convince the international community that deep seabed mining would be feasible in the near future. No nation made a serious, independent effort to verify these assertions. In fact, the industry was not
yet well developed, and the data were so closely held (or not even available) that it became difficult to separate real national interests from unfounded hopes and fears. National economic interests were identified on the meager basis of these assertions, and this led to the early establishment of negotiating positions that bore little relationship to reality or to well-chosen strategic objectives.

The costs imposed by this poor beginning were more than the time and effort needed to set the record straight. The lost initiative damaged the international and domestic credibility of UNCLOS III; that initiative has never been recaptured. Irreversible and, perhaps, unwise decisions taken at the early stages of the negotiations may have contributed to these difficulties.87 Consequently, the likelihood that nations will accept the final compromises and bring the convention into force has suffered. In addition, this record has raised questions about the utility of similar multilateral negotiations. It will not be known for some time whether the long negotiations permitted a thorough consideration of all issues or merely forced acquiescence in an unworkable system produced by too many hands over too many years.

Alternatives do present themselves: either conduct the negotiations at a very early date when all interests are remote and unfocused,88 or delay the negotiations until a full consideration of the relevant technical information and national interests is possible. Unfortunately, it is unrealistic to expect nations to be able to control the time period in which a major multilateral legal issue is negotiated. A negotiation might start at an early date, but it would be far more difficult to conclude it during an early time period. Contributing to the delay would be a lack of interest and the press of other business. Yet deferred negotiation might be an equally costly option. It could impose unacceptable costs on the interested parties due to the resulting uncertainties and the absence of an effective legal regime during a critical period. Often it will take decades for interests to ripen fully and for information to become widely available. Certainly, few international issues can be stalled for that length of time. These considerations further support the holding of prenegotiation activities designed to refine the political and legal options. Such refinements might allay pressures for early negotiation. Nevertheless, it is necessary to focus attention on improving the negotiations regardless of their timing.

Scope of the Negotiating Objectives and Solutions

While it may not be feasible to delay the negotiations, it may still be possible to exclude difficult technical issues or to defer them until a more appropriate time. The scope of the negotiating objectives is critical in this regard. Clearly, the objectives that required the more comprehensive and detailed international agreement were pursued at the deep seabed negotiations.89 More limited objectives could have been chosen, at least as a first step towards a more comprehensive approach.90 A more limited approach was successfully taken in the negotiations on marine living resources and the marine environment by deferring resolution of certain technical issues. There are also advantages to deferring issues that require the consideration of unavailable or highly contentious data. Issues that loom large at an early date might be resolved easily once more information is acquired and assimilated.

Solutions to the issues presented at the negotiations should also be tailored to avoid complex technical problems. In the case of the negotiations on the limits of the continental shelf, the decision to turn from the distance criteria and other readily understood factors to more esoteric technical hydrographic and geological factors contributed to the difficulties faced on that question and, perhaps, to the resultant delays.

Management of the Technical Questions

If the resolution of the issues under negotiation requires that technical subjects be considered, attention must be directed to structuring the deliberations so that they can efficiently satisfy the negotiators' needs and facilitate the political discussions. There appear to be three stages in
the management of technical questions: the generation of relevant information, its delivery to the negotiators, and its integration into the product of the negotiations. The factors to be considered in evaluating procedures used at all three stages are their credibility and political responsiveness.

Information Generation

While the accuracy of any piece of information or analysis could be debated endlessly, it should be clear that the kind of information dealt with in this article will rarely be accepted as correct strictly on the merits.91 Such information must obtain political acceptance. Apparently, the more responsive the source of the information is to the needs of the negotiations, the greater the chances are that the information will receive the necessary political acceptance. At the deep seabed negotiations, both the technical formulations developed by a small group to fill specific needs of the negotiators and the calculations of the experts used to negotiate the archipelagic lines found ready acceptance. It was far more difficult to utilize the Secretariat's early studies on the deep seabed regime, perhaps in part because they were not narrowly drawn to respond to specific negotiating requirements.

Thus, a critical question about the generation of such information is the nature of the source. Several choices are generally available: the staff of the international organization, experts hired by the sponsoring organization at the request of the negotiators, the participating delegations themselves, experts hired by the participating delegations, and nongovernmental organizations associated with the negotiations. While each of these sources may be appropriate in certain situations, some generalizable observations can be made.92 As mentioned above, the acceptance of information often depends on the apparently conflicting balance of credibility and political responsiveness. The ideal source would then appear to be a distinguished organization that is generally known for its accuracy and objectivity and is also closely involved in the negotiations. In a few circumstances this vehicle has been successfully used. Thus, the Scientific Committee on Antarctic Research (SCAR) has served as an important source of information and analysis for the Antarctic Consultative Parties during their more than 20 years of activity.93 A similar role in the outer space negotiations has been played by the Committee on Space Research (COSPAR).94 Since such associations are rare, the utility of this approach is somewhat limited. Absent a comparable continuing affiliation, the use of independent experts would not necessarily commend itself because they would be unlikely to be sensitized to the political forces at work.

While the secretariat of the sponsoring organization might appear to be an ideal vehicle in some circumstances, it may also have limited utility and merit. It might be well attuned to the political dynamics of the negotiations and staffed with qualified personnel. The identification of such a mission for the staff might even make for better utilization of the international organization.95 The fact that little information is neutral, however, does raise some difficult problems. An obstacle to a successful negotiation could be created if the sponsoring organization were to be perceived as favoring particular participants. The suspicions that can develop and the resulting reluctance to take advantage of the available services of the secretariat can prejudice the course of the negotiations. Clearly, the difficulties faced at the UNCTAD commodity negotiations have been compounded by the overt support for the Group of 77 by the UNCTAD staff.96 The quickest way for a sponsoring organization to be suspected of active bias is to issue a report that conflicts with the negotiating objectives of a participant in the negotiations. Thus, it may be advisable to limit the sponsoring organization's input to facilitating the negotiations in other ways.

The use of the participating delegations or their governmental staff would assure political responsiveness and, often, a quality product. That product, however, may suffer from credibility problems because it will automatically be assumed to be contrived to support the position of the originating government. If the delegation that produces the information is perceived as neutral on the issue, this problem may not arise. Other, more involved delegations, however, will have a heavy
burden to establish their credibility. The personal reputation and credibility of the individual or agency directly producing the information may help in this regard. A far better approach in critical situations may be to take advantage of the credibility of an independent research institution. It must be willing to invest the time necessary to become familiar with the political situation while still standing behind its product. This approach was notably successful in the case of the MIT report. Similarly, it may be advisable to encourage more independent expert groups to develop a working knowledge of international negotiations related to their area of expertise, as illustrated by SCAR and COSPAR. The focus could be on long-term negotiations and international organizations. Serious recommendations made to the International Council of Scientific Unions (ICSU) by delegations or the sponsoring international organizations might be the appropriate route to follow in this regard.97

Introducing Information into the Negotiation

Equally critical to the use of technical information is its delivery to the negotiators. If the information results from the work of negotiators at the conference, the question, of course, will not arise. Otherwise, the method of delivery must be considered, especially if the information bears on a controversial matter. In this regard, a comparison of the Canadian and MIT studies is informative. In the first place, the release of the MIT study appeared to coincide with the time when the political disagreements had narrowed to the point that the technical information could be helpful in resolving the remaining differences.98 The Canadian study may have been premature, and thus it became embroiled in the more controversial political debate. Second, the MIT study was packaged and introduced to maximize its credibility. Although both were conceived to serve advocacy positions, the aura of credibility surrounding the MIT study certainly facilitated its introduction and ultimate acceptance.99 Finally, the forums that considered the studies differed substantially. The Canadian study was first discussed at a large Committee I working group; the MIT study at a small informal group of so-called technical experts. Since there was less political conflict in the latter forum, the MIT study probably had a better chance of being sold to the more specialized group as a prelude to tacit acceptance by the remaining participants. Thus, the timing, credibility, and forum would appear to be important considerations.

A more diffuse approach may be more appropriate in certain circumstances. In cases where the information needs are not highly specialized but involve a basic understanding of new technology or industrial activity, the use of many sources might be commended. Rather than concentrating on the delivery of a single report, multiple sources might be used so that the general level of knowledge is raised and the appearance of a monopoly of information is dispelled. Thus, governments interested in this approach would encourage research and publication by many sources at the international and domestic levels and urge conference participants and observers to hold meetings and discussions to consider the relevant information. In part, the early stages of the deep seabed negotiations served that function. In addition, the general press and scholarly publications devoted considerable space to deep seabed mining. The Secretariat's studies contributed to this information flow, and nongovernmental organizations held seminars and other semi-social functions at which deep seabed mining was a topic. Industry representatives participated in informal luncheon clubs and other activities devoted to open discussion of the industry and its technology (within certain limits, of course). Over time this flow of information increased the level of knowledge and may have tempered some of the original hostility.

Evaluating and Integrating Technical Data into the Negotiation

The use of the Technical Experts Groups to integrate the MIT study into the negotiations represented one effective method of managing externally generated information. Once the report
received the imprimatur of the Chairman and was used by the group itself, it became extremely difficult to challenge both the study and the group's conclusions.100 Whether such a group will be successful may be largely dependent upon several characteristics: its size, the quality of its leadership, and the procedures adopted.101 Thus, the experience of the deep seabed financial arrangements—the route from expert opinion to a technical expert committee of delegates to informal political negotiations and beyond—appears to commend itself. More direct introduction of information may be less time-consuming, but it also presents additional risks. The handling of the financial arrangements issue contrasts dramatically with the continental shelf boundary negotiations. In the latter case, there was never sufficient opportunity to analyze alternative formulas based on the available maps and other data, which contributed to the unnecessarily slow progress made on the issue.

The use of technical working groups at international negotiations is not unique to UNCLOS III. The technique has been used for years—and with notable success—at the meetings of the consultative parties to the Antarctic Treaty, where technical working groups serve as a regular conduit for reports from SCAR and other sources. An additional lesson can be learned from the Antarctic Consultative Meetings and other successful negotiations involving technical issues: the value of staffing delegations with persons qualified in the relevant technical area. Such experts facilitate the communication of technical information and therefore its evaluation and possible integration into the negotiations.102 When the information emanates from a variety of sources, as discussed above, its integration into the negotiations would tend to be more general and less direct than when it is obtained from a single source for limited purposes.
CONCLUSION

The viability of the pertinent international law is open to question in areas that are subject to the dual pressures of rapid technological advances and efforts to have the entire community of nations actively participate in the development of that law. As a result, the international community is faced with a difficult management problem. The lack of expeditious progress at recent international negotiations and the high risks of failure have created a need to improve these processes. While there appear to be a number of improvements that can be recommended, the realities of the international system limit the possibility for substantial change and improvement. Beneficial results might be obtained from efforts directed at three fronts. First, there is a need to facilitate the early and effective integration into the foreign policy apparatus of information on new technological developments that may require new international law. Second, before the international community is committed to the development of formal international agreements, especially when poorly understood technical matters are involved, considerable time and effort should be devoted to narrowing national differences and the range of legal solutions. International negotiations that drag on for years and regularly court failure are no substitute for deferred negotiations that stand a greater chance of success.

Large-scale international negotiations are suffering from a bad press, which directly affects their productivity. If the new technologies and new political realities are to be adequately served by the international community, the international legal system must reestablish its credibility in these areas. Arguably, the deferral of international negotiations and an improved rate of success may prejudice the NIEO by enabling the developed countries with the technology to shape the law unilaterally. This argument may not be correct. The early commencement of international negotiations does not guarantee, and in fact may not significantly advance, their early or successful conclusion. Rather, the problems created by an early start of negotiations might prove to be counterproductive for all interests. Even the proponents of the NIEO might be better served by the successful management of prenegotiation activities that develop an international consensus concerning their objectives.
These prenegotiation activities should at least include the following: (1) a serious consideration of the real value of an international agreement as opposed to the development of law through more diffuse methods; (2) a review of the state of the technology that is relevant to the negotiations (relevant factors include its rate of change and the availability of information about it); (3) an examination of the pressures for early agreement, such as commercialization, to determine whether agreement is really needed and whether alternative methods of satisfying the demand can be found; (4) an exploration of procedures that would narrow national differences before the commencement of negotiations; and (5) a determination that participants in the negotiation will be committed to a negotiated settlement rather than a counterproductive exercise in international diplomatic maneuvers.

Finally, the process of generating technical information and integrating it into international negotiations requires the concerted attention of the international community. International law and international negotiations are still at very rudimentary stages of development. Nevertheless, they must respond to the needs of a world that is increasingly shaped by technological developments and sophisticated technical analyses of events. International law and international negotiations must be made capable of responding to these needs and using these tools. This can only be done if international negotiations are organized so that they can make effective use of relevant technical information. A serious part of conference planning should be devoted to defining the scope of negotiating objectives, the composition of delegations, the availability of credible experts, the selection of various negotiating groups of experts, and the manner of distributing relevant information. Unless the international legal system is capable of responding to these requirements, it will become increasingly anachronistic.
NOTES

1. For a symposium on the substantive aspects of this problem, see The Third World Challenge, 59 Foreign Aff. 366 (1980–1981).
2. W. Friedmann, The Changing Structure of International Law 11–12, 123–26 (1964). Many of these activities take place within international organizations or continuing conferences such as the General Agreement on Tariffs and Trade (GATT), the International Civil Aviation Organization (ICAO), the International Bank for Reconstruction and Development (IBRD, World Bank), the International Monetary Fund (IMF), the International Telecommunication Union (ITU), the World Administrative Radio Conference (WARC), and the United Nations Conference on Trade and Development (UNCTAD). The value of international agreements is discussed in R. Bilder, Managing the Risks of International Agreement 6–7 (1981).
3. M. McDougal, H. Lasswell, & I. Vlasic, Law and Public Order in Space (1963).
4. Declaration on the Establishment of a New International Economic Order, GA Res. 3201 (S-VI) (1974). See also Charter of Economic Rights and Duties of States, GA Res. 3281 (XXIX) (1974). For an informative discussion of the underlying pressures for systemic changes, see ul Haq, Negotiating the Future, 59 Foreign Aff. 398 (1980–1981). The recent Cancun meeting of the leaders of 22 industrial and developing nations was convened for the purpose of finding means to break the deadlock on the issues. See Cancun Parley Concludes Without Agreement, New York Times, Oct. 24, 1981, at A4, cols. 3–5.
5. Brown & Fabian, Toward Mutual Accountability in the Nonterrestrial Realms, 29 International Organization 877, 882 (1975); and Ruggie, International Responses to Technology: Concepts and Trends, ibid. at 557, 578.
6. Ruggie, note 5, at 578. Ruggie & Haas, Environmental and Resource Interdependencies: Reorganizing for the Evolution of International Regimes, in Report of the
Commission on the Organization of the Government for the Conduct of Foreign Policy, App. B, at 218, 220 (1975) (hereinafter cited as Murphy Commission Report).
7. E. Haas, Beyond the Nation-State: Functionalism and the International Organization 6 (1964).
8. Ruggie, note 5, at 582.
9. Ruggie & Haas, note 6, at 224. The conceptual bases for the reorganization would be the quality of life, the global environment, food and population, and energy and minerals. Ibid. at 225–28. Even the highly visible Brandt Commission Report [North–South: A Program for Survival (W. Brandt & A. Sampson eds., 1980)] cannot be characterized as suggesting a systemic change. See Lescaze, Brandt Commission Seeks Revived North–South Dialogue, Washington Post, Feb. 13, 1980, at A25, cols. 1–6. There is little indication that many of its substantive proposals will be implemented.
10. The law of the sea is a particularly appropriate vehicle for illustrating these issues. Throughout its long and well-documented history there has been a constant interplay of politics and technology. Furthermore, the series of negotiations that has now culminated at the Third United Nations Conference on the Law of the Sea has brought this subject to an advanced stage. Thus, the LOS negotiations provide a complete record of the international issues that are the focus of this paper. At the same time, the law of the sea is not so atypical that the observations that might be made on the basis of this record are not generalizable. For a comprehensive series of articles on the UNCLOS III negotiations, see Stevenson & Oxman, The Preparations for the Law of the Sea Conference, 68 AJIL 1 (1974); The Third United Nations Conference on the Law of the Sea: The 1974 Caracas Session, 69 ibid. at 1 (1975); The 1975 Geneva Session, ibid. at 763; and Oxman, The Third United Nations Conference on the Law of the Sea: The 1976 New York Session, 71 ibid. at 247 (1977); The 1977 New York Sessions, 72 ibid. at 57 (1978); The Seventh Session (1978), 73 ibid. at 1 (1979); The Eighth Session (1979), 74 ibid. at 1 (1980); The Ninth Session (1980), 75 ibid. at 211 (1981); and The Tenth Session (1981), ibid. at 1. Some procedural aspects of the negotiations are discussed in B. Buzan, Seabed Politics (1976); Miles, The Structure and Effects of the Decision Process in the Seabed Committee and the Third United Nations Conference on the Law of the Sea, 31 International Organization 158 (1977); B. Buzan, A Sea of Troubles? Sources of Dispute in the New Ocean Regime (Adelphi Paper No. 143, 1978); Buzan, "United We Stand...": Informal Negotiating Groups at UNCLOS III, 4 Marine Policy 183 (1980); and Buzan, Negotiating by Consensus: Developments in Technique at the United Nations Conference on the Law of the Sea, 75 AJIL 324 (1981).
11. League of Nations, Acts of the Conference for the Codification of International Law, held at The Hague from March 13th to April 12th, 1930, Doc. C.351.M.145.1930.V, Vol. I, at 50–54, 123–37; Doc. C.74.M.39.1929.V, Vol. II; and Doc. C.351(b).M.145(b).1930.V, Vol. III. These documents are reproduced in League of Nations Conference for the Codification of International Law (S. Rosenne ed. 1975).
12. Preparations for this conference began in 1949. Report of the International Law Commission covering the work of its eighth session, 11 UN GAOR, Supp. (No. 9) 2, UN Doc. A/3159 (1956), reprinted in 2 Y.B. Int'l L. Comm'n 253 (1956), 51 AJIL 154 (1957). The first conference produced four conventions: Convention on the High Seas, 13 UST 2312, TIAS No. 5200, 450 UNTS 82, reprinted in 52 AJIL 842 (1958); Convention on the Continental Shelf, 15 UST 471, TIAS No. 5578, 499 UNTS 311, reprinted in 52 AJIL 858 (1958); Convention on the Territorial Sea and the Contiguous Zone, 15 UST 1606, TIAS No. 5639, 516 UNTS 205, reprinted in 52 AJIL 834 (1958); Convention on Fishing and Conservation of the Living Resources of the High Seas, 17 UST 138, TIAS No. 5969, 559 UNTS 285, reprinted in 52 AJIL 851 (1958).
13. See Dean, The Second Geneva Conference on the Law of the Sea: The Fight for Freedom of the Seas, 54 AJIL 751 (1960); Jessup, The Law of the Sea Around Us, 55 ibid. at 104 (1961).
14. Note verbale to Secretary-General U Thant, UN Doc. A/6695 (1967). See generally Swing, Who Will Own the Oceans?, 54 Foreign Aff. 527 (1976).
15. The informal conferences held by the Law of the Sea Institute and Pacem in Maribus played a role in this regard but were more informational than anything else.
16. For a discussion of this issue, see Burke, Submerged Passage Through Straits: Interpretations of the Proposed Law of the Sea Treaty Text, 52 Wash. L. Rev. 193, 195–200 (1977). See also Knauss, The Military Role in the Ocean and its Relation to the Law of the Sea, in The Law of the Sea: A New Geneva Conference 77 (Proceedings of the Sixth Annual Conference of the Law of the Sea Institute, L. Alexander ed. 1972).
17. See Organization of the Second Session of the Conference and Allocation of Items: Report of the General Committee, UN Doc. A/Conf.62/28 (1974), 3 Third United Nations Conference on the Law of the Sea, Official Records [hereinafter cited as UNCLOS III Off. Rec.] 57 (1975).
18. Question of the Reservation Exclusively for Peaceful Purposes of the Sea-bed and the Ocean Floor, and the Subsoil thereof, Underlying the High Seas beyond the Limits of Present National Jurisdiction, and the Use of their Resources in the Interests of Mankind [Moratorium Resolution], GA Res. 2574 (XXIV) (1969); Declaration of Principles Governing the Sea-bed and the Ocean Floor, and the Subsoil thereof, beyond the Limits of National Jurisdiction, GA Res. 2749 (XXV) (1970). For a review of the UN activities from 1967 to 1970, see B. Buzan, Seabed Politics, note 10, at 65–116. See also note 15.
19. Ruggie & Haas, note 6, at 218.
20. "Interdependence" is used here to mean "mutual sensitivity." See Baldwin, Interdependence and Power: A Conceptual Analysis, 34 Int'l Organization 471 (1980).
21. House Comm. on International Relations, Science, Technology and American Diplomacy: An Extended Study of the Interactions of Science and Technology with United States Foreign Policy [hereinafter cited as Extended Study of American Diplomacy] 77–122 (Comm. Print 1977).
22. The arguments are collected in L. Lipson & Katzenbach, Report to the National Aeronautics and Space Administration on the Law of Outer Space 51–59 (1961).
23. McDougal & Lipson, Perspectives for a Law of Outer Space, 52 AJIL 407, 420 (1958).
24. Ibid. at 424–25.
25. Ibid. at 425; Jaffee, Reliance upon International Custom and General Principles in the Growth of Space Law, 7 St. Louis U.L.J. 125, 130 (1962); M. McDougal, H. Lasswell, & I. Vlasic, note 3, at 323–49.
26. Schachter, The Prospects for a Regime in Outer Space and International Organization, in Law and Politics in Space 95, 99 (M. Cohen ed. 1964). Apparently, this issue has been revived in light of the development of the United States space shuttle. See Kopal, The Question of Defining Outer Space, 8 J. Space L. 154 (1980).
27. G. Schwarzenberger, Power Politics: A Study of World Society 109 (3rd ed. 1964).
28. See, e.g., Sovereignty in Space, Newsweek, Dec. 19, 1955, at 82; Haley, Law of Outer Space—A Problem for International Agreement, 7 Am. U.L. Rev. 70, 76 (1958); Clarke, The Challenge of the Spaceship, 6 J. Brit. Interplan. Soc'y 66–81 (1946).
29. Schachter, note 26, at 98. See also McDougal & Lipson, note 23, at 414.
30. The economic viability of deep seabed mining was extensively considered in the course of the multi-session consideration of deep seabed mining legislation before the U.S. Congress. That history is reviewed in Caron, Municipal Legislation for
Exploitation of the Deep Seabed, 8 Ocean Dev. & Int'l L. 259 (1980). On June 28, 1980, President Carter signed the legislation into law: the Deep Seabed Hard Mineral Resources Act, Pub. L. 96–283. The economic issues were explored in a recent book entitled Deepsea Mining (J. T. Kildow ed. 1980). See also D. Leipziger & J. Mudge, Seabed Mineral Resources and the Economic Interests of Developing Countries (1976). That there has been misinformation about the importance of deep seabed mining has been clear for some time. Many were led to believe that commercial operations would begin by the mid-1970s; it now appears that the earliest possible date will be the late 1980s. While the uncertain political–legal situation may have contributed to this delay, basic technological and economic obstacles also have to be overcome. Early projections presumed that there would be a rapid rise in the number of commercial mine sites once commercial production became possible; these projections are now considered to have been unduly optimistic aside from the potential legal constraints on production. While some counseled against accepting these optimistic forecasts, the industry, which projected an early development of major nodule exploitation, dominated the information flow. Without accusing industry spokesmen of intentionally misleading their governments and the international community, it should be pointed out that an optimistic forecast could serve their interest in many ways: i.e., in obtaining favorable internal company decisions, in obtaining financing from external sources, and in persuading their home governments to seek a favorable legal regime at an early date. See the voluminous congressional testimony on the economic issues given during consideration of the Deep Seabed Hard Mineral Resources Act and its predecessor bills. A genealogical table of the bills that led up to this legislation is found in Caron at 287. Some of the older and more optimistic reports on the economic viability of deep seabed mining include: Drechsler, The Value of Subsea Mineral Resources, in A New Geneva Conference, note 16, at 112–13; Rothstein & Kaufman, The Approaching Maturity of Deep Ocean Mining—The Pace Quickens, Mining Engineering, April 1971, at 31, 33; Additional Notes on the Possible Economic Implications of Mineral Production from the International Sea-bed Area: Report of the Secretary-General, UN Doc. A/AC.138/73, at 9–10 (1972). Contra, Sorensen & Mead, A Cost-Benefit Analysis of Ocean Mineral Resource Development: The Case of Manganese Nodules, Am. J. Agricultural Econ., December 1968, at 1611. See generally R. Wright, Ocean Mining, An Economic Evaluation (Staff Study, Ocean Mining Administration, U.S. Dept. of the Interior, 1976); and Charney, The Equitable Sharing of Revenues from Seabed Mining, in H. G. Knight, J. Charney, & J. Jacobson, Policy Issues in Ocean Law 53 (1975).
31. Extended Study of American Diplomacy, note 21, at 77.
32. In the mid-1970s the nations directly involved in explorations related to nodule mining included Belgium, West Germany, France, Canada, Great Britain, Japan, and the United States. The firms were Deepsea Ventures (a consortium of Tenneco, U.S. Steel, Union Minière of Belgium, and some Japanese companies); the Kennecott Consortium (Kennecott Copper, Rio Tinto Zinc, Consolidated Gold Fields, Noranda Mines, and Mitsubishi); and the International Nickel Company of Canada Consortium (the German AMR group, SEDCO, and a Sumitomo-led Japanese group (DOMCO)). Also, it was mistakenly believed that the Summa Corporation was involved; however, it was actually developing a system to try to recover a Soviet submarine for the CIA. D. Leipziger & J. Mudge, note 30, at 128 n.15. See also B. Buzan, Seabed Politics, note 10, at 80.
33. In truth, the industry had so effectively controlled information on deep seabed mining that the developed country delegations had little knowledge that was not derived from the industry and already generally known.
34. J. Rubin & B. Brown, The Social Psychology of Bargaining and Negotiation 145 (1975); P. Gulliver, Disputes and Negotiations 145 (1979).
35. Compare Stevenson & Oxman, Preparations for the Law of the Sea Conference, note 10, at 13–23, with Stevenson & Oxman, The 1974 Caracas Session, note 10, at 15–17.
36. See the proposed agenda discussed in ul Haq, note 4.
37. See B. Buzan, Seabed Politics, note 10, at 71; see also note 15.
38. See Extended Study of American Diplomacy, note 21, in general and at 1491–1502, 1682–86, and 1702; Keohane & Nye, Organizing for Global Environmental and Resource Interdependence, 1 Murphy Commission Report, note 6, App. B, at 46, 61–62.
39. See note 38.
40. UNESCO, Thinking Ahead 151 (1977).
41. Deutsch, Outer Space and International Politics: A Look to 1988, in Outer Space in World Politics 139 (J. Goldsen ed. 1963).
42. Bergman, Organizing the U.S. Government Response to Global Population Growth, 1 Murphy Commission Report, note 6, App. B, at 65, 80.
43. Schachter, Scientific Advances and International Law Making, 55 Cal. L. Rev. 423, 424–27 (1967); M. Bedjaoui, Towards a New International Economic Order 138–42 (1979).
44. See Sohn, The Stockholm Declaration on the Human Environment, 14 Harv. Int'l L.J. 423 (1973).
45. This history is recounted in Swing, note 14. See also note 18.
46. There have been international law developments that were molded by activity at the conference in the same way that the nonbinding conference proceedings discussed above have influenced law development. The fact that this was a treaty negotiation, however, stimulated national claims and positions that may prove counterproductive in the long run.
47. Extended Study of American Diplomacy, note 21, at 1491.
48. The obverse is also true. The experience of the Law of the Sea Conference has shown that nations will be wary of committing themselves to the completion of a negotiation if they believe that their lack of information puts them at a severely disadvantageous position relative to other participants (unless, of course, they have identified an independent need for agreement). This places a premium on the timely and credible distribution of relevant information through government and private channels.
49. The most recent example of this procedure is the Convention on the Conservation of Antarctic Marine Living Resources, opened for signature Aug. 1, 1980 to December 31, 1980, reprinted in 19 ILM 841 (1980). See also Agreed Measures for the Conservation of Antarctic Fauna and Flora, reprinted in Congressional Research Service, 95th Cong., 1st Sess., Treaties and Other International Agreements on Fisheries, Oceanographic Resources, and Wildlife Involving the United States 28–34 (Comm. Print 1977); Convention for the Conservation of Antarctic Seals, opened for signature June 1, 1972, TIAS No. 8826 (entered into force March 11, 1978), reprinted in 11 ILM 251 (1972).
50. The question of international participation in the development of the Antarctic mineral resource regime is discussed in Charney, Future Strategies for an Antarctic Mineral Resource Regime—Can the Environment be Protected?, in The New Nationalism and the Use of Common Spaces: Issues in Marine Pollution and the Exploitation of Antarctica (J. Charney ed. 1981).
51. See note 18.
52. Ruggie, note 5, at 571.
53. Ibid.
54. Since it was assumed that deep seabed mining would not be operating commercially at the commencement of the new regime, the negotiations had as one aim facilitating
the acquisition of the capability. In addition, the Enterprise was to have as its initial objective the acquisition of deep seabed mining capabilities. Another purpose of the regime, once the capability was acquired, was to use that capability by conducting deep seabed mining. Finally, the negotiations addressed the consequences of deep seabed mining by drafting provisions on its environmental impact, the sale of metals, and the amelioration of any adverse economic impact on competing producers. The instrumentalities proposed for this regime were equally comprehensive. Depending on the nature of the specific activity, at least three of the four alternatives would be utilized. The rules applicable to seabed prospecting could best be considered as establishing a common framework for national behavior. Commercial exploration and exploitation by contractors would probably be considered a common policy for integrating national behavior; and the operations of the Enterprise would be a common policy substituting for independent national behavior. Similarly, depending on the particular activity, the seabed regime would act in an informational, managerial, or executive role.
55. In part, it is that fear which may have motivated the so-called L-5 Society to vehemently protest the new Moon Treaty. Agreement Governing the Activities of States on the Moon and Other Celestial Bodies, opened for signature Dec. 18, 1979, UN Doc. A/34/664, reprinted in 18 ILM 1434 (1979). See Dangerous Defects in the Draft UN Moon Treaty (letter to the editor by K. Eric Drexler, Director, L-5 Society), N.Y. Times, Oct. 9, 1979, at A22, cols. 3–5. See generally Christol, The Common Heritage of Mankind Provision in the 1979 Agreement Governing the Activities of States on the Moon and Other Celestial Bodies, 14 Int'l Law. 429 (1980).
56. See Report of the Secretary-General, Economic Significance, in terms of Sea-bed Mineral Resources, of the Various Limits Proposed for National Jurisdiction, UN Doc. A/AC.138/87, at 27–28 (1973) [hereinafter cited as Economic Significance]; UN Dept of Economic and Social Affairs, Mineral Resources of the Sea (UN Doc. ST/ECA/125, 1970); Report of the Secretary-General, Economic Implications of Sea-bed Mineral Development in the International Area, UN Doc. A/Conf.62/25 (1974), 3 UNCLOS III Off. Rec. 4 (1975); Report of the Secretary-General, Possible Impact of Sea-bed Mineral Production in the Area Beyond National Jurisdiction on World Markets, with Special Reference to the Problems of Developing Countries: A Preliminary Assessment, UN Doc. A/AC.138/36 (1971); Report of the Secretary-General, Possible Methods and Criteria for the Sharing by the International Community of Proceeds and Other Benefits Derived from the Exploitation of the Resources of the Area Beyond the Limits of National Jurisdiction, UN Doc. A/AC.138/38 (1971); Progress Report by the Secretary-General, Sea-bed Mineral Resources: Recent Developments, UN Doc. A/AC.138/90 (1973); Report by the UNCTAD Secretariat, The Effects of Production of Manganese from the Sea-bed, with Particular Reference to the Effects on Developing Country Producers of Manganese Ore, UN Doc. TD/B/483 (1974); An Econometric Model of the Manganese Ore Industry, UN Doc. TD/B/483/Add.1 (1974); Report by the UNCTAD Secretariat, The Effects of Possible Exploitation of the Sea-bed on the Earnings of Developing Countries from Copper Exports, UN Doc. TD/B/484 (1974); Report by the UNCTAD Secretariat, Commodity Problems and Policies, Mineral Production from the Area of the Sea-bed Beyond National Jurisdiction: Issues of International Commodity Policy, UN Doc. TD/B/113, Supp. No. 4 (1972); Report by the FAO Secretariat, Possible Adverse Effects of the Exploitation of the Sea-bed Beyond National Jurisdiction on Fishery Resources, UN Doc. TD/B/447 (1973); Note by the UNCTAD Secretariat, Exploitation of the Mineral Resources of the Sea-bed Beyond National Jurisdiction: Issues of International Commodity Policy, UN Doc. TD/B/449 (1973); Exploitation of the Mineral Resources of the Sea-bed Beyond National Jurisdiction: Issues of International Commodity Policy, Case Study of Cobalt, UN Doc. TD/B/449/Add.1 (1973).
57. See 2 UNCLOS III, Off. Rec., Meetings 9 and 10 of Comm. I, at 45–51 (1975). In preparation for this discussion the committee Chairman issued a summary of the relevant documents that had been previously presented to the conference. Note by the Chairman of the First Committee, UN Doc. A/Conf.62/C.1/L.2 (1974), 3 UNCLOS III, Off. Rec. 151 (1975).

58. United States of America: Working Paper on the Economic Effects of Deep Sea-bed Exploitation, UN Doc. A/Conf.62/C.1/L.5, 2 UNCLOS III, Off. Rec. 164; Seminar on Economic Implications of Sea-bed Mineral Development, Statement by Mr. Gavin Moncrieff (United Kingdom, July 31, 1974) (unpub.). See also Statement by Leigh S. Ratiner, Alternate United States Representative, Committee I (Aug. 8, 1974). This statement was made at an informal meeting of Committee I and is unpublished. Much of the substance was repeated, however, at a formal meeting of Committee I, 2 UNCLOS III, Off. Rec. 64–66.

59. See Statement by U.S. Representative Ratiner at the 14th meeting of Committee I on Aug. 19, 1974, 2 UNCLOS III, Off. Rec. 75. Chairman Engo's summary of the seminar is found in ibid. at 68–70.

60. A particularly significant role was played by an interested and active nongovernmental organization (NGO), the Ocean Education Project (OEP), which periodically sponsored informal seminars for interested delegates. It held approximately 60 luncheons and seminars on UNCLOS III matters. This NGO was established by the World Federalist Association and the Friends World Committee for Consultation for the purpose of assisting the negotiations. Its active leaders were strongly committed to the success of the negotiations, including the establishment of a strong and viable deep seabed mining regime. Most of the active participants were U.S. citizens, but they had little or no political power, domestic or international. Their activities were periodically reported in the occasional conference newspaper, Neptune, sponsored by OEP and the Law of the Sea Project of the United Methodist Church. In addition to the continued flow of information, there were efforts by small and fairly secret groups of interested states directly to negotiate a political settlement of the economic issues. Once agreement was reached within those groups, the members sought to legitimatize their agreement by orchestrating the consideration of the subject at a rather large Committee I working group or similar body. While they did have periods of apparent success, opposition ultimately developed to their agreements (usually from excluded participants) and the initiatives collapsed. These abortive efforts took place during negotiations that preceded the issuance of the Informal Single Negotiating Text, UN Doc. A/Conf.62/WP.8 (1975), 4 UNCLOS III, Off. Rec. 137 (1975) [hereinafter ISNT], and in the interval before the issuance of the Revised Single Negotiating Text, UN Doc. A/Conf.62/WP.8/Rev.1 (1976), 5 ibid. at 125 (1976) [hereinafter RSNT].

61. This took place during a meeting of the Committee I Chairman's Working Group, chaired by Ambassador Jens Evensen of Norway, on the afternoon of May 26, 1977. No records of this meeting have been issued. The document contained printouts from a computer program which projected the number of manganese nodule mine sites during the period 1985 to 2000, based on various assumptions. The tables were headed "Program: Group of 77 Hi Formulation Offshore Minerals Section: RMCB" (unpub.). At an annual growth rate of 6%, it projected a total of 23.8899 mine sites in the year 2000.

62. With the U.S. review of UNCLOS III announced by the Reagan administration in March 1981, this issue, in addition to many others, may be reopened. See UN Third Conference on the Law of the Sea (Resumed 10th Session), Report submitted to the House Comm. on Foreign Affairs, 97th Cong., 1st Sess. (Comm. Print 1981). As an analysis of the dynamics of the international negotiations, the observations made here remain unchanged. It is still an open question whether the solution of this and other issues will ultimately be found to be politically acceptable, as expressed through a binding international agreement.
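The arithmetic behind the printout described in note 61 above is ordinary compound growth. The sketch below (Python) is illustrative only: the printout's actual base-year inputs were not reproduced in the conference record, so the 1985 base figure here is hypothetical, back-solved so that 6% annual growth reproduces the reported year-2000 total.

def projected_mine_sites(base_sites, annual_growth, start_year, end_year):
    """Compound an assumed base number of supportable mine sites forward."""
    return base_sites * (1.0 + annual_growth) ** (end_year - start_year)

# Hypothetical base of ~9.97 sites in 1985; 6% annual growth gives ~23.89
# sites in 2000, matching the order of the figure reported in the printout.
print(projected_mine_sites(9.97, 0.06, 1985, 2000))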
63. The formula provided that the maximum quantity of nickel that can be produced from seabed mining will not exceed a certain percentage of the growth segment of world nickel consumption. RSNT, pt. I, Text Presented by the Chairman of the First Committee, Art. 9.4(ii) and Ann. 1, para. 21; see Introductory Note by the Chairman of the First Committee, pt. IV, 5 UNCLOS III, Off. Rec. 125; Informal Composite Negotiating Text, UN Doc. A/Conf.62/WP.10 (1977), 8 UNCLOS III, Off. Rec. (1978) [hereinafter ICNT], Art. 150.1(g); ICNT/Rev.1, UN Doc. A/Conf.62/WP.10/Rev.1 (1979), Art. 150.1(g); and ICNT/Rev.2, UN Doc. A/Conf.62/WP.10/Rev.2 (1980), Art. 151.

64. The Archer Group of Technical Experts, begun at the intersessional meetings, was made a subgroup of Negotiating Group 1 (NG 1), which had been assigned the responsibility of negotiating the system of exploration and exploitation and resource policy (Chairman: Frank Njenga, Kenya). See Oxman, The Seventh Session, note 10, at 10, and U.S. Delegation Report, 7th Sess., UNCLOS III, at 13 (1978) (unpub.).

65. See Negotiating Group 1 Sub-Group of Technical Experts, Progress Report, Conf. Doc. NG1/7, 10 UNCLOS III, Off. Rec. 28 (1978); Negotiating Group 1 Sub-Group of Technical Experts, Second Progress Report, UNCLOS III, Conf. Doc. NG1/9 (May 9, 1978); and NG 1 Sub-Group of Technical Experts, Final Report, UNCLOS III, Conf. Doc. NG1/11 (May 11, 1978). See also U.S. Delegation Report, note 64; and Oxman, note 64.

66. The political negotiation of the production control article within UNCLOS III at that time was held in another subgroup of NG 1, the Production Control Group, chaired by Ambassador Satya Nandan (Fiji) (Nandan group). See Oxman, The Eighth Session, note 10, at 11; U.S. Delegation Report, 8th Sess., UNCLOS III, at 14–17 (1979) (unpub.); and U.S. Delegation Report, 9th Sess., UNCLOS III, at 17 (1980) (unpub.). This work was incorporated into the ICNT/Rev.2, note 63, Art. 151. See generally ICNT, note 63, Art. 150; ICNT/Rev.1, note 63, Arts. 150 and 151; Draft Convention on the Law of the Sea (Informal Text), UN Doc. A/Conf.62/WP.10/Rev.3, Art. 151 (1980) [hereinafter DC(IT)]; Draft Convention on the Law of the Sea, UN Doc. A/Conf.62/L.78, Art. 151 (1981) [hereinafter DC].

67. See Chairman's Expert Advisory Group on Financial Arrangements, Financial Terms of Contracts, First Committee (June 27, 1977) (unpub.). Related documents issued to the committee to assist in educating the participants on this issue included: Costs of the Authority and Contractual Means of Financing its Activities, UN Doc. A/Conf.62/C.1/L.19 (1977); and Note by the Secretariat, Hypothetical Computation of Production of Nickel from the Area (May 27, 1977) (unpub.). The work of the Chairman's Advisory Group on Financial Arrangements found its way into the ICNT, note 63, Ann. II, para. 7, but it was highly qualified; see footnote * at para. 7, and the Explanatory Memorandum by the President of the Conference on Document A/Conf.62/WP.10, UN Doc. A/Conf.62/WP.10/Add.1 (1977), 8 UNCLOS III, Off. Rec. 65, 66–67 (1978).

68. Unlike the Archer group, this group held its meetings in relative secrecy, and there was a notable absence of the sense of expert consultation. In fact, a number of important participants were under the impression that the Chairman alone would determine the results of the group, and that they were obliged to convince him to adopt their views. Thus, the atmosphere appeared to be one of advocacy rather than expert inquiry and analysis. In the end, the Chairman did not attempt to resolve the issues. Rather, the resulting document incorporated every type of tax presented to the group into one taxing scheme, an approach not acceptable to any but the most ardent opponents of deep seabed mining. A rumored power conflict between two of the major conference actors may have contributed to the failure of this effort.

69. J. Nyhart, L. Antrim, A. Capstaff, A. Kohler, and D. Leshaw, A Cost Model of Deep Ocean Mining and Associated Regulatory Issues (Report No. MITSG 78-4, 1978).
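The production-limitation device described in note 63 above can be restated compactly. The following Python sketch is a hedged illustration: the share of the growth segment and the consumption trend rate varied from one negotiating text to the next, so the 60% share and 3% trend below are placeholders, not treaty values.

def seabed_nickel_ceiling(base_consumption, trend_rate, years, share=0.60):
    """Ceiling = an assumed share of the growth segment of world nickel
    consumption, i.e., projected consumption minus base-year consumption."""
    projected = base_consumption * (1.0 + trend_rate) ** years
    return share * (projected - base_consumption)

# E.g., a hypothetical 700,000-tonne base and a 3% trend yield a ceiling
# of roughly 144,000 tonnes ten years out.
print(seabed_nickel_ceiling(700_000, 0.03, 10))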
70. UN Doc. A/Conf.62/62, 10 UNCLOS III, Off. Rec. 6 (1978); Note by Ambassador T.T.B. Koh, Chairman of Negotiating Group 2, Conf. Doc. NG2/1 (April 19, 1978) (unpub.); and Oxman, The Seventh Session, note 10, at 11–13.

71. Various experts were brought to the group to aid the Chairman and the participating delegations when necessary. Throughout much of its work, the Chairman invited an expert from the United Nations Development Programme (UNDP) to aid in some technical analysis. Assistance was also given by various expert members of participating delegations and the Secretariat. Ambassador Koh made a particular effort to assemble a group of qualified assistants to help him and the Group of Technical Experts to address the relevant issues, and a number of delegations brought with them or were represented by persons expert in the areas under discussion.

72. At least five factors appear to have contributed to this success. First, the report was presented not as the product of an interested delegation but rather as the product of a highly reputable academic institution with known expertise in the general area. Second, the report did not appear fully to confirm the political positions taken by the sponsoring delegation. In fact, the United States was required to modify some of its positions in light of the findings of the report. Thus, the credibility of the study was further established. Third, the principal author of the study and his technical assistants were made available to the interested delegates and the conference staff for informal discussions and other meetings, so that questions could be raised about the report and additional computer analyses could be obtained from MIT to determine the impact of various proposals and changed assumptions. Fourth, in addition to a readable statement of the ultimate conclusions and the basic reasons for those conclusions, the report contained a full presentation of the details underlying the entire report. Fifth, no other delegation or authority presented an analysis of the potential industry that even approached the level of sophistication of the MIT study. These factors certainly facilitated the acceptance of the study by the working group.

73. The work of the Koh group is documented in some of the materials issued by NG 2. See, e.g., Note by Ambassador T.T.B. Koh, note 70; Financial Terms of Contracts, Some Issues and Questions, Conf. Doc. NG2/2 (April 26, 1978); Financial Arrangements of the Authority, Conf. Doc. NG2/4 (May 4, 1978), 10 UNCLOS III, Off. Rec. 54; Financial Terms of Contracts, Explanatory Notes on the Technical Terminology, Conf. Doc. NG2/6 (May 8, 1978), ibid. at 70; Financial Terms of Contracts, The Chairman's Suggested Compromise Proposals, Conf. Doc. NG2/7 (May 12, 1978), ibid. at 58; Financial Terms of Contracts, The Chairman's Explanatory Memorandum on Document NG2/7, Conf. Doc. NG2/8 (May 11, 1978), ibid. at 63; Report of the Chairman of Negotiating Group 2 to the First Committee, Conf. Doc. NG2/9 (May 16, 1978), ibid. at 52; and Second Report by the Chairman of Negotiating Group 2, UN Doc. A/Conf.62/C.1/L.22, 11 UNCLOS III, Off. Rec. 103 (1980). See also U.S. Delegation Report, note 64, at 16–19; U.S. Delegation Report, 8th Sess., note 66, at 17–22; and U.S. Delegation Report, 9th Sess., note 66, at 20–21. For a discussion of the negotiation of the financial arrangements issue by the U.S. representative to the Koh group, see Katz, Financial Arrangements.

74. See ICNT/Rev.2, note 63, Ann. III, Art. 13, and DC, note 66, Ann. III, Art. 13. See also Oxman, The Eighth Session, note 10, at 13–15.

75. Article 1 of the Convention on the Continental Shelf, note 12, defines the continental shelf as "the seabed and subsoil of the submarine areas adjacent to the coast but outside the area of the territorial sea, to a depth of 200 meters or, beyond that limit, to where the depth of the superjacent waters admits of the exploitation of the natural resources of the said areas." A thorough analysis of the relevant negotiating history is found in Oxman, The Preparation of Article 1 of the Convention on the Continental Shelf, 3 J. Mar. L. & Com. 245, 445, and 683 (1972).
76. Some countries wanted to maximize the seaward limits of the continental shelf, either to assure their access to valuable hydrocarbons believed to be there or merely to assure national control over a maximum area of seabed. Others, not likely to gain from an expansive definition of the continental shelf, sought a narrow definition, often combined with a demand for the international sharing of revenues to be derived from the continental shelves of all nations. While this latter group included those nations which were seeking a more equitable distribution of the benefits likely to be realized by the nations with valuable continental shelf resources, there were others with more direct interests. Some of those countries sought a narrow limit in order to contain potential interferences with high seas freedoms; others sought to limit competing production of hydrocarbons.

77. The early negotiating texts left the limits of the continental shelf vague. ISNT, note 60, pt. II, Art. 62; RSNT, note 60, pt. II, Art. 64; ICNT, note 63, Art. 76. Beginning with the ICNT/Rev.1, the formulation became more technical and specific. ICNT/Rev.1, note 63, Art. 76; ICNT/Rev.2, note 63, Art. 76. Although the text remained static for some time, efforts to develop a precise technical definition began at an early date. See RSNT, pt. II, Introductory Note by the Chairman of the Second Committee, 5 UNCLOS III, Off. Rec. 152, 153 (1976). See also ICNT, Memorandum by the President, notes 67 and 68. Negotiating Group 6 was established at the seventh session of the conference in 1978 to address this issue. Three alternatives, in addition to the formula found in the ICNT, were before NG 6 at that time. The Irish formula offered two alternatives for determining where the limit of the continental shelf would be located seaward. Under one, the state could claim an additional 60 nautical miles beyond the base of the continental slope. Under the other, the state could go beyond the base of the continental slope to where the depth of sediments was no less than 1% of the distance between that point and the foot of the slope. Informal Suggestion by Ireland, Article 76, Definition of the Continental Shelf, Conf. Doc. NG6/1 (May 1, 1978). The Arab Group proposed to limit the extent of the continental shelf absolutely to 200 miles from the coastline. Informal Suggestion by the Arab Group, Article 76, Definition of the Continental Shelf, Conf. Doc. NG6/2 (May 11, 1978). The Soviet Union's position was that the limit of the continental shelf should stop at 300 nautical miles. Conf. Doc. C.2/Informal Meeting/14 (April 27, 1978). Those proposals are published in UN Doc. A/Conf.62/C.2/L.99 (1979) at Anns. I–III, 11 UNCLOS III, Off. Rec. 121 (1980). See generally Oxman, The Seventh Session, note 10, at 19–22.

78. This map purported to illustrate the seabed gradient of every continental margin, which provided a general picture of the situation but was not very useful for analyzing specific boundary formulas. In the first place, it was of such a small scale that little detailed information could be derived. Second, the map did not provide all the data needed to apply the various formulas under negotiation and to compare their relative impacts. Office of the Geographer, Dep't of State, Major Topographic Divisions of the Continental Margins (Map No. 78784 8-70).
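Expressed programmatically, the two branches of the Irish formula in note 77 reduce to a pair of threshold tests. The Python sketch below is a deliberate simplification: real delineation works with charted lines and fixed points, not a single scalar distance, and the function and parameter names are illustrative, not the conference's.

def within_irish_formula(distance_to_foot_nm, sediment_thickness_nm):
    """A margin point qualifies if it lies within 60 nautical miles of the
    foot of the continental slope, or if the sediment thickness there is
    at least 1% of its distance to the foot of the slope."""
    within_60nm = distance_to_foot_nm <= 60.0
    sediment_ok = sediment_thickness_nm >= 0.01 * distance_to_foot_nm
    return within_60nm or sediment_ok

# A point 150 nm beyond the foot of the slope qualifies only if the
# sediment there is at least 1.5 nm thick.
print(within_irish_formula(150.0, 1.6))  # True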
79. See Preliminary Study Illustrating Various Formulae for the Definition of the Continental Shelf, UN Doc. A/Conf.62/C.2/L.98 and Adds. 1–3 (1978). These maps did illustrate various boundary delimitation formulas. Unfortunately, after they were issued, questions were raised about the accuracy of the assumptions made in their preparation. See Study of the Implications of UN Doc. A/Conf.62/C.2/L.99, note 77. Furthermore, the small scale used made it difficult to appreciate the full impact of various formulas. Since representatives of the expert group that prepared the maps, the Lamont-Doherty Geological Observatory, were not made available to the conference, there was no opportunity for an interchange that might have allayed the doubts of some delegations. Finally, as the negotiations proceeded, changes were made in the boundary formulas under consideration; this caused the utility of these maps to decrease over time. Despite their limitations, these maps were often referred to in the course of the negotiations.

80. The last of a series of requests for maps comparing the various formulas was made in May 1978 at the end of the seventh session of the conference. UN Doc. A/Conf.62/SR.103, 9 UNCLOS III, Off. Rec. 65–66 (1980). Mr. Yankov (Bulgaria) suggested that a competent organization be invited to prepare a map of the ocean regions indicating the practical consequences of the various boundary delimitation formulas. The USSR and Bulgaria wanted the Intergovernmental Oceanographic Commission (IOC), a subsidiary organ of UNESCO, to prepare a set of large-scale maps illustrating the current boundary formulas. Unlike the prior requests, this one was actively resisted by some countries, particularly those seeking expansive coastal state jurisdiction. Ibid. at 66–69. Delegations from Australia and Peru questioned the feasibility of the Bulgarian proposal. After some conflict, it was agreed that a phased request would be made to the IOC. Ibid. Ultimately, the IOC agreed to prepare the maps, but apparently because of ambiguities in the request and an inadequate data base, it was determined that the maps could not be prepared. See UN Doc. A/Conf.62/C.2/L.99, note 77. The effort was then abandoned. This left the delegates in the position of negotiating boundary formulas whose impacts admittedly could not be determined, at least on the basis of the generally available information.

81. The Soviet experts brought with them various maps and illustrations with the clear purpose of discrediting the so-called Irish formula for setting the limits of the continental shelf and of defending their own proposal to place an absolute limit on the breadth of the continental shelf at 300 miles. While the Soviet position was forcefully presented, it also appears to have had little impact, perhaps due to the unabashed advocacy nature of the presentation.

82. See generally note 77.

83. It is an extremely complex multifactor formula. See DC, note 66, Art. 76.

84. ISNT, note 60, pt. II, Arts. 117–131. See Hodgson & Smith, The Informal Single Negotiating Text (Committee II): A Geographic Perspective, 3 Ocean Dev. & Int'l L. 225, 241–44 (1976). Substantively, this text has remained virtually static during the course of UNCLOS III. See RSNT, note 60, pt. II, Arts. 118–127; ICNT, note 63, Arts. 46–54; ICNT/Rev.1, note 63, Arts. 46–54; ICNT/Rev.2, note 63, Arts. 46–54; DC(IT), note 66, Arts. 46–54; and DC, note 66, Arts. 46–54.

85. Of course, the limited number of archipelagic states limited the number of geographic conditions to be considered and the number of highly interested parties; also, the formula relied on known geographic information, unlike the later continental shelf formulas.

86. It is important to ensure that the information delivered to a conference is accurate. A classic case of a conference that proceeded on the basis of inadequate data is the first United Nations Conference on the Law of the Sea of 1958. The Convention on the Continental Shelf produced by the conference did not establish a definite limit to the continental shelf, and no substantial effort was undertaken to fix that limit. Supra note 75. The failure to define the limit contributed to the massive expansion of coastal state jurisdiction in the 1960s and 1970s and the calling of UNCLOS III. This omission by the delegates to UNCLOS I occurred because they were led to believe that uses of the seabed at distances and depths beyond the area indisputably within the regime of the continental shelf would not take place for a great many years, if ever. See L. Henkin, Law for the Sea's Mineral Resources 4 and 37 (ISHA Monograph No. 1, 1968). Not only was this area reached very quickly, but the then current activities were rapidly approaching those limits. This unfortunate result took place despite the fact that the
International Law Commission, which conducted the preparatory work, made efforts to incorporate the relevant data into this work.

87. One critical choice was to abandon the joint venture system for exploiting the deep seabed in favor of the so-called parallel system, in which state and private business entities would be able to exploit the seabed directly, as would the internationally established Enterprise. While the system does put the two sides in competition, the supporters of both sides will not be willing to accept the demise of their favored side of the system even though the risks of deep seabed mining will be high. The fragility of this arrangement is recognized, and the fact that a single joint venture system was cast aside early in the negotiations has been regretted by some.

88. While the negotiation of a comprehensive legal regime may be possible at such an early date, a better choice might be to establish a basic legal structure that would facilitate the development of additional parts of the legal regime as required. To some extent, that was the route taken for the Antarctic Treaty of 1959, 402 UNTS 71, 12 UST 794, TIAS No. 4780. See Bilder, The Present Legal and Political Situation in Antarctica, in The New Nationalism, note 50, at 167. See generally A Symposium, Antarctic Resources: A New International Challenge, 33 U. Miami L. Rev. 285 (1978).

89. See text at note 52.

90. The United States seriously considered a simplification of the text in 1978, but the deep seabed mining industry successfully thwarted this effort. At that stage it may have been too late to move away from a detailed text anyway. See Oxman, The Seventh Session, note 10, at 15–16. It is expected that a number of technical issues relevant to deep seabed mining will be delegated to a preparatory commission. DC, note 66, Art. 308(4). The question of tailoring the scope and content of international agreements to limit risks and to take account of uncertainties is discussed in R. Bilder, note 2.

91. "Neutral disinterested" experts may be available if the negotiation involves a small number of participants and does not concern universal issues. In large negotiations involving the "big" issues, however, sometimes even the appearance of neutrality or independence will be impossible. The absence of neutral players is a significant obstacle to such universal negotiations in many respects. The increased importance of committee chairmen at UNCLOS III is one response to the problem; it is a solution that has had mixed results. Arguably, no scientific and social science information is neutral. M. McDougal, H. Lasswell, & I. Vlasic, note 3, at 1096; UNESCO, note 40, at 151; Keohane & Nye, note 38, at 63; Haas, Is There a Hole in the Whole? Knowledge, Technology, Interdependence, and the Construction of International Regimes, 29 Int'l Organization 827, 875 (1975).

92. The type of information sought is an important variable, i.e., computations, judgment, or compromise. E. Haas, note 7, at 105, 110, and 127.

93. No single theory or model will fit all situations. E. Haas, M. Williams, & D. Babai, Scientists and World Order: Uses of Technical Knowledge in International Organizations 335 (1977).

94. See, e.g., Possible Environmental Effects of Mineral Exploration and Exploitation in Antarctica (J. H. Zumberge ed., 1979); Zumberge, Potential Mineral Resources Availability and Possible Environmental Problems in Antarctica, in The New Nationalism, note 50, at 115; Final Report of the Tenth Antarctic Treaty Consultative Meeting, Ann. 6, Report of the Group of Ecological, Technological and Other Related Experts on Mineral Exploitation and Exploration in Antarctica (1979). The meetings of the Antarctic Consultative Parties are a creature of the Antarctic Treaty, note 88. See Jessup & Taubenfeld, The United Nations Ad Hoc Committee on Peaceful Uses of Outer Space, 53 AJIL 877, 878 (1959); Taubenfeld, Weather Modification and Control: Some International Legal Implications, 55 Cal. L. Rev. 493, 505 (1967). A similar approach has been recommended in the field of weather and climate
modification. S. Brown, N. Cornell, L. Fabian, & E. Weiss, Regimes for the Ocean, Outer Space and Weather 237–38 (1977).

95. E. Haas, note 7, at 445; Schachter, Some Reflections on International Officialdom, in International Organization: Law in Movement 53, 61–62 (J. Fawcett & R. Higgins eds. 1974).

96. Draft Agreement Establishing the Common Fund for Commodities, UN Doc. TD/IPC/Conf/L.15 and Corr. 1 (1980), reprinted in 19 ILM 896 (1980). See R. Rothstein, Global Bargaining: UNCTAD and the Quest for a New International Economic Order (1979); P. Reynolds, International Commodity Agreements and the Common Fund (1978); Wasserman, UNCTAD: The Common Fund, 13 J. World Trade L. 355 (1979). There is a great deal of debate about the economic ramifications of the various commodity agreements. See generally A. Law, International Commodity Agreements (1975); D. McNicol, Commodity Agreements and Price Stabilization: A Policy Analysis (1978); F. Adams & S. Klein, Stabilizing World Commodity Markets (1978).

97. See Keohane & Nye, note 38, at 61.

98. See Haas, note 91, at 850.

99. See text at notes 69–74.

100. A similar legitimization experience was observed in the ILO Committee of Experts; E. Haas, note 7, at 252–59.

101. The Archer and Koh groups at UNCLOS III were particularly successful. While the reasons for the success of particular negotiating groups may be hard to establish, it is possible to identify some general characteristics that appear to optimize the opportunity for success. In a recent article Barry Buzan identified three characteristics of a successful negotiating group established to address important political issues: size, quality of membership, and quality of leadership. Buzan, United We Stand, note 10, at 199. With some modification and the addition of a fourth characteristic, this list can be used to identify the salient qualities of a successful negotiating group established to address technical issues: (1) Size: preferably between 12 and 30 participants should be included, to assure the inclusion of the key interests and necessary experts while permitting operational efficiency. (2) Quality of membership: the group should include persons of high intellectual caliber who represent important interests and others with technical expertise. (3) Quality of leadership: the chairman should be a person of exceptional qualities, including personal prestige, mastery of the subject and political setting, a reputation for impartiality and fair dealing, support from his or her delegation, and a large capacity for work. (4) Procedure: the meetings of the group should be announced to the conference participants but closed to the public and conducted in a relatively informal, collegial atmosphere, preferably in one language and without a record. The agenda of the group should be well defined and manageable.

102. While this should be the responsibility of the home governments, there are ways that the conference organizers may encourage the accreditation of technically competent persons. Not only can it be understood that each delegation will have technical support, but the conference structure can make it clear that technically qualified persons will be required for certain conference activities, such as participation in a standing technical committee. For financial and other legitimate reasons, all nations may not be able to have their own experts in all relevant fields. In such situations nations with similar interests should be encouraged to obtain the necessary expertise jointly. Encouragement and even financial support for these cooperative activities might be given by the appropriate international organizations and forums, e.g., UNGA, UNCTAD, the Group of 77, regional organizations, and development programs.
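Of the four characteristics listed in note 101, only size is directly quantifiable; the rest are judgment calls. The Python sketch below simply records the criteria as a checklist; the encoding is illustrative and not part of the original analysis.

# Checklist derived from note 101; thresholds and wording are reminders only.
QUALITATIVE_CRITERIA = [
    "Membership: high intellectual caliber, key interests, technical expertise",
    "Leadership: prestige, subject mastery, impartiality, delegation support",
    "Procedure: announced but closed meetings, informal, one language, no record",
]

def size_ok(participants):
    """Note 101 suggests roughly 12 to 30 participants."""
    return 12 <= participants <= 30

print(size_ok(25))   # True
print(size_ok(150))  # False: plenary-sized bodies sacrifice efficiency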
MEMBER TRUST IN TEAMS: A SYNTHESIZED ANALYSIS OF CONTRACT NEGOTIATION IN OUTSOURCING IT WORK*

ABSTRACT
This study examines the social context of trust with regard to team-based designs in the novel environment of negotiating information technology (IT) work that will be outsourced by organizations. The effects of organizational social systems, internal and external team factors, and emergent perspectives were synthesized into a conceptual model for understanding the underlying factors and processes that affect the development of collective trust among team members in the contract negotiation phase of IT outsourcing. More specifically, the trust, information technology, and professional communication literature was evaluated and synthesized to provide a robust model for better understanding team-based designs, trust development, and the diversity and complexity of IT contract negotiations. The proposed general model is discussed, showing theoretical linkages among the key constructs, and strategic and practical implications for designing and managing teams and developing trust as a shared belief.

Trust is the outcome of communication behaviors, such as providing accurate information, giving explanations for decisions and demonstrating sincere and appropriate openness.
IABC Research Foundation (Shockley-Zalabak, P., Ellis, K., & Cesaria, R., 2000)
Never have business organizations been more interested in the outsourcing of IT services than today (Lee, Huynh, Kwok, & Pi, 2003; Toscano & Waddell, 2003). As IT products and processes become more complicated to use and maintain, organizations have needed to decide whether to develop their own organic IT resources or to buy the expertise they need from external providers. The make-or-buy decision in the IT context has a rich theoretical foundation, especially with regard to teams, in transaction cost theory, resource dependency theory, the resource-based view, and power theory (Ang & Straub, 1998; Lacity & Hirschheim, 1993; Lacity, Willcocks, & Feeny, 1996; Nam, Rajagopalan, Rao, & Chaudhury, 1996; Straub, Weill, & Stewart, 2002; Teng, Cheon, & Grover, 1994; Tsang, 2000). These theories have been empirically supported in the IT context, as have auction and game theories in predicting outsourcing results (Elitzur & Wensley, 1998; Kern, Willcocks, & van Heck, 2002). The vendor selection decision can be a team-based process where IT outsource teams develop, negotiate, and ratify outsource contracts between partners. Yet little academic research integrates the existing literature on the outsourcing of IT work from a team-based and trust perspective. The study of team-based designs requires a closer look at the role of social context and discourse communities in the development of trust in the contract negotiation process. My synthesized analysis provides a framework for understanding how collective trust is formed and affected by the various discourse communities that comprise team membership. My analysis was made under the rubric of outsourcing IT services because the process of identifying qualified extra-organizational support frequently deals with fundamental organizational changes in novel environments; i.e., how will IT fill the organization's gaps or needs based on the negotiations between buyer and provider teams (Grover, Cheon, & Teng, 1994; Sengupta & Zviran, 1997)? The study of IT outsourcing as a novel environment provides an excellent venue for studying trust and team-based decision effects in the larger context of discourse communities. The proposed model in this paper provides a micro-analytic perspective on how teams develop trust while negotiating the IT outsource contract.
Task-oriented, team-interaction, process-oriented, and requisite team member effects are considered in their relationship to the development of trust in IT outsource teams.

* Terry R. Adler, Associate Professor, Department of Management, College of Business Administration and Economics, New Mexico State University, P.O. Box 30001, MSC 3DJ, Las Cruces, NM 88003-0001. Tel.: 505-646-3328; Fax: 505-646-1372; Email: [email protected]
GENERAL BACKGROUND

The management and academic literature is recognizing the importance of developing trust in organizations (Meyerson, Weick, & Kramer, 1996; Simons & Peterson, 2000; Williamson, 1996), but little research has been conducted concerning the influence of organizational context on trust in team-based designs in the IT context (Cohen & Bailey, 1997; Gibson, 2001; Grover, 1996; Kramer, 1996; Lee & Kim, 1999; Wegner, 1986; Weick & Roberts, 1993). One promising field of study is the consideration of social factors such as team-interaction effects in the outsourcing of IT work. Unfortunately, current research is underdeveloped and in its early stages, especially with regard to understanding team-based and discourse community effects in IT outsourcing. Colomb and Williams (1985) define the concept of a discourse community as a singular discipline conceived to bring about a specific end-result beyond the experience of the particular community, as typically generated within such professions as law, business, medicine, and academia. A discourse community consists of diverse sets of stakeholders who influence decision making in general. In this study, discourse communities include both internal and external stakeholder interests that comprise the social context within which the negotiation of IT work is discussed. The study of team interactions provides a venue to understand discourse communities, in a broad sense, and how their social interactions influence team behavior and negotiation outcomes. One such study includes a measure of the effectiveness of team-based negotiating of IT outsourcing requirements (Lacity, Willcocks, & Feeny, 1996; Venkatraman, 1997). Team-based communication represents a forum where conflicting discourse norms converge because collective team values are affected by social discourse. This is especially true when teams communicate with IT providers that will implement organizational outsourcing decisions. Although an individual's values lead to certain behaviors, it is also possible that an individual's membership in a team can lead to unique and persistent team values that support, complement, or contrast with individual or organizational decisions to outsource. For instance, teams may collaborate and lend valuable resources to other teams supporting superordinate organizational goals, while other teams may exhibit unnecessary, prolonged self-survival trust and behaviors that conflict with organizational objectives (Adler, 2000).
A SYNTHESIS OF DEFINITIONS
The terms group and team are used interchangeably throughout this paper even though this view may not be universally shared. The trend of studying team effectiveness, virtual teams, and team-based designs has favored the use of group versus team, but, as Cohen and Bailey (1997) suggested, the distinction between group and team is perfunctory. Teams can be defined using one of many team-based design definitions. In this study, Cohen and Bailey's (1997) definition of a team is used because it incorporates many of the key elements of the traditional team concept. For instance, Hackman (1990) emphasized teams as intact social systems while Guzzo (1995) framed teams as bounded social units working in larger social systems. Consequently, a team can be conceptualized as a collection of individuals who are interdependent in their tasks, who share responsibility for team outcomes, who see themselves and are seen by others as an intact social entity embedded in the larger organizational context, and who manage their team relationships across organizational boundaries. Thus, team members typically come from diverse educational and discourse backgrounds, but are integrated by their degree of task interdependence and identification with team objectives, responsibilities, and outcomes.
In accomplishing team tasks, teams have to communicate with others inside and outside the team boundary or framework (Ancona & Caldwell, 1992; Keller, 1994; Pinto & Pinto, 1990). It is this larger social system (e.g., business unit, organization, or market) that provides the context from which teams draw support, resources, information, and approval. McIntyre and Salas (1995) found that effective team activities, or teamwork, included communication outside the team with these more complex and dynamic organizational environments. Coovert, Craiger, and Cannon-Bowers (1996) add that two defining characteristics of teams are the adaptability of teams to their environments and the bi-directional relationship between task environments and team activity. This adaptation is extremely important and complex as teams negotiate diverse stakeholder interests in coming to agreement about contract terms and requirements. As teams adapt, they communicate internally and externally relative to their team boundaries to achieve effective team outcomes such as task accomplishment, quality, satisfaction, emotional tone, and reduced turnover (Klimoski & Jones, 1995). How teams communicate within their external environment has been studied before (Allen, 1984; Allen & Cohen, 1969; Ancona & Caldwell, 1992).

The Importance of Novel Environments and Trust

A review of the literature indicates that the study of novel, or new, environments provides data for researching intra-team adaptation and communication (Coovert et al., 1996; Marks, Zaccaro, & Mathieu, 2000). Intra-team adaptation frequently depends on the natural setting of teams within organizations. Recent research indicates that team-based contract negotiation is a process characterized by team adaptation and is dependent on the team's ability to process information in its social context (Adler, 2000; Dannels, 2000; Kleimann, 1993; Knorr & Knorr, 1978). As teams interact, they share knowledge and build beliefs about team-based processes and outcomes based on social exchanges within teams. Evidence of our lack of understanding of how teams share knowledge and form collective beliefs is demonstrated by the increased interest in trust as a factor influencing team decision making. Clearly there is a cognitive, knowledge-based element to the underpinnings of trust that affects team knowledge framing and interaction, leading to the development of team mental models (Lewicki & Bunker, 1996). Trust, in general terms, is defined as the expectancy held by an individual, or team, that the word or written statement of another individual or group can be relied upon (Rotter, 1980). A team-based judgment of trust is more narrowly defined as a shared, strongly held belief that guides team members toward positive expectations regarding the behavior and outcomes of future team interactions among team members. Trust, at the team level, is typically based on generalized expectations (Zand, 1972) that are substantiated and reinforced through intra-team interactions. This interaction creates an opportunity for information processing among team members as they share individual beliefs and stories (Meyerson et al., 1996).

The Social Context of Negotiating IT Work

Outsourcing literature indicates that teams implement management decisions that have organizational implications in outsourcing IT work (e.g., strategic ties; Ang & Straub, 1998). The use of team-based designs requires unique communication skills that many teams do not have and many team members do not appreciate.
Research on the contract negotiation phase of outsourcing IT work provides an intricate blend of information regarding team adaptation and communication. For instance, outsource teams integrate individual, team, functional and organizational norms in negotiating an appropriate IT infrastructure (Bazerman, 1983; Knorr & Knorr, 1978; Lacity & Willcocks,
1998; Niederman, Brancheau, & Wetherbe, 1991), yet are constrained by organizational, team, and individual factors and resources. An information technology infrastructure is the complex set of IT resources that provide a technological foundation for a firm's present and future business applications (Earl, 1989; Lacity & Willcocks, 1998; Niederman et al., 1991; Venkatraman, 1997). An IT infrastructure typically includes platform hardware and software, network and telecommunications technology, core organizational data, and data processing applications that are fundamental to the organization's daily operations. Proper IT outsourcing is vital to maintain an organization's competitiveness and fundamental to meet future internal customer needs (Grover et al., 1994). Thus, the success of contract negotiations is crucial to both the acquiring and provider organizations to meet unique and joint strategic objectives. Organizations outsource IT services for a variety of reasons: e.g., to improve managerial decision making, anticipate cost savings, and form strategic alliances (Ang & Straub, 1998; Currid, 1994; Duncan, 1995; Hopper, 1990; MacMillan, 1997; Outsourcing Institute, 1996; Remenyi, 1996; Richmond & Seidmann, 1993). Loh and Venkatraman (1992) define outsourcing as the significant contribution by extra-organizational service providers of the physical and human resources associated with the entire IT infrastructure, or components of it. The negotiation of components of the IT infrastructure frequently causes social change in organizations; therefore, the effects of organizational processes, internal and external team factors, and emergent perspectives on team-based judgments of trust should be considered. Teams often carry the responsibility for success or failure in implementing outsourcing decisions in provider selection and contract administration. For instance, Grover and Ramanlal (1999) state that team-based designs support the customization of IT services to meet larger, organizational objectives.
A PROPOSED FRAMEWORK

Figure 9.1 presents a model of organizational social systems pertinent to the development of team-based judgments of trust in an IT outsourcing framework. Teams typically convey organizational requirements to the IT provider through the process of contract negotiation. The contract negotiation of components of the IT infrastructure is a complex process where IT requirements can range from the simple to the extremely complex in partitioning work descriptions for the outsource provider. The proposed framework presented here integrates key aspects in negotiating an adequate IT outsource contract. Task-oriented effects are considered first in the development of team commitment that leads to member trust. Fundamental to understanding team commitment is an analysis of the communication of management support, organizational objectives, and the strategic importance of the IT contract negotiation effort. Team-interaction effects are also considered with regard to the diversity of teams due to discourse communities. Non-technical team member inputs are important considerations in the development of team trust; however, this diversity exacerbates an already difficult communication process in contract negotiation (Adler, 2000; Bazerman, 1983; Dannels, 2000; Odell & Goswami, 1985; Scarbrough, 1995). The model, and the propositions that follow, are based on research by Lee and Kim (1999), who found that participation, communication, information sharing, and technological support were positively related to the success of the outsourcing partnership, while the age of the outsource relationship and the mutual dependency between buyer and provider were negatively related to that success. My proposed model also incorporates process-oriented and requisite team effects with regard to the use of power and linear communication practices in team processes, and the emergent perspectives that the IT contract negotiation team's primary purpose is mediation, that the negotiation provides opportunities to those already in the organization, and that IT expertise typically resides external to the firm.
Task-Oriented Effects: P1 Management signals of support for IT negotiation (+); P2 Strategic impact of contract negotiation (+); P3 Communication of clear objectives to team (–).
Team-Interaction Effects: P4 Perception that non-technical team members improve the knowledge-sharing process (–); P5 Perception that requirements are diffused (+).
Process-Oriented Effects: P6 Internal power influences within team (–); P7 Extent of linear communication practices within team (+); P8 Extent of scouting activities external to team (+).
Requisite Team Member Effects: P9a Agreement that IT contract negotiators are viewed as mediators, not translators (+); P9b Agreement that the outsourcing effort provides an opportunity to team members (+); P9c Agreement that IT expertise resides external to the team (–).
All four groups of effects feed the escalation of team commitment in the IT outsourcing effort, which in turn leads to member trust in the contract negotiation team.

FIGURE 9.1 Factors affecting member trust within the contract negotiation team.
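For readers tracking the propositions, the figure can also be restated as a simple data structure. The Python encoding below is illustrative only; the construct names and signs are copied from the figure as printed, and the mapping itself is not the author's.

# Figure 9.1 as a mapping from effect groups to propositions and signs.
# All four groups feed escalation of team commitment, which leads to
# member trust in the contract negotiation team.
FIGURE_9_1 = {
    "Task-Oriented Effects": {
        "P1": ("Management signals of support for IT negotiation", "+"),
        "P2": ("Strategic impact of contract negotiation", "+"),
        "P3": ("Communication of clear objectives to team", "-"),
    },
    "Team-Interaction Effects": {
        "P4": ("Non-technical members improve knowledge sharing", "-"),
        "P5": ("Perception that requirements are diffused", "+"),
    },
    "Process-Oriented Effects": {
        "P6": ("Internal power influences within team", "-"),
        "P7": ("Linear communication practices within team", "+"),
        "P8": ("Scouting activities external to team", "+"),
    },
    "Requisite Team Member Effects": {
        "P9a": ("Negotiators viewed as mediators, not translators", "+"),
        "P9b": ("Outsourcing as opportunity for team members", "+"),
        "P9c": ("IT expertise resides external to the team", "-"),
    },
}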
Given the importance of IT to an organization's decision-making and communication infrastructure, the incorporation of team-based trust in the IT outsourcing process is a fundamental underpinning for improving our understanding of team effects in the contract negotiation process.
TASK-ORIENTED EFFECTS

Figure 9.1 contains the linkages discussed in this paper. There are three task-oriented effects pertinent to the study of team-based designs in IT outsource contract negotiations: communication of management support, the strategic importance of the negotiation effort, and the communication of organizational objectives.

Communication of Management Support

Several authors have found that management support is useful in understanding team commitment (Bishop, Scott, & Burroughs, 2000). Communication of management support is crucial for the IT negotiation team to understand the organization's fundamental mission and to feel appreciated in its work. Management's involvement in the outsourcing effort also builds team commitment and, ultimately, trust in the organization. For instance, Lacity et al. (1996) describe instances when senior management involvement in inviting external and internal bidders was crucial for ultimate outsourcing success. The type of communication between top management and an outsource team has been characterized by Ancona and Caldwell (1992) as an ambassador type of external team communication. This type of communication is typically vertical and meant to persuade and maintain team image. Since teams form collective cognitive frameworks (Mohammed & Dumville, 2000), they also form collective beliefs that influence the forming of cognitive maps. In other words, there is a collective affective team process that exists before teams form shared mental models. The development of collective team-based trust, for instance, depends on individual team members sharing information and individual beliefs in a team setting. Top management
communication and social interaction with teams improve the collective development of trust as team members are exposed to diverse and new information. The team collectively forms beliefs that ultimately influence how knowledge is framed and dispersed in team interactions. Teams in novel settings, such as contract negotiations, especially need the support of top management, as collectively held beliefs have typically not yet formed. Westley and Mintzberg (1989) suggest that one of the primary responsibilities of top management is the communication of a vision that inspires organizational resources toward organizational objectives. The inspiration and confidence teams acquire can lead to greater commitment and trust, as characterized by Chiles and McMackin (1998). Some issues are routine and simple, and teams can more readily recognize and adapt to them through well-learned responses (Starbuck & Milliken, 1988). However, for novel work such as outsourcing, interaction with management is even more important for a team's belief that it can adapt to changing organizational and environmental conditions. Maintaining team image is important as individuals persuade top management (Dutton & Dukerich, 1991) of the decisions and progress of teams in the negotiation process. The implementation of IT can also be controversial because outsourcing typically displaces internal resources (Outsourcing Institute, 1996). Teams that receive support from top management are more likely to view the IT outsourcing effort as a positive experience with positive expectations in the partitioning of contract requirements. This is true even when organizational resources, maybe even friends of outsource team members, might be displaced by provider resources. Keil (1995) found that individual commitment escalated in IT projects, even when individuals received negative feedback, due to justification and compensatory behaviors. The more team members can justify team behavior, the more they will build consensus, further escalating their commitment to the IT outsourcing effort (Woolridge & Floyd, 1990). In this case, team member internal communication would build positive expectations of collective team purpose and actions. Thus, Proposition 1 (P1 in Figure 9.1): The more top management communicates support for the IT negotiation team in contract negotiations, the more negotiation team members justify team decisions and processes that increase team commitment, ultimately leading to the development of team trust.
Strategic Importance of the Negotiation Effort

Organizations vary in their reasons for outsourcing. In general, organizations outsource to lower costs, access expertise outside their organizational boundaries, and improve efficiency (Davy, 1998). Historically, cost savings from acquiring an outsource provider range from twenty to forty percent of total project costs (Ang & Straub, 1998; Lacity & Hirschheim, 1995), a finding supported by the efficiency perspective of make-or-buy decisions (Williamson, 1983). Thus, strategic flexibility is increased as firms hire providers for resources and assets that can be turned on or off based on organizational needs (Currid, 1994; Hitt, Keats, & DeMarie, 1998). Teams that recognize the importance of outsourcing business applications of the IT infrastructure will most likely realize the team is fulfilling an organizational, or superordinate, goal of becoming more efficient and flexible (Ang & Cummings, 1997). Staw and Fox (1997) suggest that commitment increases when superordinate goals are satisfied. Thus, team commitment should also increase as team members become cognizant of the importance of their work to the organization. Undoubtedly, team goals may come into conflict with organizational goals. Lacity et al. (1996) argue that IT be viewed as a portfolio of concerns, since many IT services have far-reaching implications for organizational resources. Given the choice of satisfying lower-level team goals versus organizational goals, teams will more likely satisfy organizational goals that are important and valued by team members. Certainly the structuring of IT services falls into this important category (Nam, Rajagopalan, Rao, & Chaudhury, 1996).
The increase in team commitment can also lead to the development of team trust. For instance, research indicates a clear relationship between team commitment and team task interdependence (Bishop & Scott, 2000; Bishop et al., 2000). Team trust should also benefit from an increase in team commitment because team members can see linkages between the IT contract negotiations and organizational goals with regard to systems operations and telecommunications (Grover, Cheon, & Teng, 1994). The increase in team trust reflects increased team commitment arising from the team's realization that its progress is making the firm more competitive. Thus, Proposition 2 (P2 in Figure 9.1): The more contract negotiation team members realize the strategic importance of the IT outsourcing effort to the organization, the more negotiation team commitment increases, ultimately leading to the development of team trust.
Communication of Organizational Objectives

Related to the strategic importance of an IT outsourcing effort is how well its linkage to organizational goals is communicated to the IT negotiation team. It is not safe to assume that all management communicates well with its members (Lee & Kim, 1999). In fact, some organizations are known especially for not communicating organizational objectives to their members (see the Motorola example in Wheelwright & Clark, 1992). Support for this hypothesis comes from the Outsourcing Institute's (1996) findings that successful outsourcing required management to have developed a clear set of goals before the outsourcing decision was made and communicated. Since teams provide a primary resource to fulfill organizational objectives (Katzenbach & Smith, 1993; Salas, Dickinson, Converse, & Tannenbaum, 1992), management must communicate goals clearly to IT negotiation teams about expectations, strategic intent, and reporting procedures (DiRomualdo & Gurbaxani, 1998; Fielden, 2001). Communicating clear objectives has a moderately strong positive relationship with team commitment (Hollenbeck, Williams, & Klein, 1989). Clear objectives indicate linkages between organizational and team efforts. With the myriad competing uses of IT infrastructure within organizations, clearly communicated organizational objectives can lead to a consensual set of strongly held beliefs, or judgment of trust, within a team setting. Communication of organizational objectives can increase team member commitment because team members will better identify with the tasks of the organization. The increase in team commitment will result in more positive expectations by team members, which leads to the following proposition: Proposition 3 (P3 in Figure 9.1): The more clearly top management communicates organizational objectives to the IT contract negotiation team, the more negotiation team commitment increases, ultimately leading to the development of team trust.
TEAM-INTERACTION EFFECTS

From a team perspective there are two related perceptions about the effectiveness of team-based designs regarding knowledge sharing between discourse communities in contract negotiations: intrateam interactions with non-technical team members are value-added, and outsource requirements become diluted in negotiations through team interactions (Adler, 2000). These propositions are discussed with the realization that knowledge sharing among individuals is essential for improving team effectiveness, especially when different discourse communities are represented on a team (Dougherty, 1992; Gibson, 2001).
Improved Knowledge Sharing

Discourse communities strongly affect how teams develop and discuss requirements (Odell, Goswami, Herrington, & Quick, 1983). Strong norms of discourse communities influence how members of these communities communicate, especially when communicating outsource requirements. Team designs bring together many discourse communities, heightening the need for integration. Even academic institutions are not immune to discourse community norms in the writing process. For instance, Burnham (1986) describes how a Writing across the Curriculum course was necessary to integrate different academic discourse language and lessons learned. Teams are a robust structure for integrating discourse knowledge, and knowledge sharing often leads to positive organizational outcomes (Fielden, 2001; Senge, 1990). The more team members share knowledge in negotiating IT work, the more trust they will have and the more meaningful their work will become to them (Lee & Kim, 1999; Plunkett, 1990; Sashkin, 1984). When knowledge sharing breaks down, team members develop less trust. For instance, MacNeil (1978) suggests that contract writing frequently becomes meaningless with regard to enforcement and practicality because contract writers do a poor job of integrating requirements. The integration of non-technical team member knowledge into contract negotiations greatly improves a team's likelihood of adaptation and survival (Adler, 2000). However, successfully incorporating non-technical knowledge appears to depend on the experience levels of the technical team members. For instance, experienced engineers valued non-technical inputs in the writing process while novice engineers tended to forgo them (Selzer, 1983). Teams whose designs facilitate knowledge sharing among all members will most likely develop trust in each other and in the team's purpose. Proposition 4 (P4 in Figure 9.1): The more technical team members value non-technical team member knowledge among the IT contract negotiation team, the more likely the negotiation team members will collectively develop team trust.
Diffusion Effects in Requirements Development

Current research indicates that the social context of writing in organizations is a difficult and complex process (Anson, 1988; Dannels, 2000; Dobrin, 1983; Odell et al., 1983). Communication in team-based designs is even more difficult due to recursive reviews by top management and professional groups in facilitating the negotiation process (Duncan, 1995; Hitt et al., 1998). Adler (2000) found that the more requirements are reviewed within an organization, the more diffused they become. Diffusion occurs when requirements become verbose and details are omitted, thus weakening the validity and meaning of the original requirement. The legal, financial, and productivity ramifications of diffusion are more pronounced to the degree requirements are changed in negotiation. If requirements are modified so extensively in reviews that they become meaningless to technical communities, then team members will value neither the requirements nor the team. When requirements become diffused there is no agreement between buyer and provider, limiting any perceived gains for either team in the contract negotiation. Requirements language drives meaning in a contract when the requirements are considered as a whole (Lamir, 1992). The basis for understanding requirements occurs when the language of the requirements, from each discourse community, is considered in the context of that discourse community as a stakeholder in the contract. Unfortunately, management and other reviewers can diffuse requirements to the point that interpretation and meaningfulness are removed from the contract. This may be one reason why Trice (1993) referred to legal reviews as “deal killers.”
The more requirements are modified and changed in reviews, the less faith team members have that the IT outsource requirements reflect the work necessary to support the IT outsourcing effort, a serviceable IT infrastructure, and a working relationship with their future IT provider. Diffusion weakens team trust by lessening team member confidence in the accuracy of requirements and dampening the future success of the outsourcing effort (Lee & Kim, 1999). Thus, Proposition 5 (P5 in Figure 9.1): The more diffusion occurs from contract requirements and language being reviewed outside the IT contract negotiation team, the less likely the negotiation team members will collectively develop team trust.
PROCESS-ORIENTED TEAM EFFECTS

Three team processes that need to be considered in negotiating IT work are power influences, linear writing practices, and scouting activities. These three processes affect how teams, as intact social systems, integrate the diverse discourse communities represented by team members (Tyerman & Spencer, 1983). Outsourcing IT work is inherently controversial as work, resource, and reward assignments change with new organizational structures and philosophies. The three process-oriented team effects discussed next reflect the competitive nature of work in an organization that manifests itself in IT contract negotiation teams.

The Use of Power in Teams

The negotiation of organizational requirements in IT outsourcing can depend on the discourse community interests displayed internally in the IT contract negotiation team. Negotiating contract requirements is affected by these interests as agendas are invoked in the discussion of IT outsourcing issues, processes, and decisions (Anson, 1988; Friedlander, 1993). Writing in organizational settings is based on the social interactions of members as both team members and organizational members (Couture & Rymer, 1993). Unfortunately, the effective negotiation of contract requirements may depend more on team member image than on requirement validity. This may be one reason why Kern, Willcocks, & van Heck (2002) found that partners to an IT outsource effort tend to mutually renegotiate terms of the partnership after the contract is signed, because providers tend to overpromise what they can realistically deliver. Negotiation team member image may also be influenced by negative assessments due to perceived large power distances (Ancona & Caldwell, 1992). Eisenhardt and Bourgeois (1988) suggest that less favorable team-oriented outcomes occur when individuals frequently use their power in a group setting. Agency theory provides a framework for understanding individual concerns in a partnership. Partnerships may crumble when individuals on teams shirk their responsibilities as organizational members, acting out of moral hazard or perquisite consumption (Dharwadkar, George, & Brandes, 2002; Eisenhardt, 1989). This may be one reason why Weill and Broadbent (1998) suggest that management not abdicate decisions about the IT outsourcing process to non-management organizational members. In general, the more team members perceive that power-based influences are important in team-based designs, the less likely IT contract negotiation teams will develop as a collective and hold similar beliefs about an IT outsourcing decision. Teams without a cohesive base for performing organizational work will most likely have members that do not trust team processes and outcomes because of the perceived power influences active in the team. Thus, Proposition 6 (P6 in Figure 9.1): The more power influences are used among IT contract negotiation team members to manage the internal IT contract negotiation process, the less likely the negotiation team members will collectively develop team trust.
Linear Communication Practices in Contract Negotiations

Communication practices that mimic linear development processes will hurt the development of trust in team-based designs. Rooted in the management-of-technology literature, linear communication practices support over-the-wall work transitions that Jelinek (1979) warns pass on poorly composed, weakly developed, and error-filled products to other teams in the development of products and services. Team interactions that support only the incremental discussion of requirements may actually be harmful because information is not passed among team members and between teams in an organization. Informal social context is important in contract negotiations (Feldman, 1984; O'Reilly, 1989; Rousseau, 1990). Social perceptions also influence how team members value discourse community inputs and team member reputation on both sides of the negotiation (Adler, 2000; Dannels, 2000). Team settings are an ideal venue for demonstrating the value of a discourse community if its requirements are included in the organizational contract. However, linear communication practices can impede the integration of requirements in the contract negotiation if requirements are not considered as an integrated set. Linear communication practices work against the integration of discourse communities in outsourcing IT because there is typically not enough time to adequately integrate social considerations and share lessons learned. The segregation of requirements by functional type limits sharing lessons learned and collective negotiation about the value of a requirement, all of which are important factors for developing a collective judgment of trust of the IT outsourcing effort (Kern, Willcocks, & van Heck, 2002). Proposition 7 (P7 in Figure 9.1): The more linear communication practices are used among IT contract negotiation team members to manage the IT outsourcing effort, the less likely the negotiation team members will collectively develop team trust.

Scouting Activities Between Teams in the Larger Social Context

When there is high interdependence among team members, the study of team-based designs involves a closer look at the influence of team sharing, or scouting, for information externally. Since teams, especially cross-functional and professional teams, represent diverse discourse communities (Klimoski & Jones, 1995), sharing and gathering information is important for tailoring an IT infrastructure to meet organizational goals. The process of information management is well defined in the initial work on information flows between teams by Allen and Cohen (1969), Katz and Tushman (1979), Allen (1984), and Ancona and Caldwell (1992). More specifically, Ancona and Caldwell (1992) found that scouting involves information-gathering activities for follow-on team sense-making and mapping. This is a horizontal communication activity aimed at general information gathering across team boundaries about competition, the market, or technology, as opposed to specific knowledge-sharing activities within a team. Related research also indicates that scouting depends on the team leader's ability and the type of team doing the scouting (Lievens & Moenaert, 2000). Teams that scout well will have external information that potentially builds confidence in the team's performance, reputation, and communication within the larger social context of the organization.
Scouting activities also serve to reduce uncertainty in team decision-making and can lead to positive expectations of team outcomes in the contract negotiation phase. Proposition 8 (P8 in Figure 9.1): When IT contract negotiation team members use scouting activities to acquire information about their negotiating partner or the outsource effort, it is more likely that the negotiation team members will collectively develop team trust.
REQUISITE TEAM MEMBER EFFECTS

Several emergent and requisite outsourcing effects are fundamental to understanding how team trust develops. Since organizations typically develop IT requirements based on internal needs and expectations (Beckman & Mowery, 1993), one of the traditional roles of negotiation teams has been to translate organizational needs into written requirements. This traditional role is changing, however. For instance, Dannels (2000) found that the engineer's role now has a “mediation” rather than a “translation” focus, which minimizes the negotiation team's role of quantifying needs into requirements and emphasizes the role of mediating the organization's communication to understand its requirements. Multiple and diverse customers from unique discourse communities increase the necessity of the mediation role because requirements become more complex and interdependent. Scarbrough (1995) suggests that IT project teams fail to account for the role of social interaction in communicating information and organizing IT projects. Lacity, Willcocks, and Feeny (1996) also found that the development of IT outsourcing requirements depends on the technical maturity of the IT framers and developers in dealing with customers and users. While the mediation of customer needs into requirements is a special competence, this competence assumes the prioritization of customer needs, clear communication of contract language, and integration of technical and administrative requirements across organizational boundaries. Teams that are used as mediators to outsource IT infrastructure would be more likely to develop positive beliefs about themselves and their team in managing the IT outsourcing effort. Thus, team trust would benefit from a mediation perspective versus the traditional translation perspective.

Team Member Perceptions of Individual Opportunities

Negotiating IT work can provide new organizational opportunities (Staw, 1980) because individuals can learn new concepts and meet new people (Davy, 1998). A key strategic effect of outsourcing is the development of individuals through reorganizing and restructuring organizational work (Davy, 1998). From a psychological viewpoint, outsourcing may provide new opportunities for individual expression and development through job openings, team interdependencies, and social relationships, as well as through better perceived job security (Nicholson, 1996). Individuals may view the IT contract negotiation as an opportunity to secure their own position within the organization. Team trust is most likely to develop if team members perceive that the negotiation effort provides them with an opportunity. This is especially true with regard to how team perceptions develop. For instance, Davy (1998) argues that if a firm is outsourcing an existing IT function, its approach to the outsourcing effort is key to the development of long-term employee support and morale. Team trust should develop when team members see the opportunity for personal advancement, new skill acquisition, and increased job security.

Team Member Perceptions of External IT Expertise

A final team member perception is focused on where the IT expertise resides. Many view IT expertise as external to the organization, a perception driven by rapidly changing IT infrastructures that require managerial competencies not inherent within most organizations.
The strategy–theoretic discrepancy model presented by Teng, Cheon, and Grover (1994) offers insight into this perspective from a supply and demand framework. From a supply perspective, it is difficult to find qualified, knowledgeable IT personnel who can keep the organization running, especially when the outsourcing effort involves systems operations and telecommunications (Grover, Cheon & Teng, 1994). Currid (1994) found that business executives had expectations of increased expertise and shorter internal implementation periods when outsourcing
IT work. Acquiring external expertise to develop and maintain the IT infrastructure can be necessary for organizations whose core competency is not IT-related and who cannot evaluate IT contributions (Lacity et al., 1996; Willcocks, Fitzgerald, & Lacity, 1996). On the demand side, managers often do not know the IT provider's capabilities or the services they could use. Thus, it is not unusual to find that outsourcers are four times more likely to request outsourcing to external providers than to keep the service in-house simply because something better is perceived to exist outside the firm (Teng et al., 1994). IT contract negotiation teams with a history of failing to meet stakeholder expectations will most likely value external IT expertise. Team members who perceive a good fit between external provider services and their organizational needs will most likely develop positive expectations about the outsourcing and negotiating effort. These positive expectations can lead to the collective development of team trust for the IT provider and the provider's work since the team might not have the means, or possibly the desire, to adequately develop and assess their internal IT infrastructure. This leads to the following three propositions:

Proposition 9 (P9 in Figure 9.1): IT contract negotiation team members who view the management of IT from an emergent perspective will be more likely to develop trust within the contract negotiation team to the extent that any one of the following conditions is met:

Proposition 9a (P9a in Figure 9.1): There is agreement among negotiation team members that the team's primary purpose is mediation, not translation, in negotiating contract requirements that integrate stakeholder needs and expectations.

Proposition 9b (P9b in Figure 9.1): There is agreement among negotiation team members that the IT infrastructure being outsourced will provide new opportunities for internal organizational resources.

Proposition 9c (P9c in Figure 9.1): There is agreement among negotiation team members that IT expertise resides external to the firm.
CONCLUSION

Academicians and practitioners need to rethink how contract requirements are developed into contract form, a rethinking that may involve changes to fundamental processes to achieve improvements. Team member judgment of trust is affected by team-based designs in the negotiation of IT infrastructures. Without adequate consideration of social context, the managerial choices made regarding team-based designs will fall short of meeting organizational and individual goals and expectations in the outsourcing of IT work, and may mean that negotiation team structures take on expanded roles such as mediation. Previous models of IT contract negotiation have failed to adequately synthesize process-oriented and requisite team effects. The model presented in this paper addresses these effects on team-member judgment of trust as key to understanding how teams manage novel environments in the contract negotiation of IT products and services. Trust needs to be evaluated as a team-held belief that affects the implementation of organizational goals and follow-on team and other organizational member behaviors. Recent research and theory indicate that the fundamental problem in outsourcing IT infrastructures lies in poorly developed contract requirements (Hirschheim & Lacity, 1998; Weill & Broadbent, 1998). Some models suggest that IT outsourcing success depends on key considerations like clearly stating buyer expectations or completing a needs analysis (Lee et al., 2003). Missing in these analyses is an awareness of how team-based issues complicate strategic implementation and contribute to the development of poorly negotiated IT contract requirements. The social context of teams makes differences between discourse communities very difficult to manage and predict. The selection of team members based on discourse community membership,
the management of teams once members are selected, and the support of management for the IT contract negotiation are all important considerations not fully addressed in the current literature. An underlying assumption in the outsource process is that the implementation of organizational strategy will be accomplished as top management desires by generic team structures following management's expectations. Clearly, like all assumptions, this one carries risks, which this paper has sought to address. The novel environment of contract negotiation makes the use of team-based designs an extremely important consideration in the success of future outsource outcomes. Thus, future studies in IT outsourcing, and more specifically in IT contract negotiation, should consider the social context of organizational, functional, and team interactions in relation to the model presented in this paper. Avenues of future research might consider how an IT contract negotiation is affected by a hierarchy of discourse norms (i.e., professional, organizational, and team). What are the strengths of these norms in relation to each other, especially in the negotiation of organizational requirements? Many authors have found that strong professional norms affect the communication processes embedded in firms (Dannels, 2000; Odell & Goswami, 1985). Identifying how norms guide team members would also aid our understanding of the effects of social interaction in developing team-based trust. Team-based designs can complicate IT outsourcing implementations by hampering honest, critical discussion and consensus. The ability to adequately manage team trust when developing IT outsourcing requirements is a critical area organizations need to address when considering team-based designs in contract negotiations.
REFERENCES

Adler, T., An evaluation of the social perspective in the development of technical requirements, IEEE Transactions on Professional Communication, 43(5), 17–25, 2000.
Allen, T., Managing the Flow of Technology: Technology Transfer and the Dissemination of Technological Information within the R&D Organization, M.I.T. Press, Cambridge, MA, 1984.
Allen, T. and Cohen, S., Information flow in R&D laboratories, Administrative Science Quarterly, 14, 12–19, 1969.
Ancona, D. and Caldwell, D., Bridging the boundary: external activity and performance in organizational teams, Administrative Science Quarterly, 37, 634–665, 1992.
Ang, S. and Cummings, L., Strategic response to institutional influences on IS outsourcing, Organization Science, 8(3), 235–256, 1997.
Ang, S. and Straub, D., Production and transaction economies and IS outsourcing: a study of the U.S. banking industry, MIS Quarterly, 22(4), 535–552, 1998.
Anson, C., Toward a multidimensional model of writing in the academic disciplines, In Advances in Writing Research, Volume Two: Writing in Academic Disciplines, Jolliffe, D. A., Ed., Ablex Publishing Corp., Norwood, NJ, 1988.
Bazerman, C., Scientific writing as a social act: a review of the literature of the sociology of science, In New Essays in Technical and Scientific Communication: Research, Theory, Practice, Anderson, P. V., Brockman, R. J., and Miller, C. R., Eds., Baywood Publishing Corp., Farmingdale, NY, 1983.
Beckman, S. and Mowery, D., Getting the right products to market: a study of product definition in the electronics industry, Design Management Journal, Spring, 54–61, 1993.
Bishop, J. and Scott, K., An examination of organizational and team commitment in a self-directed team environment, Journal of Applied Psychology, 85(3), 439–450, 2000.
Bishop, J., Scott, K., and Burroughs, S., Support, commitment, and employee outcomes in a team environment, Journal of Management, 26(6), 1113–1132, 2000.
Burnham, C., The consequences of collaboration: discovering expressive writing in the disciplines, The Writing Instructor, 6, 17–24, 1986.
Chiles, T. and McMackin, J., Integrating variable risk preferences, trust, and transaction cost economics, Academy of Management Review, 21, 73–100, 1996.
Cohen, S. and Bailey, D., What makes teams work: group effectiveness research from the shop floor to the executive suite, Journal of Management, 23(3), 239–290, 1997.
Colomb, G. and Williams, J., Perceiving structure in professional prose: a multiply determined experience, In Writing in Nonacademic Settings, Odell, L. and Goswami, D., Eds., Guilford, New York, pp. 87–128, 1985.
Coovert, M., Craiger, J., and Cannon-Bowers, J., Innovations in modeling and simulating team performance: implications for decision making, In Team Effectiveness and Decision Making in Organizations, Guzzo, R. A. and Salas, E., Eds., Jossey-Bass, San Francisco, pp. 149–203, 1996.
Couture, B. and Rymer, J., Situational exigence: composing processes on the job by writer's role and task value, In Writing in the Workplace: New Research Perspectives, Spilka, R., Ed., Southern Illinois University Press, Carbondale, IL, 1993.
Currid, C., Computing Strategies for Reengineering your Organization, Prima Publishing, Rocklin, CA, 1994.
Dannels, D., Learning to be professional, Journal of Business and Technical Communication, 14(1), 5–37, 2000.
Davy, J., Outsourcing human resources headaches, Managing Office Technology, 43(7), 6–8, 1998.
Dharwadkar, R., George, G., and Brandes, P., Privatization in emerging economies: an agency theory perspective, Academy of Management Review, 27(3), 650–670, 2002.
DiRomualdo, A. and Gurbaxani, V., Strategic intent for IT outsourcing, Sloan Management Review, 39(4), 67–80, 1998.
Dobrin, D., What's technical about technical writing, In New Essays in Technical and Scientific Communication: Research, Theory, Practice, Anderson, P., Brockman, R., and Miller, C., Eds., Baywood Publishing Company, Farmingdale, NY, 1983.
Dougherty, D., Interpretive barriers to successful product innovation in large firms, Organization Science, 3(2), 179–202, 1992.
Duncan, N., Capturing IT infrastructure flexibility: a study of resource characteristics and their measure, Journal of Management Information Systems, 12(2), 1995.
Dutton, J. and Dukerich, J., Keeping an eye on the mirror: image and identity in organizational adaptation, Academy of Management Journal, 34(3), 517–554, 1991.
Earl, M., Management Strategies for Information Technology, Prentice-Hall, London, 1989.
Eisenhardt, K., Agency theory: an assessment and review, Academy of Management Review, 14(1), 57–74, 1989.
Eisenhardt, K. and Bourgeois, L., Politics of strategic decision making in high velocity environments: toward a midrange theory, Academy of Management Journal, 31, 737–770, 1988.
Elitzur, R. and Wensley, A., Can game theory help us to understand information service outsourcing contracts? In Strategic Sourcing of Information Systems, Willcocks, L. and Lacity, M., Eds., John Wiley and Sons, Ltd., Chichester, U.K., pp. 103–136, 1998.
Feldman, D., The development and enforcement of group norms, Academy of Management Review, 9, 47–53, 1984.
Fielden, T., Keeping your IT partners on a short leash, InfoWorld, 23(7), 52, 2001.
Forbes, When should you outsource IT functions? Forbes, 1(1), 54–55, 1998.
Friedlander, F., Patterns of individual and organizational learning, In The Executive Mind, Srivastva, S. and Associates, Eds., Jossey-Bass, San Francisco, 1993.
Gibson, C., From knowledge accumulation to accommodation: cycles of collective cognition in work groups, Journal of Organizational Behavior, 22, 121–134, 2001.
Grover, V., Cheon, M., and Teng, J., An evaluation of the impact of corporate strategy and the role of information technology on IS functional outsourcing, European Journal of Information Systems, 3(3), 179–191, 1994.
Grover, V. and Ramanlal, P., Six myths of information and markets: information technology networks, electronic commerce, and the battle for consumer surplus, MIS Quarterly, 23(4), 465–495, 1999.
Guzzo, R., Introduction: at the intersection of team effectiveness and decision making, In Team Effectiveness and Decision Making in Organizations, Guzzo, R. A. and Salas, E., Eds., Jossey-Bass, San Francisco, pp. 1–8, 1995.
Hackman, R., Conclusion: creating more effective work groups in organizations, In Groups That Work (and Those That Don't): Creating Conditions for Effective Teamwork, Hackman, J. R., Ed., Jossey-Bass, San Francisco, pp. 479–504, 1990.
Hirschheim, R. and Lacity, M., Backsourcing: an emerging trend, Journal of Strategic Outsourcing Information, 45, 78–89, 1998.
Hitt, M., Keats, B., and DeMarie, S., Navigating in the new competitive landscape: building strategic flexibility and competitive advantage in the 21st century, Academy of Management Executive, 12(4), 22–42, 1998.
Hollenbeck, J., Williams, C., and Klein, H., An empirical examination of the antecedents of commitment to difficult goals, Journal of Applied Psychology, 74, 18–23, 1989.
Hopper, M., Rattling SABRE—new ways to compete in information, Harvard Business Review, 68 (May–June), 118–125, 1990.
Jelinek, M., Institutionalizing Innovation: A Study of Organizational Learning Systems, Praeger Publishers, New York, 1979.
Katz, R. and Tushman, M., Communication patterns, project performance, and task characteristics: an empirical evaluation and integration in an R&D setting, Organizational Behavior and Human Performance, 23, 139–162, 1979.
Katzenbach, J. and Smith, D., The Wisdom of Teams, HarperBusiness, New York, 1993.
Keil, M., Escalation of commitment in information systems development: a comparison of three theories, Academy of Management Journal, Best Paper Proceedings, 348–365, 1995.
Keller, R., Technology-information processing fit and the performance of R&D project groups: a test of contingency theory, Academy of Management Journal, 37(1), 167–179, 1994.
Kern, T., Willcocks, L., and van Heck, E., The winner's curse in IT outsourcing: strategies for avoiding relational trauma, California Management Review, 44(2), 47–69, 2002.
Kleimann, S., The reciprocal relationship of workplace culture and review, In Writing in the Workplace: New Research Perspectives, Spilka, R., Ed., Southern Illinois University Press, Carbondale, IL, 1993.
Klimoski, R. and Jones, R., Staffing for effective group decision making: key issues in matching people and teams, In Team Effectiveness and Decision Making in Organizations, Guzzo, R. A. and Salas, E., Eds., Jossey-Bass, San Francisco, pp. 291–332, 1995.
Knorr, K. and Knorr, D., From Scenes to Scripts: On the Relationship Between Laboratory Research and Published Paper in Science, Research Memorandum No. 132, Institute for Advanced Studies, Vienna, 1978.
Kramer, R., Divergent realities and convergent disappointments in the hierarchic relation: trust and the intuitive auditor at work, In Trust in Organizations: Frontiers of Theory and Research, Kramer, R. M. and Tyler, T. R., Eds., Sage, Thousand Oaks, CA, pp. 216–245, 1996.
Lacity, M. and Hirschheim, R., Theoretical foundations of outsourcing decisions: the political model, In Information Systems Outsourcing: Myths, Metaphors, and Realities, John Wiley and Sons, Ltd., Chichester, U.K., pp. 37–47, 1993.
Lacity, M. and Hirschheim, R., Beyond the Information Systems Outsourcing Bandwagon, John Wiley and Sons, Ltd., New York, 1995.
Lacity, M. and Willcocks, L., An empirical investigation of information technology sourcing practices: lessons from experience, MIS Quarterly, 22(3), 363–408, 1998.
Lacity, M., Willcocks, L., and Feeny, D., The value of selective IT sourcing, Sloan Management Review, 37(3), 13–20, 1996.
Lamir, F., Contract interpretation, Air Force Contract Law Center Notes, Air Force Material Command, 1992.
Latham, G. and Yukl, G., A review of research on the application of goal setting in organizations, Academy of Management Journal, 18, 824–845, 1975.
Lee, J-N. and Kim, Y-G., Effect of partnership quality on IS outsourcing: conceptual framework and empirical validation, Journal of Management Information Systems, 15(4), 29–52, 1999.
Lee, J-N., Huynh, M., Kwok, R., and Pi, S., IT outsourcing evolution: past, present, future, Communications of the ACM, 46(5), 84, 2003.
Lewicki, R. and Bunker, B., Developing and maintaining trust in work relationships, In Trust in Organizations: Frontiers of Theory and Research, Kramer, R. M. and Tyler, T. R., Eds., Sage, Thousand Oaks, CA, pp. 114–139, 1996.
Lievens, A. and Moenaert, R., New service teams as information processing systems: reducing innovative uncertainty, Journal of Service Research, 3(1), 46–65, 2000.
Loh, L. and Venkatraman, N., Diffusion of information technology outsourcing: influence sources and the Kodak effect, Information Systems Research, 3, 334–358, 1992.
MacMillan, H., Managing information systems: three key principles for general managers, Journal of General Management, 22(3), 12–22, 1997.
MacNeil, I., Contracts: adjustments of long-term economic relations under classical, neoclassical and relational contract law, Northwestern University Law Review, 72, 854–906, 1978.
Marks, M., Zaccaro, S., and Mathieu, J., Performance implications of leader briefings and team-interaction training for team adaptation to novel environments, Journal of Applied Psychology, 85(6), 971–986, 2000.
McIntyre, R. and Salas, E., Measuring and managing for team performance: lessons from complex environments, In Team Effectiveness and Decision Making in Organizations, Guzzo, R. A. and Salas, E., Eds., Jossey-Bass, San Francisco, pp. 9–45, 1995.
Meyerson, D., Weick, K., and Kramer, R., Swift trust and temporary groups, In Trust in Organizations: Frontiers of Theory and Research, Kramer, R. M. and Tyler, T. R., Eds., Sage, Thousand Oaks, CA, pp. 166–195, 1996.
Mohammed, S. and Dumville, B., Team mental models in a team-knowledge framework: expanding theory and measurement across disciplinary boundaries, Journal of Organizational Behavior, 22, 89–106, 2001.
Nam, K., Rajagopalan, S., Rao, H., and Chaudhury, A., A two-level investigation of information systems outsourcing, Communications of the ACM, 39(7), 36–44, 1996.
Nicholson, N., Career systems in crisis: change and opportunity in the information age, Academy of Management Executive, 10(4), 40–51, 1996.
Niederman, F., Brancheau, J., and Wetherbe, J., Information systems management issues for the 1990s, MIS Quarterly, 15(4), 474–500, 1991.
Odell, L. and Goswami, D., Writing in Nonacademic Settings, Guilford Press, New York, 1985.
Odell, L., Goswami, D., Herrington, A., and Quick, D., Studying writing in non-academic settings, In New Essays in Technical and Scientific Communication: Research, Theory, Practice, Anderson, P., Brockman, R., and Miller, C., Eds., Baywood Publishing Company, Farmingdale, NY, 1983.
O'Reilly, C., Corporations, culture and commitment: motivation and social control in organizations, California Management Review, 31, 9–25, 1989.
Outsourcing Institute, The top ten reasons companies outsource, retrieved November 8, 1997, from http://www.outsourcing.com//getstart/topten.html, 1996.
Pinto, M. and Pinto, J., Project team communication and cross-functional cooperation in new program development, Journal of Product Innovation Management, 7, 200–212, 1990.
Plunkett, D., The creative organization: an empirical investigation of the importance of participation in decision making, Journal of Creative Behavior, 24, 140–148, 1990.
Remenyi, D., Ten common information systems mistakes, Journal of General Management, 21(4), 78–89, 1996.
Richmond, W. and Seidmann, A., Software development outsourcing contract: structure and business value, Journal of Management Information Systems, 10(1), 57–72, 1993.
Rotter, J., Interpersonal trust, trustworthiness, and gullibility, American Psychologist, 35, 1–7, 1980.
Rousseau, D., Assessing organizational culture: the case for multiple methods, In Organizational Climate and Culture, Schneider, B., Ed., Jossey-Bass, San Francisco, pp. 153–192, 1990.
Salas, E., Dickinson, T., Converse, S., and Tannenbaum, S., Toward an understanding of team performance and training, In Teams: Their Training and Performance, Swezey, R. W. and Salas, E., Eds., Ablex, Norwood, NJ, pp. 3–29, 1992.
Sashkin, M., Participative management is an ethical imperative, Organizational Dynamics, Spring, 4–22, 1984.
Scarbrough, H., Blackboxes, hostages and prisoners, Organization Studies, 16(6), 991–998, 1995.
Selzer, J., What constitutes a “readable” technical style? In New Essays in Technical and Scientific Communication: Research, Theory, Practice, Anderson, P., Brockman, R., and Miller, C., Eds., Baywood Publishing Company, Farmingdale, NY, 1983.
Senge, P., The Fifth Discipline, Doubleday Currency, New York, 1990.
Sengupta, K. and Zviran, M., Measuring user satisfaction in an outsourcing environment, IEEE Transactions on Engineering Management, 44(4), 414–421, 1997.
Shockley-Zalabak, P., Ellis, K., and Cesaria, R., IABC Research Foundation unveils new study on trust, Communication World, 17(6), 7–9, 2000.
Simons, T. and Peterson, R., Task conflict and relationship conflict in top management teams: the pivotal role of intragroup trust, Journal of Applied Psychology, 85(1), 102–111, 2000.
Starbuck, W. and Milliken, F., Challenger: fine-tuning the odds until something breaks, Journal of Management Studies, 25, 319–341, 1988.
Staw, B., The consequences of turnover, Journal of Occupational Behavior, 12, 253–270, 1980.
Staw, B. and Fox, F., Escalation: the determinants of commitment to a chosen course of action, Human Relations, 30, 431–450, 1977.
Straub, D., Weill, P., and Stewart, K., Strategic control of IT resources: a test of resource-based theory in the context of selective IT outsourcing, working paper, Georgia State University and MIT Sloan School of Management, 2002.
Teng, J., Cheon, M., and Grover, V., Decisions to outsource information systems functions: testing a strategy–theoretic discrepancy model, Decision Sciences, 26(1), 75–103, 1994.
Toscano, L. and Waddell, J., Business transformation outsourcing, Public Utilities Fortnightly, 4(2), 30, 2003.
Trice, A., Occupational Subcultures in the Workplace, ILR Press, Cornell University, Ithaca, NY, 1993.
Tsang, E., Transaction cost and resource-based explanations of joint ventures: a comparison and synthesis, Organization Studies, 21(1), 215–242, 2000.
Tyerman, A. and Spencer, C., A critical test of the Sherifs' robber's cave experiments: intergroup competition and cooperation between groups of well-acquainted individuals, Small Group Behavior, 14, 515–531, 1983.
Venkatraman, N., Beyond outsourcing: managing IT resources as a value center, Sloan Management Review, 38(3), 51–64, 1997.
Wegner, D., Transactive memory: a contemporary analysis of the group mind, In Theories of Group Behavior, Mullen, B. and Goethals, G., Eds., Springer-Verlag, New York, pp. 185–208, 1986.
Weick, K. and Roberts, K., Collective mind in organizations: heedful interrelating on flight decks, Administrative Science Quarterly, 38, 357–381, 1993.
Weill, P. and Broadbent, M., Leveraging the New Infrastructure: How Market Leaders Capitalize on Information Technology, Harvard Business School Press, Boston, 1998.
Westley, F. and Mintzberg, H., Visionary leadership and strategic management, Strategic Management Journal, 10, 17–33, 1989.
Wheelwright, S. and Clark, K., Revolutionizing Product Development: Quantum Leaps in Speed, Efficiency, and Quality, The Free Press, New York, 1992.
Willcocks, L., Fitzgerald, G., and Lacity, M., To outsource IT or not? Recent research on economics and evaluation practice, European Journal of Information Systems, 5(3), 143–161, 1996.
Williamson, O., Markets and Hierarchies: Analysis and Antitrust Implications, The Free Press, New York, 1983.
Williamson, O., Economic organization: the case for candor, Academy of Management Review, 21, 48–58, 1996.
Woolridge, B. and Floyd, S., Strategic process effects on consensus, Strategic Management Journal, 10, 295–302, 1990.
Zand, D., Trust and managerial problem solving, Administrative Science Quarterly, 17, 229–239, 1972.
METHODOLOGIES FOR THE DESIGN OF NEGOTIATION PROTOCOLS ON E-MARKETS*

* Martin Bichler, IBM T. J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598 and Arie Segev, Fisher Center for IT and Marketplace Transformation, Walter A. Haas School of Business, University of California, Berkeley, CA 94720-1930.

ABSTRACT

Markets play a central role in the economy, facilitating the exchange of information, goods, services, and payments. In recent years, there has been an enormous increase in the role of information technology, culminating in the emergence of electronic marketplaces. Negotiations are at the core of each market transaction. Economists, game theorists, and computer scientists have started to take a direct role and have designed different kinds of negotiation protocols for computer products,
travel, insurance, or electric power. The design of negotiation protocols is a challenging research direction and involves a number of disciplines, including information systems development, game theory, mechanism design theory, simulation, and laboratory experimentation. In this paper we survey the key techniques for the design of negotiation protocols and describe the main steps in the design of a multi-attribute auction protocol for a financial market. The case illustrates the interplay of the various methodologies.
INTRODUCTION

The emergence of the Internet and the World Wide Web has led to the creation of new electronic marketplaces. An electronic market system can reduce customers' costs of obtaining information about the prices and product offerings of alternative suppliers, as well as suppliers' costs of communicating information and negotiating about prices and product characteristics [4]. The past few years have shown an enormous growth in the number of Internet marketplaces. Electronic catalogs were the first step in this direction. Many companies are now moving beyond simple price setting and online order taking to creating entirely new electronic marketplaces. These companies are setting up marketplaces for trading products such as phone minutes, gas supplies, and electronic components, a field that is expected to grow enormously over the next few years [10]. Like stock exchanges, these electronic markets must set up mechanisms for clearing transactions and for making sure that both buyers and sellers are satisfied.

The most widely used form of market mechanism today is the online auction. Companies like Onsale (http://www.onsale.com) or eBay (http://www.ebay.com) run live auctions where people outbid one another for computer gear, electronic components, and sports equipment. Different market mechanisms are appropriate in different situations, and there is no single solution for the various negotiation situations. What is special about the design of “electronic” market mechanisms is the fact that a designer has many more possibilities to design a mechanism than those available in physical markets. Computer networks make it easy to communicate large amounts of information relevant to a market transaction, not just price quotes. Using decision support software it is possible to analyze this information quickly, in order to make more informed and, hopefully, better decisions.

The basic question is, what is a good market mechanism for a given marketplace? Unfortunately, to date there is no general, computational theory of negotiations and the design of market mechanisms. The design of electronic markets is a challenging task and involves a number of disciplines. This paper is an approach towards establishing a toolset for the design of negotiation protocols on electronic markets. This field is also referred to as “market design” by many economists [30], although the scope of a market transaction in general is much wider, including tasks such as information gathering as well as settlement. In the next section, we survey the most important methods, including game theory, mechanism design theory, simulation, and laboratory experimentation. The third section then describes the design of a multi-attribute auction mechanism for trading over-the-counter (OTC) financial derivatives. Although this section describes a very particular market, it illustrates how multiple techniques can help in answering the strategic questions of a certain negotiation situation. Finally, the findings are summarized and open research questions in this field are outlined. Throughout the paper we use the terms “negotiation protocol” and “market mechanism” interchangeably.
DESIGN METHODOLOGIES

Market design creates a meeting place for buyers and sellers and a format for transactions. Recently, economists, game theorists, and computer scientists have started to take a direct role and have designed different kinds of market mechanisms for electronic markets, e.g., auction markets for electric power, railroad schedules, and procurement markets for electronic components. A recent example of successful market design was the case of radio spectrum auctions by the U.S. Federal
Communications Commission (FCC). Between 1994 and early 1997, the FCC raised US $23 billion from thirteen auctions. The difficulty was to design an auction procedure to promote price discovery of complicated, inter-related packages of spectrum rights in different regions. Because licenses may be more valuable in combination than separately, it was necessary to allow bidders to bid on bundles, in a way that allowed them to change packages in response to price changes during the auction. For this reason, multi-round auctions were adopted, and much of the design focus was on rules intended to promote efficient price discovery by preventing bidders from concealing their interest in certain licenses and then bidding at the last minute. Economists successfully deployed game-theoretic analysis in order to design the bidding process in the case of the FCC spectrum auctions. Game theory is important, but it is by far not the only technique needed for successful market design. Electronic market design practice is in its early stages and draws on different methodologies from economics and computer science.

Game-Theoretic Analysis of Negotiations

The classic microeconomic theory of general equilibrium as formulated by Leon Walras and refined by his successors Samuelson [34], Arrow [1], and others depicts the outcome of competition, but not the activity of competing. Game-theoretic models, by contrast, view competition as a process of strategic decision making under uncertainty. It is only relatively recently that the depth and breadth of robust game-theoretic knowledge has become sufficient for game theorists to offer practical advice on institutional design. The literature in this field is huge, and in this section we can only introduce the most basic concepts; for a more rigorous discussion see Refs [2] and [11]. Nash [27,28] initiated two related, influential approaches. He proposed a model which predicted an outcome of bargaining based only on information about each bargainer's preferences, as modeled by an expected utility function over the set of feasible agreements and the outcome which would result in case of disagreement. Nash described a two-person multi-item bargaining problem with complete information and used the utility theory of von Neumann and Morgenstern [42]. Nash's approach has influenced many researchers and initiated extensions such as the analysis of repeated or sequential bargaining games. In sequential games the players do not bid at the same time; rather, one player moves and then the other player responds. These dynamic games are more difficult to solve than static ones. Rubinstein [32] calculated the perfect equilibrium in a bargaining model that involved a pie, two players, and sequential alternating offers of how to split the pie. Each player has his own cost per time period. Rubinstein shows that if player one has a lower time cost than player two, the entire pie will go to player one. Harsanyi and Selten [13] extended Nash's theory of two-person bargaining games with complete information to bargaining situations with incomplete information and found that there are several equilibria; a shortcoming of these models is that they have little predictive power.
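To make Rubinstein's result concrete, consider the discount-factor variant of the alternating-offers model, in which player i discounts each future period by a factor δi. In the unique subgame-perfect equilibrium, the first proposer receives a share (1 − δ2)/(1 − δ1δ2) of the pie, so the more patient player captures more of the surplus. The sketch below is our own illustration of this textbook formula, not code from the original paper:

```python
def rubinstein_shares(delta1: float, delta2: float) -> tuple[float, float]:
    """Subgame-perfect equilibrium split of a unit pie in the alternating-
    offers bargaining game (player 1 proposes first; delta_i is player i's
    per-period discount factor, 0 < delta_i < 1)."""
    share1 = (1 - delta2) / (1 - delta1 * delta2)
    return share1, 1 - share1

# The more patient player (higher discount factor) captures more of the pie:
print(rubinstein_shares(0.9, 0.5))  # patient proposer:   (~0.91, ~0.09)
print(rubinstein_shares(0.5, 0.9))  # impatient proposer: (~0.18, ~0.82)
```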
Besides bargaining, single-sided auction mechanisms such as the English, Vickrey, Dutch, and first-price sealed-bid auctions have been another very active area of game-theoretic analysis (see, for example, Refs [23] and [26]). The most thoroughly researched auction model is the symmetric independent private values (SIPV) model. In this model all bidders are symmetric (indistinguishable), and each bidder has a private valuation for the good that is independent and identically distributed. The bidders are risk neutral concerning their chance of winning the auction, and so is the seller. Under these assumptions, the bidders' behavior can be modeled as a non-cooperative game under incomplete information. An example of an object which fits the SIPV model would be a work of art purchased purely for enjoyment. It is interesting to find out whether, under these assumptions, the auction formats achieve the same equilibrium price, or whether we can rank the different auction formats in any order. The surprising outcome of the SIPV model is that with risk-neutral bidders all four auction formats are payoff equivalent. This is also known as the revenue equivalence theorem (see Ref. [29] or [43], p. 372 ff), which does not hold if we remove some of the basic assumptions of the SIPV model (such as risk neutrality of bidders).
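Revenue equivalence is easy to check numerically. Under the SIPV model with valuations drawn uniformly from [0, 1], the standard equilibrium bid in a first-price sealed-bid auction is b(v) = (n − 1)v/n, while truthful bidding is optimal in a second-price auction; both formats should then yield the same expected revenue, (n − 1)/(n + 1). The following Monte Carlo sketch is our own illustration under these textbook assumptions:

```python
import random

def expected_revenues(n_bidders: int = 5, trials: int = 100_000) -> tuple[float, float]:
    """Monte Carlo comparison of first-price and second-price sealed-bid
    auctions under the SIPV model with valuations uniform on [0, 1]."""
    first_total = second_total = 0.0
    for _ in range(trials):
        values = sorted(random.random() for _ in range(n_bidders))
        # First-price: bidders shade to b(v) = (n-1)/n * v, so the winner
        # (highest valuation) pays (n-1)/n times the top valuation.
        first_total += (n_bidders - 1) / n_bidders * values[-1]
        # Second-price: truthful bids; the winner pays the second-highest value.
        second_total += values[-2]
    return first_total / trials, second_total / trials

# Both averages converge to (n-1)/(n+1), i.e., about 0.667 for five bidders.
print(expected_revenues())
```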
Another approach to analyzing auctions is the common value model, which assumes that the valuation of an object to a bidder is determined both by the private signal mentioned above and by one or more external factors, such as its resale value or the opinions of other bidders. A frequently observed phenomenon in these auctions is the so-called winner's curse, where the winner bids more than the good's true value and suffers a loss. The main lesson learned from the common value model is that bidders should shade their bids, as the auction always selects as the winning bidder the one who received the most optimistic estimate of the item's value. Although some of their qualitative predictions have received some support, the existing models have performed poorly as point predictors in laboratory experiments [17]. For example, Balakrishnan et al. [5] point to several empirical studies, some of which suggest that fundamental concepts in game theory fail. There has also been criticism concerning many of the basic assumptions of game-theoretic auction models and their validity for real-world environments [31]. Nevertheless, game theory, together with experimental economics, has led to considerable knowledge about auction mechanisms. Moreover, game theory is often seen as a basic guideline for implementing negotiation strategies in agent-based environments, where software agents behave in a more rational way and have greater computational abilities than human agents [40].

Mechanism Design Theory

Hurwicz [15] was one of the first to go beyond traditional equilibrium and game-theoretic analysis to actively focus on the design of new institutions and resource allocation mechanisms. Mechanism design theory differs from game theory in that game theory takes the rules of the game as given, while mechanism design theory asks about the consequences of different types of rules. Mechanism design theory provides helpful guidelines for electronic market design. Formally, a mechanism $M$ maps messages or signals from the agents, $S = \{s_1, \ldots, s_m\}$, into a solution $f$, i.e., $M: S \rightarrow f$, as a function of the information that is known by the individuals. An important type of mechanism in this context is the direct revelation mechanism, in which agents are asked to report their true private information confidentially. Hurwicz mentions a number of criteria for a new mechanism, which he calls (Pareto-)satisfactoriness. First, the outcome of the mechanism should be feasible. In addition, one would wish that the mechanism possesses some equilibrium for every class of environments it is designed to cover and that it produces a uniquely determined allocation, i.e., only a single equilibrium price. Finally, a mechanism should be non-wasteful, i.e., (Pareto-)efficient. Formally, a solution $f$ is Pareto-efficient if there is no other solution $g$ such that there is some agent $j$ for which $U_j(r_j^g) > U_j(r_j^f)$ and, for all agents $k$, $U_k(r_k^g) \geq U_k(r_k^f)$.
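The Pareto criterion just defined can be operationalized directly. The toy checker below, our own illustration with hypothetical names, tests whether a candidate solution is Pareto-efficient relative to a finite set of alternative solutions, given each agent's utility for each solution:

```python
from typing import Dict, List

def is_pareto_efficient(candidate: str,
                        alternatives: List[str],
                        utility: Dict[str, List[float]]) -> bool:
    """True if no alternative solution makes some agent strictly better off
    while leaving every agent at least as well off (i.e., no solution
    Pareto-dominates the candidate)."""
    u_f = utility[candidate]
    for g in alternatives:
        u_g = utility[g]
        if all(ug >= uf for ug, uf in zip(u_g, u_f)) and \
           any(ug > uf for ug, uf in zip(u_g, u_f)):
            return False  # g Pareto-dominates the candidate
    return True

# Two agents, three candidate solutions (one utility per agent):
utils = {"f": [2.0, 2.0], "g": [3.0, 2.0], "h": [4.0, 1.0]}
print(is_pareto_efficient("f", ["g", "h"], utils))  # False: g dominates f
print(is_pareto_efficient("g", ["f", "h"], utils))  # True
```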
Hurwicz has also stressed that incentive constraints should be considered coequally with the resource (and budget) constraints that are the focus of classic microeconomic models. The need to give people an incentive to share private information and exert effort may impose constraints on economic systems just as much as the limited availability of resources. Incentive compatibility is the concept introduced by Hurwicz [15] to characterize those mechanisms for which participants in the process would not find it advantageous to violate the rules of the mechanism. If a direct mechanism is incentive compatible, then each agent knows that his best strategy is to follow the rules, no matter what the other agents do. Such a strategic structure is referred to as a dominant strategy game and has the property that no agent needs to know or predict anything about the others' behavior. Gibbard [12] made a helpful observation that is now called the revelation principle: in order to find the most efficient mechanism, it is sufficient to consider only direct revelation mechanisms. In other words, for any equilibrium of any arbitrary mechanism, there is an incentive-compatible direct-revelation mechanism that is essentially equivalent. Therefore, by analyzing incentive-compatible direct-revelation mechanisms, one can characterize what can be accomplished in all possible equilibria of all possible mechanisms. In the following we summarize
some of the most important guidelines one can derive from game theory and mechanism design that are relevant to the design of electronic markets.

† The solution of a mechanism is in equilibrium if no agent wishes to change its message given the information it has about other agents. When designing a mechanism, one would like to know whether it converges towards equilibrium and whether it produces a uniquely determined allocation. In a Nash equilibrium, each agent maximizes his expected utility given the strategies of the other agents.
† A general criterion for evaluating a mechanism is Pareto efficiency, meaning that no agent could improve its allocation without making another agent worse off. In the Prisoner's Dilemma, for example, the Nash equilibrium, in which both players defect, is not a Pareto-efficient solution.
† The solution of a mechanism is stable, or in the core [41], if there is no subset of agents that could have done better by coming to an agreement outside the mechanism. If a mechanism is stable, then it is Pareto-efficient, although the reverse is not true [39].
† A direct auction is incentive compatible if honest reporting of valuations is a Nash equilibrium. A particularly strong and strategically simple case is an auction where truth telling is a dominant strategy, as in the second-price sealed-bid auction sketched after this list. This is a desirable feature because an agent's decision depends only on its local information, and it gains no advantage by expending effort to model other agents [22,26]. Mechanisms that require agents to learn or estimate others' private information do not respect privacy.
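As a concrete illustration of the incentive-compatibility guideline, the canonical dominant-strategy mechanism is the second-price sealed-bid (Vickrey) auction: the highest bidder wins but pays the second-highest bid, so no bidder can gain by misreporting its valuation. A minimal sketch of our own:

```python
def second_price_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Vickrey auction: the highest bidder wins and pays the second-highest
    bid, which makes truthful bidding a dominant strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

# With truthful reports of private valuations, 'a' wins and pays b's bid:
print(second_price_auction({"a": 10.0, "b": 7.0, "c": 4.0}))  # ('a', 7.0)
# If 'a' underbids below its true value of 10, it only risks losing an item
# worth more than the price; overbidding cannot lower the price 'a' pays.
print(second_price_auction({"a": 6.0, "b": 7.0, "c": 4.0}))   # ('b', 6.0)
```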
It is important to keep these guidelines in mind when designing a new mechanism. However, the designer of an electronic marketplace has to solve numerous finer-grained problems. One has to define the closing conditions, such as the elapsed time in an open-cry auction and the deadline for sealed-bid auctions. In all auctions one can specify minimum starting bids and minimum bid increments. In addition, one has to decide about participation fees, the possibility of multiple rounds, etc. Prototyping and laboratory experiments can be valuable aids.

Computational Economics and Simulation
Computational methods have spread across the broad front of economics to the point that there is now almost no area of economic research that is not deeply affected. Although computational exploration of markets is relatively new to economic research, there are several examples where researchers successfully deployed computational methods as a tool to study complex environments. In a 1991 research report on computational economics, Kendrick [19] mentions that "… simulation studies for devising institutions that improve markets such as varieties of electronic market making and looking at search techniques of market participants and their results are necessary." The traditional economic models are based on a top-down view of markets or transactions. In general equilibrium theory, for example, solutions depend on an omnipotent auctioneer who brings all production and consumption plans in the economy into agreement. The mathematical modeling of dynamic systems such as artificial societies and markets often requires too many simplifications, and the resulting models may therefore not be valid. Operations research and the systems sciences often use simulation methods to analyze stochastic problems that would otherwise require very complex mathematical models. Advances in information technology have made it possible to study representations of complex dynamic systems that are far too complex for analytical methods, such as weather forecasting. Also, a number of economists have started to explore different approaches to economic modeling. Here the model-maker has to specify in detail how agents evaluate information, form expectations, evolve strategies, and execute their plans. Simulation is an appropriate methodology in all of these cases. This newly developing field, also called agent-based computational economics
(ACE), is roughly defined as the computational study of economies modeled as evolving decentralized systems of autonomous interacting agents [36], and is a specialization to economics of the basic artificial life paradigm [37]. The models address questions that are often ignored in analytical theory, such as the role of learning, institutions, and organization. A central problem for ACE is to understand the apparently spontaneous appearance of regularity in economic processes, such as the unplanned coordination of trading activities in decentralized market economies that economists associate with Adam Smith's invisible hand. Agents in ACE models are typically modeled as heterogeneous entities that determine their interactions with other agents and with their environment on the basis of internalized data and behavioral rules. That is, they tend to have a great deal more internal cognitive structure and autonomy than can be represented in mathematical models. Evolution and a broader range of agent interactions are typically permitted in ACE models. A good example is the trade network game (TNG) developed by Tesfatsion [38] for studying the formation and evolution of trade networks. TNG consists of successive generations of resource-constrained traders who choose and refuse trade partners on the basis of continually updated expected payoffs, and who evolve their trade strategies over time. Each agent is instantiated as an autonomous, endogenously interacting software agent with internally stored state information and internal behavioral rules. The agents can therefore engage in anticipatory behavior. Moreover, they can communicate with each other at event-triggered times. Experimentation with alternative specifications for market structure, search and matching among traders, expectation formation, and evolution of trade-site strategies can easily be undertaken. Roth [30] used computational experiments and simulations in order to test the design of professional labor markets: "Computational methods will help us analyze games that may be too complex to solve analytically." When game theory is used primarily as a conceptual tool, it is a great virtue to concentrate on very simple games. Computation can play different roles, from explorations of alternative design choices, to data exploration, to theoretical computation (i.e., from using computational experiments to test alternative designs, to directly exploring complex market data, to exploring related simple models in ways that nevertheless elude simple analytical solutions). When analyzing the economic behavior of complex matching mechanisms, simulation can be an excellent supplement to the tool set of game theory.
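A toy version of such an agent-based trading model can be written in a few lines. The sketch below is loosely inspired by the TNG setup described above (it is not Tesfatsion's implementation): traders keep continually updated payoff estimates per partner and mostly choose the partner with the best estimate. All payoffs and parameters are invented:

```java
import java.util.Random;

// Toy agent-based trading loop in the spirit of the TNG described above:
// each trader keeps a running expected payoff per potential partner and
// prefers partners that paid off well before. All payoffs are hypothetical.
public class ToyTradeNetwork {
    public static void main(String[] args) {
        int n = 4, rounds = 1000;
        Random rng = new Random(7);
        double[][] truePayoff = new double[n][n];   // unknown to the agents
        double[][] estimate = new double[n][n];     // continually updated expectations
        int[][] count = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                truePayoff[i][j] = rng.nextDouble();

        for (int r = 0; r < rounds; r++) {
            for (int i = 0; i < n; i++) {
                // epsilon-greedy partner choice: mostly the best-estimated partner
                int partner = rng.nextDouble() < 0.1
                        ? pickRandom(rng, n, i) : bestPartner(estimate[i], i);
                double payoff = truePayoff[i][partner] + 0.1 * rng.nextGaussian();
                count[i][partner]++;                // incremental mean update
                estimate[i][partner] += (payoff - estimate[i][partner]) / count[i][partner];
            }
        }
        for (int i = 0; i < n; i++)
            System.out.println("trader " + i + " prefers partner " + bestPartner(estimate[i], i));
    }

    static int bestPartner(double[] est, int self) {
        int best = (self == 0) ? 1 : 0;
        for (int j = 0; j < est.length; j++)
            if (j != self && est[j] > est[best]) best = j;
        return best;
    }

    static int pickRandom(Random rng, int n, int self) {
        int j;
        do { j = rng.nextInt(n); } while (j == self);
        return j;
    }
}
```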
Experimental Economics
Laboratory experiments are an important complement to the set of methods we have described so far. They help to inform us about how people behave, not only in environments too complex to analyze analytically, but also in simple environments (in which economists' customary assumptions about behavior may not always be such good approximations). For market design it is useful to study new market mechanisms in the laboratory before introducing them in the field. Laboratory experimentation can facilitate the interplay between the evolution and modification of proposed new exchange institutions. Experimenters can repeat tests to understand and improve the features of new market mechanisms. The experimental literature on mechanisms such as auctions and one-to-one bargaining is particularly large. Many experimental observations of the outcomes of various types of auctions examine game-theoretic hypotheses such as the revenue equivalence theorem [16]. Others are designed not to test mathematically precise theories, but rather to test proposed new market mechanisms. McCabe, Rassenti, and Smith [24], for example, compare the properties of several new market institutions whose theoretical properties are as yet poorly understood. Banks et al. [6] tested innovative mechanisms for allocating and pricing a planned space station. Although the results of laboratory experiments are interesting, there is still the question of whether one can generalize the findings of experimental tests. The experimental sciences use induction as an underlying principle and assume that observed regularities will persist as long as the relevant underlying conditions remain substantially unchanged. What makes experiments so different
from other methods microeconomists use is the presence of human subjects. Vernon Smith [35] refers to this question as the “parallelism precept”: “Propositions about the behavior of individuals and the performance of institutions that have been tested in laboratory microeconomies apply also to non-laboratory microeconomies where similar ceteris paribus conditions hold.” Nowadays, experiments are commonplace in game theory, finance, electronic commerce and many other fields.
DESIGN OF A MATCHING MECHANISM FOR OTC DERIVATIVES
In the previous section we outlined some of the basic methodologies for electronic market design. Here we describe the design of a particular market mechanism for OTC financial derivatives (although the mechanism itself might well be suited to other domains). The section summarizes the results of a research project we have conducted during the past two years. It describes how methods from various disciplines can help in gaining an understanding of a new market mechanism. We will concentrate on auction mechanisms and omit other multi-lateral negotiation protocols such as unstructured bidding, following the standard view among economists that an auction is an effective way of resolving the one-to-many or many-to-many negotiation problem. For example, Milgrom [25] shows that, among a wide variety of feasible selling mechanisms, conducting an auction without a reserve price is an expected-revenue-maximizing mechanism. Auctions use the power of competition to drive the negotiation, and their simple procedural rules for resolving multi-lateral bargaining enjoy wide popularity. Traditionally, auctions are a means for automating price-only negotiations. In the field of OTC derivatives, however, it is important to support negotiations over a wider variety of attributes.

Trading Financial Derivatives
Automating negotiations in OTC financial markets is much harder than in traditional financial markets because products are not standardized, and therefore one has to negotiate on more than just the price. In the following we provide a brief introduction to OTC derivatives trading in order to illustrate the special needs of a market mechanism in this domain. Financial derivative instruments comprise a wide variety of financial contracts and securities, including forwards, options, futures, swaps, and warrants. Banks, securities firms, and other financial institutions are intermediaries who principally enable end-users to enter into derivative contracts. An option is the right to buy (call) or sell (put) an underlying instrument at a fixed point in time at a strike price. It is bought by paying the option premium/price upon conclusion of the contract, and it restricts the risk of the buyer to this premium. For example, the holder of a call purchases from the seller of the call the right to demand delivery of the underlying contract at the agreed price any time up to (American style) or exactly at (European style) the expiration of the option contract. The strategies of market participants, as well as their valuations of various product attributes, depend on the investor's market expectations, the investor's objective and risk tolerance, and the chosen market. Whereas standardized options are traded on an exchange, OTC options are traded off-floor. In general, option contracts are based on a number of preset terms and criteria:
† Type of option (call or put)
† Style (American or European)
† Underlying instrument and price
† Contract size or number of underlying instruments
† Maturity
† Strike price.
All these criteria influence the option premium no matter whether the options are traded on an exchange or OTC. For example, every change in the price of the underlying asset is reflected by
a change in the option premium. In order to set a certain option premium in the context of its strike price and other parameters, traders often use the so-called implied volatility, which indicates the volatility implied by a certain market price. Thus the value of a certain price can be measured independent of the strike price. For many traders it has become common practice to quote an option's market price in terms of implied volatility [21]. On an exchange all of these attributes are specified in advance and the only negotiable attribute is the price. This makes trading much easier; however, it reduces the number of derivatives traded to a small set of possible products. Financial engineers have created a wide range of OTC products tailored for specific purposes, ranging from plain vanilla options, where all important attributes are negotiated during the bargaining process, to exotic derivatives with certain predefined properties (see Ref. [20] for different types of options and details of option pricing). Trading OTC options is not bound to an organizational structure in which supply and demand are concentrated on a centralized trading floor. Potential buyers of OTC options bargain with a number of investment brokers or banks over attributes such as the strike price, the style, the maturity, and the premium of an option. Terms and conditions are usually determined not by auction but by way of bargaining. On the one hand, bargaining with banks or investment brokers is conducted over the phone, leading to high transaction costs for a deal. Unlike on electronic exchanges, investors lose their anonymity and also have to bear the contracting risk. On the other hand, negotiating over several attributes gives a participant many degrees of freedom during the negotiation and has the potential to achieve a better deal for both parties.

Multi-Attribute Auctions
It would be useful in this context to have a mechanism that takes multiple attributes of a deal into account when allocating it to a bidder. In other words, the mechanism should automate multi-lateral negotiations over multiple attributes of a deal. We have taken a heuristic approach and proposed a set of multi-attribute auction mechanisms. The buyer first has to define his preferences for a certain product in the form of a utility function. The buyer has to reveal this utility function to suppliers, whereas the suppliers do not have to disclose their private values. The mechanism then designates the contract to the supplier who best fulfills the buyer's preferences, i.e., who provides the highest overall utility for the buyer. In this way, traditional auction mechanisms are extended to support multi-attribute negotiations. Formally, a bid received by the auctioneer can be described as a vector $Q$ of $n$ relevant attributes indexed by $i$. We have a set $B$ of $m$ bids indexed by $j$. A vector $x^j = (x_1^j, \ldots, x_n^j)$ can be specified, where $x_i^j$ is the level of attribute $i$ in bid $b_j$. In the case of an additive scoring function $S(x^j)$, the buyer evaluates each relevant attribute $x_i^j$ through a scoring function $S_i(x_i^j)$. Under the assumption that the additive scoring function corresponds to the buyer's true utility function $U(x^j)$, the individual scoring function $S: Q \to \mathbb{R}$ translates the value of an attribute into "utility units." The overall utility $S(x^j)$ for a bid $b_j$ is then given by the sum of all individual scorings of the attributes. For a bid $b_j$ with values $x_1^j, \ldots, x_n^j$ on the $n$ relevant attributes, the overall utility is given by
$$S(x^j) = \sum_{i=1}^{n} w_i S_i(x_i^j) \quad \text{and} \quad \sum_{i=1}^{n} w_i = 1 \qquad (9.1)$$
In this scoring function we use weights $w_i$ to express the importance of the various attributes. A reasonable objective in allocating the deal is to do so in a way that maximizes the utility for the buyer, i.e., selecting the supplier's bid with the highest overall utility for the buyer.
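A minimal sketch of Equation 9.1 in code, using invented weights and attribute levels already scaled to [0, 1] (so that each $S_i$ is simply the identity), shows how the winning bid is determined and how the gap to the runner-up score, used by the second-score rule discussed below, falls out of the same computation:

```java
// Sketch of the additive scoring rule in Equation 9.1 and winner determination.
// Attribute levels, weights, and the individual scorings S_i are illustrative only.
public class MultiAttributeScoring {
    /** Overall utility of a bid: weighted sum of normalized attribute scores. */
    static double score(double[] bid, double[] weights) {
        double s = 0.0;
        for (int i = 0; i < bid.length; i++)
            s += weights[i] * bid[i];   // here S_i is the identity on [0,1]-scaled levels
        return s;
    }

    public static void main(String[] args) {
        double[] weights = {0.5, 0.3, 0.2};             // must sum to 1
        double[][] bids = {{0.9, 0.2, 0.4},             // one row per bidder
                           {0.6, 0.8, 0.7},
                           {0.7, 0.5, 0.9}};
        int winner = 0;
        double best = Double.NEGATIVE_INFINITY, second = Double.NEGATIVE_INFINITY;
        for (int j = 0; j < bids.length; j++) {
            double s = score(bids[j], weights);
            if (s > best) { second = best; best = s; winner = j; }
            else if (s > second) second = s;
        }
        // First-score rule: the winner is bound to the attributes of its own bid.
        // Second-score rule: the winner only has to match the runner-up's score,
        // so the gap (best - second) can be given back, e.g., as a higher price.
        System.out.printf("winner=%d best=%.3f gap=%.3f%n", winner, best, best - second);
    }
}
```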
The function $\max_{1 \leq j \leq m} S(x^j)$ gives us the utility of the winning bid and can be determined through various auction schemes. In a so-called first-score sealed-bid auction, the winner is awarded a contract containing the attributes $x^j$ of the winning bid. The multi-attribute English auction (also called the first-score open-cry auction) works in the same way, except that all bids are made available to the participants during an auction period. In a second-score sealed-bid auction we take the overall utility achieved by the second-highest bid, $S_{\max-1}$, and transform the gap to the highest overall utility, $(S_{\max} - S_{\max-1})$, into implied volatility. Consequently, the winning bidder can charge a higher option price in the contract. In the first-score and second-score sealed-bid schemes the auction closes after a certain preannounced deadline. In a multi-attribute English auction, bids are made public and the auction closes after a certain elapsed time in which no further bids are submitted.

An Internet-Based Marketplace for OTC Derivatives
Based on the above ideas we implemented an Internet-based marketplace for OTC derivatives. The electronic market system implements three multi-attribute auction mechanisms through the use of a buyer's client and a bidder's client. In a first step, a buyer specifies his utility, i.e., the scoring function for the bidders, using a Java applet that can be downloaded over the Web (see Figure 9.2). Eliciting the buyer's preferences is one of the key problems that needs to be addressed by the graphical user interface of the applet. We then need to map the buyer's preferences, as input through the applet, into coherent utility functions. Researchers in the fields of operations research and
FIGURE 9.2 Buyer client.
business administration have attempted to use utility theory to support decision making. In our work, we adopt concepts from classic utility theory and decision analysis in order to determine the buyer's utility function. Nowadays, decision analysis techniques such as multi-attribute utility theory (MAUT) [9,18], the analytic hierarchy process (AHP) [33], and conjoint analysis [3] are used in a broad range of software packages for decision making and can also be used to determine the utility function of a buyer. In our current implementation we use MAUT with an additive utility function. Figure 9.2 shows a screenshot of the Java applet we use in our implementation on the buyer side. The user interface consists of several areas. In the upper left field the buyer supplies a unique identifier, which he receives upon registration through a WWW form. Below it we find a list of relevant attributes for the auction. The negotiable attributes in this case are the strike price and the implied volatility; "duration," i.e., maturity, and "style" are fixed in advance. In the lower left panel users can define the individual utility functions for the negotiable attributes, which can be either continuous or discrete functions. From the buyer's input the applet compiles a request for bids (RFB) in XML format and sends the RFB via HTTP to an electronic brokerage service. The RFB contains the buyer's ID, the product description, and the parameters of the additive utility function. The brokerage service parses the RFB, stores all the important data in a database, and informs potential bidders via e-mail. After the auction begins, the buyer can query a list of submitted bids on the right-hand side of the applet, ranked by overall utility (third column). By clicking on a certain bid the buyer can see its details in the form of green numbers on the left-hand side of the applet. Bidders, on the other hand, download the RFB from the URL they received via e-mail into a bidder client, which allows them to enter parameters for all negotiable attributes and to upload an XML-formatted bid via HTTP to the brokerage service.
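The buyer-side flow just described can be sketched as follows. The RFB element names and the endpoint URL are hypothetical, since the chapter does not specify the actual schema; only the general pattern (compile an XML RFB, POST it via HTTP) is taken from the text:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of the buyer-side flow: compile an RFB as XML and POST it over HTTP
// to a brokerage service. Element names and the URL are invented placeholders.
public class RfbClient {
    public static void main(String[] args) throws Exception {
        String rfb = """
                <rfb buyerId="B-4711">
                  <product type="call" style="European" underlying="ATX"/>
                  <attribute name="strikePrice" negotiable="true" weight="0.6"/>
                  <attribute name="impliedVolatility" negotiable="true" weight="0.4"/>
                </rfb>""";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://broker.example.org/rfb"))  // placeholder endpoint
                .header("Content-Type", "application/xml")
                .POST(HttpRequest.BodyPublishers.ofString(rfb))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```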
Research Questions
The implementation of the electronic marketplace is helpful in obtaining a detailed understanding of the procedure. However, for the deployment of a new market mechanism in the field it is very important to understand the economic behavior of this new allocation scheme. The following is a list of selected questions that are important for the effective deployment of multi-attribute auctions in an electronic market:

† Do all multi-attribute auction formats achieve the same results? As we have seen, similar to conventional auction theory, there are various multi-attribute auction formats, namely the English, first-score, second-score, and Dutch multi-attribute auctions. The question is whether one of these formats is better than the others in terms of seller revenue.
† Are the equilibrium values achieved in a multi-attribute auction higher than those of single-attribute auctions with respect to the underlying utility function of the bid taker? A basic question of auction design is which auction format maximizes the bid taker's profit. In a multi-attribute auction the bidder has several possibilities for improving the value of a bid for the bid taker, sometimes even without increasing her costs, thereby creating joint gains for all parties. A more specific question is how the number of negotiable attributes impacts the results.
† Are multi-attribute and single-attribute auctions efficient? Allocative efficiency can be measured in terms of the percentage of auctions in which the high-value holder wins the item. We want to learn about the efficiency of multi-attribute auctions compared to single-attribute auctions.
Of course there are many more interesting research questions; in this paper, however, we focus on the most important ones listed above. In the next section we show how different methodologies can be used to tackle these questions.
GAME-THEORETIC ANALYSES
One approach is game theory. Little game-theoretical work has been done in the field of multi-attribute auctions so far. A thorough analysis of the design of multi-attribute auctions has been provided by Che [8], who studied design competition in government procurement using a model of two-dimensional auctions in which firms bid on price and quality. He focuses on an optimal mechanism for cases where bids are evaluated by a scoring rule designed by the procurer. Each bid contains a quality, $q$, and a price, $p$; quantity in this model is normalized to one. The buyer derives utility from a contract comprising $q$ and $p$:

$$U(q, p) = V(q) - p \qquad (9.2)$$
where $V$ is the individual utility function of quality. On the other hand, a winning firm $i$ earns profits from a contract $(q, p)$ of

$$\pi_i(q, p) = p - c(q, \theta_i) \qquad (9.3)$$
In the cost function $c$, the unit cost is expressed by $\theta_i$, which is private information; $\theta$ is assumed to be independently and identically distributed. Losing firms earn zero profits, and trade always takes place, even with a very high $\theta$. In Che's model, an optimal multi-attribute auction selects the firm with the lowest $\theta$. The winning firm is induced to choose the quality $q$ that maximizes $V(q)$ net of costs. Che considers three auction rules. In a so-called "first-score" auction, a simple generalization of the first-price auction, each firm submits a sealed bid and, upon winning, produces the offered quality at the offered price. In the other auction rules, labeled "second-score" and "second-preferred-offer" auctions, the winner is required to match the highest rejected score in the contract. The second-score auction differs from the second-preferred-offer auction in that the latter requires the winner to match the exact quality-price combination of the highest rejected bid, while the former has no such constraint. A contract is awarded to the firm whose bid achieves the highest score under a scoring rule $S = S(q, p)$. In the model it can be shown that the equilibrium of the first-score auction reduces to the equilibrium of the first-price auction if quality is fixed. An important question analyzed by Che [8] concerns the optimal scoring rule for the buyer. He showed that if the scoring function under-rewards quality compared to the utility function, first- and second-score auctions implement an optimal mechanism. This is true because the true utility function fails to internalize the informational costs associated with increasing quality. Che also shows that if the buyer's scoring function reflects the buyer's preference ordering, i.e., equals his utility function, all three auction schemes yield the same expected utility to the buyer. This is an initial answer to our first research question above, and a two-dimensional extension of the revenue equivalence theorem. The costs in Che's model are assumed to be independent across firms. In the context of procurement auctions one might expect the costs of the several bidders not to be independent. Branco [7] derives an optimal auction mechanism for the case where the bidding firms' costs are correlated but the initial information of the firms is independent. He shows that when the quality of the item is an issue, the existence of correlations among the costs has significant effects on the design of optimal multi-attribute auctions. Under these conditions the multi-attribute auctions analyzed by Che are not optimal. Unlike in the independent-cost model of Che, optimal quality cannot be
achieved just through the bidding process. As a result, the procurer has to use a two-stage mechanism: a first-score or second-score auction, followed by a stage of bargaining over quality between the procurer and the winner of the first stage.
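To make Che's setup concrete, the following sketch (with an invented value function $V(q) = 2\sqrt{q}$ and invented cost draws) computes first-score bids in which each firm chooses quality by the first-order condition $V'(q) = \theta$:

```java
import java.util.Random;

// Worked sketch of Che's two-dimensional (quality, price) auction: firms bid (q, p),
// the buyer scores bids with S(q, p) = V(q) - p, and the first-score rule awards the
// contract to the highest score. V and the cost draws are illustrative assumptions.
public class TwoDimensionalAuction {
    static double V(double q) { return 2.0 * Math.sqrt(q); }   // buyer's value of quality

    public static void main(String[] args) {
        Random rng = new Random(3);
        int firms = 4;
        double bestScore = Double.NEGATIVE_INFINITY;
        int winner = -1;
        for (int i = 0; i < firms; i++) {
            double theta = 0.5 + rng.nextDouble();             // private unit cost
            // With cost theta*q, the score margin V(q) - theta*q is maximized at the
            // first-order condition V'(q) = 1/sqrt(q) = theta, i.e., q = 1/theta^2.
            double q = 1.0 / (theta * theta);
            double p = theta * q + 0.1;                        // cost plus a small markup
            double score = V(q) - p;
            System.out.printf("firm %d: theta=%.2f q=%.2f p=%.2f score=%.2f%n",
                              i, theta, q, p, score);
            if (score > bestScore) { bestScore = score; winner = i; }
        }
        System.out.println("first-score winner: firm " + winner);
        // Lower-cost firms offer higher quality and higher scores, matching the
        // result that an optimal auction selects the firm with the lowest theta.
    }
}
```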
COMPUTATIONAL EXPLORATION AND SIMULATION
The difficulty of multi-attribute auctions lies in the variety of different scoring functions and parameter settings one can deploy. This is one reason why the basic assumptions of game-theoretical models are kept relatively simple. The models in the previous section describe two-dimensional negotiations (price and quality), and the bidders' behavior is also modeled in a rather simple way. Nevertheless, the analytic complexity of these models imposes tight constraints on the modeler. In this section we describe the results of two simulation models exploring the economic behavior of multi-attribute auctions. In this model we assume a generic good that can be described by its price and a number of qualitative attributes. The buyer's scoring function has the form shown in Equation 9.1. Every bidder in this model has a profit function $\pi$ of the form

$$\pi(x, p) = p - \sum_{i=1}^{n-1} \theta_i x_i \qquad (9.4)$$
where $p$ is the price of the good, $\theta_i$ is the private cost parameter for an attribute $x_i$, and $n$ is the number of attributes including the price. The vector $(x, p)$ corresponds to the qualitative attributes $x_1, \ldots, x_{n-1}$ and the price $p = x_n$ from the buyer's scoring function in Equation 9.1. The cost parameter $\theta$ is uniformly distributed on $[0, 1]$ and describes the efficiency of the bidder in producing the good. The minimum profit $\pi$ a bidder wants to achieve is also modeled as a uniformly distributed random variable on $[\pi_{\min}, \pi_{\max}]$, where $\pi_{\min}$ and $\pi_{\max}$ are lower and upper bounds for the minimum expected profit. This parameter is a proxy for the risk aversion of the bidder. For reasons of simplicity we assume the individual utility of all qualitative attributes in the scoring function to be continuous, ascending, and convex. Not all bidders in our model are able to provide the maximum quality for the attributes $x_1, \ldots, x_{n-1}$. Therefore, we assume the maximum value of $x_i$ a bidder is able to provide to be uniformly distributed on $[x_{i,\min}, x_{i,\max}]$. The price $p$ can now be determined for every combination of attribute values:

$$p = \pi + \sum_{i=1}^{n-1} \left[ \theta_i f(x_i, w_i) \right] \qquad (9.5)$$
During the simulation, bidders optimize their bids through $f(x_i, w_i)$, in that they consider the weights $w_i$ from the buyer's scoring function (Equation 9.1) when determining the level of a qualitative attribute $x_i$. We implemented this model in Java using the simjava package [14], which includes several classes for discrete-event simulation. Figure 9.3 depicts the key actors in this simulation. In our analysis we assumed ten relevant attributes of the good, i.e., nine qualitative attributes and the price. Figure 9.4 depicts the average results of 60 auction periods. In every auction period we had 12 virtual bidders, and every bidder posted 10 bids. In the first bid the bidder assumed that all 10 relevant attributes were negotiable. In the second bid she assumed that nine attributes (including the price) were negotiable and one attribute was pre-specified by the buyer at a level of $(x_{i,\max}/2)$, and so forth. Finally, she assumed all qualitative attributes to be pre-specified, negotiating only on the price. We used six different scoring functions, in which we altered the values of the weights $w_i$. In addition, in every new auction period we had to draw a number of uniformly distributed random variables for the bidders, namely the cost parameters $\theta_i$ as well as the upper bounds for all
FIGURE 9.3 Simulation of multi-attribute auctions: an auctioneer announces the scoring function S(x), and bidders 1, …, m submit bids x^j.
qualitative attributes $x_i$. From these initial endowments bidders calculated their optimal bids. The total number of bids evaluated was 43,200. Figure 9.4 shows the results of the simulation using different weights for the price and the qualitative attributes in the scoring function $S(x)$ and a different number of negotiable attributes. The scores dimension in the figure shows the utility values that correspond to winning bids for a given number of negotiable variables and parameter values. A single line is always the result of the same scoring function with a different number of negotiable attributes. For example, in line 1 the price is of high importance to the buyer ($w_{price} = 91$), whereas all the qualitative attributes have a weight of 1. In contrast, all attributes including the price have a weight of 10 in line 6. It can easily be seen that in cases where all attributes are of equal importance (i.e., line 6), it is useful to deploy multi-attribute auctions, since multi-attribute auctions reward high achievements on all attributes. If the scoring function correctly mirrors the buyer's utility function, she can expect to be better off in the end. The more relevant attributes come into play, the higher is the difference in the achieved utility values. If the buyer's scoring function puts a high emphasis on the price, there is only little difference between the outcomes of a multi-attribute and a single-attribute auction. If all ten attributes are of equal importance, the bidder has many more possibilities to comply with the buyer's preferences: she can not only lower the price, but also improve on several qualitative attributes. The simulation is sensitive to changes in the basic assumptions, such as the initial distributions of attribute values or the cost parameters. However, in all other settings there was a positive correlation between the achieved utility values and the number of negotiable
FIGURE 9.4 Simulation results assuming different scoring functions. (Utility scores of winning bids plotted against the number of negotiable attributes, 1–10, with one line per price weight: P = 91, 73, 55, 37, 19, and 10.)
attributes in the auction. The simulation thus provides a number of insights into how the number of attributes impacts the results, and therefore an answer to the second research question above.
LABORATORY EXPERIMENTATION
Game-theoretic analyses and computational exploration helped us gain insight into the economic behavior of multi-attribute auctions. Game theory tells us that under certain assumptions there is revenue equivalence between several multi-attribute auction formats. The simulation showed that multi-attribute auctions achieve better utility values whenever multiple attributes are of interest to the buyer. In this section we test these results in the laboratory, and we want to learn whether multi-attribute auctions are efficient in real-world environments. For the laboratory experiments we used the electronic marketplace described in the section "An Internet-Based Marketplace for OTC Derivatives." This section provides a brief summary of the experimental results.

Experimental Design
Between May and October 1999 we conducted sixteen experimental sessions with MBA students at the Vienna University of Economics and Business Administration. In every session a group of four subjects conducted six different trials, namely a first-price sealed-bid auction, a Vickrey auction, and an English auction, each in its single-attribute and its multi-attribute form. Before the experiment we introduced the scenario in a 40-min lecture to all students and provided them with examples of valuations and bids, along with profit calculations, to illustrate how the auctions work. Before each session (approximately 1.5 h) we conducted two dry runs in order to familiarize the students with multi-attribute bidding and the bidder applet. Before a session began we asked all participants to provide us with a list of valuations, i.e., a minimum implied volatility value for each strike price. These valuations were used afterwards to analyze the efficiency and strategic equivalence of the different auction schemes. In order to give the MBA students an incentive to bid reasonably during all auction periods, we introduced a reward mechanism. In our trials we wanted the subjects to bid consistently with their risk attitude and market expectations. After the option expired (after a month) we took the actual data of the Vienna Stock Exchange and computed the profits and losses for all winners of an auction. Students gained credit for participating in the experiment in the following way: we ranked the students by their profits and gave them additional credit points towards the final grade depending on their profit. If a student incurred a loss, he also lost part of his credit towards the final grade. The following summary of results is centered around the research questions outlined above. We provide a comparison of the equilibrium values achieved in conventional and multi-attribute auctions.

Comparison of Equilibrium Values
In a first step, we computed the utility score of the winning bid as a percentage of the highest valuation given by the participants at the beginning of each session. This allowed us to compare trials conducted under different conditions (e.g., stock market prices). In our experiment the utility scores achieved in multi-attribute auctions were significantly above those of single-attribute auctions for groups of size $n = 4$. Multi-attribute auctions in our experiment achieved, on average, 4.27% higher utility than single-attribute formats. In 72.92% of all trials the overall utility achieved in multi-attribute auctions was higher than in single-attribute auctions.
An explanation for this result is that in a multi-attribute auction, a bidder has more possibilities for improving the value of a bid for the bid-taker, sometimes even without increasing her own costs. As can be seen in Figure 9.5, we could not find evidence for the hypothesis of revenue equivalence among the various auction formats.
FIGURE 9.5 Difference between multi-attribute and single-attribute auctions (utility score of the winning bid as a percentage of the highest valuation):

                      English    First Score    Second Score
Single-attribute      89.25%     92.97%         86.08%
Multi-attribute       94.64%     96.15%         90.32%
Efficiency of Multi-Attribute Auctions
In single-attribute private value auctions, efficiency is measured in terms of the percentage of auctions in which the high-value holder wins the item. Efficiency has to be computed slightly differently in the case of multi-attribute auctions: here, the high-value holder is the bidder one of whose valuations (containing strike price and volatility) provides the highest overall utility score for the buyer. Subjects had to report these valuations to the experimenters before each session, based on their market expectations. Across all trials, 79.17% of the single-attribute auctions and 74.47% of the multi-attribute auctions were efficient. The slightly lower efficiency achieved in multi-attribute auctions is a possible consequence of the difficulty for the bidder of determining the "best" bid, i.e., the combination of values providing the highest utility for the buyer.
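The efficiency measure is simple to compute once winners and valuations are recorded. A minimal sketch, with invented data:

```java
// Sketch of the efficiency measure used above: the share of auctions in which the
// high-value holder wins. Winners and per-bidder valuations are hypothetical.
public class EfficiencyMeasure {
    /** valuations[t][b] = buyer-side utility of bidder b's best valuation in trial t;
     *  winners[t] = index of the bidder who actually won trial t. */
    static double efficiency(double[][] valuations, int[] winners) {
        int efficient = 0;
        for (int t = 0; t < winners.length; t++) {
            int highValueHolder = 0;
            for (int b = 1; b < valuations[t].length; b++)
                if (valuations[t][b] > valuations[t][highValueHolder]) highValueHolder = b;
            if (winners[t] == highValueHolder) efficient++;
        }
        return 100.0 * efficient / winners.length;
    }

    public static void main(String[] args) {
        double[][] vals = {{0.8, 0.6, 0.7}, {0.5, 0.9, 0.4}, {0.6, 0.7, 0.9}, {0.9, 0.3, 0.8}};
        int[] winners = {0, 1, 0, 0};   // the third trial is won by a lower-value bidder
        System.out.printf("%.2f%% efficient%n", efficiency(vals, winners)); // 75.00%
    }
}
```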
CONCLUSIONS
Multi-attribute auctions are a very useful addition to conventional negotiation protocols and can be used in a number of contexts. We utilize competitive bidding on multiple attributes in order to achieve efficient results in complex, multi-lateral negotiation situations. However, it is important to consider a few issues when applying multi-attribute auctions. Bidding is more complex in multi-attribute auctions, as it is not obvious to the bidder right from the start which combination of attributes provides the highest overall utility for the bid taker. This is a minor issue in the case of two negotiable attributes; in the case of many negotiable attributes, however, it can lead to outcomes that are not efficient. Appropriate decision support tools for the bidder play a crucial role in overcoming this problem. In addition, buyers also have to get used to the new tool and learn about the consequences of different parameter settings in their scoring function. As we have learned from the simulation, it is important to have good knowledge of market conditions in order to define a "good" scoring function. We believe that in a professional environment like corporate procurement, buyers will adapt quickly to the new tool. Less experienced buyers, however, do not know the market conditions as well and face the danger that their scoring functions, and consequently the results of the auction, are biased.
Based on the results of this research, a multi-attribute auction market has been developed in cooperation with a major European destination management system. Using this software in a real-world environment, we plan to collect and analyze field data as a next step in our research. These data contain a wealth of information about buyers' preferences and suppliers' capabilities, and a thorough analysis of them can result in a deeper understanding of the economic issues involved in multi-attribute auctions. Currently, many companies introduce new negotiation protocols with little or no theoretical or empirical validation. The danger is that the resulting outcomes are far from efficient in an economic sense. Only a thorough analysis of the various aspects of a new negotiation protocol can prevent such situations. There is no single technique for the design and evaluation of a new negotiation protocol; however, as we have illustrated, a combination of methods from economics and computer science can yield relevant insights into the economic behavior of a new mechanism.
REFERENCES
1. Arrow, K. J., Block, H. D., and Hurwicz, L., On the stability of competitive equilibrium II, Econometrica, 27, 82–109, 1959.
2. Aumann, R. J. and Hart, S., Handbook of Game Theory, Vol. 1, North-Holland, Amsterdam, p. 733, 1992.
3. Backhaus, K., Erichson, B., Plinke, W., and Weiber, R., Multivariate Analysemethoden, Springer, Berlin, 1996.
4. Bakos, Y., A strategic analysis of electronic marketplaces, MIS Quarterly, 15, 295–310, 1991.
5. Balakrishnan, P., Sundar, V., and Eliashberg, J., An analytical process model of two-party negotiations, Management Science, 41, 226–243, 1995.
6. Banks, J. S., Ledyard, J. O., and Porter, D. P., Allocating uncertain and unresponsive resources: an experimental approach, RAND Journal of Economics, 20, 1–23, 1989.
7. Branco, F., The design of multidimensional auctions, RAND Journal of Economics, 28, 63–81, 1997.
8. Che, Y.-K., Design competition through multidimensional auctions, RAND Journal of Economics, 24, 668–680, 1993.
9. Clemen, R. T., Making Hard Decisions: An Introduction to Decision Analysis, Wadsworth, Belmont, CA, 1996.
10. Cortese, A. E. and Stepanek, M., Good-bye to fixed pricing, Business Week, 1998.
11. Fudenberg, D. and Tirole, J., Game Theory, MIT Press, Boston, 1991.
12. Gibbard, A., Manipulation of voting schemes: a general result, Econometrica, 41, 587–602, 1973.
13. Harsanyi, J. C. and Selten, R., A generalized Nash solution for two-person bargaining games with incomplete information, Management Science, 18, 80, 1972.
14. Howell, F. and McNab, R., Simjava: a discrete event simulation package for Java with applications in computer systems modelling, In First International Conference on Web-based Modelling and Simulation, Society for Computer Simulation, San Diego, CA, 1998.
15. Hurwicz, L., The design of mechanisms for resource allocation, American Economic Review, 63, 1–30, 1973.
16. Kagel, J. H., Auctions: a survey of experimental research, In The Handbook of Experimental Economics, Kagel, J. H. and Roth, A. E., Eds., Princeton University Press, Princeton, NJ, pp. 501–587, 1995.
17. Kagel, J. H. and Roth, A. E., Eds., The Handbook of Experimental Economics, Princeton University Press, Princeton, NJ, 1995.
18. Keeney, R. L. and Raiffa, H., Decision Making with Multiple Objectives: Preferences and Value Tradeoffs, Cambridge University Press, Cambridge, 1993.
19. Kendrick, D., Research Opportunities in Computational Economics, Center for Economic Research, UT Austin, TX, 1991.
20. Kolb, R. W., Financial Derivatives, New York Institute of Finance, Englewood Cliffs, NJ, 1993.
21. Kwok, Y.-K., Mathematical Models of Financial Derivatives, Springer, Singapore, 1998.
22. Ma, C., Moore, J., and Turnbull, S., Stopping agents from "cheating", Journal of Economic Theory, 46, 355–372, 1988.
23. McAfee, R. P. and McMillan, J., Auctions and bidding, Journal of Economic Literature, 25, 699–738, 1987.
24. McCabe, K. A., Rassenti, S. J., and Smith, V. L., Designing a uniform price double auction: an experimental evaluation, In The Double Auction Market: Theories and Evidence, Friedman, D. and Rust, J., Eds., Addison-Wesley, Reading, MA, 1993.
25. Milgrom, P. R., Auction theory, In Advances in Economic Theory: Fifth World Congress, Bewley, T., Ed., Cambridge University Press, Cambridge, 1987.
26. Milgrom, P. R. and Weber, R. J., A theory of auctions and competitive bidding, Econometrica, 50, 1089–1122, 1982.
27. Nash, J., The bargaining problem, Econometrica, 18, 155–162, 1950.
28. Nash, J., Two-person cooperative games, Econometrica, 21, 128–140, 1953.
29. Riley, J. G. and Samuelson, W. F., Optimal auctions, American Economic Review, 71, 381–392, 1981.
30. Roth, A. E., Game Theory as a Tool for Market Design, 1999.
31. Rothkopf, M. H. and Harstad, R. M., Modeling competitive bidding: a critical essay, Management Science, 40, 364–384, 1994.
32. Rubinstein, A., Perfect equilibrium in a bargaining model, Econometrica, 50, 97–109, 1982.
33. Saaty, T. L., The Analytic Hierarchy Process, McGraw-Hill, New York, 1980.
34. Samuelson, P. A., Foundations of Economic Analysis, Harvard University Press, Cambridge, MA, 1947.
35. Smith, V. L., Microeconomic systems as an experimental science, American Economic Review, 72, 923–955, 1982.
36. Tesfatsion, L., Agent-based computational economics: a brief guide to the literature, In Reader's Guide to the Social Sciences, Fitzroy-Dearborn, London, U.K., 1998.
37. Tesfatsion, L., How economists can get a life, In The Economy as an Evolving Complex System, Arthur, B., Durlauf, S., and Lane, D., Eds., Addison-Wesley, Reading, MA, 1997.
38. Tesfatsion, L., A trade network game with endogenous partner selection, In Computational Approaches to Economic Problems, Amman, H., Rustem, B., and Whinston, A., Eds., Kluwer Academic Publishers, Dordrecht, 1997.
39. Telser, L. G., The usefulness of core theory in economics, Journal of Economic Perspectives, 8, 151–164, 1994.
40. Varian, H., Economic mechanism design for computerized agents, In Usenix Workshop on Electronic Commerce, New York, 1995.
41. Varian, H., Microeconomic Analysis, Norton, New York, 1992.
42. von Neumann, J. and Morgenstern, O., Theory of Games and Economic Behavior, Princeton University Press, Princeton, NJ, 1944.
43. Wolfstetter, E., Auctions: an introduction, Journal of Economic Surveys, 10, 367–420, 1996.
USING COMPUTERS TO REALIZE JOINT GAINS IN NEGOTIATIONS: TOWARD AN "ELECTRONIC BARGAINING TABLE"*

* Arvind Rangaswamy, The Smeal College of Business, Pennsylvania State University, University Park, PA 16802-3007, and G. Richard Shell, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104. Reprinted, by permission, from Arvind Rangaswamy and G. Richard Shell, Using Computers to Realize Joint Gains in Negotiations: Toward an "Electronic Bargaining Table," Management Science, 43(8), 1997, 1147–1163. Copyright © 1997, the Institute for Operations Research and the Management Sciences (INFORMS), 7240 Parkway Drive, Suite 310, Hanover, MD 21076, U.S.A.

ABSTRACT
Multi-issue negotiations present opportunities for tradeoffs that create gains for one or more parties without causing any party to be worse off. The literature suggests that parties are often unable to identify and capitalize on such trades. We present a Negotiation Support System, called NEGOTIATION ASSISTANT, that enables negotiators to analyze their own preferences and provides a structured negotiation process to help parties move toward optimal trades. The underlying model
is based on a multiattribute representation of preferences and communications over a computer network where offers and counteroffers are evaluated according to one's own preferences. The parties can send and receive both formal offers and informal messages. If and when agreement is reached, the computer evaluates the agreement and suggests improvements based on the criterion of Pareto-superiority. In this paper, we motivate the system, present its analytical foundations, discuss its design and development, and provide an experimental assessment of its "value-in-use." Our results strongly suggest that parties using the system in structured negotiation settings achieve better outcomes than parties negotiating face to face or over an e-mail messaging facility, other things being equal. For example, only 4 of the 36 dyads (11.1%) negotiating a simulated sales transaction face to face or over e-mail reached an "integrative" settlement, as compared with 29 of the 68 dyads (42.6%) using NEGOTIATION ASSISTANT. Systems such as NEGOTIATION ASSISTANT have the potential to be used in emerging "electronic markets."
INTRODUCTION
In the past decade, there has been increasing interest in the application of computer technologies to facilitate negotiations.1 Using a variety of modeling approaches, and spurred by the demands of real-world negotiating environments, the field of Negotiation Support Systems (NSS) is now developing along a number of innovative lines. These range from the design of specialized expert systems that help negotiators prepare for a negotiation, to mediation and interactive negotiation systems that restructure the way negotiations actually take place. There are at least two reasons for this growing research interest in computer-supported negotiations. First, research consistently suggests that conventional face-to-face negotiations often lead to inefficient outcomes, i.e., settlements that can be improved upon for all parties (e.g., Dwyer and Walker 1981, Gupta 1989, Neale and Bazerman 1991, Sebenius 1992). NSS offer the promise of improving negotiation outcomes for the negotiating parties by helping them prepare for a negotiation, and/or by providing computer-structured mechanisms to order the negotiation process. Second, business transactions are increasingly being conducted over computer networks, but without dedicated software support. Securities trading is already computerized, and the use of computers to assist other kinds of trades is spreading rapidly (e.g., Konstad 1991). The growth of networked systems such as the Internet, consumer online services, and Lotus Notes portends greater use of computer-mediated negotiations. NSS can facilitate negotiations at these emerging electronic "bargaining tables" by providing systematic models that structure network negotiations and render them more economically productive. This paper presents an NSS model to facilitate negotiation over computer networks and describes an experiment to investigate whether the use of the system helps parties locate and execute tradeoffs that maximize the gains from trade in multi-issue negotiations. The system, called NEGOTIATION ASSISTANT (hereafter referred to as NA), is based on concepts drawn from the emerging field of negotiation analysis and provides parties with both preparation tools and an "electronic bargaining table" for two-party, multi-issue negotiations. The contributions of this research are twofold. From an academic perspective, it provides an analysis of a plausible alternative to a face-to-face negotiation process, a field of increasing interest as evidenced by the papers devoted to this topic in the special issue of Management Science (October 1991). From a practical perspective, it points to the emergence of workable mechanisms to enhance the outcomes of business transactions over computer networks.
BACKGROUND
A Framework for System Development
For computers to add measurable value to the negotiation process, NSS design must be linked to a conceptual framework of negotiation that categorizes the various structures under which negotiations
take place and stipulates criteria for evaluating outcomes. Walton and McKersie (1965) make the important distinction between "distributive" bargaining, in which parties bargain over a fixed pie, and "integrative" bargaining, in which parties may "expand the pie" through problem solving, creativity, and identification of differences in priorities and/or compatibility of interests. Research on integrative bargaining suggests that parties negotiating face to face often have difficulty bargaining in ways that permit them to identify and realize integrative tradeoffs. Thus, many negotiations are characterized by suboptimal tradeoffs, failed communications, and lost opportunities (Pruitt 1981). The fact that parties leave money on the table has led to a search for systematic ways to help parties achieve more integrative settlements, a search that has given rise to the emerging field of "negotiation analysis." Here, we summarize the key precepts of this area; Sebenius (1991) and Young (1991) provide comprehensive reviews. Unlike purely anecdotal approaches to bargaining (e.g., Cohen 1980), negotiation analysis uses formalisms and analytical approaches that are based on models used in economics, decision analysis, and game theory. However, unlike the pure forms of these theoretical models, negotiation analysis seeks to incorporate realistic assumptions about the way negotiations are actually conducted. For example, neither side is stipulated to act in accord with the precepts of game-theoretic rationality. Rather, both sides are expected to conduct themselves based on their subjective assessments of each other in the light of the usually imperfect information actually available to them. Sebenius (1991) characterizes his approach as "nonequilibrium game theory." An important aspect of negotiation analysis has been the application of various tools from decision analysis, including multiattribute utility assessment, to help parties prepare for negotiations (Raiffa 1982, pp. 133–165). Negotiation analysis seeks ways to "anticipate the likelihood of ex-post Pareto-inefficient agreements, in order to identify ways to help the parties to 'expand the pie'" (Sebenius 1991, p. 21).2 Finally, negotiation analysis eschews the search for unique equilibria and solution concepts such as are found in cooperative game theory, and focuses instead on subjective perceptions of possible zones of agreement, with the objective of identifying agreements that are "among the best" available to the parties. In operational terms, negotiation analysis is used to develop methods for achieving integrative settlements by giving negotiators decision-analytic and other tools that help them articulate their own preferences clearly, and that help one or more parties match up their preferences with those of other parties during the negotiation process.

Existing Negotiation Support Systems
Many existing NSS have explicitly or implicitly relied on some of the concepts of negotiation analysis as a basis for their design. Several of these systems are summarized in Jelassi and Faroughi (1998). NSS may be classified as follows: (1) preparation and evaluation systems that operate away from the bargaining table to help individuals privately organize information, develop preference representations, refine prenegotiation strategies, or evaluate mid-negotiation offers; and (2) process support systems that operate at, or in lieu of, a bargaining table.
These systems restructure the dynamics and procedures of the negotiation process in order to make salient the possible gains from integrative bargaining (Thiessen and Loucks 1992). Thus, process support systems are designed not only to assist parties in gaining a subjective representation of the negotiation situation, but also to help negotiators move toward more integrative settlements. Examples of preparation systems include NEGOPLAN (Kersten et al. 1991), NEGOTEX (Rangaswamy et al. 1989), and GMCR (Fang et al. 1993). In addition to these formal preparation systems, generic decision analysis and spreadsheet software packages are also used in preparing for both negotiation and mediation (Nagel and Mills 1990). Process support systems may be further subdivided into two types: mediation systems and interactive bargaining systems. In mediation systems, a computer model substitutes for or assists a human mediator to prompt the parties toward jointly optimal agreements. Communications among parties using a mediation system are filtered through the computer or a human mediator, although the parties remain in control of the outcome.
Interactive bargaining systems simultaneously support the negotiation processes of all the parties and enable the parties to communicate directly with each other over computer networks. Interactive systems may also contain a function for computer-assisted mediation. Examples of process support systems include PERSUADER (Sycara 1990, 1991), ICANS (Thiessen and Loucks 1992), and the proposed NA system. We make the following summary observations regarding NSS models and systems reported in the literature. Among the existing systems, GMCR, ICANS, and NA have most closely relied on the concepts of negotiation analysis; NA is closest to ICANS in this regard. However, NA differs in significant ways in its design and operation from its predecessors. First, NA is designed to be more of a facilitator than a mediator. In particular, it is a fully interactive system that allows negotiators to communicate directly with one another over computer networks. Second, NA uses design principles that are somewhat different from the approaches used in Group Decision Support Systems (GDSS). In particular, NA does not require the high degree of collaboration between parties that is characteristic of GDSS but may be difficult to establish in real-world negotiation settings. In this sense, NA is differentiated from systems such as PERSUADER, MEDIATOR (Jarke et al. 1987), DECISION CONFERENCING,3 and other GDSS such as those developed by Nunamaker et al. (1991). For a review of the GDSS area, see Rao and Jarvenpaa (1991).

Evaluation of NSS
Few studies have systematically examined the impact of computer-assisted negotiation preparation or computer-mediated communications during negotiation. Although it is generally believed that prior preparation by the parties will enhance negotiation outcomes (e.g., Raiffa 1982, pp. 1119–1122), very little has been published in the academic literature exploring the benefits and limitations of computer preparation tools (Lim and Benbasat 1993). The only reported tests we could find were experiments to evaluate ICANS (Thiessen and Loucks 1992) and NEGOTEX (Eliashberg et al. 1993). There is some published research comparing computer-mediated communication with face-to-face communication in group decision tasks. This literature suggests that computer-mediated communication has the following effects: (1) it reduces the communication bandwidth, thereby resulting in fewer exchanges, though the informational content of each exchange may be somewhat higher (Siegel et al. 1986); (2) it increases anonymity, which could lead to less cooperative behavior (Wichman 1970, Arunachalam and Dilla 1995) and more uninhibited behavior (Siegel et al. 1986); and (3) it restricts spontaneous expression because of the need (perceived or actual) to take turns communicating. Experimental evidence suggests that computer-mediated communication enhances outcomes in some interactive decision tasks but diminishes outcomes in other tasks. Nunamaker et al. (1991) provide evidence that computer-mediated groups tend to be efficient and effective in generating options for mutual gain. Siegel et al. (1986) show that in the context of risky choice, computer-mediated communication groups shifted further away from members' initial individual choices than groups that followed face-to-face discussions. Hiltz et al. (1986) conclude that the quality of decisions was equally good for these two modes of communication, but that there was greater agreement on decisions among the group members in the face-to-face groups.
Their experiments also suggest that while computerized conferences were rated as satisfactory, face-to-face meetings were consistently rated as more satisfactory.

A couple of studies have more directly examined the role of the "mode of communication" in influencing negotiation outcomes. In the context of a single-issue negotiation with asymmetric information, Valley et al. (1995, p. 13) provide evidence that face-to-face negotiations resulted in significantly more mutually beneficial outcomes than negotiations in which the parties used written offers and messages transmitted by messengers (simulating an e-mail facility). In the context of a multiple-issue negotiation, Arunachalam and Dilla (1995) also report that, compared with the use of an e-mail messaging system, face-to-face negotiation leads to higher individual and group profits.
This is the only study we are aware of that has examined outcomes associated with computer-mediated communication in a context where the proposed NA is likely to be useful. In summary, past studies have provided only modest and inconsistent insights for assessing the impact that systems such as NA will have on the process and outcomes of negotiations. In this study, we attempt to isolate the effects of computer-assisted preparation and computer-facilitated communication in the context of a multiple-issue, integrative bargaining problem.
NEGOTIATION ASSISTANT: DESIGN AND OPERATION

In this section, we first describe the design criteria for the NA system and relate these criteria to the appropriate concepts described in the previous section. Next, we provide a description of the operation of the system.

Design Criteria

Moving Toward Pareto-Efficiency. The NA system is designed to foster more efficient outcomes by lessening the impact of factors that hinder the realization of integrative outcomes. Integrative outcomes are more likely to occur when the parties are able to identify differences regarding their priorities, resources, risk preferences, and utilities (Pruitt 1981). Trading on these differences represents a rich source of value to be mined in a negotiation (Raiffa 1982, p. 131; Lax and Sebenius 1986). However, it is difficult to identify and optimally trade on these differences because (1) parties are not clear about their own priorities; (2) optimal trades are sometimes "lost" in the complex communication pattern that characterizes a negotiation with many issues; (3) most negotiation situations present the potential for strategic behavior, and parties sometimes mislead others regarding their preferences and priorities; (4) human emotions often interfere with rational judgment; and (5) a bias toward "fair" solutions sometimes leads negotiators to exhibit what we call "compromise bias," i.e., parties prefer to find some compromise position between the parties' initial demands on each separate issue rather than to explore tradeoffs between issues that might yield them higher individual and joint gains. This is similar to the notion of the fixed-pie bias referred to by Neale and Bazerman (1991, p. 63).

NA's design addresses these barriers to integrative bargaining in the following ways. First, through the use of several utility assessment techniques, the system helps the parties disaggregate their own preferences and priorities in order to better understand them. Preference assessment is based on a combination of the simple additive utility functions recommended by Keeney and Raiffa (1991) and conjoint analysis techniques that have found wide application in psychology and marketing research (Green and Srinivasan 1978; Green and Krieger 1993). These procedures enable users to develop a more precise gradation of their preferences by assessing issues both "one at a time" and as "part of a package." At every stage of utility assessment, users are given maximum flexibility to internalize insights that are gained as a result of reflection on the bargaining set. By disaggregating preferences, we expect that the parties are more likely to identify and trade on differences between their priorities (Keeney and Raiffa 1991). Second, the system uses a depersonalized computer network environment through which the parties negotiate, thus separating the "people" from the "problem." The system also provides both parties with real-time, subjective evaluations of the value of offers and counteroffers as they are made. These aspects engender a problem-solving orientation that makes the rational settlement points salient (Pruitt 1981). Finally, by providing a postsettlement option, the system helps parties identify Pareto-superior settlements, where at least one party is strictly better off and neither party is worse off. In this way, NA provides a technique for minimizing value left on the table after the parties have reached a settlement (Raiffa 1985).
Maximize Confidentiality and Minimize Potential for Gaming the System. Another important design objective of NA is to maximally protect the confidentiality of each side's subjective preferences until both sides have agreed to a deal and both sides agree to examine options that may improve the deal they have concluded. At no time are the inputs of one party revealed to the other, except as that party may choose voluntarily to share such information with her counterpart, just as she might in a conventional interaction.

Operation

NA utilizes a multistage process that enables negotiators to prepare for, execute, and evaluate negotiated solutions over a computer network. The inputs provided by users in the preparation stages may be edited and revised as often as needed as negotiations progress. The system provides three main functions: preparation for negotiation, structured communications, and postsettlement evaluation. To illustrate the operation of the system and the user interface, we provide a few sample screens from the system in Figure 9.6.4

Stage I of NA, called "Issues," specifies the domain of the negotiation, including the issues in play and the options that may be considered for each issue (Screen 1, Figure 9.6). Following Keeney and Raiffa (1991, p. 132), the current version of the system employs the restrictive assumption that "all inventing and creating of issues has occurred," and that the parties are ready to negotiate over the identified issues. While this is a significant limitation to the practical application of our system, it does allow us to test more precisely the potential of the system to enhance integrative bargaining.

In Stage II, called "Prepare," NA uses an additive "self-explicated" scoring model to elicit information regarding (1) the relative preferences among issues, and (2) the relative preferences among the options for each issue. The users are first asked to distribute 100 points across all the issues; the users then indicate how many of the points available for each issue they would award themselves for obtaining each option within that issue. NA requires that the most preferred option for an issue be assigned all the points associated with that issue, that the least preferred option be assigned zero, and that other options be awarded some number of points between these two extremes (with ties getting equal numbers of points).5 A minimal sketch of this scoring model follows.
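The following sketch is ours rather than the NA source code; the point allocations are hypothetical buyer-side numbers chosen to be consistent with the issue ranks in Table 9.1, not values taken from the study.

```python
# Illustrative sketch of the Stage II "self-explicated" scoring model.
# Point allocations are hypothetical buyer-side numbers, not study data.
issue_points = {"Currency": 46, "Price": 25, "Delivery": 20, "Dispute": 9}
assert sum(issue_points.values()) == 100  # users distribute exactly 100 points

# Within each issue, the most preferred option gets that issue's full point
# total, the least preferred gets zero, and the rest fall in between.
option_scores = {
    "Currency": {"Hungarian": 46, "Euro $": 30, "U.S. $": 11, "Other hard": 0},
    "Price":    {"180,000": 25, "195,000": 18, "210,000": 8, "225,000": 0},
    "Delivery": {"6 months": 20, "8 months": 14, "12 months": 6, "14 months": 0},
    "Dispute":  {"Hungary": 9, "ICC": 6, "London": 3, "U.S. court": 0},
}

def package_value(package):
    """Additive score of a full settlement package on the 0-100 scale."""
    return sum(option_scores[issue][opt] for issue, opt in package.items())

offer = {"Currency": "Hungarian", "Price": "195,000",
         "Delivery": "14 months", "Dispute": "ICC"}
print(package_value(offer))  # 46 + 18 + 0 + 6 = 70
```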
Using the scores from Stage II, NA constructs in Stage III, called "Ratings," a set of sample settlement packages, each of which includes one option from every issue in play (Screen 2, Figure 9.6). The rating task gives the user the opportunity to contemplate options in the context of an overall agreement covering all issues simultaneously. The set of packages is selected automatically using conjoint design to form an orthogonal array. The use of an orthogonal array enables the computation of utilities for each issue, and for each option within each issue, independently of the other issues and options.6 The selected set of packages is arranged in descending order of preference based on the scores provided in Stage II, but the scores themselves are not displayed, to give users a fresh look at the consequences of their prioritization in Stage II.7 The user is then asked to rate each package on a scale from 0 to 100 to indicate the value that package would have if it were to become the final settlement.

Conjoint design is used in selecting all but a maximum of two of the packages to be rated; these two packages frame the conjoint set. The top package is one that yields the highest Stage II score for the user (i.e., it gives the user his or her most preferred options on each of the issues) and is rated at 100 points. The bottom package is one that yields the lowest score (i.e., it gives the user his or her least preferred options on each of the issues) and is rated at 0 points. Between these two extremes are displayed the orthogonal packages, which may be rated at any value the user desires. In essence, in completing the ratings task, the user confronts many of the tradeoffs implicit in the negotiation. When the ratings stage is completed, the utility weights $u_{ij}$ for the ith issue and the jth option of that issue are computed automatically using the dummy-variable regression model given below (the number of packages presented to the user is chosen to provide sufficient degrees of freedom to estimate the coefficients of the model).
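Note 6 observes that an orthogonal design can cover four four-option issues with as few as 16 packages instead of all 256. The text does not describe NA's actual design generator; the sketch below is one standard construction (ours), using GF(4) arithmetic, and it verifies the strength-2 orthogonality property that the estimation relies on.

```python
# Hypothetical construction (not NA's own generator) of a 16-run orthogonal
# array for four issues with four options each, via GF(4) arithmetic.
from itertools import combinations, product

MUL2 = [0, 2, 3, 1]  # multiplication by the GF(4) element "2" (mod x^2 + x + 1)

# Each run assigns an option level 0-3 to each of the four issues.
runs = [(a, b, a ^ b, a ^ MUL2[b]) for a, b in product(range(4), repeat=2)]

# Strength-2 orthogonality: every pair of issues shows all 16 level pairs once.
for i, j in combinations(range(4), 2):
    assert len({(r[i], r[j]) for r in runs}) == 16

print(f"{len(runs)} packages instead of {4 ** 4}")  # 16 packages instead of 256
```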
[FIGURE 9.6 Sample screen displays from NEGOTIATION ASSISTANT. Screen 1: entering/modifying issues for negotiation; Screen 2: ratings of conjoint packages; Screen 3: graphical display of preferences for issues; Screen 4: graphical display of preferences for options within issues; Screen 5: receiving an offer; Screen 6: the postsettlement option. In Screen 5, issues are arranged in order of importance based on the utility function of the user. Screen 6 is displayed only if both parties agree to view the Pareto-superior packages.]
The estimated model is

$$R(S) = a + \sum_{i=1}^{I} \sum_{j=1}^{J_i} u_{ij} X_{ij} + e$$

where $S$ is a particular settlement package presented to the user for her rating; $a$ is a constant term in the regression model; $R(S)$ is the rating score for $S$ given by the user (on a scale of 0 to 100); $u_{ij}$ is the utility associated with the $j$th option ($j = 1, 2, \ldots, J_i$) of the $i$th issue; $I$ is the number of issues; $J_i$ is the number of options of issue $i$; $X_{ij}$ is a dummy variable with $X_{ij} = 1$ if the $j$th option of the $i$th issue is present in settlement package $S$, and 0 otherwise; and $e$ is an error term under the usual assumptions of the linear model.
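To illustrate the estimation step, the following sketch (ours; the part-worths, and hence the ratings, are randomly generated stand-ins rather than study data) rates the 16 orthogonal packages under an assumed additive utility and recovers the $u_{ij}$ by dummy-variable least squares.

```python
# Minimal sketch of recovering u_ij from package ratings by dummy-variable
# least squares. The "true" part-worths are random stand-ins, not study data.
from itertools import product
import numpy as np

MUL2 = [0, 2, 3, 1]  # same GF(4)-based orthogonal design as above
packages = [(a, b, a ^ b, a ^ MUL2[b]) for a, b in product(range(4), repeat=2)]

rng = np.random.default_rng(0)
true_u = rng.uniform(0, 25, size=(4, 4))  # assumed part-worth of option j of issue i
ratings = np.array([sum(true_u[i, opt] for i, opt in enumerate(p)) for p in packages])

def design_row(p):
    # Intercept plus 0/1 dummies; option 0 of each issue is the dropped baseline.
    return [1.0] + [1.0 if p[i] == j else 0.0 for i in range(4) for j in range(1, 4)]

X = np.array([design_row(p) for p in packages])
beta, *_ = np.linalg.lstsq(X, ratings, rcond=None)
u_hat = beta[1:].reshape(4, 3)  # estimated u_ij, relative to each issue's baseline

# With an orthogonal, noise-free design the recovery is exact.
print(np.allclose(u_hat, true_u[:, 1:] - true_u[:, [0]]))  # True
```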
Once the values of $u_{ij}$ are computed, this information can be accessed graphically in Stage IV, called "Graphs" (Screens 3 and 4, Figure 9.6). The utility function is presented in the form of bar graphs showing the relative weights of each issue and, within issues, of each option (rounded to the nearest integer). The graphs are also scaled between 0 and 100. In essence, users now observe graphically how their issue-by-issue and option-by-option priorities are affected by the exercise of trading these items off against one another in proposed settlement packages. It is not uncommon for users to feel somewhat dissatisfied with the values reflected in the graphs, and NA permits users to manipulate the graph bars directly using cursor keys to further refine their preferences.

Stage V, called "Negotiate," takes place after the computer has received Stage IV graphic inputs from both parties to the transaction. In essence, the system provides an electronic bargaining table on which negotiations take place. Offers, counteroffers, and written messages can be sent and received over the network. All offers are binding and cannot be retracted, but messages can be exploratory.8 Explicit offers (displayed on the left side of the screen) and counteroffers (displayed on the right side of the screen) are both scored for the user using the party's private preference scores for options generated in Stage IV (Screen 5, Figure 9.6). Bargaining proceeds in this fashion until either an impasse or an agreement is reached. If no agreement is reached, the parties simply terminate the negotiation, just as they would in a conventional encounter. Mindful of the potential for strategic behavior if an impasse were to trigger the release of information in the form of suggestions for continued bargaining, NA does not prompt the parties to continue, nor does it reveal anything about the parties' preferences.

If the parties succeed in reaching an agreement, they enter Stage VI, called "Postsettlement" (Screen 6, Figure 9.6). This feature follows the suggestion of Raiffa (1985) regarding the possible value of "postsettlement settlements," in which a third party might help negotiators make Pareto-improving moves following an agreement. In Stage VI, NA acts as a computer-mediator. The system examines the final agreement and compares this package with all other possible packages in the negotiation set. It then generates a list of packages that are, based on the Stage IV inputs of both parties, more advantageous than the current settlement package for one or both sides without making either side worse off.9 (These Pareto-superior packages are calculated in the computer's internal memory, and are not stored anywhere. Once the negotiation ends, this information simply disappears.) If both parties agree, the Pareto-superior packages are revealed to the negotiators in order of their respective desirability to each party. Once again, if both parties agree, they may continue the negotiations in hopes of reaching an agreement on one of the packages suggested by NA. If no such agreement can be reached, the parties revert to their original "Pareto-inferior" deal. Stage VI can be repeated as often as NA is able to identify at least one package that makes one party better off without making the other party worse off. A sketch of this search is given below.
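A minimal sketch of this Stage VI search, ours rather than the NA implementation, with assumed utilities in the usage example:

```python
# Sketch of the postsettlement search: enumerate every package and keep those
# that leave neither party worse off and at least one strictly better off.
from itertools import product

def pareto_superior(settlement, options, value_a, value_b):
    """options: one list of option levels per issue; value_a/value_b map a
    package (tuple of levels) to that party's private utility."""
    base_a, base_b = value_a(settlement), value_b(settlement)
    better = [pkg for pkg in product(*options)
              if value_a(pkg) >= base_a and value_b(pkg) >= base_b
              and (value_a(pkg) > base_a or value_b(pkg) > base_b)]
    # Present candidates in descending order of joint gain (one possible ordering).
    return sorted(better, key=lambda p: value_a(p) + value_b(p), reverse=True)

# Toy usage with two binary issues and assumed additive utilities: the parties
# agreed on (0, 1), but the integrative trade (1, 0) dominates it for both.
options = [[0, 1], [0, 1]]
ua = lambda p: 3 * p[0] + 1 * p[1]              # A cares mostly about issue 1
ub = lambda p: 1 * (1 - p[0]) + 3 * (1 - p[1])  # B prefers the opposite options
print(pareto_superior((0, 1), options, ua, ub)) # [(1, 0)]
```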
When a final deal has been struck, either with or without the help of the postsettlement stage, the parties are congratulated on reaching an agreement, and they can then exit the system. NA then creates and stores files recording their inputs, negotiation exchanges, and postsettlement outcomes.10
EXPERIMENT AND RESULTS

Hypotheses

To test the efficacy of the NA system, we designed a laboratory experiment using a simulated two-party, multi-issue sales negotiation. In designing our experiment, we sought to answer two overriding questions: (1) Would parties using NA achieve a higher proportion of efficient agreements than negotiators using conventional face-to-face negotiations or an e-mail messaging system? (2) How do the three basic functions of NA, namely, preparation using utility assessment, structured communication, and postsettlement facilitation, contribute to its overall impact on
negotiation outcomes? More specifically, we hypothesized that parties using NA would make more integrative trades than parties not using NA, and that each function of NA would add incremental value by building on the part that precedes it. Thus, we propose the following formal hypotheses (see also Lim and Benbasat 1993):

Hypothesis 1. Computer-based utility assessment prior to negotiation leads to more Pareto-efficient outcomes (i.e., subjects using NA for preparation (NAP) will make more integrative trades than subjects who negotiate face to face or over an e-mail system).

Hypothesis 2. The mere use of computers, without support for negotiation preparation and structured communication, will not lead to more efficient outcomes (i.e., subjects using e-mail for negotiation will achieve fewer integrative trades than those using NA).

Hypothesis 3. Structured communication and postsettlement evaluation enhance the achievement of Pareto-efficient outcomes (i.e., subjects using NA only for preparation will achieve fewer integrative trades than subjects who use all functions of NA).

Hypothesis 4. The postsettlement option in NA will provide Pareto improvements to agreements reached using only the preparation and structured communication features of NA.
The Negotiation Scenario

In the scenario presented for the negotiation, the subjects were instructed to act as agents for their respective companies. The information specified that, after a preliminary round of discussions, four issues remained to be resolved between the parties for the transaction to go through: price, delivery date, type of currency to be used, and forum for dispute resolution should contractual disputes arise. A range of options was stipulated for each issue, and the buyers' and sellers' separate instructions revealed the relative importance of each issue and option to them. Table 9.1 summarizes the induced preference structures for the two roles.11

Due to a shortage of hard currency, the Hungarian buyer for East Europa Medical Group gave the highest priority to the type of currency to be used and preferred Hungarian currency to all other options. In contrast, currency was the U.S. seller's (Healthcare, Inc.) least important issue. The U.S. party valued a delayed delivery date of 14 months over all other items because of a shortage of inventory. The Hungarian buyer, on the other hand, rated delivery third in importance, just above its fourth-ranked dispute resolution issue. Both parties rated price second in priority, and both could close a transaction at any of the four price options listed in their instructions. The U.S. seller valued the dispute resolution forum third, just above its least important issue, currency. There was thus a clear, mutually advantageous tradeoff to be made between the parties if the buyer obtained Hungarian currency (the buyer's first choice on its highest ranked issue, and the seller's least important issue) in exchange for an agreement to delay delivery to 14 months (the seller's first choice on its highest ranked issue, and the buyer's third ranked issue), assuming some acceptable agreement could be achieved on the issues of price and dispute forum. A numerical illustration of this tradeoff is given after Table 9.1.

Experimental Setup

First-year MBA students at the Wharton School of the University of Pennsylvania were recruited to participate in this study during their orientation week. Groups of MBA students were randomly assigned to one of four negotiation conditions: (1) face to face (FF); (2) e-mail messaging system (EML);12 (3) NA system used for preparation, followed by a face-to-face negotiation (NAP); and (4) NA system used for both preparation and structured communication (NAA). Two hundred seventy students participated in our experiment.13 We used a simple one-way research design to obtain an overall assessment of the NA system. While this design reduces the total resources required for testing the hypotheses, we acknowledge its limitations in precisely teasing out the effects of each component of NA. For example, the differences between NAA and NAP include the effects of both computer-mediated communication and computer-supported postsettlement analysis.
TABLE 9.1 Preferences of the Two Parties (Options Listed in Decreasing Order of Preference)

Issue       EE                                            HC
Price       180,000; 195,000; 210,000; 225,000 (2)        225,000; 210,000; 195,000; 180,000 (2)
Delivery    6 months; 8 months; 12 months; 14 months (3)  14 months; 12 months; 8 months; 6 months (1)
Currency    Hungarian; Euro $; U.S. $; Other hard (1)     U.S. $; Other hard; Euro $; Hungarian (4)
Dispute     Hungary; ICC; London; U.S. court (4)          U.S. court; London; ICC; Hungary (3)

EE, East European Medical Equipment Company; HC, Healthcare, Inc. Numbers in parentheses indicate the importance ranks of the issues.
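To see the arithmetic of the tradeoff described above, the sketch below converts the ordinal ranks of Table 9.1 into one assumed cardinalization (importance rank turned into an issue weight, within-issue rank into an option score; recall from note 11 that only ordinal preferences were actually induced). Under these assumed numbers, the Hungarian/14-month trade beats the issue-by-issue compromise of Euro $ and 12 months for both parties.

```python
# Illustrative only: assumed cardinal utilities consistent with the ordinal
# ranks in Table 9.1 (importance rank 1 -> weight 4, ..., rank 4 -> weight 1).
EE_W = {"Currency": 4, "Price": 3, "Delivery": 2, "Dispute": 1}  # East Europa
HC_W = {"Delivery": 4, "Price": 3, "Dispute": 2, "Currency": 1}  # Healthcare, Inc.

# Within-issue option scores (best = 3 ... worst = 0) for the two traded issues.
EE_OPT = {"Currency": {"Hungarian": 3, "Euro $": 2},
          "Delivery": {"12 months": 1, "14 months": 0}}
HC_OPT = {"Currency": {"Hungarian": 0, "Euro $": 1},
          "Delivery": {"12 months": 2, "14 months": 3}}

def value(weights, opts, pkg):
    return sum(weights[i] * opts[i][o] for i, o in pkg.items())

compromise = {"Currency": "Euro $", "Delivery": "12 months"}
trade = {"Currency": "Hungarian", "Delivery": "14 months"}
for side, w, o in (("EE", EE_W, EE_OPT), ("HC", HC_W, HC_OPT)):
    print(side, value(w, o, compromise), "->", value(w, o, trade))
# EE 10 -> 12 and HC 9 -> 12: both sides gain from the integrative trade.
```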
In each experimental condition, subjects were randomly assigned to the roles of buyer and seller. In the face-to-face condition, subjects met in pairs in supervised classrooms, were given the negotiation simulation to study, and were then permitted to negotiate freely with each other for as long as it took them to reach an agreement. The pairs preparing and/or negotiating over the computer network met in supervised computer laboratories and were given both the scenario and the appropriate instructions on the use of the NA or e-mail systems. Those negotiating over the network were not allowed to speak with each other face to face. Those in the NAP condition first prepared for the negotiation without knowing who their partner would be; after their preparation was complete, they were introduced to their partner for the face-to-face encounter. No time restrictions were placed on subjects in any experimental condition with respect to either preparation or negotiation.

To give the subjects a tangible incentive to bargain toward the goals stated in their respective role instructions, subjects were further informed that nondivisible individual prizes worth at least $100 would be awarded to the buyer and seller in each experimental condition who best fulfilled their respective management's priorities.14 After reviewing and studying the case (and, for those in the NAP and NAA groups, preparing to negotiate using Stages I, II, and III of NA), but before
actually negotiating, the subjects filled out a first questionnaire indicating their perceptions of the upcoming process and their "realistic" expectations about what a final agreement would look like. The subjects filled out a second questionnaire when they concluded the negotiation, indicating the terms of their final agreement, their perceptions regarding the negotiation process, their affirmation that they bargained in good faith and did not collude to split the prize, and, for those in the NAP, NAA, or EML conditions, their perceptions regarding the system. The questionnaires used in the study were designed not only to provide us data for testing the formal hypotheses, but also to provide other information to help us characterize the subjects' overall experiences under the different negotiation conditions.15

Results

Prenegotiation Results. As expected, there were few significant initial differences between the groups in the four experimental conditions, except for a slightly higher average age in the FF condition (Table 9.2); the FF group consisted of entering MBA students in an earlier year. All groups reported occasional participation in actual negotiations over the past year, and two-thirds of the subjects in each condition were male, reflecting the gender composition of MBA programs. A more important difference between the groups is that subjects using NA spent more than twice as much time preparing for the negotiation as subjects in the FF and EML groups. This difference is attributable to the fact that groups using NA had to master the operation of the system prior to negotiating. This required them to read through a 12-page manual and to go through the system's prenegotiation Stages I, II, and III outlined in the previous section. Further, those in the NAP condition also had to print out the graphs of their preferences to take with them to the subsequent face-to-face negotiation. While this difference in preparation time could arguably explain some of our results, it is important to remember that the increased preparation time is a direct consequence of a variable being manipulated in this study, namely, the use of the NA system to prepare for the negotiation.

Prenegotiation Aspirations. Subjects using NA had somewhat more integrative "a priori realistic expectations" regarding their priorities. For example, a higher proportion of subjects expected Hungarian currency and 14-month delivery than subjects in the EML and FF conditions. These differences between the groups are intriguing and, we believe, reflect the subjects' use of NA's preparation stages to better understand and internalize their own preferences. The buyers in the NA groups had a higher expectation of Hungarian currency at settlement (19 out of 62 buyers, versus 13 out of 64 buyers in the FF and EML conditions combined), and sellers had a higher expectation of 14-month delivery at settlement (30 out of 63 sellers, versus 19 out of 62 in the FF and EML conditions combined).16 These results suggest that people who understand their bargaining positions more clearly may be more likely to form expectations that they can achieve their higher priorities and positions. In the concluding part of this section, we explore the extent to which these aspirations influence the outcomes observed in the negotiations.

Postnegotiation Results. There were several significant differences in the outcomes achieved by the four groups. Most importantly, parties using NA for preparation (i.e., those in the NAP condition) executed a higher number of integrative trades than those who did not use NA, providing strong support for H1.
For example, Table 9.3 highlights the most frequent settlements for the issues "Currency" and "Delivery." Recall that our scenario embedded an integrative tradeoff between these two issues that called for the seller to achieve a 14-month delivery term and the buyer to achieve Hungarian currency. Twelve of the 34 pairs in NAP achieved this integrative settlement, suggesting capitulation by both sides on lower rated issues in order to obtain the best options on their highest rated issue. Only 4 of 34 pairs in FF, and 4 of 33 pairs in EML, made this trade. To assess the statistical validity of these differences in outcomes, we conducted a Pearson χ2 test of independence. That is, we tested the null hypothesis that the outcomes reached are independent of the negotiation condition.17 This hypothesis is rejected at a significance level less than 0.023 (χ2(3) = 9.58). (For conducting this test, we combined the results of the FF and EML conditions because outcomes under these two conditions are not significantly different from each other (χ2(3) = 1.54).) A sketch of this computation appears below.
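The reported test can be reproduced approximately from the Table 9.3 counts. The grouping below folds the no-agreement cells into "other" per note 17; the authors' exact collapsing may differ slightly, which presumably accounts for the small gap between our statistic and the reported 9.58.

```python
# Approximate reproduction of the Pearson chi-square test from Table 9.3.
# Rows: FF+EML combined, NAP; columns: Hung-14m, Hung-12m, Euro-12m, other
# (no-agreement folded into "other" as in note 17).
from scipy.stats import chi2_contingency

observed = [[8, 18, 24, 17],
            [12, 10, 5, 7]]
chi2, p, dof, _ = chi2_contingency(observed)
print(round(chi2, 2), dof, round(p, 3))  # approx. 9.98, 3, 0.019 with this grouping
```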
TABLE 9.2 Mean Perceptions of the Negotiators (Before the Negotiation)

                                                        Overall    FF     EML    NAP    NAA
Prior negotiation experience (scaled from never to
  frequently; 3 represents 2-3 negotiations per year)     2.94    2.91   2.79   3.09   2.96
Age^a                                                    27.60   29.34  27.00  26.80  27.50
Percentage males                                         68.00   68.00  70.00  71.00  65.00
Minutes spent preparing for exercise^a                   23.70   13.68  13.46  36.00  31.79
Expect process to be enjoyable (1 = strongly
  disagree, 5 = strongly agree)                           3.85    3.86   3.76   3.87   3.92

Realistic Expected Outcomes, in percent (arranged in decreasing order of importance within each issue for East Europa)

                     FF (n = 61-62)  EML (n = 63-64)  NAP (n = 60)  NAA (n = 65-66)
Price
  180,000                 1.6             7.9             11.7           3.0
  195,000                51.6            55.6             45.0          52.1
  210,000                43.5            33.3             43.3          44.9
  225,000                 3.2             3.2              0.0           0.0
Dispute settlement
  Hungary                 1.6             3.1              0.0           3.1
  ICC                    59.0            42.2             46.7          44.6
  London                 31.1            45.3             45.0          47.7
  U.S. court              8.3             9.4              8.3           4.6
Currency
  Hungarian              14.5            15.6             20.0          20.0
  Euro $                 58.1            43.7             46.7          55.4
  U.S. $                  8.1            18.8             10.0           9.2
  Other hard             19.4            21.9             23.3          15.4
Delivery
  6 months                3.2             4.7              0.0           1.5
  8 months               25.8            21.9             20.0          18.5
  12 months              56.5            56.2             56.7          52.3
  14 months              14.5            17.2             23.3          27.7

Note: Realistic expected outcomes include the perceptions of both buyers and sellers and may be informally interpreted as a forecast of settlements based solely on ex ante perceptions, not modified by the negotiation process.
^a Significant difference between groups at a level of 0.001 or better. All other differences are insignificant at the 0.05 level.
The EML outcomes are inferior to the outcomes from the NAA condition at a significance level less than 0.009 (χ2(3) = 11.57), providing strong support for H2. An interesting outcome in the EML condition is that three pairs did not reach any agreement, when in fact the scenario included only options that provided gains from trade for both parties. This, combined with the inability of two pairs in the NAA condition to reach an agreement, suggests that computer-based communication leads to very poor outcomes for some parties who are not able to effectively handle an impersonal mode of communication and who behave in a more noncooperative manner (Wichman 1970; Arunachalam and Dilla 1995).
TABLE 9.3 Summary of Settlement Points

                   FF    EML    NAP    NAA        Total
Hung-14 months      4      4     12    15 (17)    35 (37)
Hung-12 months      9      9     10     3 (4)     31 (32)
Euro-12 months     15      9      5     6 (5)     35 (34)
Other               6      8      7     8 (6)     29 (27)
No agreement        0      3      0     2          5 (5)
Total              34     33     34    34         135

Note: Numbers in parentheses under the NAA condition are outcomes after the postsettlement option was initiated.
Thus, the use of systems such as NA may in fact make disagreement outcomes more likely in negotiation contexts with little integrative potential. This raises interesting research issues for further evaluation of NSS.

Although outcomes in the NAA condition (after postsettlement) appear more integrative than outcomes in the NAP condition (17 versus 12 out of 34 pairs settling on Hungarian-14 months), the overall differences in outcomes are not statistically significant given our small samples. However, by partitioning the chi-square value to test for independence between components (Agresti 1990, p. 50), there is a marginally significant difference (p < 0.065) between NAP and NAA with regard to achieving Hungarian-14 month versus Hungarian-12 month outcomes (χ2(1) = 3.41); a sketch of this computation appears below. It is also important to note that the number of incremental dyads (5) that reached integrative trades in NAA exceeds the entire set of dyads that reached integrative agreements in the FF or EML conditions (4 each).

To analyze the outcomes between the NAP and NAA conditions more fully, we examine the preference structure of the two parties summarized in Table 9.1, and the distribution of outcomes on each option of each issue, as summarized in Table 9.4. From Table 9.1, we see that integrative solutions are characterized by East Europa giving up on delivery (the third most important issue for EE and the most important issue for HC) to gain on currency (the most important issue for EE, but the least important issue for HC). In addition to this "major trade," there is a "minor trade" that enhances the efficiency of outcomes. The parties could trade on dispute resolution (least important for EE, but third most important for HC), where EE can give up on dispute options in exchange for concessions from HC on other issues (e.g., price). This suggests that in efficient settlements we should see dispute outcomes more favorable to HC (more London and U.S. court settlements). The outcomes in the NAA and NAP conditions seem to support this in a directional sense. Thus, NA's structured communication process and postsettlement support provide only secondary benefits compared with the value added by NA's preparation function. However, as relatively minor trades become more important to overall efficiency (e.g., when the number of issues increases), these secondary benefits could become very significant. In summary, we found only directional support for H3, a surprise given our expectations for the impact of electronic bargaining tables.
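The partitioned component can be checked directly from the Table 9.3 counts; in this sketch Yates' correction is disabled so the statistic matches the uncorrected Pearson value the authors report.

```python
# 2x2 partition: NAP versus NAA (post-settlement) on Hungarian-14-month
# versus Hungarian-12-month settlements, from Table 9.3.
from scipy.stats import chi2_contingency

observed = [[12, 10],   # NAP
            [17, 4]]    # NAA, counts after the postsettlement option
chi2, p, dof, _ = chi2_contingency(observed, correction=False)
print(round(chi2, 2), round(p, 3))  # approx. 3.42, 0.064 (paper: chi2(1) = 3.41)
```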
TABLE 9.4 Percentage Distribution of Agreements on Each Level of Each Issue

                    FF (n = 34)   EML (n = 30)   NAP (n = 34)   NAA (n = 32)
Price
  180,000                2.9           10.0            8.8            6.3
  195,000               47.1           43.3           61.8           62.5
  210,000               47.7           43.3           29.4           31.2
  225,000                2.9            3.4            0.0            0.0
Dispute settlement
  Hungary                5.9            3.3            2.9            3.1
  ICC                   35.2           40.0           38.3           28.1
  London                47.1           40.0           38.2           50.0
  U.S. court            11.8           16.7           20.6           18.8
Currency
  Hungarian             38.2           43.3           61.8           65.5
  Euro $                58.8           50.0           32.4           31.3
  U.S. $                 3.0            6.7            5.8            3.2
  Other hard             0.0            0.0            0.0            0.0
Delivery
  6 months               0.0            0.0            0.0            0.0
  8 months               0.0            6.7            2.9            9.4
  12 months             73.5           63.3           47.1           31.3
  14 months             26.5           30.0           52.9           59.3

Note: Options are arranged in decreasing order of importance for East Europa.
Hypothesis 4 was not supported. Eighteen of the 32 pairs reaching an agreement in the NAA condition settled on a final agreement without utilizing the "postsettlement" feature (i.e., their agreement was already Pareto-efficient given their inputs). The remaining 14 pairs accessed the postsettlement feature and examined packages that were Pareto-superior to their initial settlement, based on their prenegotiation inputs. Of these 14 pairs, only six chose to reinitiate the negotiation, and five of these pairs reached a settlement different from the one they had initially agreed to. Of these five pairs, three moved from their initial settlement to a Pareto-superior one that incorporated the tradeoff between Hungarian currency and a 14-month delivery term. Thus, the postsettlement feature did prompt some parties to examine and capture additional joint gains from the negotiation, but more than half of those who accessed the feature did not utilize it. Subjects' responses to open-ended questions and debriefings suggest several possible explanations for this result. First, some subjects reported that reopening the negotiation after reaching an agreement revived uncomfortable, distributive aspects of the bargaining that they preferred not to reexperience. Second, some subjects experienced subtle changes in preferences as a result of interactions that took place during the negotiation; their postsettlement preferences thus diverged both from those stated in the scenario and from their own prenegotiation scoring inputs, rendering the suggested postsettlement options unattractive. Finally, in combination with the factors listed above, some subjects simply found the postsettlement feature awkward to use as designed. These results suggest that we should rethink the design of the postsettlement feature for NA.

Aspiration Levels and Postnegotiation Outcomes. To explore how NA influences outcomes, it is instructive to first select for analysis dyads in which either the buyer aspired to Hungarian currency or the seller aspired to 14-month delivery. Of 30 such dyads (out of a total of 67) in the combined FF and EML conditions, only three achieved the integrative trade with the Hungarian-14 month outcome, while 13 achieved the next best outcome, namely, Hungarian-12 months. In contrast, in the NAP condition there were 19 dyads (out of a total of 34) with at least one party having high aspirations; nine of them achieved the integrative trade, and a further five dyads achieved the Hungarian-12 month outcome. These results, in conjunction with the overall outcomes summarized in Table 9.3, suggest that when the parties have high aspirations, integrative trades are more likely to occur, and that this likelihood is greatly enhanced by the use of the NA system. Earlier, we noted that the preparation function of NA also helps establish higher aspirations prior to negotiation.
TABLE 9.5 Mean Perceptions of the Negotiation Processes (After the Negotiation)

Perception                                                   Overall    FF     EML    NAP    NAA
The negotiation process was realistic                          3.30    3.35   3.33   3.11   3.39
The negotiation process could be termed "friendly"^a           4.08    4.18   3.78   4.34   4.00
I communicated with the other party honestly^a                 3.84    3.71   3.52   4.09   4.03
I drove a hard bargain^b                                       3.20    3.00   3.31   3.14   3.37
I was in control during the negotiations^b                     3.44    3.21   3.58   3.45   3.54
I could communicate whatever I wanted to the other party       3.33    3.25   3.37   3.49   3.24
By the end of the negotiations, I understood the other
  party well enough to predict their offer                     3.54    3.55   3.68   3.53   3.39
I made concessions mostly on issues that were not so
  important to me                                              3.67    3.57   3.55   3.88   3.67
The other party made concessions on issues that were
  important to me                                              4.03    4.07   3.84   4.18   4.00
I felt rushed into reaching an agreement^c                     2.40    2.13   2.70   2.30   2.47
I was satisfied with the agreement reached^b                   3.87    3.93   3.63   3.94   3.97
The negotiations went according to my expectations^c           3.38    3.47   3.08   3.43   3.54
The settlement was not as favorable to my interests as
  I had expected^b                                             2.71    2.85   2.92   2.57   2.50

Notes: 1, strongly disagree; 5, strongly agree.
^a Significant differences between the groups at the 0.01 level or better.
^b Significant difference between the groups at the 0.10 level or better.
^c Significant difference between the groups at the 0.05 level or better.
Table 9.5 summarizes some of the postexperiment perceptions of the negotiators in the four groups. Those using NA appear to have communicated more honestly, and to have felt that the settlement was more favorable to their interests, than those in the other groups.
DISCUSSION AND CONCLUSIONS

The experimental test provides support for the hypothesis that the use of the NA system developed from our research is likely to help negotiators achieve Pareto-superior outcomes in structured multi-issue negotiations. The fact that negotiators using NA made more integrative trades than those who negotiated face to face or used an e-mail system suggests that the NA system played a key role in helping parties overcome some of the barriers to integrative bargaining that afflict conventional negotiations. The equivalence in outcomes (in terms of integrative trades) between subjects using the e-mail system and those negotiating face to face suggests that the mere use of computer technology will not improve negotiation outcomes. The key to achieving integrative trades is to set and maintain high aspirations in conjunction with a problem-solving orientation (Pruitt and Lewis 1977, p. 181). High expectations provide the motivation to keep looking for integrative trades without settling on compromise solutions, while the problem-solving orientation provides the approach for identifying alternative proposals to offer to the other party that still maintain high potential benefit for oneself. Thus, the value of NA derives from helping negotiators prepare for the negotiation, and this value is preserved and enhanced if computer communication is structured to make the preparation inputs salient during the negotiation.
Our results demonstrate that NA serves as a useful operational mechanism for implementing negotiation analysis to facilitate integrative negotiations. These results, however, do not suggest that NA is a uniquely superior computer system for preparing for or conducting negotiations; other systems that incorporate utility assessment procedures and/or structure the communications between the parties might do as well as NA. Based on our results, we feel comfortable recommending that NA be used for preparation, preferably by all parties to a negotiation. However, we question our initial vision that refined versions of our electronic bargaining table could be deployed across computer networks.

First, the subjects in our test began with a fully specified set of issues and options. In fact, in conjoint analysis, a basic assumption is that all options of every issue are in the acceptable range (Srinivasan and Wyner 1989). Most real-world negotiations are not so well structured. To remedy this shortcoming, the system would have to be expanded to include an agenda-setting stage prior to the current "Issues" stage. This raises additional concerns: an agenda-setting stage could introduce strategic behavior on the part of the negotiators that might subvert the use of our formal model. This requires further investigation.

A second, more general limitation of the tested version of NA involves its utility assessment procedures, and thus applies both to the preparation feature and to the electronic bargaining table. The methods of multiattribute utility analysis do not easily model the various interactions among issues that sometimes exist in complex bargaining situations. For example, some interactions significantly alter the value of an issue under special, specified assumptions, thus requiring the system to present models that list the issue as having a very high value under one set of assumptions and a much lower value under others. Such problems are embedded in the use of multiattribute utility analysis and are subject to solution as negotiation analysis develops improved models for representing preference interactions.
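As an illustration of what interaction modeling would require (our sketch, not a feature of the tested system), the additive regression model of the Operation section could be augmented with pairwise interaction terms:

$$R(S) = a + \sum_{i=1}^{I}\sum_{j=1}^{J_i} u_{ij}\,X_{ij} + \sum_{i<k}\sum_{j,l} w_{ij,kl}\,X_{ij}X_{kl} + e$$

where $w_{ij,kl}$ captures the extra (positive or negative) value of the $j$th option of issue $i$ and the $l$th option of issue $k$ appearing together in a package. Estimating these terms demands many more rated packages than the main-effects model, since a strength-2 orthogonal array identifies only the $u_{ij}$; this is one concrete reason such interactions are burdensome to model in practice.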
A third limitation of the system, discussed with respect to H4, involves the postsettlement stage. As now configured, this stage may leave the parties vulnerable to pure distributional bargaining between Pareto-superior packages, especially if there are only a few such packages. This could injure a relationship that, prior to the postsettlement stage, was in good working order. One solution to this problem is simply to ask the parties, prior to the beginning of the negotiation, to agree to an objective criterion for selecting an optimal postsettlement; the negotiators might be asked to choose from a set of criteria such as those suggested by Keeney and Raiffa (1991). The efficacy of alternative methods of postsettlement support has to be evaluated in future research, especially in view of the possibility of users gaming the system. An interesting variation on our experiment to test H4 would be to ask subjects who negotiate entirely on a face-to-face basis to use NA after they reach an agreement, to see whether the postsettlement feature improves outcomes.

These limitations of NA are significant. For the moment, however, the value of the system has been demonstrated in our experimental setting, and in our classrooms, where we use it to teach students in a tangible way the structure of integrative tradeoffs and the value of analytical approaches to facilitating negotiations. The system has been used successfully for several years at a few leading MBA programs to demonstrate the principles of utility assessment, integrative tradeoffs, Pareto-optimality, and other concepts of negotiation analysis.

NA also presents new research opportunities. For example, it might be used to help investigate paths toward integrative settlements. Mumpower (1991) has provided some initial insights into preference structures that facilitate "horse-trading." Because the system keeps track of the history of offers, counteroffers, and messages, it allows researchers to investigate the patterns that lead to integrative bargaining solutions. Another opportunity for future research is the comparative testing of the NA process against competing processes, such as those used in ICANS, or even simple training programs focusing on integrative bargaining, to isolate the relative merits of each of these approaches in situations where all of them can be deployed.18
NOTES

1. We use the terms negotiation and bargaining interchangeably.
2. An efficient agreement may be conceptualized in terms of the framework of cooperative game theory, as proposed by Nash (1950). The Nash model reckons payoffs from potential settlements of a negotiation in terms of the utilities of each potential settlement to each party. If mixed (random) strategies are allowed, then the Nash model proposes a normative settlement, called the Nash bargaining solution, that satisfies several appealing criteria, including Pareto-efficiency. However, the Nash model falls short as a description of real negotiations. In particular, the use of mixed strategies is rarely observed in negotiations, possibly because the performance of a real-world negotiator is evaluated in terms of the utility associated with the actual settlement realized, rather than on the strategic desirability of a mixed strategy (Luce and Raiffa 1957). Real-world negotiations are often conducted using pure strategies, i.e., in issue space rather than in utility space. If the negotiation involves only one issue, then the settlement reached using pure strategies will generally be Pareto-efficient, but this need not be the case when the negotiation involves multiple issues.
3. DECISION CONFERENCING is a prototype GDSS that can be applied in a negotiation context (Rao and Jarvenpaa 1991). The negotiating parties first separately develop a decision model with the help of a third-party facilitator using decision-analytic techniques. After this, however, the parties communicate directly in identifying a mutually preferred settlement, relying on "democratic protocols" and using various techniques such as decision trees, expected utility maximization, and Pareto algorithms.
4. Interested readers may obtain a more detailed illustration of the operation of the system by writing to the authors.
5. This assures that the worst outcome in the negotiation (equivalent to the Best Alternative to a Negotiated Agreement (BATNA)) has a value equal to 0, and the best outcome has a value equal to 100. Note also that the "constant sum" scale used here has interval-level properties.
6. An orthogonal array of packages yields several additional benefits. First, orthogonality minimizes the number of packages to be evaluated by users, while still giving a good picture of the user's preferences. For example, if there are four issues with four options each, there are a total of 256 possible settlement packages; an orthogonal design here could consist of as few as 16 packages. Second, it provides an "additive" utility model that enables the system to derive the imputed value of any package discussed during the negotiation, including, in particular, those not presented in the sample set of packages rated by the users. The conjoint analysis feature is a significant departure from the multiattribute preference elicitation procedures (where used) in previous NSS. ICANS and MEDIATOR use formal mechanisms for preference elicitation; however, the packages presented by these systems are not orthogonal, and hence the resulting utility measurements do not necessarily provide a reliable additive model of preferences. If the set of packages departs considerably from orthogonality, the parameters of the estimated additive utility functions can be unstable, and not valuable for the purposes of identifying efficient settlements.
7. The Prepare stage is technically referred to as the self-explicated, or "compositional," method of preference elicitation (Srinivasan and Wyner 1989; Green and Krieger 1993). In contrast, conjoint analysis is a "decompositional" technique in which overall preference scores are decomposed into the utility values attached to each issue and to options within issues. In early trials of the system, we had only the Ratings stage, in which the profiles were presented in random order. However, the respondents found this task to be very difficult because of their inability to find appropriate anchors to facilitate the rating process. It is in view of this that we added the Prepare stage as a way to facilitate the Ratings stage.
8. In electronic markets, intermediaries are emerging to ensure the security and integrity of the system, and to enforce all the rules agreed to by parties.
9. The Pareto-superior packages displayed to the users are automatically scored according to their own preference functions. However, the revelation of these packages provides only ordinal information about the other party's preferences, i.e., it reveals whether a settlement is equal or superior to the agreed settlement without disclosing the degree of superiority. An alternative display format would indicate only that Pareto-superior packages exist, without disclosing the packages themselves; this is the approach adopted in the design of the ONDINE II system (Nyhart and Samarasan 1989). Additional criteria such as the "equitability" of each superior package may be used to trim the number of packages displayed.
10. This information is stored only in the local computer of the user. The users may choose not to record any of the exchanges by selecting the appropriate option in the "Config" menu.
11. Only ordinal preferences were induced; the subjects internalized these preferences in their own idiosyncratic manner. This approach enabled us to minimize preference variability between subjects, while at the same time allowing subjects in the computer conditions to use the preference assessment procedure to better understand their preferences. In real negotiations, subjects are not as clear about their priorities, and may benefit more from using the NA system to understand their preferences. Thus, the experimental procedure is likely to understate any realized benefits of the system.
12. A Windows-based e-mail system was designed specifically for our experiment. In addition to allowing parties to send messages of unlimited size to each other, the system allowed the parties to conveniently review past messages sent and received. Because e-mail systems have become commonplace, we do not describe our system in any detail here, in order to conserve space.
13. The experimental procedures involving the three computer conditions took place in August 1995, except for four dyads that were completed in September 1994. Subjects in the three computer conditions were randomly assigned to the treatments. The face-to-face negotiations took place in September 1993; at that time, the subjects were randomly assigned to either the FF condition or the NAP condition. The results of the NAP conditions are similar to those reported here, and were included in earlier versions of this paper; to conserve space, they are not reported here. Because the groups in the face-to-face condition negotiated at a different time than the groups in the computer conditions (but at the same school, and under similar conditions), there is a possibility that the experimental results are a function of pretest differences in the subjects. However, the demographic profiles of our subjects were similar across all times and conditions (see Table 9.2). Further, the agreements reached in the face-to-face condition are consistent with literally hundreds of classroom simulations over four years using this same scenario for instructional purposes.
14. To minimize the chances of collusion in the face of this monetary incentive, we emphasized that the subjects would be required to sign a statement after completing the negotiation affirming that they did not collude to obtain any part of the prize. In the context of the Wharton School's Code of Academic Integrity, we expected this signature to be a significant deterrent to bad-faith conduct. In addition, as noted above, all negotiations took place in facilities where subjects were under observation throughout.
15. A copy of the experimental materials may be obtained by writing to the authors. In the interest of space, we do not report the analyses we have done on the postnegotiation questionnaires.
16. Note that not all respondents provided answers to this question. This accounts for the variations in sample sizes used for these statistics.
17. In conducting the χ2 tests, we collapse the no-agreement outcomes into the "other" category, except when directly comparing the outcomes of EML and NAA. This does not materially affect the results reported.
18. This research was funded in part by the Reginald H. Jones Center for Management Policy, Strategy, and Organization at the Wharton School; the SEI Center for Advanced Studies in Management at the Wharton School; the University of Pennsylvania Research Foundation; the Center for Dispute Resolution, Northwestern University; and The Institute for the Study of Business Markets, Penn State University. The authors thank Professor Paul Green for making available computer software used in early versions of the system reported in this paper, Animesh Karna for providing programming support, and Katrin Starke for help in conducting the experiments. The authors also thank Professors Wayne DeSarbo, Srinath Gopalakrishna, Gary Lilien, Leo Smyth, Ernest Thiessen, and the editor and reviewers for their insightful comments on an earlier draft of this paper.
REFERENCES

Agresti, A., Categorical Data Analysis, John Wiley & Sons, New York, 1990.
Arunachalam, V. and Dilla, W. N., Judgment accuracy and outcomes in negotiations: a causal modeling analysis of decision-aiding effects, Organizational Behavior and Human Decision Processes, 61, 3, 289–304, 1995.
Cohen, H., You Can Negotiate Anything, Lyle Stuart, Secaucus, NJ, 1980.
Dwyer, R. F. and Walker, O. C., Bargaining in an asymmetrical power structure, Journal of Marketing, 45, 104–115, 1981.
Eliashberg, J., Gauvin, S., Lilien, G. L., and Rangaswamy, A., An experimental study of alternative preparation aids for international negotiations, Group Decision and Negotiation, 243–267, 1992.
Fang, L., Hipel, K. W., and Kilgour, D. M., Interactive Decision Making, John Wiley & Sons, New York, 1993.
Green, P. E. and Krieger, A. M., Conjoint analysis with product-positioning applications, In Handbooks in Operations Research and Management Science, Eliashberg, J. and Lilien, G. L., Eds., Vol. 5, Elsevier Science Publishers, Amsterdam, 467–515, 1993.
Green, P. E. and Srinivasan, V., Conjoint analysis in consumer research: issues and outlook, Journal of Consumer Research, 5, 103–123, 1978.
Gupta, S., Modeling integrative multiple issue bargaining, Management Science, 35, 7, 788–806, 1989.
Hiltz, S. R., Johnson, K., and Turoff, M., Experiments in group decision making, 1: communication process and outcome in face-to-face versus computerized conferences, Human Communication Research, 13, 2, 225–252, 1986.
Jarke, M., Jelassi, M. T., and Shakun, M. F., MEDIATOR: towards a negotiation support system, European Journal of Operational Research, 313–334, 1987.
Jelassi, M. T. and Foroughi, A., Negotiation support systems: an overview of design issues and existing software, Decision Support Systems, 5, 167–181, 1989.
Keeney, R. L. and Raiffa, H., Structuring and analyzing values for multiple-issue negotiations, In Negotiation Analysis, Young, H. P., Ed., University of Michigan Press, Ann Arbor, MI, 1991.
Kersten, G., Michalowski, W., Szpakowicz, S., and Koperczak, Z., Restructurable representations of negotiation, Management Science, 37, 10, 1269–1290, 1991.
Konstadt, P., The cotton club, CIO, September, 26–32, 1991.
Lax, D. A. and Sebenius, J. K., The Manager as Negotiator: Bargaining for Cooperation and Competitive Gain, Free Press, New York, 1986.
Lim, L. and Benbasat, I., A theoretical perspective of negotiation support systems, Journal of Management Information Systems, 9, 3, 27–44, 1992–1993.
Luce, R. D. and Raiffa, H., Games and Decisions, Wiley, New York, 1957.
DK3654—CHAPTER 9—4/10/2006—17:41—SRIDHAR—XML MODEL C – pp. 665–769
Negotiating Technology Issues
757
Mumpower, J. L., The judgment policies of negotiators and the structure of negotiation problems, Management Science, 37, 10, 1304–1324, 1991. Nash, J. F., The bargaining problem, Econometrica, 18, 155–162, 1950. Nagel, S. S. and Mills, M. K., Multi-Criteria Methods for Alternative Dispute Resolution: With Microcomputer Software Applications, Quorum Books, New York, 1990. Neale, M. A. and Bazerman, M. H., Cognition and Rationality in Negotiation, The Free Press, New York, 1991. Nunamaker, J. F. Jr., Dennis, A. R., Valacich, J. S., and Bogel,D. R., Information technology for negotiating groups: generating options for mutual gain, Management Science, 37, 10, 1325–1345, 1991. Nyhart, J. D. and Samarasan, D. K., The elements of negotiation management: using computers to help resolve conflict, Negotiation Journal, January, 42–62, 1989. Pruitt, D. G., Negotiation Behavior, Academic Press, New York, 1981. Pruitt, D. G. and Lewis, S. A., The psychology of integrative bargaining, In Negotiations: Social-Psychological Perspectives, Druckman D., Ed., Sage Publications, Beverly Hills, CA, 1977. Raiffa, H., The Art and Science of Negotiation, Belknap Press of Harvard University, Cambridge, MA, 1982. Raiffa, H., Post-settlement settlement, Negotiation Journal, 1, 9–12, 1985. Rao, V. S. and Jarvenpaa, S. L., Computer support of groups: theory-based models for GDSS research, Management Science, 37, 1347–1362, 1991. Rangaswamy, A., Eliashberg, J., Burke, R., and Wind, J., Developing marketing expert systems: an applicatin to international negotiations, Journal of Marketing, 52, 4, 24–49, 1989. Sevenius, J. K., Negotiation analysis: a characterization and review, Management Science, 38, 18–38, 1992. Siegel, J., Dubrovsky, V., Kiesler, S., and MCGuire, T. W., Group processes in computer-mediated communication, Organizational Behavior and Human Decision Processes, 37, 157–187, 1986. Srinivasan, V., and Wyner, G. A., CASEMAP: computer-assisted self-explication of multiattributed preference, In New Product Development and Testing, Henry, W., Menasco, M., and Takada H., Eds., Lexington Books, 1989. Sycara, K. P., Negotiation planning: an AI approach, European Journal of Operations Research, 46, pp. 216– 234, 1990. Sycara, K. P., Problem restructuring in negotiation, Management Science, 37, pp. 1248–1268, 1991. Thiessen, E. M. and Loucks, D. P., Computer Assisted Negotiation of Multiobjective Water Resources Conflicts, Working Paper, School of Civial and Environmental Engineering, Cornell University, Ithaca, NY, 1992. Valley, K. L., Moag, J., and Bazerman, M. H., Away With the Curse: Effects of Communication on the Efficiency and Distribution of Outcomes, Working Paper, Harvard Business School, Cambridge, MA, 1995. Walton, R. E. and McKersie, R. B., A Behavioral Theory of Labor Negotiations: An Analysis of a Social Interaction System, McGraw-Hill, New York, 1965. Wichman, H., Effects of isolaton and communication on cooperation in a two-person game, Journal of Personality and Social Psychology, 16, pp. 114–120, 1970. Young, H. P., Negotiation analysis, In Negotiation Analysis, Young H.P., Ed., University of Michigan Press, Ann Arbor, MI, 1991.
A CONCEPTUAL FRAMEWORK ON THE ADOPTION OF NEGOTIATION SUPPORT SYSTEMS*

* John Lim, School of Computing, National University of Singapore, Kent Ridge, Singapore 119260.

ABSTRACT
An exploratory study was conducted to identify factors affecting the intention of managers and executives to adopt negotiation support systems (NSS). Drawing from past literature, the Theory of Planned Behavior and the Technology Acceptance Model provided the basis for analyzing the results. Overall, subjective norm and perceived behavioral control emerged as the strongest determinants of intention to adopt NSS. Further probing of subjective norm revealed that organizational culture and industry characteristics play significant roles. A new conceptual framework is proposed that should be of both theoretical and practical importance.
INTRODUCTION
Negotiations have become increasingly important and inevitable in today's business. As computer and communication technologies become more advanced and more easily available, using computers to aid negotiations has become viable, especially as the issues negotiated grow more complex. This has led to the emergence of negotiation support systems (NSS), a specialized class of group support systems designed to help negotiators achieve optimal settlements. A number of commercial NSS packages are available. However, practical usage of NSS in organizations has been minimal. This phenomenon motivates the present study, which seeks to identify the factors affecting business managers' and executives' intention to adopt NSS. In information systems research on user behavior, intention models from social psychology have frequently served as the theoretical foundation for research on the determinants of user behavior [1,2]. Among these theories are the Theory of Planned Behavior (TPB) [3,4] and the Technology Acceptance Model (TAM) [5,6]. As TPB and TAM are both viable and popularly employed explanatory mechanisms for IT adoption, and as their explanatory power varies with the technology studied [7,8], the current paper uses them as the theoretical basis for the context of NSS. The next section discusses the technology and reviews the two adoption models. The third section describes the analysis of the data collected, while the fourth section discusses the implications of the findings. In the last section, a new conceptual framework is presented.
NEGOTIATION SUPPORT SYSTEMS AND ADOPTION MODELS
Bui et al. [9] described negotiations as complex, ill-structured, and evolving tasks requiring sophisticated decision support. However, limited information-processing capacity and capability, cognitive biases, and socio-emotional problems often hinder the achievement of optimal negotiations [10–13]. As a result, much interest has been generated in providing computer support for negotiations. This has led to NSS, a special class of group support systems designed to support bargaining, consensus seeking, and conflict resolution [9]. Conceptually, NSS consist of decision support systems (DSS) that are networked [14]. The DSS component helps to refine negotiators' objectives and, at the same time, provides a tactful forum for expressing them [15]. It supports the analysis of subjective preferences and/or external objective data. NSS also provide modeling techniques (based on regression analysis, multi-criteria decision making, and game theory) to generate integrative solutions or viable strategies [16–18]. This information-processing capability and capacity, together with the identification of potential settlements, facilitates easy interpretation and objective evaluation of issues and outcomes. Much NSS research has focused on the design and implementation of NSS, as well as on the building blocks of NSS (i.e., the DSS and communication components) [15–17,19–22]. Another key area addresses the modeling and representation of negotiation problems [23–26]. Some of these efforts have involved inter-disciplinary research and collaboration, ranging from management science to cognitive/behavioral sciences to applied artificial intelligence and neural networks [24,27–29]. They cover a variety of domains, including static buyer/seller negotiations, dynamic scenario management [30], and electronic mobile marketplaces [31]. Empirical research on NSS has shown that computer-aided negotiations generally yield higher joint outcomes and greater satisfaction; in short, NSS help to improve the negotiation process as well as the negotiation outcome [13,32,33]. Despite these positive results, widespread adoption of NSS has not been observed. This paper seeks to understand which factors dominate in affecting people's intention to adopt NSS by employing two established adoption models.
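To make the settlement-identification idea concrete, here is a minimal sketch, assuming invented issues, options, and additive utilities (none of which come from the paper), of the kind of integrative-solution search an NSS's DSS component can perform: enumerate candidate multi-issue packages, score them for each party, and keep the Pareto-efficient ones.

    # Illustrative only: a toy integrative-solution search. All issues,
    # options, and utility values below are invented for this sketch.
    from itertools import product

    issues = {"price": [100, 110, 120], "delivery_days": [7, 14, 30]}
    buyer_u = {"price": {100: 10, 110: 6, 120: 2},
               "delivery_days": {7: 8, 14: 5, 30: 1}}
    seller_u = {"price": {100: 2, 110: 6, 120: 10},
                "delivery_days": {7: 1, 14: 5, 30: 8}}

    # Every combination of one option per issue is a candidate package.
    packages = [dict(zip(issues, combo)) for combo in product(*issues.values())]
    scored = [(sum(buyer_u[i][p[i]] for i in p),
               sum(seller_u[i][p[i]] for i in p), p) for p in packages]

    # Keep packages not dominated by any other (Pareto-efficient settlements).
    pareto = [s for s in scored
              if not any(o[0] >= s[0] and o[1] >= s[1] and
                         (o[0] > s[0] or o[1] > s[1]) for o in scored)]
    for b, sc, pkg in sorted(pareto, key=lambda t: t[0], reverse=True):
        print(f"buyer={b:2d}  seller={sc:2d}  {pkg}")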
The TPB [3] postulates three conceptually independent determinants of intention: attitude, subjective norm, and perceived behavioral control. Attitude refers to the extent to which a person evaluates the behavior (i.e., adoption of NSS) favorably or unfavorably. A person will have a favorable attitude towards this behavior if he or she believes that performing it will have largely positive consequences. On the other hand, if a person perceives mostly negative outcomes from performing the behavior, then he or she will view the behavior unfavorably. According to TPB, attitude towards NSS adoption is an additive function of the products of behavioral beliefs and the outcome evaluations of those beliefs. Subjective norm refers to the perceived social pressure to adopt, or not to adopt, NSS. It is determined by normative beliefs, which concern the likelihood that important referent individuals or groups approve or disapprove of performing the behavior. Subjective norm regarding NSS adoption is an additive function of the products of normative beliefs about each referent and the motivation to comply with that referent. Perceived behavioral control refers to the perceived ease or difficulty of performing the behavior and depends on second-hand information, the experiences of acquaintances and friends, and anticipated assistance and impediments. Specifically, an individual's perceived control increases as he or she perceives greater resources and opportunities and anticipates fewer obstacles and impediments. Perceived behavioral control over NSS adoption is an additive function of the products of control beliefs and the perceived power of those beliefs. In general, the more favorable the attitude and the subjective norm, and the greater the perceived behavioral control, the stronger an individual's intention to adopt NSS. Examples of applying TPB to IS topics include [7] (use of spreadsheet software), [8] (use of Computing Resource Center), and [34] (adopting IT in small businesses). The TAM [5] postulates two determinants: perceived usefulness and perceived ease of use. Perceived usefulness is defined as the extent to which a person believes that using NSS will improve his or her job performance within an organizational context. Perceived ease of use refers to the degree to which a person expects the usage of NSS to be free of effort. The model also invokes the attitude concept and predicts that adoption intention (of NSS) is a function of perceived usefulness and attitude, which in turn is determined by perceived usefulness and perceived ease of use. Examples of TAM studies pertaining to IS include [35] (executive information systems), [7] (spreadsheet software), [8] (Computing Resource Center), [36] (email), and [37] (Word, Excel). Other TAM studies have focused on the relationship between usage and perceived usefulness and perceived ease of use [38–40]. Whereas these adoption models can be and have been applied to studying various types of IT, in the current paper they are employed as a theoretical basis for understanding NSS adoption.
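In compact form, the additive belief structures described above can be restated as follows (the notation is ours, not the paper's; the weights w1, w2, w3 stand in for empirically estimated regression weights):

    % TPB belief structures, restated from the text above (notation ours).
    \begin{align*}
      A   &\propto \sum_i b_i e_i   && \text{behavioral belief } b_i \times \text{outcome evaluation } e_i\\
      SN  &\propto \sum_j n_j m_j   && \text{normative belief } n_j \times \text{motivation to comply } m_j\\
      PBC &\propto \sum_k c_k p_k   && \text{control belief } c_k \times \text{perceived power } p_k\\
      BI  &= w_1 A + w_2 SN + w_3 PBC && \text{intention to adopt NSS}
    \end{align*}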
DATA ANALYSIS
An exploratory study was conducted. Questionnaires, adapted from Harrison et al. [34] (for items related to TPB) and Davis [38] (for items related to TAM), were sent to managers and executives. The target was a representative sample of firms located in Singapore, an open economy typical of a developed city. The major industries covered were manufacturing, services, and commerce. The collected data were subjected to validity and reliability tests; if constructs are valid, tests such as factor analysis will yield relatively high correlations between measures of the same construct and low correlations between measures of constructs that are expected to differ. Three principal-component factor analyses with promax rotation were performed (an oblique rotation was chosen because the factors might be correlated). The first factor analysis was performed on measures pertaining to behavioral, normative, and control beliefs; these are the product terms for attitude, subjective norm, and perceived behavioral control, respectively. The result of this factor analysis is shown in Table 9.6. Factor loadings, except for two items, were greater than 0.55. From the factor analysis, six factors were identified: three sets of behavioral beliefs, one set of normative beliefs, and two sets of control beliefs.
TABLE 9.6 Pattern Matrix for Measures of Behavioral, Control, and Normative Beliefs

Construct                      Item                           F1     F2     F3     F4     F5     F6
Behavioral beliefs I           Improve information access     0.96  -0.10  -0.01   0.17   0.00   0.01
(perceived advantages)         Improve communication          0.94   0.00   0.00   0.00   0.00   0.00
                               Increase speed                 0.93   0.00   0.00   0.01  -0.01   0.01
                               Better service                 0.89   0.00   0.01   0.01  -0.01   0.01
                               Reduce costs                   0.77   0.13   0.00  -0.22   0.18  -0.14
                               Easier to use                  0.69  -0.01   0.21  -0.01   0.00   0.00
Control beliefs I              Training                      -0.01   0.84   0.22   0.16   0.01   0.00
(non-computer-related)         Employees' support             0.01   0.76   0.27   0.00  -0.01   0.17
                               Time                          -0.20   0.75   0.26   0.10   0.12  -0.13
                               Financial assets               0.00   0.68  -0.23   0.34  -0.19   0.01
Normative beliefs              Suppliers/vendors              0.00   0.15   0.83  -0.12   0.00  -0.01
                               Customers/clients              0.01   0.15   0.77  -0.19   0.00  -0.12
                               IT specialists                 0.01   0.01   0.72   0.14   0.01   0.16
                               Other employees                0.19   0.17   0.67  -0.01   0.00  -0.01
Control beliefs II             Compatibility with software    0.00   0.22  -0.01   0.93   0.00  -0.16
(computer-related)             Compatibility with hardware    0.12   0.25   0.00   0.91   0.00  -0.01
Behavioral beliefs II          More downtime                 -0.01   0.12   0.01   0.00   0.80   0.00
(perceived disadvantages)      Integration problems           0.00  -0.30  -0.11   0.10   0.71   0.01
                               Employees' resistance         -0.11  -0.42   0.38   0.24   0.55   0.00
                               Less organized                 0.15   0.21  -0.33  -0.22   0.45  -0.34
                               Reduced info security          0.00   0.24  -0.29  -0.12   0.45   0.41
Behavioral beliefs III         Higher training costs          0.00   0.00   0.01  -0.01   0.12   0.87
(direct costs)                 High costs to develop          0.00   0.00  -0.12  -0.17   0.00   0.82
For behavioral beliefs, the first set pertains to the perceived advantages of using NSS, the second to the perceived disadvantages of adopting NSS, and the third to the direct costs associated with using NSS in the organization. Control beliefs are divided into computer-related and non-computer-related factors. The items in these factors were utilized in the regression analyses for subjective norm and perceived behavioral control discussed later. The second factor analysis was conducted for measures of intention, attitude, subjective norm, and perceived behavioral control (see Table 9.7). All the items loaded onto their respective factors with loadings of 0.67 and greater. Reliabilities varied between 0.72 and 0.95. The third factor analysis was conducted for measures of perceived usefulness and perceived ease of use (see Table 9.8). All the items loaded onto their respective factors with loadings of 0.66 and greater. Reliabilities ranged from 0.87 to 0.96. Echoing the two adoption models, two regression equations were tested; they highlighted subjective norm and perceived behavioral control as significantly strong determinants of intention to adopt NSS. Further stepwise regression analyses were therefore performed on subjective norm and perceived behavioral control, whose respective product terms were obtained from the earlier factor analysis reported in Table 9.6. Table 9.9 shows that subjective norm was significantly influenced by endorsement from customers, IT specialists, and (other) employees. For perceived behavioral control, the only significant factor was employees' support for the organizational adoption of NSS.
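As an illustration of the procedure just described (principal-components extraction with an oblique promax rotation), the following Python sketch uses the third-party factor_analyzer package; the file name and item columns are assumptions, not the study's instrument.

    # Sketch only: principal-components factor analysis with promax rotation.
    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    items = pd.read_csv("belief_product_terms.csv")  # hypothetical survey data

    # Six factors, consistent with Table 9.6; promax lets factors correlate.
    fa = FactorAnalyzer(n_factors=6, rotation="promax", method="principal")
    fa.fit(items)

    # Pattern matrix analogous to Table 9.6; loadings above roughly 0.55
    # assign an item to a factor.
    print(pd.DataFrame(fa.loadings_, index=items.columns).round(2))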
TABLE 9.7 Pattern Matrix for Measures of Intention, Attitude, Subjective Norm, and Perceived Behavioral Control

Construct                      Item                                  F1     F2     F3     F4     α
Attitude                       Positiveness of attitude              0.93   0.12   0.00  -0.20   0.87
                               Effectiveness                         0.86   0.19  -0.12  -0.01
                               Goodness                              0.85   0.00   0.00  -0.01
                               Helpfulness                           0.70  -0.28   0.00   0.33
                               Wiseness                              0.67  -0.12   0.17   0.13
Intention                      Commitment                            0.00   0.94  -0.01   0.01   0.95
                               Certainty                             0.00   0.94   0.00   0.01
                               Likelihood                            0.01   0.88   0.10   0.00
Perceived behavioral control   Degree of being under control         0.00  -0.21   0.86   0.00   0.72
                               Simplicity to arrange                 0.00   0.10   0.88   0.00
                               Easiness                              0.00   0.12   0.70   0.14
Subjective norm                Strong approval of important people  -0.01   0.01   0.00   0.92   0.92
                               Approval of important people          0.00   0.01   0.00   0.91
TABLE 9.8 Pattern Matrix for Measures of Intention, Attitude, Perceived Usefulness, and Perceived Ease of Use

Construct                 Item                                    F1     F2     F3     F4     α
Perceived usefulness      Accomplish tasks faster                 0.97  -0.14   0.00   0.11   0.96
                          Improve job performance                 0.97   0.00   0.00   0.00
                          Enhance effectiveness                   0.94   0.00   0.00   0.00
                          Increase productivity                   0.91   0.00   0.01   0.01
                          Easier to do job                        0.88   0.11   0.00  -0.01
                          Useful                                  0.66   0.36   0.00  -0.15
Perceived ease of use     Easy to use                             0.00   0.93   0.01   0.00   0.94
                          Easy to become skillful                -0.01   0.93   0.12  -0.12
                          Easy to operate                         0.00   0.89  -0.01   0.01
                          Flexible to interact with               0.00   0.85  -0.11   0.01
                          Clear and understandable interaction    0.12   0.81   0.00   0.00
                          Easy to do what they want               0.01   0.76   0.00   0.12
Attitude                  Negative/positive                      -0.15   0.00   0.92   0.00   0.87
                          Ineffective/effective                   0.01   0.00   0.79   0.13
                          Bad/good                                0.01   0.00   0.77   0.00
                          Harmful/helpful                         0.01   0.00   0.77  -0.14
                          Foolish/wise                           -0.01   0.01   0.76   0.00
Intention                 Certainty                               0.00   0.00   0.00   0.96   0.95
                          Commitment                             -0.01   0.01   0.00   0.95
                          Likelihood                              0.01  -1.37   0.00   0.91
TABLE 9.9 Stepwise Regression for Subjective Norm and Perceived Behavioral Control (Based on TPB)

Function 1: SN = Customers + Suppliers + IT specialists + Employees
  Predictor retained    Regression coefficient    Correlation coefficient
  Customers             0.10*                     0.54***
  IT specialists        0.13**                    0.55***
  Employees             0.21***                   0.61***

Function 2: PBC = Financial resources + Time + Software compatibility + Hardware compatibility + Employees + Training
  Predictor retained    Regression coefficient    Correlation coefficient
  Employees             0.57***                   0.29**

*p < 0.05; **p < 0.01; ***p < 0.001.
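The stepwise procedure behind Table 9.9 can be sketched as follows; this is a minimal forward-selection version under an assumed p-to-enter of 0.05, with hypothetical column names, not the authors' code.

    # Minimal forward-stepwise OLS sketch in the spirit of Table 9.9.
    import pandas as pd
    import statsmodels.api as sm

    def forward_stepwise(y, X, alpha=0.05):
        """Repeatedly add the candidate predictor with the smallest p-value,
        stopping when no remaining candidate is significant at alpha."""
        selected, remaining = [], list(X.columns)
        while remaining:
            pvals = {c: sm.OLS(y, sm.add_constant(X[selected + [c]]))
                            .fit().pvalues[c]
                     for c in remaining}
            best = min(pvals, key=pvals.get)
            if pvals[best] >= alpha:
                break
            selected.append(best)
            remaining.remove(best)
        return sm.OLS(y, sm.add_constant(X[selected])).fit()

    # e.g., with a respondent DataFrame df (hypothetical column names):
    # model = forward_stepwise(df["subjective_norm"],
    #                          df[["customers", "suppliers",
    #                              "it_specialists", "employees"]])
    # print(model.summary())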
Exploratory analysis was performed by separating the data according to country of origin (Western vs. Asian countries). The Asian countries' data set showed subjective norm to be the only significant factor (p < 0.001) influencing intention; in contrast, the only significant factor in the Western countries' data set was perceived behavioral control (p < 0.05). The data were also analyzed by the three major industries identified: manufacturing (33%), services (13%), and commerce (11%). In all three regression analyses, subjective norm emerged as a significant predictor. In addition, perceived behavioral control was significant for the manufacturing industry.
IMPLICATIONS OF FINDINGS
Exploratory analysis performed on subjective norm showed that it was significantly influenced by customers/clients, IT specialists, and other employees in the organization. An organization's dependence on its trading partners has often affected its decision making on various aspects of inter-organizational collaboration [41,42], such as the adoption of NSS and electronic data interchange (EDI). In particular, organizations are dependent on their customers and clients, as they are the revenue generators in the organization's value chain. For example, suppliers of major U.S. automobile companies were required to adopt EDI systems if they wanted to continue doing business with these customers, who had proactively adopted inter-organizational systems [43]. Similarly, NSS are inter-organizational systems1 designed to support business negotiations (such as contract or even merger negotiations) between organizations. Therefore, customers are a considerable driving force behind organizations' intention to adopt NSS. On the other hand, an organization's suppliers and vendors are relatively dependent on the organization, and consequently they will have less influence on the organization's NSS adoption intention and decision. Prior IS research has stressed the importance of user participation [38]. For instance, Baroudi et al. [44] found user participation and user information satisfaction to be positively correlated with system usage. Barki and Hartwick [45,46] found that user participation was a determinant of user attitude. The non-IT employees (end users) who participate in the development process of an NSS are likely to develop beliefs that the new system is good, important, and personally relevant. Through participation, employees may be able to influence the design of the system. In turn, they may develop feelings of satisfaction and ownership, as well as a better understanding of the new system and how it can help them in their jobs. Thus, the significance of employees (both IT specialists and other employees) in influencing subjective norm is consistent with, and lends further support to, the importance of employee participation in system adoption and acceptance.
While perceived behavioral control was presumed to be affected by financial cost, implementation time, software and hardware compatibility, employees' support, and training, exploratory analysis showed employees' support for the organizational adoption of NSS to be the only significant factor. Although NSS were relatively novel, past experience with other new systems might provide managers with an estimate of employees' support (or resistance) level. Hence, in the case of NSS, employees' support may be the most crucial factor in determining the perceived behavioral control of adopting the new technology. This further strengthens the importance of employees' influence (i.e., subjective norm) in system adoption. Moreover, employees' support is likely to reduce resistance to change, which has been found to be a major inhibitor of EDI adoption [47]. The findings of this study have important implications for practitioners, including managers and executives, as well as marketers of NSS technology. For managers, the findings indicate that they should be aware of the social factors that may hinder the successful adoption of a new IT such as NSS. They should encourage active participation from referent groups who play critical roles in influencing organizational decisions; for example, the views and positions of IT specialists should be consulted in the case of system acquisition, and end users' feedback should be accommodated in the case of system development or user acceptance testing. In fact, the analysis showed that most respondents possessed a strong desire to comply with their referent groups. Thus, if organizations perceive that their referent groups are in favor of NSS adoption, they will very likely intend to adopt NSS. Special attention should be paid to customers/clients, IT specialists, and other employees. Information about the new IT could be disseminated to them to highlight the importance of adopting such an IT. Similarly, marketers should actively seek out these referent groups and create an awareness of NSS among them. In particular, a product or technology champion who educates users on the new technology and facilitates its adoption would help to garner user support [43]. The existence of a champion has been found to be an important factor in IT adoption [48], particularly in inter-organizational systems and EDI adoption [49,50]. Therefore, a champion to promote NSS would help to generate favorable social pressure and encourage its adoption. Furthermore, practitioners should take note of the resources required for any successful implementation of NSS. They should first identify the resources available in their organizations and then evaluate the feasibility of an adoption. Moreover, as perceived behavioral control, beyond actual control, carries substantial weight in influencing adoption intention, information on the resources required for adopting NSS should be made readily available to potential adopters to help them assess and improve their perceived behavioral control. Particular attention should be focused on increasing employees' support for NSS adoption. To heighten employees' support, management could involve employees in the installation of an NSS in the organization. A customized introduction process aimed at making users feel comfortable with the new system will also help to foster greater employee support.
Baronas and Louis [51] found that special training during the system implementation process helps to restore or enhance an employee's sense of control over his or her work, which might have been threatened during such a process. This will in turn cause employees to be more satisfied with the system and more positive in their interactions with system implementers. Consequently, user acceptance of NSS will be greater. In fact, correlational analysis between employees' support (control belief_support × perceived power_support) and training (control belief_training × perceived power_training) revealed a strong positive correlation (r = 0.69, p < 0.001), suggesting that employees' training may influence perceived behavioral control indirectly through employees' support.
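The product-term correlation just reported has this general shape in code; the data below are synthetic and the column names hypothetical (with the study's actual survey data, r = 0.69, p < 0.001).

    # Sketch of correlating the support and training product terms
    # (control belief x perceived power). All data here are synthetic.
    import numpy as np
    import pandas as pd
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    df = pd.DataFrame({                      # hypothetical 7-point-scale items
        "cb_support": rng.integers(1, 8, 100),
        "pp_support": rng.integers(1, 8, 100),
        "cb_training": rng.integers(1, 8, 100),
        "pp_training": rng.integers(1, 8, 100),
    })
    support = df["cb_support"] * df["pp_support"]
    training = df["cb_training"] * df["pp_training"]
    r, p = pearsonr(support, training)
    print(f"r = {r:.2f}, p = {p:.3f}")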
CONCEPTUAL FRAMEWORK
Based on the empirical analysis, this section puts forth a conceptual framework regarding the adoption of NSS (see Figure 9.7). In this framework, subjective norm and perceived behavioral control are
posited to influence the intention to adopt NSS. This linkage, however, is moderated by organizational culture and industry characteristics. In other words, the extent to which subjective norm and perceived behavioral control affect adoption intention depends on the specific conditions set by organizational culture and industry characteristics. Further, subjective norm is conceived to be affected by three exogenous variables: customers' endorsement, IT specialists' support, and employees' support; the last variable also impacts perceived behavioral control. Whereas these exogenous variables have been dwelt upon in the previous section, the following deliberates on the two proposed moderators.

[FIGURE 9.7 Conceptual framework on the intention to adopt NSS. Customers' endorsement, IT specialists' support, and employees' support feed subjective norm; employees' support also feeds perceived behavioral control; subjective norm and perceived behavioral control drive the intention to adopt NSS, with organizational culture and industry characteristics moderating these links.]

Organizational Culture
Based on his cross-country study of culture, Hofstede [52] characterized countries along four dimensions: individualism/collectivism, uncertainty avoidance, power distance, and masculinity/femininity.2 These four value dimensions, which distinguish national value systems, also affect individuals and organizations. According to Hofstede [53], prevalent value systems, which form a part of an organization's culture, encompass a national component reflecting the nationality of the organization's founders and dominant elite. He proposed that founders of organizations, being unique individuals belonging to a national culture, are likely to incorporate their national values into their organizational culture, even if the organizations spread internationally. For example, there is something American about IBM the world over, something Swiss about the Red Cross. Therefore, it is conceivable that individuals who join the organization will subsequently go through processes of selection, self-selection, and socialization to assimilate themselves to the organizational culture [53]. It should also be noted that survey respondents were instructed to answer from the organization's perspective; correspondingly, the organizational culture may be reflected in the responses. Among the four cultural dimensions, the individualism/collectivism component may provide particular insight into the behavior of organizations with respect to subjective norm. Individualism/collectivism relates to the self-concept of "I" or "we." In an individualist society, individuals see themselves as "I" and are motivated by self-interest and self-actualization. Tasks take priority over relationships. On the other hand, in a collectivist society, individuals see themselves as part of "we" and are motivated by group interests and relationships. It is evident from Hofstede's work that Asian countries were largely collectivist societies, while Western countries were relatively individualistic. A Pearson correlational analysis showed a negative correlation (r = -0.18, p < 0.10) between subjective norm and the individualism index scores reported by Hofstede, supporting the role of organizational culture in the proposed framework. The influence of organizational culture can be further deliberated by examining an organization's value tradeoffs between relationships and tasks. Asian organizations are likely to be collectivist, and relationships with employees and trading partners are placed above other considerations. In a collectivist organization, people think in terms of "we" (work group, organization) and "they" (the others). Accordingly, social influences will have a greater impact on organizational decisions in collectivist organizations. The employer–employee relationship in these organizations comprises a moral component, with a tendency for employers to protect employees and their welfare. Thus, organizational members' opinions matter, and if they disapprove of NSS adoption, the organization's intention to adopt will be visibly weakened. In the business arena, harmonious and lasting relationships with business partners are highly regarded. Hence, if business partners disapprove of the organization's NSS adoption, the organization is unlikely to adopt NSS, so as not to jeopardize painstakingly built relationships. In contrast, Western organizations are characterized as individualistic. Individualistic organizations may be more concerned with task-oriented issues such as productivity and efficiency. In these organizations, employers may see employees as "a factor of production" and part of a "labor market." Moreover, opinions of employees regarding a system considered useful by top management may not carry much weight, relatively speaking. In business relationships, task considerations take precedence over personal relationships. Thus, employees' and trading partners' approvals or disapprovals may be of secondary importance in these organizations.

Industry Characteristics
An organization's intention to adopt a system may also be affected by the nature of its business. In the exploratory analysis of the manufacturing sector, subjective norm and perceived behavioral control emerged as significant predictors of intention. On the other hand, only subjective norm was important for the commerce and services sectors. Manufacturing companies represent machine bureaucracies, which are characterized by standardization, functional structural design, and large size [54]. These structures are generally associated with mass production technology in which repetition and standardization dictate the products, processes, and distribution. In other words, the manufacturing environment symbolizes stability and few changes in working procedures and policies. As a result, if employees perceive that the introduction of NSS will bring about revolutionary changes to existing work practices (as in using NSS to negotiate), the new system will be met with strong resistance. Moreover, the implementation of large-scale just-in-time (JIT) systems by these organizations to streamline their operations requires that the introduction of NSS not disrupt the operations of existing tightly coupled systems.
These factors make perceived behavioral control an important component of an organization's intention to adopt NSS. Other factors, such as computer anxiety, may also determine the perceived ease of adopting NSS. Thus, successful implementation of NSS in these organizations requires sound strategies to ensure compatibility and smooth integration with other systems, as well as minimal changes to current work practices. Special training programs will also help to alleviate employees' resistance to the new information technology. On the other hand, the services industry is dynamic and uncertain because it involves customer participation [55]. The information processing involved and the task activities performed vary in response to customers' requirements and wants [56]. The environment faced by the commerce industry is also competitive and constantly changing. Thus, to achieve or maintain a competitive advantage, organizations in these environments are constantly scanning and
implementing new technologies [49]. Firms that fail to do so may quickly lose their competitiveness. As Grover and Goslar [49] found in their study, environmental uncertainty was positively correlated with IT contribution. Correspondingly, firms in uncertain environments view IT as a potential solution to their business challenges. These organizations tend to ignore the perceived ease or difficulty of implementing NSS when considering its adoption, so long as important business partners agree with the adoption. In other words, perceived behavioral control may not be an important factor for these organizations. In sum, industry characteristics play a crucial role in influencing intention. For organizations where tasks and processes are repetitive and unchanging, the introduction of a new system may be viewed by employees as a threat to current work practices. On the other hand, organizations in dynamic industries often see IT as a business solution to help them achieve a competitive advantage. Therefore, organizations in these industries will be more willing to adopt innovative technologies. Nonetheless, for organizations in both stable and dynamic industries, generating positive perceptions in referent groups will help to facilitate the adoption of new technologies. In addition, perceived behavioral control is also important to organizations in non-dynamic industries. To alleviate potential resistance from users, special training programs should be formulated to assure users and build their confidence. For example, sessions can be organized to address and resolve the fears and stress associated with the system implementation. Hands-on sessions can also be held to dissipate feelings of threat and uncertainty among potential users. Through these efforts, employees' support for system adoption will increase. In turn, increased employee support will help to improve organizations' perceived behavioral control. Correspondingly, organizations' intentions to adopt the new system will be strengthened.

Other Implications
This study examines the adoption intention of a relatively new technology in organizations; this is in contrast to many past studies of adoption, which focused on more familiar technologies. It is conceivable that the level of exposure (the degree to which users have knowledge of and/or experience with the system) may influence the magnitude of adoption intention. Low exposure to the system will lead to unfamiliarity with the system, which in turn will lead to weaker relationships among the variables and low predictive power. On the other hand, high exposure will result in system familiarity and subsequently stronger variable relationships and higher predictive power. In fact, the degree of exposure is closely related to prior experience, which has been found to be an important determinant of behavior [57,58]. As Eagly and Chaiken [59] proposed, knowledge gained from past behavior helps to shape intention. This is because experience makes it easier to remember acquired knowledge [60,61]. Past experience also brings out non-salient events, ensuring their consideration in the formation of intentions [57].
CONCLUDING REMARKS
Using the TPB and the TAM as a starting point, a study was performed on the adoption intention for NSS; it identified subjective norm and perceived behavioral control as the key determinants. Further analysis suggested that organizational culture and industry characteristics play a moderating role. A new conceptual framework encompassing these elements was put forth, which should help provide future research directions. As companies become increasingly globalized, adoption research must treat organizational culture as a major aspect of its design. As well, the inter-relationship between organizational culture and industry characteristics should provide interesting research topics of practical value. Lastly, case studies can be conducted to take into account the full complexity of the framework and provide a holistic understanding of the issues at hand.
NOTES
1. Inter-organizational systems refer to telecommunication-based computer systems that are used by two or more organizations to support data, information, and application sharing among users in different organizations [62,63]. A recent form, for example, is the continuous replenishment program (CRP) [64].
2. Notwithstanding the use of Hofstede's conceptualizations here, their limitations and the opposition to them are noted [65].
REFERENCES
1. Swanson, E. B., Measuring user attitudes in MIS research: a review, OMEGA, 10, 157–165, 1982.
2. Christie, B., Face to File Communication: A Psychological Approach to Information Systems, Wiley, New York, 1981.
3. Ajzen, I., The theory of planned behavior, Organizational Behavior and Human Decision Processes, 50, 179–211, 1991.
4. Ajzen, I., Perceived behavioral control, self-efficacy, locus of control, and the theory of planned behavior, Journal of Applied Social Psychology, 32, 665–683, 2002.
5. Davis, F. D., A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Test and Results, Doctoral Dissertation, Sloan School of Management, Massachusetts Institute of Technology, 1986.
6. Legris, P., Ingham, J., and Collerette, P., Why do people use information technology? A critical review of the technology acceptance model, Information and Management, 40(3), 191–204, 2003.
7. Mathieson, K., Predicting user intentions: comparing the technology acceptance model with the theory of planned behavior, Information Systems Research, 2(3), 173–191, September, 1991.
8. Taylor, S. and Todd, P., Understanding information technology usage: a test of competing models, Information Systems Research, 6(3), 144–175, 1995.
9. Bui, T., Jelassi, M. T., and Shakun, M. F., Negotiation support systems, Proceedings of the 25th Annual Hawaii International Conference on System Sciences, 4, 152, 1992.
10. Bazerman, M. H. and Neale, M. A., Heuristics in negotiation: limitations to dispute resolution effectiveness, In Negotiation in Organizations, Bazerman, M. H. and Lewicki, R. J., Eds., Sage, Beverly Hills, CA, pp. 51–67, 1983.
11. Bazerman, M. H., Magliozzi, T., and Neale, M. A., Integrative bargaining in a competitive market, Organizational Behavior and Human Decision Processes, 35, 294–313, 1985.
12. Foroughi, A. and Jelassi, M. T., NSS solutions to major negotiation stumbling blocks, Proceedings of the 23rd Annual Hawaii International Conference on System Sciences, Vol. 4, Emerging Technologies and Applications Track, Kailua-Kona, Hawaii, pp. 2–11, 1990.
13. Foroughi, A., Perkins, W. C., and Jelassi, M. T., An empirical study of an interactive, session-oriented computerized negotiation support system, Group Decision and Negotiation, 4, 485–512, 1995.
14. Lim, L. H. and Benbasat, I., A theoretical perspective of negotiation support systems, Journal of Management Information Systems, 9(3), 27–44, 1993.
15. Bui, T., Building DSS for negotiators: a three-step design process, Proceedings of the 25th Annual Hawaii International Conference on System Sciences, 4, 1992.
16. Anson, R. G. and Jelassi, M. T., A development framework for computer-supported conflict resolution, European Journal of Operational Research, 46, 181–199, 1990.
17. Jelassi, M. T. and Foroughi, A., Negotiation support systems: an overview of design issues and existing software, Decision Support Systems, 5, 167–181, 1989.
18. UNISYS Corporation, Computer assisted negotiation at the American Academy of Arts and Sciences, Cambridge, MA, 1987.
19. Carmel, E. and Herniter, B. C., Proceedings of the International Conference on Information Systems, 1989.
20. Jarke, M., Jelassi, M. T., and Shakun, M. F., MEDIATOR: towards a negotiation support system, European Journal of Operational Research, 31(3), 314–334, 1987.
21. Kersten, G. E., NEGO—Group decision support system, Information and Management, 8, 246–327, 1985.
22. Lim, J. L. H., Multi-stage negotiation support: a conceptual framework, Information and Software Technology, 41(5), 249–255, 1999.
23. Kersten, G. E. and Szapiro, T., Generalized approach to modeling negotiations, European Journal of Operational Research, 26(1), 142–149, 1986.
24. Sengupta, K., Cognitive conflict and negotiation support, Proceedings of the 26th Annual Hawaii International Conference on System Sciences, 4, 1993.
25. Shakun, M. F., Proceedings of the 26th Annual Hawaii International Conference on System Sciences, 1993.
26. Sycara, K., Problem restructuring in negotiation, Management Science, 37(10), 1248–1268, 1991.
27. Helou, C., Dssouli, R., and Crainic, T., Performance testing of a negotiation platform, Information and Software Technology, 44(5), 313–330, 2002.
28. Kersten, G. E. and Szpakowicz, S., Negotiation in distributed artificial intelligence: drawing from human experience, Proceedings of the 27th Annual Hawaii International Conference on System Sciences, 258–270, 1994.
29. Matwin, S., Szapiro, T., and Haigh, K., Genetic algorithms approach to a negotiation support system, IEEE Transactions on Systems, Man and Cybernetics, 21, 1, 1991.
30. Bui, T., Kersten, G., and Ma, P. C., Supporting negotiation with scenario management, Proceedings of the 29th Annual Hawaii International Conference on System Sciences, 4, 209–218, 1996.
31. Yen, J., Lee, H. G., and Bui, T., Intelligent clearinghouse: electronic marketplace with computer-mediated negotiation supports, Proceedings of the 29th Annual Hawaii International Conference on System Sciences, 4, 219–227, 1996.
32. Delaney, M. M., Foroughi, A., and Perkins, W. C., An empirical study of the efficacy of a computerised negotiation support system (NSS), Decision Support Systems, 20, 185–197, 1997.
33. Jones, B. H., Analytical Negotiation: An Empirical Examination of the Effects of Computer Support for Different Levels of Conflict in Two-Party Bargaining, Doctoral Dissertation, School of Business, Indiana University, Bloomington, IN, 1988.
34. Harrison, D. A., Mykytyn, P. P., Jr., and Riemenschneider, C. K., Executive decisions about adoption of information technology in small business: theory and empirical tests, Information Systems Research, 8(2), June, 1997.
35. Pijpers, G. G. M., Bemelmans, T. M. A., Heemstra, F. J., and van Montfort, K. A. G. M., Senior executives' use of information technology, Information and Software Technology, 43(15), 959–971, 2001.
36. Szajna, B., Empirical evaluation of the revised technology acceptance model, Management Science, 42(1), 85–92, January, 1996.
37. Chau, P. Y. K., An empirical assessment of a modified technology acceptance model, Journal of Management Information Systems, 13(2), 185–204, Fall, 1996.
38. Davis, F. D., Perceived usefulness, perceived ease of use and user acceptance of information technology, MIS Quarterly, 13, 319–340, September, 1989.
39. Adams, D. A., Nelson, R. R., and Todd, P. A., Perceived usefulness, ease of use and usage of information technology: a replication, MIS Quarterly, 16(2), 227–247, June, 1992.
40. Hendrickson, A. R. and Collins, M. R., An assessment of structure and causation of IS usage, The DATA BASE for Advances in Information Systems, 27(2), 61–67, Spring, 1996.
41. Daft, R. L. and Weick, K. E., Toward a model of organizations as interpretation systems, Academy of Management Review, 9(2), 284–295, 1984.
42. Provan, K. G., Beyer, J. M., and Kruytbosch, C., Environmental linkages and power in resource-dependence relations between organizations, Administrative Science Quarterly, 25, 200–225, June, 1980.
43. Premkumar, G. and Ramamurthy, K., The role of interorganizational and organizational factors on the decision model for adoption of interorganizational systems, Decision Sciences, 26(3), 303–336, May/June, 1995.
44. Baroudi, J. J., Olson, M. H., and Ives, B., An empirical study of the impact of user involvement on system usage and information satisfaction, Communications of the ACM, 29(3), 232–238, 1986.
45. Barki, H. and Hartwick, J., Rethinking the concept of user involvement, MIS Quarterly, 13(1), 53–64, March, 1989.
46. Hartwick, J. and Barki, H., Explaining the role of user participation in information system use, Management Science, 40(4), 440–465, April, 1994.
47. Ferguson, D. M., The state of U.S. EDI in 1988, EDI Forum, 21–29, 1988.
48. Beath, C. M., Supporting the information technology champion, MIS Quarterly, 15(3), 355–372, 1991.
49. Grover, V. and Goslar, M. D., The initiation, adoption, and implementation of telecommunications technologies in U.S. organizations, Journal of Management Information Systems, 10(1), 141–163, Summer, 1993.
50. Reich, B. H. and Benbasat, I., An empirical investigation of factors influencing the success of customer-oriented strategic systems, Information Systems Research, 1(3), 325–347, 1990.
51. Baronas, A. M. K. and Louis, M. R., Restoring a sense of control during implementation: how user involvement leads to system acceptance, MIS Quarterly, 12(1), 111–126, March, 1988.
52. Hofstede, G., Culture's Consequences: International Differences in Work-Related Values, Sage Publications, Beverly Hills, CA, 1980.
53. Hofstede, G., The interaction between national and organizational value systems, Journal of Management Studies, 22(4), 347–357, July, 1985.
54. Leifer, R., Matching computer-based information systems with organizational structures, MIS Quarterly, 12(1), 63–73, March, 1988.
55. Griffin, R. K., Baldwin, D., and Sumichrast, R. T., Self-management information system for the service industry: a conceptual model, Journal of Management Information Systems, 10(4), 111–133, Spring, 1994.
56. Mills, P. and Turk, T., A preliminary investigation into the influence of customer-firm interface on information processing and task activities in service organizations, Journal of Management, 12(1), 91–104, Spring, 1986.
57. Ajzen, I. and Fishbein, M., Understanding Attitudes and Predicting Social Behavior, Prentice-Hall, Englewood Cliffs, NJ, 1980.
58. Bagozzi, R. P., A field investigation of causal relations among cognitions, affect, intentions and behavior, Journal of Marketing Research, 19, 562–584, 1982.
59. Eagly, A. H. and Chaiken, S., The Psychology of Attitudes, Harcourt Brace Jovanovich, Orlando, FL, 1993.
60. Fazio, R. H. and Zanna, M. P., Attitudinal qualities relating to the strength of the attitude-behavior relationship, Journal of Experimental Social Psychology, 14(4), 398–408, July, 1978.
61. Regan, D. T. and Fazio, R. H., On the consistency between attitudes and behavior: look to the method of attitude formation, Journal of Experimental Social Psychology, 13(1), 28–45, 1977.
62. Barrett, S. and Konsynski, B., Inter-organization information sharing systems, MIS Quarterly, 6(4), 93–105, December, 1982.
63. Cash, J. I., Jr., Interorganizational systems: an information society opportunity or threat?, The Information Society, 3(3), 134–142, Winter, 1985.
64. Raghunathan, S. and Yeh, A. B., Beyond EDI: impact of continuous replenishment program (CRP) between a manufacturer and its retailers, Information Systems Research, 12(4), 406–419, 2001.
65. Baskerville, R. F., Hofstede never studied culture, Accounting, Organizations and Society, 28(1), 1–14, 2003.
10
Technology and the Professions
CONTENTS
Chapter Highlights ........................................................ 773
Teaching Science and Technology Principles to Non-Technologists—Lessons Learned ... 775
  Abstract ................................................................ 775
  Pedagogical Dilemmas .................................................... 780
  Responding to the Need .................................................. 782
    Context ............................................................... 782
    Connection to the Real World Is Critical to Capturing the Interest of the Student Audience ... 783
    Provide Historical Perspectives ....................................... 783
    Insist on Clear Expression ............................................ 783
    De-emphasize the Memorization of Technical Vocabulary ................. 783
    Don't Evangelize. Don't Sell .......................................... 784
  Future Direction ........................................................ 784
  Notes ................................................................... 784
  References .............................................................. 785
Internet Addiction, Usage, Gratification, and Pleasure Experience: The Taiwan College Students' Case ... 785
  Abstract ................................................................ 785
  Introduction ............................................................ 785
  Research Assumptions and Questions ...................................... 786
  Literature Review ....................................................... 787
  Methods ................................................................. 789
    Instruments ........................................................... 789
    Subjects and Distribution Process ..................................... 790
  Results ................................................................. 791
    Factor Analysis of C-IRABI-II and PIEU-II ............................. 791
    Questionnaire Scores, Usage Hours, and Impact Ratings ................. 792
    Internet Addicts Versus Non-Addicts ................................... 792
    Addicts' Versus Non-Addicts' Internet Use Hours and Questionnaire Scores ... 793
    Regression Analysis of Internet Addiction ............................. 795
  Discussions and Conclusion .............................................. 795
  Acknowledgments ......................................................... 797
  References .............................................................. 797
Ensuring the IT Workforce of the Future: A Private Sector View ............ 798
  IBM's Experience ........................................................ 798
  Getting Compensation Right .............................................. 798
  Keys to Success ......................................................... 799
  Commitment of Senior Management ......................................... 799
  Phase Approach .......................................................... 799
    Classification ........................................................ 799
    Common Pay Increase ................................................... 799
    Pay Differentiation ................................................... 799
    Work/Life Options ..................................................... 800
    Telecommuting ......................................................... 800
    Professional Development .............................................. 800
    Clear Communication ................................................... 800
  Conclusion .............................................................. 801
Organizational Passages: Context, Complexities, and Confluence ............ 801
  Introduction ............................................................ 801
  Start-Up Phase .......................................................... 802
    Brief Description ..................................................... 802
    Operating Functions ................................................... 802
    Transition Functions .................................................. 804
  Small Business Phase .................................................... 804
    Brief Description ..................................................... 804
    Operating Functions ................................................... 804
    Transition Functions .................................................. 805
  Middle Market Phase ..................................................... 806
    Brief Description ..................................................... 806
    Operating Functions ................................................... 806
    Transition Functions .................................................. 807
  Leadership Business ..................................................... 807
    Brief Description ..................................................... 807
    Operating Functions ................................................... 808
    Transition Functions .................................................. 809
  Maladaptive Business .................................................... 809
    Brief Description ..................................................... 809
    Operating Functions ................................................... 809
    Transition Functions .................................................. 810
  Conclusion .............................................................. 811
  References .............................................................. 812
Improving Technological Literacy .......................................... 813
  Abstract ................................................................ 813
  Introduction ............................................................ 813
  What Is Technology? ..................................................... 814
  Hallmarks of Technological Literacy ..................................... 815
  Laying the Foundation ................................................... 817
Knowledge Management ...................................................... 819
  Anecdotes ............................................................... 819
    Origami ............................................................... 819
    Stuff We Know ......................................................... 820
    My Work ............................................................... 820
    The Internet .......................................................... 820
  Information Revolution .................................................. 820
  Kinds of Knowledge—In General ........................................... 822
  Knowledge of What? ...................................................... 822
  From Data to Wisdom and All the Steps in Between ........................ 823
  Too Much Knowledge? ..................................................... 824
  Implications of Knowledge Management to Companies ....................... 825
  Accounting Doesn't Cut It ............................................... 825
Kinds of Capital .......... 826
  Human Capital .......... 826
  Structural Capital .......... 827
  Customer Capital .......... 827
Conclusion .......... 828
Notes .......... 828
Legal Pluralism and the Adjudication of Internet Disputes .......... 828
  Abstract .......... 828
  Introduction .......... 829
  The Three-Stage Argument .......... 829
  Legal Pluralism .......... 832
  Values on the Internet .......... 833
  Case Study .......... 834
  Conclusion .......... 840
  Notes and References .......... 841
  Further Reading .......... 844
In life change is inevitable; in the professions change is vital.
—Ronald J. Stupak
CHAPTER HIGHLIGHTS

As a reader of this Handbook, you have, directly or indirectly, experienced the pervasive (insidious?) impact of technology on your vocational life. As a professional, as you consider this chapter, please reflect on the following:

† Those who try to avoid or minimize their contact with technology will be increasingly marginalized in their professional lives. Technology should not be avoided—it should be selectively, thoughtfully engaged.
† Technology expands our professional learning requirements. While we are constantly challenged to stay current within our vocational sphere, we are simultaneously challenged to develop targeted technological skills (keyboarding, software mastery, Internet acumen, etc.).
† Technology requires a retooling and rebalancing of our professional armamentarium. Interpersonal skills remain, but they must be supplemented with technopersonal skills. Intrapersonal assessment persists, but it must include self-dialogue regarding e-veracity. “Power” remains the last dirty word in the English language (Moss-Kanter), but it is now underlined with technology.
† Technology impacts decision making. Technology-averse, barnacled professionals steeped in incrementalism are at great risk of losing market share and/or organizational influence to assertive, technology-savvy professionals who can quickly map alternative worlds, scenario plan, and meaningfully choose among options.

The professions:

† The development and implementation of information and communication technologies (ICT) in the education sector have been posing a challenge to the traditional learning environment. Four principal approaches have been identified as supporting the burgeoning introduction of ICT in education: (1) social, as education reflects societal norms, values, and trends; (2) vocational, in that technology is required as a basic job skill in most markets today; (3) pedagogical, in that technology assists the learning process; and (4) catalytic, by improving the cost-effectiveness of the delivery of educational services.
† A Computer World survey of information technology professionals found that 78% indicated that academia is not preparing graduates for the information technology roles that they will be asked to fill upon graduation. The shortcomings are not primarily in the technical areas of information technology, but rather in the areas of business skills, troubleshooting skills, interpersonal communication, project management, and systems integration.
† Businesses have developed elaborate strategies to compete in high-technology industries. Research partnerships have gained a prominent place in those strategies.
† The need for flexibility and quick response to rapidly changing technological and market circumstances is raising organizational costs through burgeoning hierarchies—particularly in information technology areas. To correct this, companies are creatively developing alternative organizational structures that more efficiently and effectively mirror and connect with the market. The impact on professionals, especially those in middle management roles, is considerable.
† Respected management seer Peter Drucker projects a progressive withering away of traditional institutions, both great and small, that are unable to control costs while struggling to adjust programs to meet the needs of a high-tech, diverse, and globally oriented society. As traditional institutions wither away, so will traditional professionals.
† At the heart of the technological society that characterizes the United States lies an unacknowledged paradox. Although the nation increasingly depends on technology and is adopting new technologies at a breathtaking pace, its citizens are not equipped to make well-considered decisions or think critically about technology.
† No single entity—academic, corporate, governmental, or non-profit—administers the Internet. The Internet is a distributed system that straddles geographical and jurisdictional boundaries. The dilemma for private and public establishments is that the primary technology which has revolutionized the business world is largely unregulated because of boundary spanning, multiple simultaneous points of activity, and complex values constellations across business networks. In that the Internet is a pluralistic society, legislative, judicial, and medical professionals must come to terms with its diverse political, cultural, and religious viewpoints.
† Event-driven news, defined as news coverage of activities that are, at least at their initial occurrence, spontaneous and not managed by officials within institutional settings, has a significant societal impact. Increasing in both number and significance, unpredicted, non-scripted, spontaneous news requires that professionals be increasingly adroit at anticipatory management, contingency planning, and “garbage can” decision making.
† Technologies promise to make our organizations more productive, efficient, and effective. But experience does not always fulfill this promise. Public safety agencies are the target of a barrage of new information technologies offering better performance, many of which are attractive because they can be financed through federal grants. The dual seduction of “free money” and “technological fix” must give way to thorough critical analysis of the impact of technology on the agency, society, and taxpayer.
† Use of the Internet on college campuses: the Internet is an important part of college life, but Internet addiction is an increasing problem on college campuses and in society generally. Internet addicts spend an average of 27 h per week on the Internet—three times more than the average user. Additional research on this topic is needed to understand and subsequently treat what may become a technology-spawned disease.

Intellectual property:

† Buried deeply in company archives are results from previously failed research and development. Patents, copyrights, failed drugs, etc., represent a tremendous source of future growth and revenue. Archives need to be mined—there is gold in this stored intellectual capital.
† The measurement and evaluation of the impacts and value of investments in intellectual capital is a critical obstacle to turning those investments into sources of competitive advantage.
† There is a link between organizational investments in training and education and the assessment of organizations made by markets and stakeholders. The value contributes to the longevity and long-term success of the organization as it continuously improves the quality of its products and services.
† There is irony in the fact that organizations, necessary to produce, coordinate, and maintain complex techno-scientific systems, also have irreducible and emergent effects on the way complex information is transmitted, communicated, processed, and stored.
† In order to aid the process of foreign students studying in the United States and then returning home (reverse brain drain) rather than staying in the United States (brain drain), the United States should improve the flow of science and technology information to other world regions. Doing so enhances the prospects of trade with developing countries.
† IBM’s ability to stay in the forefront of the computing industry is primarily a function of two dynamics—its technology and its people. These forces are complementary. Attracting and keeping the best technology workers, and the intellectual property they represent, is a priority for technology-laden businesses.
Perhaps our leading line in this chapter best summarizes the impact that technology is having on us professionally: In life change is inevitable; in the professions change is vital.
TEACHING SCIENCE AND TECHNOLOGY PRINCIPLES TO NON-TECHNOLOGISTS—LESSONS LEARNED*

* Peter M. Leitner, George Mason University and U.S. Department of Defense, P.O. Box 5725, Arlington, VA 22205. E-mail: [email protected]

ABSTRACT

Much has been written over the past several decades on the “inevitability” that the technological revolution will transform international, interpersonal, and business relations. But are the effects of technological change as far-reaching as the literature suggests, and where it does reach, does it penetrate very deeply into the general culture, its organizations, or into the psyche of its citizens? The linkage of science and technology (S&T) education to industrial trends, and its prominence in public policy debates, make it all the more important to ensure that the educated public has as complete a grounding in S&T issues as possible. Perhaps it is a unique twenty-first century paradox that it is more important for “progress” and public policy formulation to focus the attention of our educational system upon the inter-relationships, consequences, and implications of current and previous technological developments rather than mindlessly joining the “bandwagon of progress.” Students must be exposed to the theories, language, culture, engineering difficulties, societal implications, and public policy problems posed by the inevitable advance of technology. The primary target of such efforts should be the non-technologists, who tend to enter government service, run for public office, enter the teaching profession, are more politically active, and among whom the greatest multiplier effect can be achieved.

It often appears that the oncoming rush of technology sweeps over us like waves against the breakers. The seemingly irresistible force of change holds the potential for advancement of the human condition. But is it, in fact, an irresistible force, and is it a force for positive change? Must it be accepted wholesale, or is it subject to a natural selection process where humans consciously or unconsciously filter, screen, scale, regulate, and control the aperture through which technology must pass before finding acceptance in our daily lives? And once it passes through these gates and finds acceptance, is it actually understood by the general population? Does the general population possess a vocabulary that indicates more than a passing familiarity with these new tools, or are these “tools” accepted as mere “tools” with little additional significance attached to them? And, finally, is it important for the general population to address the implications of technical advancement in the context of its underlying psycho-social consequences, or should the tools be viewed as new gadgets and processes that are neat, timesavers, fun, and an abstract harbinger of “better times” to come?

Much has been written over the past several decades on the “inevitability” that the technological revolution will transform international, interpersonal, and business relations. Contemporary claims to that effect are just the latest to argue that technological change has profound effects on human and societal relations. In fact, the overwhelming theme in the social and scientific literature supports these presumptions. But are the effects of technological change as far-reaching as the literature suggests, and where it does reach, does it penetrate very deeply into the general culture, its organizations, or into the psyche of its citizens? And where does the educational system enter into the process of understanding the role of technology upon our physical world and day-to-day existence? This article approaches such issues from the reflective practitioner model and will offer observations based upon the author’s twenty-five years of hands-on experience in government-based science and technology policy issues and over ten years of university-level teaching experience.

Intuitively, one could reasonably conclude that those living in a period of rapid change, in large part, would have an intimate understanding of those forces, their basis, and consequences. After all, historians and archeologists often portray ancient cultures in such an idealized manner. A presumption often underlying their studies is that all elements of the group, tribe, or civilization were aware of, or at least affected by, the technology of the time. While it is a virtual certainty that living conditions and patterns of activity were influenced by the contemporary state of technological development, it is by no means certain that all members of those societies understood the technology available to them or its underlying scientific basis.
In all likelihood they simply used technology as a tool without giving it much thought. A recently released study by the U.S. National Science Foundation (NSF) lends support to this proposition.1 The study reveals that while the general U.S. population is growing more aware of basic scientific concepts, there still exists a large gap in understanding many of the basic forces affecting their lives and futures. In fact, Figure 10.1 represents the NSF’s findings across nine major knowledge areas.

FIGURE 10.1 Public understanding of scientific terms and concepts. (Percent answering correctly in 1995, 1997, and 1999 on nine items: understands the term “molecule”; understands the term “DNA”; knows lasers do not work by focusing sound waves; electrons are smaller than atoms; the Earth goes around the Sun once a year; earliest humans did not live at the time of the dinosaurs; knows that all radioactivity is not man-made; light travels faster than sound; the continents are moving slowly about on the face of the Earth.)

What is striking about the NSF’s findings is the level of unfamiliarity with basic knowledge of human existence and the physical world. For example, only 10% of respondents understood the term “molecule,” while less than 30% understood the term “DNA.” Less than 50% of the respondents knew that it takes one year for the Earth to go around the Sun, that electrons are smaller than atoms, or that lasers do not work by focusing sound waves. Only about 50% agreed that humans did not exist at the time of dinosaurs.

The study concludes that Americans do not believe they are well informed about issues pertaining to science and technology. In fact, for all issues included in the NSF survey, the level of self-assessed knowledge appears considerably lower than the level of expressed interest. This is especially true for complex subjects, like science and technology, where a lack of confidence in understanding what goes on in laboratories or within the policymaking process is most evident. For example, in 1999, at least 40% of respondents in the NSF’s public attitudes survey said they were very interested in science and technology. Yet only 17% described themselves as well informed about new scientific discoveries and the use of new inventions and technologies; approximately 30% thought they were poorly informed.2

As one may expect, the more math and science courses one has taken, the better informed one thinks he or she is. The relationship between education and self-assessed knowledge is particularly strong for new scientific discoveries, the use of new inventions and technologies, and space exploration. It is also strong for economic issues and business conditions, for international and foreign policy issues, and somewhat less strong for the use of new inventions, technologies, and medical discoveries.

Understanding how ideas are investigated and analyzed is a sure sign of scientific literacy. This knowledge is valuable not only in keeping up with important issues and participating in the political process, but also in evaluating and assessing the validity of various other types of information. Figure 10.2 illustrates the level of understanding evident in the general population regarding what scientific inquiry is.3

FIGURE 10.2 Public understanding of the nature of scientific inquiry. (Percent understanding scientific inquiry among all adults, broken out by sex; formal education, from less than high school through graduate/professional degree; science/mathematics education, low to high; and attentiveness to science and technology: attentive, interested, and residual publics.)

If one compares these indicators of weak “understanding” with the data in Figure 10.3, depicting the ubiquity of computer resources within the general population,4 the conclusion can readily be drawn that these new technologies are being readily accepted as “tools” rather than as brilliant artifacts of the modern world. “Acceptance” without “understanding,” “adoption” without “analysis,” and “use” without “questioning” evinces behavior not too far removed from cultures where magic and the occult are part of the fabric of everyday life. The assumption that a ubiquitous, but opaque, system impenetrable to casual understanding will be accepted as fact is reflected in twenty-first-century people’s willingness to follow technology’s lead with little thought as to where it will take them or what really goes on within that black box on their desks.5

FIGURE 10.3 Ubiquitous access to information technology. (Percentage of adults with access at home in 1995, 1997, and 1999 to: one or more working computers; more than one working computer; a CD-ROM reader; a modem; Internet service; e-mail; and WWW access.)

Many have drawn parallels between today’s rapid pace of technological change and that experienced during the Industrial Revolution. It is currently fashionable to claim that we are in the throes of a 2nd Industrial Revolution—an information-based revolution. This may indeed be the case, but the current “revolution” is manifestly different from the first. Table 10.1 compares several significant features of the two revolutions and reveals fundamental differences.

TABLE 10.1 Industrial Revolution vs. Information Revolution

Industrial Revolution                                      Today
Tangible                                                   Intangible
Understandable                                             Less understandable—more arcane
More mechanical                                            More electronic, microelectronic, materials, chemistry, and physics
Transparent                                                Opaque
Large scale                                                Small and large scale
Small market                                               Mass market
Limited to industrial settings                             Pervasive: industrial, workplace, home
Replication of manual labor                                Replication of biological and neurological
Little education required to “understand”                  Advanced degrees/specialized training
Precipitated centralization and population densification   Facilitates decentralization and remote access

Of these differences, perhaps the most obvious is the intangible nature of today’s information technology. The hallmark of the Industrial Revolution was essentially an analog computer—the Jacquard Loom. This programmable cloth- and rug-weaving device used a series of cards of wood or paper with holes punched in them. The cards were interconnected into a “program,” and each card passed over a perforated four-sided drum against which rested a set of needles that were connected by wires to the warp threads. The needles were pressed onto the punched card, and wherever there was a hole for the wire to pass through the card, the corresponding warp thread would be raised. Each card made a row, and eventually the cards made the pattern.6

One of the early icons of our 2nd Industrial Revolution was the ubiquitous “punched card” employing the Hollerith Code to direct and program modern computer systems. However, aside from the obvious surface similarities of these two pioneering uses of “punched data,” the former was used to program a large, tangible, easily viewable, and well-understood mechanical device, while the latter instructed a relatively small, intangible, electronic system whose internal functions were understood by very few. It is this lack of transparency, an impenetrable mystery to most, and a shift away from the mechanical replication of man’s labors to the mimicking of his thought processes that creates a climate for ready acceptance in spite of a generalized bewilderment over how it works.

Such a reality forces the next question: is it important to understand a technology so broadly and readily accepted and applied throughout the population? The answer is an unequivocal yes. However, as displayed in Figure 10.4, there is a significant disparity among the general population between their level of interest in scientific and technological matters versus their perceived understanding of these fields.7 There was also a dramatic difference in the public’s understanding of abstract science and technology issues compared to more tangible issues affecting their daily lives. These gaps in “interest” versus “understanding” highlight the need to refocus the nation’s educational resources on making S&T topics more transparent and understandable. It is important for the citizenry to have some knowledge of basic scientific facts, concepts, and vocabulary. Those who possess such knowledge have an easier time following news reports and participating in public discourse on various issues pertaining to science and technology. It may be even more important to have an appreciation for the scientific process.8

FIGURE 10.4 Indices of public interest in and self-assessed knowledge about scientific and technological issues: 1990–1999. (Paired mean index scores for level of interest versus level of self-assessed knowledge across: agricultural and farm issues; space exploration; the use of nuclear energy to generate electricity; international and foreign policy issues; military and defense policy; economic issues and business conditions; the use of new inventions and technologies; issues about new scientific discoveries; local school issues; environmental pollution; and new medical discoveries.)

Curiously, the disparity between the deep penetration of advanced technology throughout the general population and the fundamental lack of understanding of the principles underlying the tools they are wielding points to the presence of a rather large sub-class that can be termed “Technical Idiot Savants.” Technical Idiot Savants comprise: technology users; technicians without depth; “6-month experts” created by certificate-granting proprietary programs with grandiose titles such as “Certified Network Engineer” or “Certified Web Designer”; Help-Desk workers who are just one step ahead of the clients they serve; and the “rest of us”—the end-users—who are able to use canned programs to do our jobs but have no inkling of how they operate or what to do when something goes wrong.

The compartmentalization of technical knowledge and skills is a key characteristic of the rise to prominence of Technical Idiot Savants within our society. In fact, the information technology field displays the most dramatic gap between the users of technology and an understanding of the scientific principles of the tools they employ on a daily basis. Users can, in large measure, be forgiven for not being proficient in such diverse and arcane areas as microelectronics, photonics, micro-machining, encryption, telecommunications, neural algorithms, microlithography, etc.; consumers have driven this industry with an abiding insistence on idiot-proof, turn-key systems and applications. The refusal of consumers to grapple with an understanding of how these things work has spawned a service industry of technology integrators and facilitators who install and maintain equipment in the field. Since the early days of office automation, when field technicians assigned to service servers, local area networks, and computers exhibited skill levels that stopped at “board swapping,” little substantive change has occurred. This is in spite of the ever closer coupling of software and hardware—a development that has made installation, maintenance, and upgrades easier than ever.
PEDAGOGICAL DILEMMAS

As the overwhelming majority of the educated workforce has developed a symbiotic relationship with advanced technology in the form of state-of-the-art telecommunications, computing, materials, and medicine—to name a few areas—it is incumbent upon the teaching profession to introduce non-technologists to the various dimensions of the unavoidable high-tech environment that will envelop them.
The National Science Foundation has found a direct correlation between attentiveness to science and technology policy issues and both years of formal education and the number of science and mathematics courses taken during high school and college. In 1999, only 9% of people without high school diplomas were classified as attentive to science and technology policy issues, compared with 23% of those with graduate and/or professional degrees. Similarly, 9% of those with limited coursework in science and mathematics were attentive to science and technology policy issues, compared with 19% of those who had taken nine or more high school and college science or math courses. Men were more likely than women to be attentive to science and technology policy issues.9

The NSF has also found that science literacy in the United States (and in other countries) is fairly low. That is, the majority of the general public knows a little, but not a lot, about science and technology. For example, most Americans know that the Earth goes around the Sun and that light travels faster than sound. However, not many can successfully define a molecule, and few have a good understanding of what the Internet is, despite the fact that the Information Superhighway occupied front-page headlines throughout the late 1990s—and usage has skyrocketed. In addition, most Americans have little comprehension of the nature of scientific inquiry.

The NSF’s findings are supported by the Organization for Economic Cooperation and Development’s (OECD) report entitled “Education at a Glance, 2001.” Figure 10.5 reveals that the United States ranks only twenty-fifth among the world’s nations in the percentage of college degrees awarded in the sciences.10 In part, this apparently low ranking is because “The United States awards a huge number of degrees in a wide variety of fields, such as business, law and health-related areas, in which other nations do not offer degrees; this skews the numbers somewhat.”11
FIGURE 10.5 Ranking of the United States among the world’s nations in the percentage of college degrees awarded in the sciences. (Percentages: South Korea 41.8; Germany 36.0; Finland 34.5; Czech Republic 32.4; Switzerland 31.2; Japan 29.3; France 29.1; Britain 27.9; Sweden 27.6; Ireland 27.2; Canada 21.5; Australia 20.4; Israel 19.6; United States 18.4; Norway 15.6.)
These findings illustrate the imperative for institutions of higher learning to inculcate a functional knowledge of science and technology in their student population. Of course, the primary target of such efforts should be the non-technologists within the student body. It is this population—the business, arts, humanities, and social science majors—that needs the greatest degree of attention. It is also this population which tends to enter government service, run for public office, enter the teaching profession, is generally more politically active, and where the greatest multiplier effect can be achieved. They generally form what the NSF describes as the Attentive Public.12

In the experience of the author, the most critical factor in teaching technology to non-technologists is dosage. In other words, how much is enough? Considering the broad and extremely uneven scientific background that non-technologists bring to the classroom, it is often most effective to focus classroom activities on the implications of the contemporary technological revolution, with an illuminating focus upon artifacts such as the stirrup, the printing press, and nuclear weapons. Students asked to consider issues of technology in business and world politics in the context of ongoing public debates will acquire a back-door or semi-passive infusion of technical principles in a familiar contextual setting. Addressing theories of technological determinism and the social construction of technology, as well as questions such as whether technologies are merely neutral tools or are shaped by their social context, and whether machines make history, further reinforces a mutually supportive multidisciplinary exploration of the role of technology in our lives. Other valuable questions and areas of concern that may be addressed include:

What is technology?
Does technology drive history?
Technology strategy
Technology transfer
From idea to market
Marketing in high-tech firms
The use of technology in marketing
Development of technology
The product development process
R&D management, organization, portfolio
Strategies for protection
Entrepreneurship and innovation
The innovation process
Innovation models
Creativity
Planning for innovation
Does the Internet promote democracy?
Will the Information revolution destroy the sovereign state?
What will future wars be like?
Will technology make war obsolete?
What is information warfare?
What will be the consequences of the global economy going online?
How does the world deal with technology’s effects on the global environment?
Technology and National Security
Proliferation Issues
National Controls
International Controls
The Future

RESPONDING TO THE NEED
In preparing for, or at least consciously facing, our contemporary period of rapid change, social institutions have a profound duty to initiate active measures to ensure that the general population is capable of adapting to the changing circumstances that inexorable technological advancement will inevitably leave in its wake. But institutions face serious obstacles in creating a genuinely multidisciplinary approach to educating the non-technologically savvy Attentive Public. The following advice is offered to those considering the creation of such a course of study. The author has found these to be essential elements in the development of a successful program.

Context
Running through the traditional litany of the historical evolution of technology is a basic essential for the initial framing of the journey. Provoking students to grapple with many of the issues enumerated
earlier (What is technology? What are the differences between science and technology? Is there such a thing as technological determinism?) provides an invaluable equalizing exercise for all students regardless of their technical/scientific background.

Connection to the Real World Is Critical to Capturing the Interest of the Student Audience
The ability to draw a direct connection between science and technology developments and the daily life of our nation, its individuals, and organizations is an unavoidable requirement. Further relating or embedding this connection in the context of ongoing social issues is also vital. Concepts, principles, and theories must be made as real and relevant as possible. Achieving this goal requires the use of props, examples, hardware, simulation, modeling, game playing, etc. Watching student reactions to touching a small piece of an F-15 bulkhead while explaining the importance of titanium and 5-axis machining to civil/military aviation quickly validates the importance of making this particular subject tangible.

Provide Historical Perspectives
During their school years, students should encounter many scientific ideas presented in historical context. It matters less which particular episodes teachers select than that the selection represent the scope and diversity of the scientific enterprise. Students can develop a sense of how science really happens by learning something of the growth of scientific ideas, of the twists and turns on the way to our current understanding of such ideas, of the roles played by different investigators and commentators, and of the interplay between evidence and theory over time. History is important for the effective teaching of science, mathematics, and technology also because it can lead to social perspectives—the influence of society on the development of science and technology, and the impact of science and technology on society. It is important, for example, for students to become aware that women and minorities have made significant contributions in spite of the barriers put in their way by society; that the roots of science, mathematics, and technology go back to the early Egyptian, Greek, Arabic, and Chinese cultures; and that scientists bring to their work the values and prejudices of the cultures in which they live.

Insist on Clear Expression
Effective oral and written communication is so important in every facet of life that teachers of every subject and at every level should place a high priority on it for all students. In addition, science teachers should emphasize clear expression, because the role of evidence and the unambiguous replication of evidence cannot be understood without some struggle to express one’s own procedures, findings, and ideas rigorously, and to decode the accounts of others.

De-emphasize the Memorization of Technical Vocabulary
Understanding rather than vocabulary should be the main purpose of science teaching. However, unambiguous terminology is also important in scientific communication and—ultimately—for understanding. Some technical terms are therefore helpful for everyone, but the number of essential ones is relatively small. If teachers introduce technical terms only as needed to clarify thinking and promote effective communication, then students will gradually build a functional vocabulary that will survive beyond the next test.
For teachers to concentrate on vocabulary, however, is to detract from science as a process, to put learning for understanding in jeopardy, and to risk being misled about what students have learned.
Don’t Evangelize. Don’t Sell
Speaking in techno-babble doesn’t work; neither does overselling the importance of this topic. The role of technology in modern society is self-evident once the student’s attention is focused upon it. It is the gradual realization of the ubiquitous nature of technology and its implications that defines understanding. By approaching the issue from 360° and encouraging self-discovery, the student begins to see past the utilitarian tool that technology represents for most of us and to appreciate the range of philosophical and public policy implications inherent in this field of study.
FUTURE DIRECTION

There is little evidence to support claims that the United States is falling behind in S&T development or application. Although there is a serious gap between the widespread use of technologies and an understanding of how they work, there is no evidence showing that this is a uniquely American problem. The robustness and breadth of the U.S. higher educational system is without equal worldwide, and it is dangerous to heed calls demanding its revamping to emphasize S&T in order to compensate for an incorrectly perceived shortfall. However, it is vital to note the close coupling between industrial trends and academic offerings and emphasis. In other words, while academia is often in the forefront of research and development activities that will eventually benefit industry, it is also responsive to negative industrial trends. For instance, the collapse and migration overseas of the U.S. nuclear industry was followed by the disestablishment of scores of nuclear engineering programs; universities have also been decommissioning their research reactors, thus sacrificing an entire field of technical pursuit.

Given the vicissitudes of federal funding for research and development,13 the linkage of science and technology education to industrial trends, and its prominence in public policy debates, it is all the more important to ensure that the educated public has as complete a grounding in S&T issues as possible. Perhaps it is a unique twenty-first century paradox that it is more important for “progress” and public policy formulation to focus the attention of our educational system upon the inter-relationships, consequences, and implications of current and previous technological developments rather than joining the “bandwagon of progress.” For educators, it is more important to inculcate in students an understanding of the appropriate implementation and management of technological development than to expect new innovations from their students. For the non-technical students, instilling an understanding of current and past developments will leave them well equipped to handle the future as well. To accomplish this, students must be exposed to the theories, language, culture, engineering difficulties, societal implications, and public policy problems posed by the inevitable advance of technology.
NOTES
1. http://www.nsf.gov/sbe/srs/seind00/c8/fig08-04.htm
2. National Science Foundation, Science & Engineering Indicators 2000 (http://www.nsf.gov/sbe/seind00/start.htm).
3. http://www.nsf.gov/sbe/srs/seind00/c8/fig08-06.htm
4. http://www.nsf.gov/sbe/srs/seind00/c8/fig08-17.htm
5. Ronald J. Stupak, Future View (Fall 1995): 1, 4.
6. http://www.computer.org/history/development/1801.htm
7. http://www.nsf.gov/sbe/srs/seind00/c8/fig08-01.htm
8. http://www.nsf.gov/sbe/srs/seind00/start.htm
9. http://www.nsf.gov/sbe/srs/seind00/access/c8/c8s1.htm
10. OECD, Education at a Glance, 2001. http://www.oecd.org//els/education/el/EAG2000/wn.htm
11. Bill Noxon, U.S. National Science Foundation, quoted in the Washington Post (July 21, 2001): A21.
12. The NSF has classified the public into three groups. The attentive public: those who (1) express a high level of interest in a particular issue, (2) feel well-informed about that issue, and (3) read a newspaper on a daily basis, read a weekly or monthly news magazine, or read a magazine relevant to the issue. The interested public: those who claim to have a high level of interest in a particular issue, but do not feel well informed about it. The residual public: those who are neither interested in, nor feel well informed about, a particular issue.
13. National Science Foundation, FY 2001 Department of Defense Share of Federal R&D Funding Falls to Lowest Level in 22 Years (February 26, 2001).
REFERENCES
Compududes Web Site (http://www.compududes.com/Default.htm).
Kuhn, Thomas S., The Structure of Scientific Revolutions, University of Chicago Press (November 1996).
National Science Foundation, FY 2001 Department of Defense Share of Federal R&D Funding Falls to Lowest Level in 22 Years (February 26, 2001).
National Science Foundation, Science & Engineering Indicators 2000 (http://www.nsf.gov/sbe/srs/seind00/start.htm).
Noxon, Bill, U.S. National Science Foundation, quoted in the Washington Post (July 21, 2001): A21.
Organization for Economic Cooperation and Development, Education at a Glance, 2001.
Stupak, Ronald J., Future View (Fall 1991): 1, 4.
INTERNET ADDICTION, USAGE, GRATIFICATION, AND PLEASURE EXPERIENCE: THE TAIWAN COLLEGE STUDENTS’ CASE*

* Chien Chou and Ming-Chun Hsiao, Institute of Communication Studies, National Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu, Taiwan. Reproduced, with permission, from Computers & Education, Volume 35, Issue 1, 2000, Pages 65–80. Tel.: +886-3-5731808; fax: +886-3-5727143; e-mail: [email protected]

ABSTRACT

This study explores Internet addiction among some of Taiwan’s college students. Also covered are a discussion of the Internet as a form of addiction and related literature on this issue. This study used the Uses and Gratifications theory and the Play theory in mass communication. Nine hundred and ten valid surveys were collected from 12 universities and colleges around Taiwan. The results indicated that Internet addiction does exist among some of Taiwan’s college students. In particular, 54 students were identified as Internet addicts. It was found that Internet addicts spent almost triple the number of hours connected to the Internet as compared to non-addicts, and spent significantly more time on bulletin board systems (BBSs), the WWW, e-mail, and games than non-addicts. The addict group found the Internet entertaining, interesting, interactive, and satisfactory. The addict group rated Internet impacts on their studies and daily life routines significantly more negatively than the non-addict group. The study also found that the most powerful predictor of Internet addiction is the communication pleasure score, followed by BBS use hours, sex, satisfaction score, and e-mail use hours.

INTRODUCTION

Use of the Internet on Taiwan’s college campuses and in society has increased dramatically in recent years. While the academic use of the Internet is primarily intended for faculty research and
communication, the Internet has also become an important part of student life. However, over-involvement with the Internet has occasionally been observed on campus. For example, Chou, Chou, and Tyan (1999) reported this observation: in one dorm at their science- and technology-oriented university, four roommates were busy, quietly working on their PCs. They logged on to the Internet to chat with other people—their roommates! Another observation the researchers made was that some college students flunked because they spent too much time on the Internet rather than on their studies. Some students remain connected to the Internet virtually the whole day—as long as they are awake. One researcher’s student reported that she could not do anything else, and felt serious depression and irritability when her network connection was out. These observations attracted the researchers’ attention and led us to ask: how does the Internet hook such students so tenaciously, leading them to produce addiction-like behaviors? Who is actually addicted to the Internet, and why are they addicted?

Although development of the Internet addiction concept is still in its infancy and academic investigations are few in number, some anecdotal data and empirical studies have accumulated in recent years. Griffiths (1998) considered Internet addiction to be a kind of technological addiction (such as computer addiction), and one in a subset of behavioral addictions (such as compulsive gambling). Brenner (1996) argued that because the Internet provides user-friendly interfaces and a convenient medium for checking information and communicating with others, a wide range of users have become cybernetically involved with the Internet, and this has certainly changed the profile of the “computer addict.” Kandell (1998) defined Internet addiction as “a psychological dependence on the Internet, regardless of the type of activity once logged on” (p. 12). He stated that college students as a group appear more vulnerable to developing a dependence on the Internet than any other segment of society, because college students have a strong drive to develop a firm sense of identity, seek to develop meaningful and intimate relationships, usually have free and easily accessible connections, and find their Internet use implicitly if not explicitly encouraged.

All these observations can also be applied to Taiwan’s college students. In Taiwan, the first network infrastructure is called TANET, which connects all schools and major research institutes. TANET still provides convenient and free access to faculty and most students. In Taiwanese society, many students separate from their families and move toward an independent life when they enter college. Most of them live in school dormitories, and have fast and free Internet access via school network systems. More than half of them had not used the Internet before entering college; neither had their parents. However, upon graduating from college, each of them is well experienced with the Internet. The Internet becomes an important part of college students’ lives, not only for their studies and daily routines, but also as a tool for getting to know other people and the rest of the world.

Most people use the Internet in healthy and productive ways. However, some college students develop a “pathologic” use of the Internet. Kandell (1998) gave an analogy: exercise is good and people require it, but over-exercise may have a destructively negative impact on human health. Internet use is similar.
Over-involvement with the Internet, or “pathologic Internet use” (PIU), may cause users time-management or health problems, and create conflicts with other daily activities or with the people around them. The Internet may be essentially good, but as in other areas of life, too much of a good thing can lead to trouble.
RESEARCH ASSUMPTIONS AND QUESTIONS

In this exploratory study, the researchers studied the Internet addiction issue from a communication perspective, adopting Morris and Ogan’s (1996) argument that the Internet is essentially a mass medium, just like television and newspapers. The researchers tried to investigate Internet addiction according to a combination of the theory of Uses and Gratifications and the Play theory in mass communication, assuming social and psychological origins of needs that generate expectations of the Internet, which lead to differential patterns of Internet exposure, resulting in needs gratification
and pleasurable experiences, as well as consequences such as addictive behavior (McQuail 1994, p. 318). Based on these assumptions, the purpose of this survey study was to examine Taiwan college students’ Internet addiction, Internet usage, gratification, and communication pleasure. The research questions asked in this study were:

1. Who are Internet addicts and how can we screen them?
2. What are the differences in Internet usage, needs gratification degree, and pleasure experience between the addict and non-addict groups?
3. What are the differences in Internet impact on dimensions of daily life between the addict and non-addict groups?
4. What are the predictors of Internet addiction?
LITERATURE REVIEW

Internet addiction as a new form of addiction has recently received much attention from researchers in sociology, psychology, and psychiatry, among other fields. Griffiths (1998) considered Internet addiction to be a kind of technological addiction, and one in a subset of behavioral addictions. Any behavior that meets the following criteria is operationally defined as functionally addictive:

1. Salience: a particular activity, such as Internet use, becomes the most important activity in the subject’s life and dominates his or her thinking.
2. Mood modification: subjective experiences people report as a consequence of engaging in the particular activity.
3. Tolerance: the process whereby increasing amounts of the particular activity or time are required to achieve the desired effects.
4. Withdrawal symptoms: unpleasant feelings, states, or physical effects when the particular activity is stopped or curtailed.
5. Conflict: conflicts between addicts and those around them, with other activities, or within the individuals themselves.
6. Relapse: the tendency for repeated reversions to earlier patterns of the addictive activity to recur.

In this sense, we can suspect that college students may be addicted to the Internet if (1) use of the Internet becomes the most important activity in their daily lives and dominates their thinking; (2) use of the Internet arouses in them a “high,” an “escape from the real world,” or other similar experiences; (3) they have to spend increasing amounts of time on-line to achieve the desired effect(s); (4) they feel irritable or moody when they are off-line; (5) Internet use causes conflicts between them and their parents, teachers, or friends, and between spending time on the Internet and on studies or sleep; and (6) they have tried to discontinue or decrease their Internet use, but reverted to former use patterns after some time.

Goldberg (1996) was the first to coin a term to describe such an addiction, Internet Addiction Disorder (IAD), and established a support group for Internet addicts, the Internet Addiction Support Group (IASG). He defined IAD by providing seven major diagnostic criteria: hoping to increase time on the network, dreaming about the network, having persistent physical, social, or psychological problems, and so on. Goldberg’s (1996) paper is the keystone cited by many other studies in this field. However, some researchers, such as Squires (1996), question the legitimacy of the therapy: using the Internet to help IAD sufferers.

Since the Internet is such a new form of addiction, how to measure Internet addiction becomes an important research issue. Egger and Rauterberg (1996) also developed on-line survey
questionnaires, and studied the network addictive behaviors of 450 valid subjects. Among them, 84% were male and 16% female, and 10.6% of the respondents considered themselves addicted to or dependent on the Internet. The respondents reported negative consequences of Internet use, such as feeling guilty about spending time on the Internet, lying to colleagues about Internet use time, and so on. The results showed a significant difference between answers from addicted and non-addicted users, and the researchers concluded that addictive behavior does indeed exist. However, they noted that their “Internet addicts” were self-identified, not judged by any validated “addiction” checklist. In other words, the “Internet addicts” in their study may not have been bona fide addicts.

Brenner (1996, 1997) examined Internet over-use among a self-selected on-line sample by developing an “Internet-Related Addictive Behavior Inventory” (IRABI) to survey world-wide Internet users. In the first 90 days that the survey was distributed on the WWW, 563 valid questionnaires out of 654 turn-ins from 25 countries were collected. The IRABI has 32 true–false questions such as:

† I have attempted to spend less time connected but have been unable to. (85% of 563 respondents answered yes)
† I have been told that I spend too much time on the net. (55%)
† More than once, I have gotten less than 4 h of sleep in a night because I was using the net (not due to studying, deadlines, etc.). (40%)
In Brenner's study, the average person scored 11 out of a possible 32 on the IRABI, with a standard deviation of 5.89, and the average survey respondent spent 19 h per week on-line. Eighty percent of the respondents indicated at least some problems, such as failure to manage time, missed sleep, and missed meals, suggesting that such patterns are in fact the norm. Some respondents reported more serious problems because of Internet use, such as trouble with employers or social isolation except for Internet friends: troubles that are similar to those found in other addictions. The IRABI questionnaire has good internal consistency (0.87), and all 32 items correlate moderately with the total score, suggesting that all items measure some unique variance. Therefore, the present researchers adopted the IRABI, translated it into Chinese for a prior study (Chou et al. 1999), and made some modifications to it for this study.
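An internal-consistency figure of this kind is conventionally a Cronbach's-alpha-style coefficient. As a minimal sketch, assuming a binary respondent-by-item matrix (the data below are simulated stand-ins, not Brenner's responses), the coefficient can be computed as follows.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical stand-in for 563 respondents answering 32 true/false items.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(563, 32))
print(round(cronbach_alpha(responses), 2))
```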
Besides the IRABI, Young (1998) developed an eight-item Internet addiction Diagnostic Questionnaire (DQ) based on the American Psychiatric Association's (1995) definition of pathological gambling. Young stated that anyone who answered "yes" to five or more of the eight questions can be classified as a dependent Internet user; others may be classified as nondependent users. The major concepts underlying these criteria are similar to Griffiths' (1998). Young's eight-item questionnaire seems the simplest and easiest instrument to use. Young used this instrument to collect 596 valid, self-selected responses out of 605 total responses in a 3-month period. From the DQ, 396 dependents and 100 nondependents were classified among the 596 responses. The dependent sample included 157 males and 239 females. Most striking was that dependents reported an average of 38.5 h per week spent on-line (with a standard deviation of 8.04 h), compared with the 4.9 h reported by nondependents (with a standard deviation of 4.70). This study also found that time distortion is the major consequence of Internet use. Students may experience significant academic problems, eventually resulting in poor grades, academic probation, and even expulsion from universities. Other problems created by excessive use of the Internet included disrupted marriages, dating relationships, parent–child relationships, and close friendships. Young concluded from this study that the Internet itself is not addictive; however, specific applications appeared to play a significant role in the development of pathological Internet use. Dependents predominantly used two-way communication functions such as chat rooms, MUDs, newsgroups, or e-mail, while nondependents used functions available on the Internet to gather information, such as Information Protocols and the WWW. Young's conclusion is consistent with Kandell's observation that MUD games, Internet relay chat (IRC), and chat rooms are the major activities that may lead people to addiction; extended Web surfing and compulsive e-mail checking can also create over-use problems. Chou et al. (1999) reported that some Taiwan college students who were considered "addicts" used BBSs (similar to chat rooms) most, followed by the WWW, FTP, newsgroups, e-mail, and games.
The first study of Taiwan students' Internet addiction was by Chou et al. (1999). It investigated Internet addiction on the basis of Stephenson's Play Theory of Mass Communication (Stephenson 1988), assuming that using the Internet generates some kind of pleasurable communication experience that draws users to the Internet again and again, and that over-use of the Internet finally leads them to addiction-like behaviors. In that study, 104 valid, self-selected samples were collected on-line. Among them, 68 (66.7%) were male, and 80% were students. The results indicated that Internet addiction does indeed exist among some of Taiwan's Internet users. Internet addiction scores correlated positively with escape pleasure scores, interpersonal relationship pleasure scores, and total communication pleasure scores, and also with both BBS use hours and total Internet use hours. The study found that the addict group (52 respondents) spent significantly more hours on BBSs and IRC than the non-addict group (47 respondents) and had significantly higher communication pleasure scores. Close review of this study suggests, however, that the demarcation between addicts and non-addicts should be re-examined carefully: the dichotomy was based on the mean of respondents' Internet addiction scores. In addition, the samples were self-selected rather than drawn randomly from Taiwan's Internet users, which limits the external validity of the study. Therefore, we decided to use a larger sample drawn systematically from the target population: college students. Moore (1995) stated that college students are considered at high risk for Internet problems because of their ease of access and flexible time schedules. Taiwan's Internet originated at the higher education level and is still provided almost free of charge to students. Therefore, we decided to examine excessive Internet use among Taiwan's college students.
METHODS

Instruments
The present study developed a survey questionnaire with five parts. The first part, the "Chinese-IRABI version II" (C-IRABI-II), was adapted from Brenner's (1996) "Internet-Related Addictive Behavior Inventory" (IRABI), with some questions revised to fit Taiwan's particular network environment. Unlike Brenner's true/false questionnaire, this part contained 40 Likert-scale questions; subjects were required to read each statement and indicate the extent of their agreement or disagreement on a 4-point scale: SA (Strongly agree), A (Agree), D (Disagree), and SD (Strongly disagree).
The second part of the survey questionnaire was based on Young's (1998) DQ, eight yes–no questions on Internet addiction:
1. Do you feel preoccupied with the Internet (think about previous on-line activity or anticipate the next on-line session)?
2. Do you feel the need to use the Internet for increasing amounts of time in order to achieve satisfaction?
3. Have you repeatedly made unsuccessful efforts to control, cut back, or stop Internet use?
4. Do you feel restless, moody, depressed, or irritable when attempting to cut down or stop Internet use?
5. Do you stay on-line longer than originally intended?
6. Have you jeopardized or risked the loss of a significant relationship, job, or educational or career opportunity because of the Internet?
7. Have you lied to family members, a therapist, or others to conceal the extent of your involvement with the Internet?
8. Do you use the Internet as a way of escaping from problems or relieving a dysphoric mood (e.g., feelings of helplessness, guilt, anxiety, depression)?
Young suggested that those who scored five or more can be considered Internet addicts. This study used two instruments in order to increase criterion-related validity, that is, to provide concurrent evidence of the validity of this study.
The third part asked subjects to mark their motivation and gratification levels on 12 listed motivation items, such as communicating with other people, searching for information, and so on. These items were identified from the related literature and prior interviews. Subjects were required to mark any item they regarded as one of their motivations and then to rate each marked item on a 5-point scale (very satisfied, satisfied, neutral, dissatisfied, and very dissatisfied) to indicate the strength of the motivation.
The fourth part of the survey questionnaire was the "Pleasure Experience from Internet Usage II" (PEIU-II), developed by the authors. The PEIU-II was based on Stephenson's (1988) concepts of communication pleasure, which assume that Internet users experience some kind of "communication pleasure" when they use the Internet, and that the more pleasure they experience, the more they use it. The first edition of the PEIU was presented in Chou et al. (1999) and identified five factors:
1. Escape: the pleasure of relieving worries or responsibility;
2. Interpersonal relationship: the pleasure of communicating with other people on-line;
3. Use behavior: the pleasure of using the Internet;
4. Intertext: the pleasure of interacting with the text/information;
5. Anonymity: the pleasure of being anonymous on-line.
The PEIU-II retained the items from the use behavior and intertext factors and added five items on anonymity, such as "Because of the anonymity, I can say what I really want to say on the Internet," "I feel free and easy because nobody knows who I really am on the Internet," and "It is fun to play roles on the Internet other than my role in real life." The version administered in this study consisted of 27 items on a 5-point Likert scale: SA (Strongly agree), A (Agree), N (Neutral), D (Disagree), and SD (Strongly disagree).
The fifth part of the survey questionnaire had 13 questions concerning subjects' demographic data and network usage. Subjects were asked to rate the Internet's impact on five dimensions of their lives (studies, daily life routines, relationships with friends/schoolmates, relationships with parents, and relationships with teachers) on an 8-point scale ranging from positive to negative. The entire questionnaire was pre-tested, and revisions (re-wordings, question ordering, instructions, and so on) were made according to the pretest results.

Subjects and Distribution Process
The target subjects were all Taiwan college students. The stratified sampling plan was based on the "Educational Statistics of Republic of China, Taiwan" (Administration of Education of Taiwan, Republic of China, 1997) and was conducted according to majors and geographic areas. One thousand two hundred and nine paper-and-pencil survey questionnaires were distributed to 26 departments and graduate programs in 12 universities and colleges around Taiwan. From mid-May to late June 1998, a total of 910 valid data samples were collected. Among them, 60% (546) were from male respondents and 40% (364) from female respondents. Eighty-one percent of the respondents were in the 20–25 age range, with a mean age of 21.11 and a standard deviation of 2.10.
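A proportional stratified allocation of this kind can be sketched in a few lines; the strata and population shares below are invented for illustration and are not the actual figures from the 1997 educational statistics.

```python
# Illustrative proportional allocation of 1,209 questionnaires across strata
# defined by major and region. The stratum shares here are hypothetical.

population_shares = {
    ("engineering", "north"): 0.18,
    ("engineering", "south"): 0.12,
    ("humanities",  "north"): 0.25,
    ("humanities",  "south"): 0.20,
    ("science",     "north"): 0.15,
    ("science",     "south"): 0.10,
}

TOTAL = 1209
allocation = {stratum: round(share * TOTAL)
              for stratum, share in population_shares.items()}
print(allocation, sum(allocation.values()))  # allocations summing to 1209
```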
RESULTS

Factor Analysis of C-IRABI-II and PEIU-II
The purpose of the exploratory factor analysis used in this study was to reduce the item pool by deleting invalid items. Factor analysis of the C-IRABI-II revealed six factors: problems related to Internet addiction, compulsive Internet use and withdrawal from Internet addiction, Internet use hours, the Internet as a social medium, Internet interpersonal relationship dependence, and the Internet as a replacement for daily activity, together contributing 52.14% of explained variance, with an overall reliability of 0.925. Three items were dropped from the original 40 because of their low validity. Thus, the final version of the C-IRABI-II consisted of 37 items; the total score on the 37 items (ranging from 37 to 148) for each respondent was his or her "Internet Addiction Score." Table 10.2 shows the names of the C-IRABI-II factors, the number of items, the variance explained, and the reliability of each factor.

TABLE 10.2 C-IRABI-II Factor Analysis Results

Factor Name | Number of Items | Variance Explained (%) | Reliability
1. Internet-addiction-related problems | 9 | 13.08 | 0.848
2. Compulsive Internet use and withdrawal from Internet addiction | 8 | 10.04 | 0.845
3. Internet use hours | 6 | 9.93 | 0.833
4. Internet as a social medium | 7 | 9.36 | 0.818
5. Internet interpersonal relationship dependence | 3 | 5.02 | 0.534
6. Internet as a replacement for daily activity | 4 | 4.72 | 0.481
Total | 37 | 52.15 | 0.925

Factor analysis of the PEIU-II revealed six factors: entertainment, escape, anonymity, alternative identification, interpersonal communication, and use behavior/intertext. Two factors (groups of items), anonymity and alternative identification, emerged from items designed to test for anonymity. Close examination revealed that items pertaining to anonymity reflected the pleasure of hiding oneself on the Internet, while items pertaining to alternative identification reflected the pleasure of playing another role (e.g., males becoming females). Therefore, the researchers accepted that the anonymity pleasure experience is actually two separate factors: anonymity and alternative identification. The six factors contributed a total of 56.01% of the explained variance. Three items were omitted from the original 27 because of their low validity. Table 10.3 shows the names of the PEIU-II factors, the number of items, the variance explained, and the reliability of each factor.

TABLE 10.3 PEIU-II Factor Analysis Results

Factor Name | Number of Items | Variance Explained (%) | Reliability
1. Entertainment | 6 | 11.84 | 0.797
2. Escape | 5 | 11.26 | 0.739
3. Anonymity | 3 | 9.28 | 0.746
4. Alternative identification | 4 | 9.14 | 0.758
5. Interpersonal communication | 3 | 8.19 | 0.663
6. Use behavior/Intertext | 3 | 6.31 | 0.408
Total | 24 | 56.01 | 0.843
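The item-reduction step described above (fit a six-factor model, then drop weakly loading items) can be sketched as follows. The study does not name its software or exact loading cutoff, so sklearn's FactorAnalysis is used here purely as a stand-in, on simulated data, with an assumed 0.30 cutoff.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Illustrative re-creation of the item-reduction step: fit a 6-factor model
# and flag items whose largest absolute loading falls below a cutoff. The
# simulated data and the 0.30 cutoff are assumptions, not the study's.
rng = np.random.default_rng(1)
n_respondents, n_items, n_factors = 910, 40, 6
latent = rng.normal(size=(n_respondents, n_factors))
weights = rng.normal(scale=0.6, size=(n_factors, n_items))
items = latent @ weights + rng.normal(size=(n_respondents, n_items))
items = (items - items.mean(axis=0)) / items.std(axis=0)  # standardize items

fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(items)
max_loading = np.abs(fa.components_).max(axis=0)  # per-item peak loading
weak_items = np.where(max_loading < 0.30)[0]
print(f"candidate items to drop: {weak_items}")
```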
Questionnaire Scores, Usage Hours, and Impact Ratings
Table 10.4 lists subjects' Internet addiction scores, communication pleasure scores, Internet usage hours, and impact ratings. The mean score on the C-IRABI-II from the 910 valid responses was 80.4 out of a possible total of 148, with a standard deviation of 16.10. The mean score on Young's eight yes/no questions was 2.06 out of a possible total of 8 (SD = 1.95). The mean score on the PEIU-II was 74.09 out of a possible total of 120 (SD = 11.20). The mean subject satisfaction score was 40.14 out of a possible total of 60 (SD = 8.80). Subjects spent an average of 5–10 h per week on the Internet: about 7.22 h on BBSs, 4.09 h on the WWW, 1.53 h on e-mail, 2.07 h on games, and 1.85 h on FTP. On average, they spent less than 1 h on newsgroups and IRC. Subjects also rated the Internet's impact on various dimensions of their daily lives on an 8-point scale: impact on their studies was rated at 4.88; on daily life routines, 4.35; on relationships with friends/schoolmates, 5.58; on relationships with parents, 4.95; and on relationships with teachers, 4.98.

TABLE 10.4 Questionnaire Scores, Usage Hours, and Impact Ratings

Scores | Number of Subjects | Mean | Standard Deviation | Note
Internet addiction scores (C-IRABI-II) | 910 | 80.40 | 16.10 | Possible total 148
Young's 8 addiction questions | 910 | 2.06 | 1.95 | Possible total 8
Communication pleasure scores (PEIU-II) | 910 | 74.09 | 11.20 | Possible total 120
Satisfaction scores | 902 | 40.14 | 8.80 | Possible total 60
Total Internet use hours | 910 | 5–10 h per week | |
BBS use hours per week | 910 | 7.22 | 9.20 |
WWW use hours per week | 910 | 4.09 | 5.44 |
E-mail use hours per week | 910 | 1.53 | 2.76 |
Game use hours per week | 910 | 2.07 | 6.00 |
FTP use hours per week | 910 | 1.85 | 6.11 |
Newsgroup use hours per week | 910 | 0.061 | 2.28 |
IRC use hours per week | 910 | 0.89 | 2.96 |
Internet impact ratings on studies | 910 | 4.88 | 1.81 | Possible score range 1 to 8; the higher the score, the more positive the rated impact
Daily life routines | 910 | 4.35 | 1.68 |
Relationships with friends/schoolmates | 910 | 5.58 | 1.36 |
Relationships with parents | 910 | 4.95 | 1.28 |
Relationships with teachers | 910 | 4.98 | 1.29 |

Internet Addicts Versus Non-Addicts
Two criteria were selected to distinguish addicts from non-addicts in this study; those meeting both criteria were identified as "Internet addicts." Egger and Rauterberg (1996) and Morahan-Martin and Schumacher (1997) set the addiction levels at 10.6 and 8.1% of their respective samples;
accordingly, we set as our first criterion that those who scored in the top 10% of the C-IRABI-II (110 and above) were possible addicts. This screened out 89 subjects as addicts. Our second criterion followed Young's suggestion that respondents answering "yes" to five or more of her eight questions be considered addicts; it screened out 125 of the 910 respondents (about 13.7%) as addicts. We made the conservative judgement that only the intersection of the two groups, 54 respondents, be identified as Internet addicts in this study; the other 856 subjects were classified as non-addicts. Figure 10.6 shows the numbers of subjects screened out by the two criteria.
A Pearson correlation analysis was conducted to check the relationship between the C-IRABI-II and Young's questionnaire scores. The two measurements were significantly and positively correlated, r = 0.643, p < 0.01, indicating that the two questionnaires have shared ground in assessing these subjects' addiction levels.
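A minimal sketch of the two-criterion screen and the instrument-agreement check, assuming hypothetical column names and simulated scores (the cutoffs of 110 and five-of-eight are those stated above):

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Simulated stand-ins for the survey data; column names are illustrative.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "c_irabi_ii": rng.normal(80.4, 16.1, 910).round(),
    "young_dq":   rng.integers(0, 9, 910),
})

crit1 = df["c_irabi_ii"] >= 110   # top ~10% of C-IRABI-II scores
crit2 = df["young_dq"] >= 5       # Young's five-of-eight rule
addicts = df[crit1 & crit2]       # intersection = "Internet addicts"
print(crit1.sum(), crit2.sum(), len(addicts))

# Agreement between the two instruments (the study reports r = 0.643
# on its real data; simulated independent scores will correlate near 0).
r, p = pearsonr(df["c_irabi_ii"], df["young_dq"])
```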
FIGURE 10.6 The numbers of subjects screened out by the two criteria: 89 subjects by the C-IRABI-II cutoff, 125 subjects by Young's eight criteria, and the 54 subjects in the intersection identified as Internet addicts in this study.

Addicts' Versus Non-Addicts' Internet Use Hours and Questionnaire Scores
Statistical results indicated that the 54 Internet addicts spent about 20–25 h per week on the Internet, while non-addicts spent about 5–10 h. Internet addicts spent an average of 17.66 h on BBSs (SD = 18.30), 6.58 h on the WWW, 3.47 h on e-mail (SD = 4.48), and 5.47 h on games (SD = 9.2). By contrast, non-addicts spent an average of 6.6 h on BBSs (SD = 7.9), 3.94 h on the WWW (SD = 5.26), and 1.42 h on e-mail (SD = 2.6). Two-tailed t-tests indicated that the addict group spent significantly more hours on BBSs, the WWW, e-mail, and games than the non-addict group (t = 4.03, p = 0.000; t = 2.32, p = 0.025; t = 2.85, p = 0.006; t = 2.57, p = 0.013, respectively). Table 10.5 lists the means and standard deviations for each Internet application for each group, along with the associated t-values.
Note that the addict group's PEIU-II scores were significantly higher than the non-addict group's, as were their satisfaction scores (t = 9.13, p = 0.000; t = 4.29, p = 0.000, respectively). Table 10.6 shows the PEIU-II and satisfaction score means and standard deviations, along with their respective t- and p-values; it also includes the C-IRABI-II and Young's-criteria means, standard deviations, and t- and p-values for each group.
Comparing the self-ratings of the Internet's impact on students' lives revealed that the addict group rated the impact on their studies and daily life routines significantly lower than the non-addict group did (t = −4.723, p = 0.00; t = −3.586, p = 0.001).
TABLE 10.5 Means and Standard Deviations for Internet Applications and Respective t-values

Application use hours per week | Addict group (n = 54): Mean | SD | Non-addict group (n = 856): Mean | SD | t-value | p
BBS | 17.66 | 18.30 | 6.60 | 7.96 | 4.03 | 0.000 (a)
WWW | 6.58 | 7.50 | 3.94 | 5.26 | 2.32 | 0.025 (b)
E-mail | 3.47 | 4.48 | 1.42 | 2.59 | 2.85 | 0.006 (a)
Games | 5.47 | 9.28 | 1.87 | 5.69 | 2.57 | 0.013 (b)
FTP | 1.72 | 3.76 | 1.86 | 6.23 | −0.14 | 0.889
Newsgroup | 1.44 | 4.63 | 0.56 | 2.05 | 1.26 | 0.215
IRC | 2.77 | 9.41 | 0.078 | 1.97 | 1.42 | 0.162
(a) p < 0.01. (b) p < 0.05.
TABLE 10.6 The Addict Group's and Non-Addict Group's Scores on Individual Measures

Scores | Addict group (n = 54): Mean | SD | Non-addict group (n = 856): Mean | SD | t-value | p
PEIU-II | 85.63 | 9.49 | 73.36 | 10.90 | 9.13 | 0.000 (a)
Satisfaction score | 44.70 | 7.07 | 39.87 | 8.83 | 4.29 | 0.000 (a)
C-IRABI-II | 108.09 | 7.44 | 78.67 | 14.87 | 25.98 | 0.000 (a)
Young's 8 criteria | 6.39 | 1.25 | 1.78 | 1.65 | 25.72 | 0.000 (a)
(a) p < 0.01.
There were no significant differences between the addict group's and the non-addict group's ratings of the impacts on relationships with friends/schoolmates, parents, and teachers. It is worth noting that the addict group reported negative Internet impacts on their studies and daily life routines (means = 3.56 and 3.50, respectively, both below the scale midpoint of 4.5). On the other hand, the addict group and the non-addict group both indicated highly positive impacts on their relationships with friends/schoolmates. Table 10.7 lists the Internet impact ratings of the addict group and the non-addict group.
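The group comparisons reported here and in Tables 10.5 and 10.7 are two-tailed independent-samples t-tests; a minimal sketch on simulated samples follows (a Welch test, which does not assume equal variances, is used here; the study's exact variant is not stated).

```python
import numpy as np
from scipy.stats import ttest_ind

# Illustrative two-tailed independent-samples t-test of weekly BBS hours,
# mirroring the Table 10.5 comparison; the samples are simulated.
rng = np.random.default_rng(3)
addict_bbs = rng.normal(17.66, 18.30, 54).clip(0)      # addict group, n = 54
nonaddict_bbs = rng.normal(6.60, 7.96, 856).clip(0)    # non-addicts, n = 856

t, p = ttest_ind(addict_bbs, nonaddict_bbs, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```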
TABLE 10.7 The Ratings of Internet Impacts on Students' Lives

Internet impacts on | Addict group (n = 54): Mean | SD | Non-addict group (n = 856): Mean | SD | t-value | p
Studies | 3.56 | 1.98 | 4.95 | 1.77 | −4.72 | 0.000 (a)
Daily life routines | 3.50 | 1.79 | 4.40 | 1.65 | −3.58 | 0.001 (a)
Relationships with friends/schoolmates | 5.22 | 1.86 | 5.60 | 1.32 | −1.46 | 0.151
Relationships with parents | 4.70 | 1.71 | 4.97 | 1.25 | −1.10 | 0.274
Relationships with teachers | 4.72 | 1.66 | 5.00 | 1.26 | −1.22 | 0.229
(a) p < 0.01.
Regression Analysis of Internet Addiction
One of the research questions of this study was, What are the predictors of Internet addiction? A stepwise regression analysis was therefore conducted in which the C-IRABI-II score was the dependent variable, and subject sex, BBS use hours, WWW use hours, e-mail use hours, game use hours, satisfaction scores, and PEIU-II scores were the independent variables. The analysis generated the following formula (in standardized beta weights):

C-IRABI-II score = 0.450 (PEIU-II score) + 0.301 (BBS use hours) + 0.106 (sex) + 0.106 (satisfaction score) + 0.082 (e-mail use hours)

indicating that the most powerful predictor of Internet addiction was the PEIU-II score, followed by BBS use hours, sex, satisfaction score, and e-mail use hours. Note that game use hours and WWW use hours were not included in this regression formula because of their low predictive power. Table 10.8 shows the regression model of Internet addiction.

TABLE 10.8 The Regression Model of Internet Addiction (dependent variable: C-IRABI-II; R² = 0.469)

Predicting Variables | B | SE B | Beta | Significance
PEIU-II scores | 0.659 | 0.044 | 0.450 | 0.000
BBS use hours | 0.517 | 0.054 | 0.301 | 0.000
Sex | 3.449 | 0.945 | 0.106 | 0.000
Satisfaction score | 0.193 | 0.055 | 0.106 | 0.000
E-mail use hours | 0.493 | 0.184 | 0.082 | 0.008
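Since the coefficients in the formula match the Beta column of Table 10.8, the fitted model can be applied in standardized form. A small sketch follows; the variable names are ours, and inputs are assumed to be z-scores, with sex coded numerically.

```python
# The stepwise model in standardized (beta) form, per Table 10.8. Inputs
# must be z-scores (sex coded numerically); this mirrors the reported
# equation rather than re-fitting the model.
BETAS = {
    "peiu_ii": 0.450,
    "bbs_hours": 0.301,
    "sex": 0.106,
    "satisfaction": 0.106,
    "email_hours": 0.082,
}

def predicted_addiction_z(z_scores: dict) -> float:
    """Predicted standardized C-IRABI-II score from standardized predictors."""
    return sum(beta * z_scores.get(name, 0.0) for name, beta in BETAS.items())

print(predicted_addiction_z({"peiu_ii": 1.0, "bbs_hours": 0.5}))  # 0.6005
```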
DISCUSSION AND CONCLUSION
The purpose of this study was to investigate Taiwan college students' Internet addiction, their Internet usage patterns, and their gratifications and communication pleasures. A paper questionnaire was administered to a stratified sample of 1,209 college students, and 910 valid responses were collected. The results indicate that Internet addiction does exist among some Taiwan college students: 54 Internet addicts were screened out by the C-IRABI-II and Young's criteria together. The percentage of addicts in this study's sample was about 5.9%, lower than Egger and Rauterberg's (1996) 10.6% and Morahan-Martin and Schumacher's (1997) 8.1%, probably because we used two criteria simultaneously to screen for possible addicts.
Our results indicate that, as a group, Internet addicts spent almost triple the number of hours on the Internet that non-addicts did. In particular, Internet addicts spent significantly more time on BBSs, the WWW, e-mail, and games than non-addicts. This result differs somewhat from the results of a previous study (Chou et al. 1999), in which the only significant differences between the addict and non-addict groups were in BBS and IRC use hours. In fact, BBSs, the WWW, e-mail, and games are four popular applications among Taiwan college students. The BBS, as its name suggests, was originally designed only to distribute information. However, because of the interactivity inherent in electronic BBSs, college students can not only post information but also respond to the postings of others. Gradually, BBSs have become forums for discussions of various topics, similar to newsgroups. Taiwan's BBSs also allow users to chat with many people, or to talk to particular users and groups of users, much like general "chat rooms." BBSs have therefore become important social tools for students to communicate with other people. Informal follow-up interviews with some of our respondents indicated that BBSs were indeed the most popular Internet application on Taiwan's
campuses, followed by the WWW and e-mail. This study's results indicate that college Internet addicts spent about 17 h per week on BBSs, about 6.5 h on the WWW, and about 3.47 h on e-mail, all significantly higher than the non-addicts' 6.6, 3.94, and 1.42 h. This is consistent with Kandell's (1998) observation that frequent e-mail checking is also a major activity that may lead people to addiction.
According to the theories of uses and gratifications and of communication play, students have a variety of needs (social, academic, etc.) that lead them to use the Internet; these produce different degrees of exposure to Internet applications (BBS, e-mail, WWW, etc.) and result in varying degrees of gratification and pleasure experience. Some students may tend toward over-involvement with or pathological use of the Internet and gradually develop addictive tendencies. Comparing the addict group's and the non-addict group's self-reported pleasure experience and satisfaction scores showed that the addict group scored significantly higher on the PEIU-II and satisfaction measurements. This means that the addict group felt that the Internet is more entertaining, fun, and interactive; they thought the Internet could help them escape from their real-world responsibilities and identities, and so they were more satisfied with their Internet usage.
Can we predict who is more likely to become addicted to the Internet? Based on the measurements in this study, the self-reported communication pleasure experience was the most powerful predictor, followed by BBS use hours, sex, satisfaction score, and e-mail use hours. In other words, the more one experiences the pleasure of using the Internet and BBSs, reports high satisfaction with using the Internet, and uses e-mail, the more likely he or she is to become addicted to the Internet. Males are also more likely to become Internet addicts.
What impact does the Internet have on addicts' daily lives? On average, the addict group in this study rated impacts toward the negative end in two dimensions: studies and daily life routines, such as meals, sleep, appointments, and classes. The addict group rated impacts on these two dimensions significantly more negatively than the non-addict group. However, both the addict and non-addict groups rated the Internet's impact on their relationships with friends/schoolmates positively. The interviewed students explained that the Internet gives them chances to meet new people, provides extra, if not the major, tools for communicating with old friends, and creates more topics to share with them. "You know somebody is always out there; you are not alone," one of our interviewees said. This "companionship" function is even better than that of a television set or a radio, because the interactive nature of the Internet enables them to connect with others at any time; they do not just passively receive information from outside. The Internet is indeed the window through which students communicate and interact with the world. How about the worlds of their parents and teachers? The results showed that the addict and non-addict groups both rated these two dimensions in the middle-to-positive range. One respondent said her family was proud of her ability to use the Internet. "They think I use my computer because I am working hard on my studies." Parents may only know that their children are on the net, but not what they are actually doing with it.
Taiwan's parents may not be aware of the Internet's possible negative impacts on their children, partially because the majority of college students use the Internet when they are on campus, and because the Internet itself is highly praised and promoted by society in general. Young's (1998) study reported that Internet dependents gradually spent less time with family and friends in exchange for solitary time in front of their computers. This may be true of some of Taiwan's Internet users; however, the data in this study did not show disrupted relationships with parents due to time conflicts or other reasons. As to the teacher relationship, some students said that the Internet gives them an extra channel for communicating with teachers. "If teachers answer my e-mail, I think they are in my group, and I will appreciate them more," one interviewed student said. This is an interesting topic worthy of further study. Should teachers use the Internet more to communicate with their students and to enter "student groups"? Should teachers encourage students to use the Internet to communicate with teachers? How about those students who are already over-involved with the Internet?
For instance, when I conducted the interviews for this study, one student even refused me because she believed I would reduce her time on the net! Therefore, I was forced to interview her in an on-line chat room. I really did "join her group," but I wondered whether I had implicitly encouraged her, a heavy user, to use the Internet even more. This study reviewed the recent research focus on Internet addiction, collected empirical data from Taiwan college students, and raised more questions than it answered. Although the Internet seems beneficial to most students, some addictive cases were found in the sample. There is no doubt that Internet usage among the general population and on college campuses will grow at an exponential rate, and the Internet addiction issue will become more and more visible and perhaps serious. More research on this topic is needed to understand the full scope of Internet addiction and its solutions.
ACKNOWLEDGMENTS

This work was supported by the National Science Council in Taiwan under Project NSC87-2511S-009-013-N. An earlier version of this paper was presented at the 107th Convention of the American Psychological Association in Boston, MA, on August 22, 1999. The second author of the earlier version was Dr. Sue-Huei Chen, because of her contribution to the preparation and presentation of the conference paper.
REFERENCES

Administration of Education of Taiwan, Republic of China, Educational Statistics of Republic of China, Taiwan, Taipei, Taiwan, 1997.
American Psychiatric Association, Diagnostic and Statistical Manual of Mental Disorders, 4th ed., APA, Washington, DC, 1995.
Brenner, V., An initial report on the online assessment of Internet addiction: the first 30 days of the Internet usage survey. [On-line] Available at http://www.ccsnet.com/prep/pap/pap8b/638b/012p.txt, 1996.
Brenner, V., Parameters of Internet use, abuse and addiction: the first 90 days of the Internet usage survey, Psychological Reports, 80, 879–882, 1997.
Chou, C., Chou, J., and Tyan, N. N., An exploratory study of Internet addiction, usage and communication pleasure—Taiwan's case, International Journal of Educational Telecommunications, 5(1), 47–64, 1999.
Egger, O. and Rauterberg, M., Internet behavior and addiction. [On-line] Available at http://www.ifap.bepr.ethz.ch/weger/ibq/res.html, 1996.
Goldberg, I., Internet addiction disorder. [On-line] Available at http://www.physics.wisc.edu/wshalizi/internetaddictioncriteria.html, 1996.
Griffiths, M., Internet addiction: does it really exist?, In Psychology and the Internet: Intrapersonal, Interpersonal, and Transpersonal Implications, Gackenbach, J., Ed., Academic Press, New York, 1998.
Kandell, J. J., Internet addiction on campus: the vulnerability of college students, CyberPsychology and Behavior, 1(1), 11–17, 1998.
McQuail, D., Mass Communication Theory, 3rd ed., Sage, London, 1994.
Moore, D., The Emperor's Virtual Clothes: The Naked Truth About the Internet Culture, Algonquin Books, Chapel Hill, NC, 1995.
Morahan-Martin, J. M. and Schumacher, P., Incidence and correlates of pathological Internet use, Paper presented at the 105th Annual Convention of the American Psychological Association, Chicago, IL, 1997.
Morris, M. and Ogan, C., The Internet as mass medium, Journal of Communication, 46(1), 39–50, 1996.
Squires, B. P., Internet addiction, Canadian Medical Association Journal, 154(12), 1823, 1996.
Stephenson, W., The Play Theory of Mass Communication, Transaction Books, New Brunswick, NJ, 1988.
Young, K. S., Internet addiction: the emergence of a new clinical disorder, CyberPsychology and Behavior, 1(3), 237–244, 1998.
ENSURING THE IT WORKFORCE OF THE FUTURE: A PRIVATE SECTOR VIEW*

Last September, the National Academy of Public Administration (NAPA) released its study of how the federal government might revamp its compensation policies to help it attract and retain high quality technology workers. That this study was undertaken at all reflects a compelling reality: as business becomes e-business and government becomes e-government, employers everywhere are going to have to reassess how they recruit and hold on to well-trained workers. There are, of course, significant and foundational differences in how the public sector and private sector conduct operations, but when it comes to compensation and employee retention, especially in the highly competitive information technology (IT) field, there is a lot of common ground. In fact, IBM's experience in revamping its compensation practices is mirrored to a great extent in the NAPA study. The study highlights many areas where the federal government might work to transform how it attracts, motivates, and retains modern IT workers.
IBM'S EXPERIENCE

IBM's ability to stay at the fore of our industry has two fundamental roots: our technology and our people. We strive never to lose sight of the fact that the one complements the other, and both are critical to our competitiveness in the marketplace. Attracting and keeping the best technology workers is, of course, a high priority at IBM. The technology and business landscape is constantly shifting, with new challenges looming over the horizon every day. We seek to anticipate those changes and provide meaningful rewards to both prospective and current employees. And we recognize that we need to spend our limited resources as smartly as possible. That means paying our most valuable employees what they are worth in the market, because we have a strong interest in retaining them. We want them to stay at IBM because they have historical knowledge about our systems, accounts, and customers. They are known and trusted by colleagues and customers alike. Retaining them, rather than replacing them, makes good sense on all levels.
GETTING COMPENSATION RIGHT

It is not that different for the federal government: it also needs to evaluate how it can best attract and retain high quality technology workers. Compensation is rarely the main reason today's worker decides where to work. A prospective employee will take a wide range of considerations into account: Is this a meaningful job where I'm doing something worthwhile? Is it a congenial workplace? What are my opportunities for professional growth and advancement? All of these things enter into the equation, but the fact remains that if an employer—corporate or government—gets the compensation wrong at the beginning, that employer won't even get the prospective employee in the door. Competitive compensation has become a given for the market-savvy tech worker. Frankly, the market leaves employers few alternatives. When competing for the best talent out there, choices are limited. Employers ignore market realities at their own peril. In this case, ignoring market realities means allowing pay to become misaligned: paying some employees too much and others too little. This becomes particularly troublesome if the pay of people with critical skills falls into the "too little" bucket. These employees may eventually vote with their feet and leave. And, as noted above, replacement is not as cost effective as retention.
* By Anne Altman, Managing Director, Federal Government Operations, IBM. She served as one of three private sector members of the Project Leadership Committee that provided the project team with insight and reaction to findings and possible solutions throughout the study.
KEYS TO SUCCESS

The NAPA study warned that the federal government may be facing an impending brain drain. Can it be an attractive place for the 21st-century IT worker? Of course it can, but succeeding will require a thorough understanding of workers' expectations and needs, knowledge of the labor marketplace, and a willingness to implement change. A few years ago, IBM saw the need to institute a pay system with salary rates more closely tied to the changing labor market. The question before us was not "can we" or "should we," but rather "how are we going to achieve this critical goal?" In a way this echoed a major re-evaluation IBM had undergone in the early 1990s as we transformed to an e-business, even before we had coined the term. We assessed where the tech industry was heading and made major changes in how we operate. Our own transformation to e-business has become a model for many government entities making their own transformation to e-government.
COMMITMENT OF SENIOR MANAGEMENT

As in the shift to e-business, the biggest single factor that allowed IBM to transform its compensation programs was the top-down commitment and drive from senior management. We went from a very structured, "one size fits most" approach, in which key decisions were made by compensation professionals and implemented by management teams with very little flexibility, to a more flexible system in which decisions are given to managers with the guidance to pay their best employees like the best in the marketplace. It is hard to overestimate just how important senior-level executives' focus and commitment are to implementing a transformation of this sort.
PHASED APPROACH

Classification
It is said that every lengthy journey begins with a single step. The IBM journey took several steps to reach its current compensation platform. As we went forward, we took into account that any approach to change of this kind needs to be focused, objective, flexible, and rational. Our initial focus was on classification: moving to broad banding and to job families priced in the market. We reduced the number of job descriptions by two-thirds, enabling us to concentrate more tightly on the needs at hand and to focus our attention on the skill groups most critical to our business.

Common Pay Increase
The second step of this journey involved instituting a common pay-increase date. This gives managers an objective, effective, and efficient way to evaluate all of their employees at the same time, thus helping ensure that pay decisions are consistent in approach and standards. We also made good use of existing IBM technology: our Lotus and IBM database software provided a high-performance e-business solution for a large-scale management population, enabling us to add an online tool and processing. The tool includes a budget, individual employee information, and where employees stand in their pay range compared to the marketplace for a similar position. This adds objectivity and focus to the process.

Pay Differentiation
Third, we instituted pay differentiation in the delivery of increases, giving managers the flexibility to pay their best employees first. Pay differentiation helps managers determine how to pay their best like the best in the market. Market-savvy tech employees know the market, and managers need to,
as well. Savvy managers know it is smarter, and that there is more value to IBM, in retaining rather than replacing employees. We also take into account the various skill sets that our critical IT employees bring to the table. At IBM, what we call the "dual ladder" allows our people with highly valuable technical skills to achieve the highest levels in the pay structure without being forced into management, from which they might otherwise leave. Again, pay is only one element in a competitive package. The IBM journey began in 1996–1997, and the three phases were completed and fully executed by 1998–1999. In the United States today, we plan 150,000 employee pay decisions, communicated to each employee by April, with increases becoming effective on May 1.

Work/Life Options
As the private sector is quickly learning, the extent to which work/life options can be improved will make the government a more attractive employer. Work/life flexibility is one of the biggest issues in the modern workforce.

Telecommuting
Telecommuting is a prime example of work/life flexibility. It offers obvious advantages to employees and employers and, on the flip side, poses its own challenges. But it has become one of the more significant offerings we have today to keep employees who might otherwise not join or stay at IBM.

Professional Development
Professional development is also critical. For the IT worker, the importance of continuing education and skills development cannot be overstated. Organizations that neglect development because of its costs will more likely face the much higher costs of replacement. We place a high premium on development at IBM, making a wide range of learning activities—including e-learning—and other developmental programs available to our employees.

Clear Communication
Any significant change will always prompt the question: Why? A transition to a market-based, pay-for-performance compensation and human resources management plan needs, from the outset, to be rational and understandable to all affected. You need to establish a compelling imperative for each phase of the transition. Ultimately, this should become an almost self-evident conclusion, given the reality of competing in a wide field for a limited supply of technology workers. Clear communication of objectives and impacts is required so everyone has the opportunity to understand what is happening and why. Managers at all levels, of course, need to be on board, committed, and clear as they begin implementation. Employees will quickly get the message. They will understand that the more skills they bring to the table, and the more they improve in contribution and performance, the more they will be paid. They also now know that the market has an impact, and that how the market values those skills will have a significant bearing on their pay. Implementing meaningful and understandable compensation policies will eventually reap numerous rewards in the skills and quality of the workers you can recruit to pursue your mission. Change of any kind is never easy, whether in industry or in the federal government. The same applies when restructuring a pay system to put a premium on performance, not tenure. IBM's experience suggests that involving employees and managers in discussion of what is happening, getting their thoughts, and understanding their concerns is key to building a well-conceived communications plan. They will highlight prime issues, challenges, and misperceptions for
you—before they become full-blown problems. The greatest buy-in from current employees comes when first-line managers are well prepared to explain and answer questions about what the changes mean and why they are being made.
CONCLUSION

An employer does not necessarily need to pay the most, offer large wealth-accumulation schemes, or provide the richest benefits to attract high quality employees. The federal government can certainly make a clear and compelling case for why someone should choose a career in the public sector if some very basic market realities are addressed from the start. Compensation is just one element in attracting, motivating, and retaining a skilled workforce, but getting it right early is critical to having a solid foundation. Get it wrong, and the entire structure will collapse. Get it right, and you have an excellent chance of attracting and retaining the technology workers necessary to an agency's success in delivering on its mission.
ORGANIZATIONAL PASSAGES: CONTEXT, COMPLEXITIES, AND CONFLUENCE*

* Sydney E. Martin and Ronald J. Stupak, Fording Brook Associates, 6905 Rockledge Drive, Suite 700, Bethesda, Maryland 20817. Email: [email protected]

What in context beguiles, out of context mortifies.
David Wayne
INTRODUCTION

Much has been made of the passages we all make in our personal lives. Perhaps starting with Erik Erikson's Eight Ages of Man, and further popularized by Gail Sheehy's Passages, the concept of required personal evolution is well documented. In the workplace, Laurence Peter described the advancement of employees to their level of incompetence. Organizations, like people, go through similar passages and grow to levels that challenge their competence. The ability of leadership and management to adapt and evolve through these changes and to gain new competencies is one of the most important determinants of their long-term success. While the passages for individuals are defined by age and stage of life, the parallel for an organization is its level of revenue and organizational reach.
Most of us view growth in predominantly positive ways. We even say that we will grow the economy or grow an organization in the same way that we grow a crop. We expect that with growth comes the harvest, which for an organization is larger earnings and geometrically improved business opportunity. Likewise, the individuals who produced the growth expect financial, power, and prestige rewards. Rarely is growth discussed as one of the greatest risk factors an organization faces.
As an organization grows, it goes through distinct phases that require different perspectives, characteristics, and skills from its team, and different resource availability, for high performance. All organizations are tested by their ability to improve their infrastructure to meet the expanded challenge. The leaders and managers of the organization face a similar challenge: the paradigm that made them successful in the early phases may not serve them as the organization grows and contextual realities change. Hands-on skills must be replaced with supervisory talents. Internal expertise must be replaced with the ability to create external linkages and opportunities.
There are five distinct phases an organization (and its managers) will face in its evolution. The first is the "start-up" phase, characterized by entrepreneurship and innovation. The second phase starts when the organization becomes operational and continues through the period where
the "family business" style of operations is functional. This is the period when a few highly motivated individuals perform a wide variety of functional roles; we call this the small business phase. From here, the organization develops the need for functional specialists, which in turn requires the creation of teams and a leadership style that directs and motivates the team, rather than having the leader serve as the principal hands-on, command-and-control decision-maker. We call this stage of growth the middle market phase. Finally, there is an evolution to a leadership business model in which department managers possess the qualities of middle market practitioners, and in which the creative leader's external focus leads to a strategic vision for driving the market and transforming the organization's future.
At any stage along the spectrum, an organization may become maladaptive. Simply put, its inability to adapt in terms of leadership, management, and resources results in ineffective performance and contextual failings. This maladaptation will either precipitate the end of the company or catalyze a re-evaluation of its cultural capabilities, leading to both a re-engineering of its infrastructure and a repositioning of its strategic initiatives.
Based on our reflective-practitioner orientation, we have prepared a matrix (see Figure 10.7) that demonstrates these evolutionary changes and systemic ramifications. The sections that follow describe the process phases and the infrastructure enhancements required to meet the internal challenges and external opportunities of each developmental stage. Each section concludes with an analysis of the risks and options that typically arise at that stage and the impact of the infrastructure on the successful realization of the potential rewards.
START-UP PHASE

Brief Description
The typical start-up phase originates with a small group of founders who come together with an expertise and/or a vision to fill a business need in the local market. Usually the founders have been employees of another organization in the market area they intend to serve and believe they have found a better way to operate. The start-up phase is anchored in this conceptualization until the organization has achieved sustaining revenue.

Operating Functions
Leadership
The leader is an entrepreneurial risk taker. Driven more by reward and less by power, the start-up leader is willing to undertake the hands-on responsibilities necessary to assure success. This is the kind of person you will hear saying, "It is just easier to do it myself." Start-up leaders excel at and enjoy hands-on production. The personal customer network of the leadership is critical. The leader may be formally designated or may evolve informally from among the founding partners.

Management
Management comprises the other co-founders, who are willing to follow the vision of the leader. Many times the management group is made up of equal or close-to-equal owners who are organized informally. Since there is little compensation for anyone initially, direction is often negotiated rather than dictated. The founders are functional generalists, doing whatever it takes to launch the entity.

Finance
There is typically limited capital. Funding is based on the personal investment of the principals. In the rare event that there is start-up capital, it is used for the direct costs of the business plan. Cash flow survival is the daily battle.
FIGURE 10.7 Business phase matrix.
Technology
Information technology (IT) is rudimentary at best, unless the company is itself a technology company. Computer networks, if they exist, are designed to share equipment. Technology is used principally for office applications, Internet research, and e-mail communication.

Human Resources
There is rarely a human resources (HR) department or policies and procedures of consequence. The founders make the decisions regarding assignments, recruitment, and benefits.

Transition Functions
Transition Process
The organization grows to the next level based upon the vision of the leader/founders. The direction is opportunity driven, and growth occurs within the boundaries of the founders' energy, willingness to sacrifice financially, and availability of opportunity. Since it is not planned growth, the opportunities will dictate direction, define the culture, and drive the business in ways that may be substantially different from the original vision.

Management Development
Management learns functional tasks experientially, as they are presented. Typically, responsibilities go to those who have the time or the closest prior training. Training is for the task at hand, not for future requirements.

Risks and Rewards
Most start-up operations live from day to day. This stage is almost exclusively a function of the founders, since the functional infrastructure has not been systematically developed. Start-ups depend on self-starting multi-taskers to survive. Daily, the founders need to solve survival issues and be comfortable with the risks they are running. These characteristics allow them to turn quickly when opportunities arise, thereby moving the organization in unenvisioned and unintended directions. Many businesses are started by founders without the required strategic positioning skills. Perhaps they simply got tired of corporate bureaucracy, and their success as corporate managers was not translatable into entrepreneurial success. Alternatively, their personal risk profile may not allow them to stay the course. Regardless, the consequences of non-performance by the leader at this early stage are enormous.
SMALL BUSINESS PHASE

Brief Description
During the small business phase, the organization continues to be dominated by the founder(s). Other management personnel, if not family, are typically implementers who perform cross-functional tasks under the control of the founder(s). This period continues until the size of the organization requires the development of functional specialization. Markets are usually local, and systems remain primitive.

Operating Functions
Leadership
The leadership continues to require entrepreneurial skills, and the organization will grow serendipitously, driven by the opportunities and consequences of the leader's decisions. The leader is
expected to be the "rainmaker" and to deliver on opportunities with hands-on performance. Frequently, the organization and the leader are synonymous in the customers' eyes.

Management
The management group consists of the non-leader founders, family members, or a limited number of cross-functional implementers. Managers do what needs to be done, usually on a time-availability basis. This available time becomes stretched as growth occurs and the functional needs (accounting, HR, IT, marketing) increase. The entrepreneurial leader has no time for training and little interest in formal functional administration, so the job of delivering on the opportunities the leader develops falls to the other managers.

Finance
Accounting usually remains more of a bookkeeping function, and the tax return is usually the closest thing to financial statements. During this phase, payables, receivables, and inventory systems begin to emerge. Cash is usually scarce, and a major job of the leader is to personally seek capital and credit. The need for financial reporting evolves with the strain on cash resources and with operations that have grown too large for the leader to control.

Technology
IT remains primarily an internal function, with increased networking among employees. If not created during start-up, a web site will be required at this stage. There is usually little technological sophistication in the management group (unless it is an IT-related company), and the limited advances have to compete for very scarce financial resources.

Human Resources
Now that everyone is getting paid, there is a need for an internal or external payroll system and a benefits program for employees. Still principally run as a family-type business, the organization has limited HR policies and procedures. The organization is stretched in this area as it begins to hire more employees to meet growth without grounding its recruitment and retention needs in the basics of professional HR management.

Transition Functions
Transition Process
As in the start-up phase, the transition to a middle market company occurs as a result of entrepreneurial growth. Without disciplined focus, the growth will be opportunity driven, and the demands on organizational resources will be unplanned and somewhat haphazard. This may lead to significant resource deployment in one functional area that is required for an opportunity and little in another that is not directly instrumental to that task.

Management Development
As described previously, a family atmosphere pervades, and management development is strictly on-the-job training and personal mentoring. As the need for functional expansion increases, it becomes clear that the limitations in training, as well as the parochial experiences of management, cannot be remedied with internal resources and ad hoc HR efforts.

Risks and Rewards
The stress of growth starts to become a major risk factor in this phase. This risk is juxtaposed with increased opportunity and profit potential as the local market for the goods or services is developed
and repeat/referral business is generated. As this growth develops, the need for functional expertise arises without the staff experience to meet it. The lack of financial, IT, and HR systems begins to hamper performance. With scarce capital for the development of new opportunities comes the simultaneous need for systems and management. The entrepreneurial leader has become successful because of his/her external marketing experience and is usually resistant to infrastructure expenditures, both in terms of time and resources. Lenders and investors require that their money be used for business expansion, not for managing existing operations. Many organizations fail to prosper in and grow through this phase because of the leadership, management, and system improvements neglected during their growth. The solution requires reinvestment in the business, rather than the realization of financial gain for the owners, in tandem with a disciplined trade-off between marketing opportunities and the infrastructure needs of the organization.
MIDDLE MARKET PHASE

Brief Description
This is the most difficult management and functional transition an organization faces, and few are able to make it successfully. While the start-up phase comes with enormous market acceptance risk, and the small business phase adds the stretching of internal systems and skills, the middle market phase simply cannot be successfully navigated without functional specialization, a significant change in leadership substance and style, enhanced clarification and operationalization of management skills and roles, and improved horizontal collaboration among functional departments.

Operating Functions

Leadership
Organizational leadership must make the transition from exclusive reliance upon entrepreneurship to effective team leadership, coupled with the strategic ability to spearhead growth in a focused, planned, and compatible way. The self-reliant and revenue-dominated skills of the founding entrepreneurial leader become overwhelmed by the complexity, sheer volume, and mass of an ever larger business and organizational structure. The leader must leave behind hands-on performance to direct team performance. (S)he must also assess the needs of the organization holistically, using multi-faceted analysis to make the trade-offs necessary in this much more complex environment.

Management
As the leader has to depend upon the team for effective performance, the management team must become functionally specialized, with senior-level skills and perspectives in their assigned functional areas. No longer micromanaged, each manager must acquire the leadership skills to develop a functional team and direct performance to an organizational plan. This evolution in executive skills, team leadership, and shared accountability is the sine qua non for success in the middle market phase.

Finance
This stage requires new capital and/or debt, an upgrade in the accounting system, and management reporting to monitor performance. The need for expansion and improvement in all systems and the required enhancement in the quality and size of staff demand a significant investment in the infrastructure. Additionally, there must be new investment in plant and equipment. If the organization is expanding rapidly, there will be a buildup in current assets (receivables and inventory) with the concomitant strain on cash flow.
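That strain can be made concrete with a back-of-the-envelope model. The sketch below is not from the chapter; every ratio and figure in it is an assumption chosen purely for illustration:

```python
# Illustrative sketch: how rapid revenue growth inflates receivables and
# inventory and can leave a profitable company short of cash.
# All ratios and figures below are assumptions chosen for the example.

def cash_flow_after_growth(revenue, growth_rate, days_sales_outstanding=45,
                           inventory_to_revenue=0.20, net_margin=0.05):
    """One-year cash estimate: profit minus the cash absorbed by the buildup
    in current assets (receivables modeled via days sales outstanding,
    inventory as a fixed fraction of revenue)."""
    added_revenue = revenue * growth_rate
    receivables_buildup = added_revenue * days_sales_outstanding / 365
    inventory_buildup = added_revenue * inventory_to_revenue
    profit = revenue * (1 + growth_rate) * net_margin
    return profit - receivables_buildup - inventory_buildup

# A firm growing 40% a year can be profitable on paper yet cash negative:
print(f"${cash_flow_after_growth(5_000_000, 0.40):,.0f}")  # roughly -$297,000
```

Under these assumed ratios, a $5 million company growing 40% earns $350,000 on paper but absorbs more than that into receivables and inventory, which is precisely the squeeze described above.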
Technology
Technology plays a pivotal role as organizations reach middle market size. Communication between increasingly distant (physically and informationally) operations requires sophisticated networking, storage, and security. Operations require direct linkup with key customers and stakeholders. The organization goes from simple local IT to a comprehensive internal and external communication and operational system.

Human Resources
The organization at this size requires a larger, more diverse workforce with more levels of responsibility and compensation. With this size come increased workplace compliance and benefit requirements and employee recruitment/retention complexities. Formal policies and procedures are required legally and operationally. Internal training to better satisfy growth needs becomes a staffing necessity. The senior HR officer becomes the head of a group of HR professionals and an important member of the senior management team.

Transition Functions

Transition Process
Unlike the earlier transitions that were opportunity driven and leader generated, the transition from a middle market to a leadership company usually emanates from planned growth, multi-functional enhancements, and strategic projections. This transition requires that the organization become one of the premier performers in its market area, gaining a competitive advantage from the coordinated fulfillment of the changes required in its middle market phase.

Management Development
As described in the operational areas, this is a time of enormous change in the scope and content of management responsibilities. For existing managers to make this transition requires formalized, internal, and on-the-job training to assure uniformity and completeness of exposure to key concepts and skills. Many times it is more cost-effective to send managers to outside training by professionals within their functional areas. Unfortunately the required transition may not be achievable for some, necessitating the outside recruitment of new managers with the required skills and experience.

Risks and Rewards
All of the described changes make this a phase of both management and capital risk for the organization. The size, shape, and makeup of the leader and management team must evolve, and capital must be raised and earnings re-invested to make the required infrastructure improvements. Without these changes the inadequate systems and inferior management of the organization will be overloaded, with the resultant deterioration in operating and financial performance. If the required advances are accomplished in a continuous improvement process and coupled with a compatible financial structure, the platform will be established for market leadership, improved performance, and strategic growth.
LEADERSHIP BUSINESS

Brief Description
Having created the platform in the middle market phase, the organization becomes positioned for strategic growth. Driven by its operations, it develops industry leadership as it expands into
a national or international force. With its size comes increasing access to investor capital and bank financing. The organization may decide to go public to access funding in larger amounts. This size and reach require organizational maturity in all areas. Organizational capabilities with respect to business development, management recruitment, and system development reach their peak.

Operating Functions

Leadership
The leader of a leadership business commands industry respect for expertise, vision, and insight. While (s)he leads the strategic plan and guides the senior management team, most of the leader's focus is external, opening new markets and arranging strategic operational, financial, and business development relationships. The effective leader will perform the increasingly visible role of spokesperson for the organization and its industry.

Management
There are now multiple layers to the organization. Even with the trend toward flatter organizations, the requirements of companies this size necessitate functional and sub-functional specialization. Leadership and supervisory skills must be developed at lower, in-the-trenches levels of the organization. The senior management team becomes the key operational force for meeting planned growth and performance objectives. Managers play a key ongoing role in the recruitment and development of the next generation of managers/leaders.

Finance
This is increasingly the stage of corporate finance, with major debt and equity relationships. Usually these relationships and the complexity of the organization necessitate audited financial statements and comprehensive management reporting. The ability to manage effectively is guided by the company's ability to create a dynamic strategic plan, implement it with effective annual budgeting, and monitor and adapt it with strategically oriented management reporting.

Technology
IT is the key to comprehensive internal and external communication. The wider the scope of operations, functionally and geographically, the more IT becomes essential to producing uniform levels of performance. It is pivotal in assuring that key personnel have the timely information required to perform their responsibilities. Linking customers to the company technologically permits a timeliness of service delivery that rivals smaller local companies. Training can be performed at a variety of locations simultaneously to provide the maximum synergy and uniformity of delivery. Web site and Internet communication become critical faces of the organization to the world.

Human Resources
The growth of the organization requires the continuous upgrading, expansion, and diversification of its workforce. In HR terms this means major recruitment initiatives, formalized training, and systematic retention programs. Geographic dispersion increases the complexity of compliance with state and local laws and customs. The qualities of organizational opportunity, compensation, benefits, training, and team orientation become critical components for maintaining quality performance in a larger, more dispersed environment.
Transition Functions

Transition Process
Successful growth in an organization of this size requires comprehensive planning, maintenance of organizational focus/discipline, and the ability to adapt a large institution to changing environmental factors. Transitioning to regional, national, and international market leadership requires visibility, development of new markets for products/services, continuous expansion of quality management, investment in and development of enhanced systems, and universal commitment to high performance objectives.

Management Development
As described above, the management group expands both vertically and horizontally. This requires a commitment to internal development programs, external training, and retention/recruitment efforts. It is central to a leadership company's success that the quality, style, and focus on team objectives of the group be maintained. Ideally, the increased size produces enhanced ways to serve; and the key to this scope of service lies in the dedication of the management group to core organizational values.

Risks and Rewards
This period has less financial risk and more operational, leadership, and management risk. This is not to say that leadership companies cannot experience financial problems, only that there are usually greater resources to combat them. The principal risk is that leadership and management are stretched to levels beyond their competence. The challenge confronted by all leadership groups is to develop the management capability to consistently adapt and reinvent themselves to prosper in more diverse and complex business environments. If the organization succeeds in its adaptation, the rewards include the widest array of business opportunities, resource capacity, and comparative/competitive advantage.
MALADAPTIVE BUSINESS

Brief Description
An organization that is unable to make the leadership, management, and/or functional improvements required for its stage of development will exhibit symptoms of distress which, if not addressed, can lead to business failure. The characteristics of the shortcomings depend upon the growth stage of the organization.

Operating Functions

Leadership
In the early entrepreneurial phases many organizations fail because the vision, contacts, negotiating ability, and technical skills of the founder(s) are not able to get the company to a sustaining level. Many times a leader who was successful in organizational life is unable to make the adjustments needed in a work environment without the infrastructure of the larger organization where his/her experience was gained. Conversely, it is very difficult for the successful entrepreneur to make the adjustment required to lead a larger company, where the ability to attract, direct, and motivate a team supplants the hands-on production of earlier phases.
Management
Early in the process many organizations fail because of a leadership struggle among the founders. The risk-taking entrepreneurship of early joiners/founders is incompatible with the need for unity and the need for managers satisfied with implementation roles. As the organization grows, it will demand functional expertise and team leadership skills not required previously and perhaps out of reach for founders who have not spent careers in these functional specialties. Frequently this requires turnover that is particularly difficult to execute with loyal followers who have lived through the trials of the start-up phases.

Finance
During the entrepreneurial phases the financial focus is on survival issues, such as cash flow, collection of receivables, and the adequacy of cash for business opportunity investment. Much has been written of start-ups and financial distress. This article is aimed at a more insidious financial risk: the failure to invest in the infrastructure required for growth. Departmentally, this results in inadequate financial systems to assure timely payments and receipts, as well as inadequate financial reporting to stakeholders and management. The lack of investment in other areas is discussed below.

Technology
Deficiencies in technology are rarely the source of business failures (except in technology companies), but with the communication needs of the fast-paced business environment of recent years, technology is a significant element of organizational growth and performance. Inadequate IT systems can impede business opportunity, restrict operations, retard the teamwork between functions and locations, and present a damaging picture to external stakeholders.

Human Resources
A key component of organizational development is movement from the singular focus on business development toward building a management team and support staff with the skills and motivation to lift the company to the next level. We have seen many organizations that have the office manager serve as the personnel manager until significant problems materialize, such as high turnover, employee litigation, or an unproductive corporate culture. All successful organizations understand the symbiotic relationship between success and their people. Often overlooked is investment in the teams that are necessary for meeting the extraordinary demands of growth.

Transition Functions

Transition Process
The transition process from a maladaptive to an adaptive growth phase requires the deployment of the resources necessary to build the required platform, even if this means electing to forgo an opportunity that is outside the current capabilities of the company. It makes the leadership and management job in a high-performance organization more complex, since they must balance the needs of the organization between external growth and the internal development of the required corporate resources. Correcting maladaptation will occur only when there is compatibility between growth and infrastructure trade-offs.

Management Development
Clearly the development of the management team is the key element that must be coordinated with growth, leadership, and functional expertise. Although risk should be viewed as a continuum
rather than as a set of discrete points in the growth process, there are two phases that carry the most substantial risk. The first is the entrepreneurial phase and the suitability of the team for this environment: the ability of the founders to coalesce around a leader and of the other founders to play the implementer support roles. Leadership and power struggles among the founders frequently are the source of start-up distress. Second, and just as difficult, is the movement from a leader-performance environment to the team-performance phases that, as discussed earlier, require a very different set of management skills. These skills have to be developed through internal or external training, development, or education. If the leader and managers cannot or will not make the transitions, they must be replaced in order for the organization to avoid the consequences of maladaptation.

Risks and Rewards
Clearly maladaptation has serious consequences, and correcting the incompatibility between the infrastructure platform and organizational growth has the potential for enormous rewards. It is at the heart of effective leadership and management.
CONCLUSION

Businesses today operate in an environment of short-term focus, demand for yearly profitability increases, free cash flows, flat organizational structures, and stringent operating ratios. In the wake of the excesses of the 1980s, and then the collapse of the technology sector combined with the corporate scandals of Enron, Tyco, WorldCom, and others, an era of cost cutting and cash-flow focus has arisen. The quick way to make an impression on your banker, investor group, and even on your employees is to cut expenses and improve the current operating margin. Anyone who has tried to raise capital or arrange corporate debt has heard the message loud and clear. What will be my first-year return on investment? How soon can you pay back the debt? What revenue stream will this generate? Driven by this thinking, organizations have accentuated rapid growth, high returns to investors, and accelerated payback for lenders. This philosophy is running into the reality that for an organization to sustain growth and realize its long-term potential it must invest in its infrastructure.

Much has been made of the leadership compensation packages that reward current-year earnings results and stock market appreciation. Stocks rise on revenue projections; then, when trouble comes, the analysts focus on free cash flow. A CEO who is focused on long-term profitability and building a market-dominant company will probably not last to see it realized. Knowing that a successful transition through the growth phases requires significant investment in infrastructure, and that this investment is contrary to the financial culture of the times, further encourages current CEOs to drive their short-term earnings, realize significant compensation, and then get out before the inevitable implosion occurs.

The pressures are somewhat different in companies that remain private. It is usually not possible to grow a company to a leadership position without institutional debt and equity. As a consequence, private companies face the same pressures. Also, for the founders who have sacrificed financially, worked under start-up stress, and risked their futures, it is very difficult to deny lucrative returns when profitability arrives. They have already made a serious trade-off between current and future lifestyles, but now, with funds available, it seems justified to cash out rather than reinvest.

For all of these reasons we are operating in an era where financial sacrifice, mid- to long-term vision, and investing in the future are hard to come by. Unfortunately, this is precisely what is required for a company to evolve into a leadership position and prosper over time. Over and over again we have seen situations where companies make a good start and fail as
they grow; growth by itself was many times the source of their demise. The problem was that they were not strategic about their growth. They did not balance their short- and long-term needs. In essence, the organization was not positioned to absorb growth profitably.

For now, leaders must do their best to invest sufficiently in infrastructure to empower future growth. Some have chosen the "big bath" approach, making expenditures in short bursts in order to minimize the impact on operations in the other periods. Others have chosen a modest reduction in operating margins and lower capital returns as the path. Either way, without the balance this article advocates, the banks and investors will get their return (at least the first ones in) while the principals, employees, customers, suppliers, and others with a longer-term interest will suffer the dire consequences.

Growth for the sake of growth is the ideology of the cancer cell, with the resultant disability and death. The only kind of growth that makes sense individually and organizationally is quality growth based on conscious choices in a framework of strategic projection. Whether it be passages in an individual's transition from stage to stage or an organization's transition from phase to phase, contextual realities, resource capabilities, and visionary goals must temper the timing, speed, and direction of the growth process.

The triggering of the creative capabilities of executives, managers, consultants, business school faculties, and public administrators is the critical anodyne at this historical crossroads regarding the future of business organizations in this period of accelerated change. Taking the leadership to shape a manageable context that will ensure high performance and a profitable environment internally and externally is one of the ultimate challenges facing organizational leaders as they accelerate into the future. This paradigmatic transformation must be confronted with gusto and courage, since it will set the strategic boundary-expanding framework for organizations through the phases in the areas of leadership, management, finance, technology, and human resources. Blending external demands for growth with the constant development of the internal infrastructure capabilities required to support profitable growth becomes the essential equation for success as we clarify the operational guidelines and boundary shifts of living and performing in the technological age.
REFERENCES

Annison, Michael H. and Wilford, Dan S., Trust Matters: New Directions in Health Care Leadership, Jossey-Bass Publishers, San Francisco, 1998.
Barker, Joel Arthur, Paradigms: The Business of Discerning the Future, HarperCollins, New York, 1993.
Christensen, Clayton M., The Innovator's Dilemma, HarperCollins, New York, 2003.
Drucker, Peter F., Management Challenges for the 21st Century, HarperCollins, New York, 1999.
Erikson, Erik, Childhood and Society, W.W. Norton, New York, 1950, pp. 247–274.
Gladwell, Malcolm, The Tipping Point, Little, Brown and Company, Boston, 2000.
Goleman, Daniel, Working with Emotional Intelligence, Bantam Books, New York, 1998.
Klein, Gary, Sources of Power: How People Make Decisions, The MIT Press, Cambridge, MA, 1998.
Peter, Laurence J. and Hull, Raymond, The Peter Principle: Why Things Always Go Wrong, William Morrow & Company, New York, 1969.
Reich, Robert B., The Future of Success, Alfred A. Knopf, New York, 2000.
Schön, Donald A., The Reflective Practitioner, Basic Books, New York, 1983.
Sheehy, Gail, Passages, E.P. Dutton & Company, Inc., New York, 1976.
Sternberg, Robert J., Successful Intelligence, Penguin Putnam, Inc., New York, 1997.
Stupak, Ronald J. and Greisler, David S., Unlearning lousy advice, Health Forum Journal, 43(5), 38–41, September/October 2000.
Stupak, Ronald J. and Leitner, Peter M., Eds., Handbook of Public Quality Management, Marcel Dekker, Inc., New York, 2001, pp. 2–40; see also pp. 573–612 and 721–740.
Welch, Jack and Byrne, John A., Jack: Straight from the Gut, Warner Books, Inc., New York, 2001.
IMPROVING TECHNOLOGICAL LITERACY*

The first step is understanding what is meant by "technology." Then we must try to reach the broadest possible audience.
ABSTRACT

Although the United States increasingly depends on technology and is adopting new technologies at a breathtaking pace, its citizens are not equipped to make well-considered decisions or to think critically about technology. Adults and children alike have a poor understanding of the essential characteristics of technology, how it influences society, and how people can and do affect its development.
INTRODUCTION

At the heart of the technological society that characterizes the United States lies an unacknowledged paradox. Although the nation increasingly depends on technology and is adopting new technologies at a breathtaking pace, its citizens are not equipped to make well-considered decisions or to think critically about technology. Adults and children alike have a poor understanding of the essential characteristics of technology, how it influences society, and how people can and do affect its development. Many people are not even fully aware of the technologies they use every day. In short, as a society we are not technologically literate.

Technology has become so user friendly that it is largely invisible. Many people use technology with minimal comprehension of how it works, the implications of its use, or even where it comes from. We drive high-tech cars but know little more than how to operate the steering wheel, gas pedal, and brakes. We fill shopping carts with highly processed foods but are largely ignorant of the composition of those products or how they are developed, produced, packaged, and delivered. We click on a mouse and transmit data over thousands of miles without understanding how this is possible or who might have access to the information. Thus, even as technology has become increasingly important in our lives, it has receded from our view.

To take full advantage of the benefits of technology, as well as to recognize, address, or even avoid some of its pitfalls, we must become better stewards of technological change. Unfortunately, society is ill prepared to meet this goal. And the mismatch is growing. Although our use of technology is increasing apace, there is no sign of a corresponding improvement in our ability to deal with issues relating to technology. Neither the nation's educational system nor its policymaking apparatus has recognized the importance of technological literacy.

Because few people today have hands-on experience with technology, except as finished consumer goods, technological literacy depends largely on what they learn in the classroom, particularly in elementary and secondary school. However, relatively few educators are involved in setting standards and developing curricula to promote technological literacy. In general, technology is not treated seriously as a subject in any grade, kindergarten through 12th. An exception is the use of computers and the Internet, an area that has been strongly promoted by federal and state governments. But even here, efforts have focused on using these technologies to improve education rather than to teach students about technology. As a result, many K-12 educators identify

* A. Thomas Young, Jonathan R. Cole and Denice Denton. Copyright © 2002 Issues in Science and Technology, Washington. Reproduced with permission of the copyright owner. Further reproduction or distribution is prohibited without permission. A. Thomas Young is a former executive vice president of Lockheed Martin. Jonathan R. Cole is provost and dean of faculties as well as John Mitchell Mason Professor of the University at Columbia University in New York. Denice Denton is professor of electrical engineering and dean of Engineering at the University of Washington, Seattle. Young was chairman and Cole and Denton were members of the National Academy of Engineering/National Research Council Committee on Technological Literacy (www.nae.edu/techlit).
technology almost exclusively with computers and related devices and so believe, erroneously, that their institutions already teach about technology. Most policymakers at the federal and state levels also have paid little or no attention to technology education or technological literacy. Excluding legislation focused on the use of computers as educational tools, only a handful of bills introduced in Congress during the past 15 years refer to technology education or technological literacy. Virtually none of these bills has become law, except for measures related to vocational education. Moreover, there is no evidence to suggest that legislators or their staffs are any more technologically literate than the general public, despite the fact that Congress and state legislatures often find themselves grappling with policy issues that require an understanding of technology. It is imperative that this paradox, this disconnect between technological reality and public understanding, be set right. Doing so will require the cooperation of schools of education, schools of engineering, K-12 teachers and teacher organizations, developers of curriculum and instructional materials, federal and state policymakers, industry and non-industry supporters of educational reform, and science and technology centers and museums.
WHAT IS TECHNOLOGY?

In the broadest sense, technology is the process by which humans modify nature to meet their needs and wants. However, most people think of technology only in terms of its tangible products: computers and software, aircraft, pesticides, water-treatment plants, birth-control pills, and microwave ovens, to name a few. But the knowledge and processes used to create and operate these products—engineering know-how, manufacturing expertise, various technical skills, and so on—are equally important. An especially critical area of knowledge is the engineering design process: starting with a set of criteria and constraints and working toward a solution—a device, say, or a process—that meets those conditions. Technology also includes the infrastructure necessary for the design, manufacture, operation, and repair of technological artifacts. This infrastructure includes corporate headquarters, manufacturing plants, maintenance facilities, and engineering schools, among many other elements.

Technology is a product of engineering and science. Science has two parts: a body of knowledge about the natural world and a process of inquiry that generates such knowledge. Engineering, too, consists of a body of knowledge (in this case, knowledge of the design and creation of human-made products) and a process for solving problems. Science and technology are tightly coupled. A scientific understanding of the natural world is the basis for much of technological development today. The design of computer chips, for instance, depends on a detailed understanding of the electrical properties of silicon and other materials. The design of a drug to fight a specific disease is made possible by knowledge of how proteins and other biological molecules are structured and interact.

Conversely, technology is the basis for a good part of scientific research. Indeed, it is often difficult, if not impossible, to separate the achievements of technology from those of science. When the Apollo 11 spacecraft put Neil Armstrong and Buzz Aldrin on the moon, many people called it a victory of science. Similarly, the development of new types of materials or the genetic engineering of crops to resist insects are usually attributed wholly to science. Although science is integral to such advances, however, they also are examples of technology—the application of unique skills, knowledge, and techniques, which is quite different from science.

Technology also is closely associated with innovation, the transformation of ideas into new and useful products or processes. Innovation requires not only creative people and organizations but also the availability of technology and science and engineering talent. Technology and innovation are synergistic. The development of gene-sequencing machines, for example, made the decoding of the human genome possible, and that knowledge is fueling a revolution in diagnostic, therapeutic, and other biomedical innovations.
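To make the idea of designing to criteria within constraints concrete, here is a minimal sketch of that selection logic. Everything in it (the candidate designs, the constraint limits, and the criterion weights) is invented for illustration and is not drawn from the article:

```python
# Minimal sketch of design as constrained tradeoff: hard constraints eliminate
# candidates outright; the survivors are ranked by weighted criteria.
# All names and figures are hypothetical.

candidates = [
    {"name": "design A", "cost": 120, "weight_kg": 9,  "performance": 0.90},
    {"name": "design B", "cost": 95,  "weight_kg": 11, "performance": 0.80},
    {"name": "design C", "cost": 60,  "weight_kg": 11, "performance": 0.60},
]

BUDGET, MAX_WEIGHT_KG = 100, 12  # hard constraints

def score(c):
    # The weights encode a tradeoff between performance and cost; different
    # weights would pick a different "best" design.
    return 0.7 * c["performance"] - 0.3 * (c["cost"] / BUDGET)

feasible = [c for c in candidates
            if c["cost"] <= BUDGET and c["weight_kg"] <= MAX_WEIGHT_KG]
best = max(feasible, key=score)
print(best["name"])  # design B under these weights
```

Note that design A, the best performer, is eliminated outright by the budget constraint, and the winner among the survivors depends entirely on the chosen weights: every final design involves tradeoffs.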
HALLMARKS OF TECHNOLOGICAL LITERACY

As with literacy in reading, mathematics, science, or history, the goal of technological literacy is to provide people with the tools to participate intelligently and thoughtfully in the world around them. The kinds of things a technologically literate person must know can vary from society to society and from era to era. In general, technological literacy encompasses three interdependent dimensions: knowledge, ways of thinking and acting, and capabilities. Although there is no archetype of a technologically literate person, such a person will possess a number of general characteristics. Among such traits, technologically literate people in today's U.S. society should:

† Recognize technology in its many forms, and understand that the line between science and technology is often blurred. This will quickly lead to the realization that technology permeates modern society, from little things that everyone takes for granted, such as pencils and paper, to major projects, such as rocket launches and the construction of dams.

† Understand basic concepts and terms, such as systems, constraints, and tradeoffs, that are important to technology. When engineers speak of a system, for instance, they mean components that work together to provide a desired function. Systems appear everywhere in technology, from the simple, such as the half-dozen components in a click-and-write ballpoint pen, to the complex, such as the millions of components, assembled in hundreds of subsystems, in a commercial jetliner. Systems also can be scattered geographically, such as the roads, bridges, tunnels, signage, fueling stations, automobiles, and equipment that comprise, support, use, and maintain the nation's network of highways.

† Know something about the nature and limitations of the engineering design process. The goal of technological design is to meet certain criteria within various constraints, such as time deadlines, financial limits, or the need to minimize damage to the environment. Technologically literate people recognize that there is no such thing as a perfect design and that all final designs involve tradeoffs. Even if a design meets its stated criteria, there is no guarantee that the resulting technology will actually achieve the desired outcome, because unexpected and often undesirable consequences sometimes occur alongside intended ones.

† Recognize that technology influences changes in society and has done so throughout history. In fact, many historical ages are identified by their dominant technology: the Stone Age, Bronze Age, Iron Age, Industrial Age, and Information Age. Technology-derived changes have been particularly evident in the past century. Automobiles have created a more mobile, spread-out society; aircraft and advanced communications have led to a "smaller" world and, eventually, globalization; contraception has revolutionized sexual mores; and improved sanitation, agriculture, and medicine have extended life expectancy. Technologically literate people recognize the role of technology in these changes and accept the reality that the future will be different from the present largely because of technologies now coming into existence, from Internet-based activities to genetic engineering and cloning.

† Recognize that society shapes technology as much as technology shapes society. There is nothing inevitable about the changes influenced by technology; they are the result of human decisions and not of impersonal historical forces.
The key people in successful technological innovation are not only engineers and scientists but also designers and marketing specialists. New technologies must meet the requirements of consumers, business people, bankers, judges, environmentalists, politicians, and government bureaucrats. An electric car that no one buys might just as well never have been developed, and a genetically engineered crop that is banned by the government is of little more use than the weeds in the fields. The values and culture of society sometimes affect technology in ways that are not immediately obvious, and technological development sometimes favors the values of certain groups more than others. It has been argued, for example, that such development traditionally has favored the values of males more than those of females and
that this factor might explain why the initial designs of automobile airbags were not appropriate to the smaller stature of most women.

† Understand that all technologies entail risk. Some risks are obvious and well documented, such as the tens of thousands of deaths each year in the United States from automobile crashes. Others are more insidious and difficult to predict, such as the growth of algae in rivers caused by the runoff of fertilizer from farms.

† Appreciate that the development and use of technology involve tradeoffs and a balance of costs and benefits. For example, preservatives may extend the shelf life and improve the safety of our food but also cause allergic reactions in a small percentage of individuals. In some cases, not using a technology creates added risks. Thus, technologically literate people will ask pertinent questions, of themselves and others, regarding the benefits and risks of technologies.

† Be able to apply basic quantitative reasoning skills to make informed judgments about technological risks and benefits. Especially important are mathematical skills related to probability, scale, and estimation. With such skills, for example, individuals can make reasonable judgments about whether it is riskier to travel from St. Louis to New York on a commercial airliner or by car, based on the known number of fatalities per mile traveled for each mode of transportation (a minimal numerical sketch of this comparison follows this list).

† Possess a range of hands-on skills in using everyday technologies. At home and in the workplace, there are real benefits of knowing how to diagnose and even fix certain types of problems, such as resetting a tripped circuit breaker, replacing the battery in a smoke detector, or unjamming a food-disposal unit. These tasks are not particularly difficult, but they require some basic knowledge and, in some cases, familiarity with simple hand tools. The same can be said for knowing how to remove and change a flat tire or hook up a new computer or telephone. In addition, a level of comfort with personal computers and the software they use, and being able to surf the Internet, are essential to technological literacy.

† Seek information about particular new technologies that may affect their lives. Equipped with a basic understanding of technology, technologically literate people will know how to extract the most important points from a newspaper story, television interview, or discussion; ask relevant questions; and make sense of the answers.

† Participate responsibly in debates or discussions about technological matters. Technologically literate people will be prepared to take part in public forums, communicate with city council members or members of Congress, or in other ways make their opinions heard on issues involving technology. Literate citizens will be able to envision how technology (in conjunction with, for example, the law or the marketplace) might help solve a problem. Of course, technological literacy does not determine a person's opinion. Even the best-informed citizens can and do hold quite different opinions depending on the question at hand and their own values and judgments.

A technologically literate person will not necessarily require extensive technical skills. Such literacy is more a capacity to understand the broader technological world than it is the ability to work with specific pieces of it. Some familiarity with at least a few technologies will be useful, however, as a concrete basis for thinking about technology.
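Here is the sketch of the fatalities-per-mile comparison promised above. The rates and distance below are rough, order-of-magnitude assumptions chosen purely for illustration; they are not authoritative statistics:

```python
# Rough sketch of per-mile risk comparison. The rates below are assumed,
# order-of-magnitude figures for illustration, not authoritative data.

FATALITIES_PER_100M_MILES = {
    "car": 1.5,                   # assumed: per 100 million vehicle miles
    "commercial airliner": 0.01,  # assumed: per 100 million passenger miles
}

TRIP_MILES = 950  # approximate St. Louis-to-New York driving distance

def expected_fatalities(mode: str, miles: float) -> float:
    """Expected fatalities for one trip at the assumed per-mile rate."""
    return FATALITIES_PER_100M_MILES[mode] * miles / 100_000_000

for mode in FATALITIES_PER_100M_MILES:
    print(f"{mode}: about {expected_fatalities(mode, TRIP_MILES):.1e}")
# Under these assumptions the car trip carries on the order of 100 times
# the fatality risk of the flight.
```

The point is not the specific numbers but the habit of mind: scaling a published per-mile rate to the trip at hand turns a vague fear into a comparable estimate.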
Someone who is knowledgeable about the history of technology and about basic technological principles but who has no hands-on capabilities with even the most common technologies cannot be as technologically literate as someone who has those capabilities. But specialized technical skills do not guarantee technological literacy. Workers who know every operational detail of an air conditioner or who can troubleshoot a software glitch in a personal computer may not have a sense of the risks, benefits, and tradeoffs associated with technological developments generally and may be poorly prepared to make choices about other technologies that affect their lives. Even engineers, who have traditionally been considered experts in technology,
may not have the training or experience necessary to think about the social, political, and ethical implications of their work and so may not be technologically literate. The broad perspective on technology implied by technological literacy would be as valuable to engineers and other technical specialists as to people with no direct involvement in the development or production of technology.
LAYING THE FOUNDATION
In order to improve technological literacy, the most natural and important place to begin is in schools, by providing all students with early and regular contact with technology. Exposing students to technological concepts and hands-on, design-related activities is the most likely way to help them acquire the kinds of knowledge, ways of thinking and acting, and capabilities consistent with technological literacy. However, only 14 states now require some form of technology education for K-12 students, and this instruction usually is affiliated with technician-preparation or school-to-work programs. In 2000, the Massachusetts Board of Education added a combined engineering/technology component to its K-12 curriculum, becoming the first state to explicitly include engineering content. Elsewhere, a few schools offer stand-alone courses at all grade levels, but most school districts pay little or no attention to technology. This is in stark contrast to the situation in some other countries, such as the Czech Republic, France, Italy, Japan, the Netherlands, Taiwan, and the United Kingdom, where technology education courses are required in middle school or high school.

One limiting factor is the small number of teachers trained to teach about technology. There are roughly 40,000 technology education teachers nationwide, mostly at the middle-school or high-school level. By comparison, there are some 1.7 million teachers in grades K-12 who are responsible for teaching science. Another factor is inadequate preparation of other teachers to teach about technology. Schools of education spend virtually no time developing technological literacy in students who will eventually stand in front of the classroom. The integration of technology content into other subject areas, such as science, mathematics, history, social studies, the arts, and language arts, could greatly boost technological literacy. Without teachers trained to carry out this integration, however, technology is likely to remain an afterthought in U.S. education.

Beyond grades K-12, there are additional opportunities for strengthening technological literacy. At two-year community colleges, many courses are intended to prepare students for technical careers. As they learn new skills, these students, with proper instruction, also can develop a better understanding of the underlying technology, which could serve as the basis for teaching about the nature, history, and role of technology in our lives. Colleges and universities offer a variety of options for more advanced study of technology. There are about 100 science, technology, and society programs on U.S. campuses that offer both undergraduate and graduate courses, and a number of universities have programs in the history, philosophy, or sociology of technology. Many engineering schools require that students take at least one course in the social impacts of technology. For the adult population already out of school, informal education settings, such as museums and science centers, as well as television, radio, newspapers, magazines, and other media, offer avenues for learning about and becoming engaged in a variety of issues related to technology.

A number of specific steps can help strengthen the presence of technology in both formal and informal education.
For example, federal and state agencies that help set education policy should encourage the integration of technology content into K-12 standards, curricula, instructional materials, and student assessments (such as end-of-grade tests) in non-technology subject areas. At the federal level, the National Science Foundation (NSF) and the Department of Education can do this in a number of ways, including making integration a requirement when providing funding for the development of curriculum and instructional materials. Technically oriented agencies, such as the National Aeronautics and Space Administration, the Department of Energy, and the National Institutes of Health, can support integration by developing accurate and interesting background materials for use by teachers of non-technical subjects.
At the state level, science and technology advisers and advisory councils, of which there are a growing number, can use their influence with governors, state legislatures, and industry to encourage the inclusion of technology content not only in the general K-12 curriculum but also in school-to-work and technician-preparation programs. State boards of education can provide incentives for publishers to modify next-generation science, history, social studies, civics, and language arts textbooks to include technology content. Such incentives might come from incorporating technological themes into state educational standards or from modifying the criteria for acceptable textbooks. States also should better align their K-12 standards, curriculum frameworks, and student assessments in the sciences, mathematics, history, social studies, civics, the arts, and language arts with national educational standards that stress the connections between these subjects and technology. Among such guidelines, the International Technology Education Association, a professional organization of technology educators, recently published Standards for Technological Literacy: Content for the Study of Technology, a comprehensive statement of what students must learn in order to be technologically literate.

Another crucial need is to improve teacher education. Indeed, the success of changes in curricula, instructional materials, and student assessments will depend largely on the ability of teachers to implement those changes. Lasting improvements will require both the creation of new teaching and assessment tools and the appropriate preparation of teachers to use those tools effectively. Teachers at all levels should be able to conduct design projects and use design-oriented teaching strategies to encourage learning about technology. This means that NSF, the Education Department, and professional organizations that accredit teachers should provide incentives for colleges and universities to transform the preparation of all teachers to better equip them to teach about technology throughout the curriculum. In preparing elementary school teachers, for example, universities should require courses or make other provisions to ensure that would-be teachers are, at the very least, scientifically and technologically literate. Science for All Americans, an educational guidebook produced by the American Association for the Advancement of Science, might well serve as a minimum standard of such literacy.

The research base related to technological literacy also must be strengthened. There is a lack of reliable information about what people know and believe about technology, as well as about the cognitive steps that people use in constructing new knowledge about technology. These gaps have made it difficult for curriculum developers to design teaching strategies and for policymakers to enact programs to foster technological literacy. Building this scientific base will require creating cadres of competent researchers, developing and periodically revising a research agenda, and allocating adequate funding for research projects. NSF should support the development of assessment tools that can be used to monitor the state of technological literacy among students and the public, and NSF and the Education Department should fund research on how people learn about technology. The findings must be incorporated into teaching materials and techniques and into formal and informal education settings.
It will be important, as well, to enhance the process by which people make decisions involving technology. One of the best ways for members of the public to become educated about technology is to engage in discussions of the pros and cons, the risks and benefits, and the knowns and unknowns of a particular technology or technological choice. Engagement in decision making is likely to have a direct positive effect on the non-expert participants, and involving the public in deliberations about technological developments as they are taking shape, rather than after the fact, may actually shorten the time and reduce the resources required to bring new technologies into service. Equally important, public participation may result in design changes that better reflect the needs and desires of society. Industry, federal agencies responsible for carrying out infrastructure projects, and science and technology museums should provide more opportunities for the non-technical public to become involved in discussions about technological developments. The technical community, especially engineers and scientists, is largely responsible for the amount and quality of communication
and outreach to the public on technological issues. Industry should err on the side of encouraging greater public engagement, even if it may not always be clear what types of technological development merit public input. In the federal arena, some agencies already require recipients of funding to engage communities likely to be affected by planned infrastructure projects. These efforts should be expanded. The informal education sector, especially museums and science and technology centers, is well positioned to prepare members of the public to grapple with the complexities of decision making in the technological realm. These institutions and the government agencies, companies, and foundations that support them could do much more to encourage public discussion and debate about the direction and nature of technological development at both the local and national level. If informed decision making is important for all citizens, then it is vital for leaders in government and industry whose decisions influence the health and welfare of the nation. With both sectors facing a daunting array of issues with substantial technological components, there is a great unmet need for accurate and timely technical information and education. Thus, federal and state agencies with a role in guiding or supporting the nation’s scientific and technological enterprise, along with private foundations concerned about good governance, should support education programs intended to increase the technological literacy of government officials (including key staff members) and industry leaders. Executive education programs could be offered in many locations, including major research universities, community colleges, law schools, business schools, schools of management, and colleges of engineering. The engineering community, which is directly involved in the creation of technology, is ideally suited to promote such programs. An engineering-led effort to increase technological literacy could have significant long-term payoffs, not only for decision makers but also for the public at large. These steps are only a starting point. Numerous other actions, both large and small, also will be needed across society. The case for technological literacy must be made consistently and on an ongoing basis. As citizens gradually become more sophisticated about technological issues, they will be more willing to support measures in the schools and in the informal education arena to raise the technological literacy level of the next generation. In time, leaders in government, academia, and business will recognize the importance of technological literacy to their own well-being and the welfare of the nation. Achieving this goal promises to be a slow and challenging journey, but one that is unquestionably worth embarking on.
KNOWLEDGE MANAGEMENT*

ANECDOTES

Origami
I have before me paper models of a unicorn, a stegosaurus, and a giraffe. Each was folded from a single square sheet of paper without any cutting or pasting. What is the value of these figures?

* David C. Hay, Essential Strategies, Inc. A thirty-year veteran of the Information Industry, Dave Hay has been producing data models to support strategic information planning and requirements planning for over twelve years. He has worked in a variety of industries, including, among others, power generation, clinical pharmaceutical research, oil refining, forestry, and broadcast. He is President of Essential Strategies, Inc., a consulting firm dedicated to helping clients define corporate information architecture, identify requirements, and plan strategies for the implementation of new systems. He is the author of the book Data Model Patterns: Conventions of Thought, recently published by Dorset House, and producer of Data Model Patterns: Data Architecture in a Box™, an Oracle Designer repository containing his model templates. He may be reached at:
[email protected]; Tel.: 713-464-8316; or http://www.essentialstrategies.com. This paper was previously published for the Oracle Development Tools Users Group and can be found at http://www.odtug.com. ODTUG is an independent, not-for-profit organization designed to aid you in your efforts to deliver reliable, high-quality information systems. The user group has more than 2,100 members worldwide who share a common interest in Oracle's development tools—Designer, Developer, Discoverer, and JDeveloper—as well as methodology, software process management, analysis and modeling, data warehousing, business process design, and Web development.
The paper involved cost a few pennies. I consumed 10 pizzas while folding them. Yet they are much more valuable than that. Why? I added knowledge. I obtained the knowledge from books written by John Montroll in Maryland, who created the knowledge. He has the incredible ability to see an animal, understand its proportions, and recognize how to convert a square piece of paper into a figure that represents that animal. After he created the knowledge, he transmitted it via books.1 I then acquired the knowledge and used it to produce the models.

Stuff We Know
My brain contains a lot of stuff I don't really (or rarely) need:

† ALTER TABLESPACE SYSTEM ADD DATAFILE.
† Exit 12 off the New Jersey Turnpike will take you to Carteret.
† //SYSIN DD *
† A runway number is the runway's heading with the rightmost digit dropped.
† You used to IPL a computer instead of re-booting it.
† You can transfer from the 6 train to the B, D, or F train at Bleecker Street only if you are going downtown.

My Work
For ten years I lived in the corporate world. I was an employee interested in materials planning techniques, and I became more knowledgeable about these than anyone in any of the companies where I worked. However, in each case, corporate politics prevented me from propagating the ideas, and my inability to deal with politics prevented me from advancing in these companies. When I joined Oracle, the working relationship was between me and my client. As long as I produced good work, my client (and I) were happy. My boss and the corporate structure were completely irrelevant. I liked that.

The Internet
Every day, I get a few more notes on my "Data Management mailing list." Not all of them are of equal value, but they give me a very good window on what many people are thinking about. Whereas in the old days my acquaintances would have been limited to people in my home town, now I am casually hearing from people all over the world—people who have exactly the same concerns that I do. I have a web page on which I have, among other things, posted the articles I have presented to ODTUG, IOUW, and others, as well as the ones I have written for magazines and journals. Each month the number of hits grows. (In March, 800 people visited.) I get periodic reports that reveal that these people are from Singapore, Thailand, The Netherlands, Brazil, Estonia, and dozens of other places around the world. Occasionally I get e-mail from a reader saying that she likes an article and has passed it around her office—in Bombay or Tokyo or Prague. My views are being shared with people around the world.
INFORMATION REVOLUTION

This is a cliché now, but the world is very different than it was. We, working in our narrow worlds, sort of know that, but I don't think many of us have really confronted this emotionally. It's not just that our children have different problems socially than we did, or that they can now blithely travel around the world without giving it a second thought.
What the above anecdotes have in common is their reflection of the fact that the world we live and work in is fundamentally different from what it was a hundred (or even fifty) years ago. This difference is evident in the way we work, what we do in our work, and the way our employers are organized. The modern era is all about knowledge.

In the nineteenth century, society moved from working primarily on farms and as single artisans producing products one at a time, to working in factories, producing hundreds or thousands of copies of a product at the same time. The driver of this new economy changed from land to capital. If you could accumulate enough capital to build a factory, the factory would produce wealth. The people who worked there simply carried out your instructions for making wealth. Instructions were passed down an organizational hierarchy, and performance monitoring was passed up. Karl Marx observed that the people who worked in these factories became alienated from their work. Divisions grew up between management on the one hand, which wanted the most output for the least money, and labor on the other, which wanted a living wage for its efforts.

In the last fifty years, things have changed again. Suddenly information is more important than physical capital. A company that is smarter in getting the most use out of a physical device will be more successful than one that is not. Marx's observations are no longer relevant because the relationships between labor and capital have fundamentally changed. Microsoft is one of the most successful companies of all time, yet it produces virtually no physical product and has relatively little physical capital. Oh, it does produce physical media, such as compact disks, but customers are not buying the media. They are buying the knowledge that is encoded on the media.

Consider the microchip in your computer: the value of all the chips produced today exceeds the value of all steel produced.2 What makes a chip so valuable? Certainly not its physical components. It is ultimately made of sand. The value is in the design of the chip and in the design of the complex machines that make it. The value is in its knowledge components. Even companies that sell physical products, such as automobiles, have had to radically increase the intellectual content of their products. To compete, a car must now be made intelligently, economizing on weight, cleverly getting the engine not to emit harmful gases, and providing just the right "feel." All these things come from the auto-maker's investment in knowledge and expertise, not from its investment in steel and rubber.

This change has had a profound effect on the nature of the workplace. Now most of us are "knowledge workers," not factory workers. Many of us no longer work for a "boss" who tells us what to do and makes sure that we do it to specification. We have become consultants, hired to assist our clients, using our expertise and knowledge. We tell the client what to do. This means that our entire transaction with the client comes down to whether or not we are providing a useful service. Not only is the client free to let us go if he decides we are not being useful, but we probably want to go if the environment is not one where we can be productive. This is a much happier arrangement than the corporate world, where we must be alert to politics and to making the right people happy—in ways that have nothing to do with our skills or abilities. Our motivation is no longer security and money.
We work on projects because they stimulate our imagination and intellect. We will work for a company as long as it provides interesting projects. When it stops doing so, we will go somewhere else.

The study of knowledge is both very old and very new. Philosophers have been writing about it for millennia. But attention to the relationship of knowledge to the structure of the workplace is relatively new. A lot has been written about it, to be sure, but most of this has been in the last ten years. This paper is an attempt to collect some of the more salient observations that have been made about knowledge in the modern workplace. Because of the nature of the topic, it is somewhat random in its structure, but it is to be hoped that the reader will get an introduction to what is being discussed in various knowledge management circles.
The paper will cover the following topics:

† Kinds of Knowledge—In General
† Knowledge of What?
† From Data to Wisdom and All the Steps in Between
† Too Much Knowledge?
† Implications of Knowledge Management to Companies
† Accounting Doesn't Cut It
† Kinds of Capital
KINDS OF KNOWLEDGE—IN GENERAL

The coin of the realm, then, has become "knowledge." This is an ancient concept that has taken on new meaning in recent years. What does it mean? Knowledge is created, acquired, and transmitted through generations from parents to children. Within organizations, knowledge is transferred from bosses to employees and vice versa, and among colleagues. The knowledge may be of techniques, procedures, events, rules, or navigation of the company itself. What kinds of knowledge are important to an organization? At a simplified level, we can identify these:

Data—As information professionals, we assume that the most important knowledge is that which is captured in our relational databases. We are merrily building data warehouses that purport to put all the information in the company at the management's fingertips. This is only one part of the company's knowledge, however. It is confined to information about products, people, activities, and so forth, that are currently part of our environment. A data warehouse has little information about the future. What businesses should we be in instead?

Intellectual capital—Buried more deeply in the company's archives are the results from its research and development. Here are the patents and copyrights. The drugs that were interesting ideas but which didn't pan out in curing the diseases for which they were intended. The ideas that looked promising but never came to fruition that time around. Here we have a tremendous source of future growth and revenue. There is intellectual capital that the company already owns that it has been unable to exploit. For example, how many patents does your company hold that are filed away somewhere, forgotten? Can systems help here? Of course. Can systems solve the organizational problem of making it possible to use this capital in a constructive way? Probably not.

Expertise—The third category of knowledge is the hardest of all to capture—the expertise of the company's employees. People know things about what works and what doesn't. A company with low turnover has a tremendous body of knowledge—if it can figure out how to exploit it. A company with high turnover is losing wealth every time someone leaves.
KNOWLEDGE OF WHAT?
So what is it that we want to know? Journalism can give us a clue. The traditional dimensions of any news story are "what?," "how?," "where?," "who?," "when?," and "why?" John Zachman has pointed out that these translate into the following:

† Things of the business (What)—What are the things of significance to the organization about which it wants to know something? What resources (physical and intellectual) exist?
† Processes (How)—What does the company do? What should it be doing? How does it work?
† Distribution and geography (Where)—Where does the company do business? How do people, materials, money, and information travel from place to place?
† The organization (Who)—What is the company's organization? This whole change in orientation towards knowledge management is having profound effects on the organization. What does this mean?
† Events, agents, responses (When)—What role does time play in the company's operations? What events cause things to happen? Who responds and in what ways?
† Motivation and business rules (Why)—What are the company's objectives, and how are they translated into business rules?
The company’s body of knowledge is composed of all of these, mixed together in various ways. Some modeling techniques are available to address some of them, but no model has yet completely captured them all.
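Where it helps to make the six questions concrete, here is a minimal sketch (the class and field names are hypothetical, invented for this illustration rather than taken from Zachman or the paper) of how a corporate knowledge catalog might tag each knowledge asset along the six interrogatives:

from dataclasses import dataclass

@dataclass
class KnowledgeAsset:
    """One entry in a hypothetical corporate knowledge catalog,
    tagged along the six interrogatives described above."""
    what: str   # thing of significance (entity or resource)
    how: str    # process it participates in
    where: str  # location or distribution channel
    who: str    # organizational unit or role involved
    when: str   # triggering event or time dimension
    why: str    # objective or business rule served

# Example: cataloging what the company knows about customer complaints.
complaints = KnowledgeAsset(
    what="Customer complaint records",
    how="Complaint intake and resolution process",
    where="Call centers and regional offices",
    who="Customer service department",
    when="Logged on receipt of each complaint",
    why="Reduce complaint rate; protect the customer relationship",
)

A record like this captures only the data dimension of the six; the expertise behind each answer still lives with the people who wrote it down.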
FROM DATA TO WISDOM AND ALL THE STEPS IN BETWEEN
It is common to confuse data, information, and knowledge. People are beginning to tease apart definitions of each. Verna Allee has defined levels of knowing in terms of the first two categories described above: what is known, and how is it used. In each of these realms, she has then characterized the following:3

What is known?

† DATA (Instinctual learning)—the sensory or input level.
† INFORMATION (Single feedback loop learning)—data organized into categories.
† KNOWLEDGE (Behavior modification)—the interpretation of information by someone.
† MEANING (Communal learning)—perception of concepts, relationships, and trends. From this perspective it is possible to detect relationships between components.
† PHILOSOPHY (Inquiring into our own thinking processes)—integrative or systemic understanding of dynamic relationships and non-linear processes, discerning patterns that connect. Recognizes the embeddedness and interconnectedness of systems.
† WISDOM (Generative learning)—learning for the joy of learning, involving creative processes, heuristic and open-ended explorations, and profound self-questioning.
† UNION (Synergistic)—integration of direct experience and appreciation of oneness or deep connection with the greater cosmos. Requires processes that connect purpose to the health and well-being of the larger community and the environment.

How is it used?

† DATA (Feedback)—registering data without reflection.
† PROCEDURAL (Efficiency)—doing something the most efficient way. Conforming to standards or making simple adjustments to modifications. Focus is on developing and following procedures.
† FUNCTIONAL (Effectiveness)—seeking effective action and resolution of inefficiencies. Evaluating or choosing between alternate paths. Focus is on work design and engineering aspects.
† MANAGING (Productivity)—using conceptual frameworks to understand what promotes or impedes effectiveness. Effective management and allocation of resources and tasks, using conceptual frameworks to analyze and keep track of multiple variables.
† INTEGRATING (Optimization)—long-term planning and adaptation to a changing environment. This includes long-range forecasting, development of multi-level strategies, and evaluating investments and policies with regard to long-term success.
† RENEWING (Integrity)—defining or reconnecting with values, vision, and mission. Understanding purpose.
† UNION (Sustainability)—commitment to the greater good of society, the environment, and the planet.
What does all this mean? Imagine our data warehouse project. It begins with the collection and compilation of data from many sources. The process involves technology to bring the data from various places to a central place.

The data become information when they are presented in an organized fashion. A sales report, for example, or a customer complaints report, is information. Our data warehouse assignment is to present the information as efficiently as possible. Current procedures are being followed for dealing with complaints, and evaluations are in terms of their success in doing so.

The sales and complaint data become knowledge when the process of handling complaints is examined and attempts are made to improve it. This affects the company's processes by stimulating efforts to make the customer complaints go down. The process of dealing with complaints is examined to see if it can be improved. We are looking for effective action.

By looking at the overall process of handling complaints, it is possible to divine the meaning of this procedural knowledge. How can we make the company more productive overall? Are there patterns behind the complaints? What about the correlation between high levels of complaints and declining sales? What is there about the product, the way we sell it, and the way we use it that causes these complaints?

The examination of philosophy is all about understanding patterns in the environment. Our response is to do long-term planning to adapt to the environment, based on what we are able to figure out about it. Is the company exhibiting wisdom in the way it pursues its values, vision, and mission? How compatible are those complaints (and sales levels) with our values, vision, and mission? And finally, how does our company's behavior (in the resolution of complaints, for example) relate to the community at large? Are our products socially desirable? Are we furthering life or inhibiting it? Have we formed a union with our environment?
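As a minimal illustration of the first step on that ladder (the records and categories below are invented for the example), the following sketch turns raw complaint records, which are data, into a categorized summary, which is information. The later steps, knowledge and beyond, happen in the people who act on the summary, not in the code:

from collections import Counter

# Raw complaint records: data, facts with no organization.
complaints = [
    {"product": "Widget A", "reason": "late delivery"},
    {"product": "Widget A", "reason": "defective"},
    {"product": "Widget B", "reason": "late delivery"},
    {"product": "Widget A", "reason": "late delivery"},
]

# Organizing the data into categories turns it into information.
by_reason = Counter(c["reason"] for c in complaints)
for reason, count in by_reason.most_common():
    print(f"{reason}: {count}")
# Prints:
#   late delivery: 3
#   defective: 1
# Knowledge begins only when someone asks why late deliveries
# dominate and changes the process; no query performs that step.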
TOO MUCH KNOWLEDGE?

Especially in our field, there is way too much to know. When I was new in the business, I could read Computerworld and not be bothered by the fact that most of it was a complete mystery to me. Now, most of the articles are on subjects that I am supposed to know something about. Indeed, I could know a lot about them if I only had time to pursue them. I find it extremely bothersome that I don't really know them. How many books do you have on your bookcase that looked really interesting when you bought them, and you really do intend to read—but haven't had time to look at yet?

We, and all managers, are up against the Law of Requisite Variety, first described by W. Ross Ashby in 1956: Only variety can destroy variety.4 This means that if you wish to regulate a process, your variety must be equal to that of the process to be regulated. Variety is a concept from information theory. It means simply the number of different states. A communication channel's capacity is expressed in terms of its variety. If it can transmit 56,000 bits per second, that is the total variety that can be communicated. The problem is that each of us has a channel capacity. We can only absorb so many things. Interestingly enough, this is measured in terms of the number of discrete actions we can take. If we are only able to act in four ways, we are only capable of usefully receiving four distinct states of information—a variety of four.

We deal with this by inventing amplifiers and attenuators for the variety. ("Attenuator" is an engineering word for "filter.") An exception report is an attenuator. A more common attenuator is our tendency to simply skim over large reports, with random facts reaching our consciousness. These techniques reduce the total variety of the original body of data. Going the other direction, a broadcast e-mail from the boss to his staff is an amplifier.
The Law of Requisite Variety was converted by Stafford Beer into three Principles of Organization. The first of these, and the only one that concerns us here, is:

Managerial, operational, and environmental varieties, diffusing through an institutional system, tend to equate; they should be designed to do so with minimal damage to people and to cost.5
Our assignment, then, as information system designers, is not to present all data to our users, but rather to design attenuators so that our users receive only the information that they can absorb and use. We are supposed to be reducing the information presented, not increasing it. Absent design, an attenuator might be simply the fact that you can only absorb six numbers from a 200-page report. You can design an exception report to present the most important six numbers. Our skills, then, are measured in terms of our ability to determine (or to provide a facility that can determine) which information is important.
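A minimal sketch of such an attenuator follows (the threshold, item limit, and figures are assumptions made for this example, not taken from the paper): instead of delivering the whole report, it passes through only the figures whose deviation from plan is large enough to act on, capped at what the reader can absorb:

def exception_report(metrics, threshold=0.05, limit=6):
    """Attenuator: keep only the figures whose deviation from plan
    exceeds `threshold`, capped at `limit` items, so the report's
    variety matches the reader's capacity to absorb it."""
    exceptions = [
        (name, actual, plan)
        for name, (actual, plan) in metrics.items()
        if plan and abs(actual - plan) / plan > threshold
    ]
    # Largest deviations first; truncate to the absorbable few.
    exceptions.sort(key=lambda e: abs(e[1] - e[2]) / e[2], reverse=True)
    return exceptions[:limit]

metrics = {"sales": (90, 100), "returns": (25, 10), "costs": (101, 100)}
for name, actual, plan in exception_report(metrics):
    print(f"{name}: actual {actual} vs. plan {plan}")
# Prints the two actionable deviations and suppresses the rest:
#   returns: actual 25 vs. plan 10
#   sales: actual 90 vs. plan 100

The design choice is exactly the one the text describes: the skill lies in choosing the threshold and the limit, that is, in deciding which information is important.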
IMPLICATIONS OF KNOWLEDGE MANAGEMENT TO COMPANIES
New to the equation is the idea that we can manage knowledge itself. This entails "monitoring and improving knowledge by measuring and modifying the knowledge processes and their environment."6 So how do you manage a knowledge-based company? Which is to say, how do you manage the knowledge of any company?

First, you get rid of the organization chart. In the past, your job was defined by (and constrained by) who was above you and who was below you in the organizational hierarchy. Now it is defined by who you work with—wherever in the company (and in the world) those people are. The "boss" is now irrelevant. In the old days, the boss told you what to do and instituted controls to make sure that you did it. Now, the boss may not even really understand what you do. His job is to make sure that you have what you need in order to do what you are to do. He supplies resources and then gets out of the way.

Knowledge is created "through the reconstruction of older concepts as well as the invention of new ones. Contrary to popular belief, knowledge is not discovered like diamonds or oil. It is constructed through concepts that we already have through observation of objects and events. And it only becomes knowledge when a person, group, or society validates the concept."7

Knowledge processes are those intended to (1) produce knowledge, (2) acquire knowledge, and (3) transmit knowledge. Knowledge processes support other business processes by providing knowledge needed by agents to perform acts. Knowledge management attempts to bring together technology-based repositories of codified information (the "supply-side" view) and knowledge-enabling environments, or learning organizations (the "demand-side" view).8 Specifically, the old practice of handing out standard print-out reports is an example of supply-side information processing; a data warehouse that allows flexible queries on a large body of corporate knowledge is an example of demand-side processing. Good knowledge management means influencing knowledge processes within an organization so that goal-directed learning, innovation, and adaptive evolution can occur.
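The supply-side/demand-side contrast can be sketched in a few lines (a toy example; the table and columns are invented for illustration): a supply-side process pushes the same canned report at every reader, while a demand-side facility lets the user shape the query to the question at hand:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [("East", "A", 100.0), ("West", "A", 80.0), ("East", "B", 40.0)])

# Supply-side: the same standard print-out for every reader.
def standard_report():
    return con.execute("SELECT region, SUM(amount) FROM sales "
                       "GROUP BY region ORDER BY 1").fetchall()

# Demand-side: the user chooses the dimension; the warehouse answers.
def flexible_query(group_by):
    assert group_by in ("region", "product")  # guard the dynamic column
    return con.execute(f"SELECT {group_by}, SUM(amount) FROM sales "
                       f"GROUP BY {group_by} ORDER BY 1").fetchall()

print(standard_report())          # [('East', 140.0), ('West', 80.0)]
print(flexible_query("product"))  # [('A', 180.0), ('B', 40.0)]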
ACCOUNTING DOESN'T CUT IT

Companies are ultimately evaluated in financial terms. The double-entry accounting system we use to account for a company's assets and liabilities was invented in 1494 by Luca Pacioli,9 in a world where everyone was either a farmer or a shopkeeper. Aside from the addition of specialized reports such as balance sheets, income statements, and cost accounting, the scheme hasn't changed in 500 years.
The problem with it is that it only recognizes tangible assets—assets from the farming and, later, the industrial revolution days. It has no way to recognize a company's intellectual assets. "The components of cost in a product today are largely R&D, intellectual assets, and services. The old accounting system, which tells us the cost of material and labor, isn't applicable."10 The effect of this is that companies are often sold for many times their book value—which is to say, for many times their physical assets—based on the perceived value of their intangible assets. On the books, this amount is listed as "goodwill," but somehow that isn't really an adequate representation. For example, in 1998, Berkshire Hathaway's net worth was $57.4 billion, the largest of any American corporation. Berkshire Hathaway's market value, however, was only one third that of knowledge companies Microsoft and General Electric.11
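The arithmetic behind that observation is simple (the figures below are made up purely for illustration): whatever a buyer pays above book value lands in the catch-all "goodwill" line, however large the knowledge component really is:

# Hypothetical acquisition figures, for illustration only.
purchase_price = 9_000_000_000  # what the buyer paid
book_value = 3_000_000_000      # tangible assets minus liabilities

# Double-entry bookkeeping has no account for knowledge assets,
# so the entire premium is recorded as undifferentiated "goodwill."
goodwill = purchase_price - book_value
print(f"goodwill: ${goodwill:,} "
      f"({purchase_price / book_value:.1f}x book value paid)")
# goodwill: $6,000,000,000 (3.0x book value paid)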
KINDS OF CAPITAL

Ok, so if the physical capital on the balance sheet isn't important any more, what is? Thomas Stewart lists three kinds of "intellectual capital":12

† Human Capital—the value of the knowledge held by a company's employees.
† Structural Capital—the physical means by which knowledge and experience can be shared.
† Customer Capital—the value of the company's franchise and its ongoing relationships with its customers (and vendors).
Human Capital

A company always has much more knowledge and expertise than it realizes. Many companies are very poor at realizing and exploiting this. Traditional corporate organizations have often prevented companies from gaining full benefit from employees' knowledge. In the new world, this must change.

During the nineteenth century, the writings of Karl Marx and Charles Dickens gained currency because they described the fundamental problems of having people work as appendages to machinery. People didn't own the equipment they used. They were interchangeable. The jobs were narrow and boring. Unfortunately, because of the nature of the work to be done, this was the most economically attractive alternative, and it continued well into the twentieth century.

In the last fifty years or so, the value of the knowledge component of products has become recognized, and factory workers have become knowledge workers. Suddenly the tables have turned. Now the worker chooses what he works on and how he goes about it. Because the company is dependent upon his knowledge, it must permit this to happen. It is in the nature of knowledge that it is communal, so people are no longer working on isolated tasks. The working environment is becoming clusters of people who share an area of interest or an objective. Their motivation is in the work itself, not the benefits bestowed by the corporation.

Thomas Stewart describes the opinion of Frank Walker, president of GTW Corp., that there will ultimately be only four types of career:

1. The top level sets strategy: it is the land of presidents and CEOs and executive VPs.
2. Resource-providers develop and supply talent, money, and other resources; they are the CFOs and CIOs, human resources managers, temporary services firms, or heads of traditional functional departments like engineering and marketing.
3. Project managers buy or lease resources from resource-providers—negotiating a budget and getting people assigned to the project—and put them to work.
4. Talent: chemists, finance guys, salespeople, bakers, candlestick makers (and presumably the odd system developer or two).13

Managing in this environment is not easy—especially for people who only know the old capitalist approach.

Structural Capital

This is what we information technologists can deliver. It includes everything from the Internet and Lotus Notes, for sharing ideas and thoughts on various subjects, to data warehouses, which publish the operational data of the company. Companies, like Wal-Mart, that are successful in building their structural capital are very successful in the marketplace.

Ok, so what does all this mean to those of us who build systems? Knowledge management can be divided into two topics: natural knowledge management and artificial knowledge management. Natural knowledge management is concerned with the way people learn and communicate with each other. It is, for the most part, not concerned with technology. Artificial knowledge management is all about information processing using technological tools. As we address artificial knowledge management, we must keep in mind three things:

We must understand the role of systems—Systems don't create knowledge; they manipulate data and turn them into information. System design will make it easier or harder for users to take the next step and turn information into knowledge. The decision to build particular systems should be based on the meaning, philosophy, and wisdom levels of understanding.

We must design systems to support knowledge management (filter variety)—The job is not to push out more data. The job is to allow a user to naturally retrieve the right data. This requires skill in designing data and the user interface. This is the fundamental criterion we must apply in designing our data marts: are they presenting the right amount of the right data for the user to make decisions? (Does the variety of the presentation match the variety of the user?)

We must expand the domain of our systems to include "fuzzier" data—This includes not only compiling data in databases about such things as patents and trademarks, but also making available better communications tools, so that people can work together on projects even if they are not physically in the same place. This is particularly true of research kinds of projects, where the process is one of pure intellectual exploration. Also important is the need to capture the results of knowledge creation in meaningful, accessible ways. Electronic mail and products such as Lotus Notes have taken us a long way in this direction.

Customer Capital

In the days of smoke-stack capitalism, the economy consisted of factories producing thousands of copies of the same thing. Marketing consisted of persuading lots of people that that thing was exactly what they wanted. The customer was at the mercy of the producer. Now, the balance of power is devolving onto the customer. Customers expect tailor-made products. (Land's End just published an ad for swimsuits that are designed precisely for your shape.) This means that the company's relationship to the customer—its ability to clearly understand what the customer wants—is critical. Companies that have established such relationships are worth a great deal more than companies that have not. But these relationships show up nowhere on the books.
CONCLUSION

My son is an undergraduate student studying philosophy. When I told him that I was looking into the field of knowledge management as it applies to corporations, he laughed. We'll see…
NOTES

1. For example, John Montroll, Prehistoric Origami.
2. Thomas A. Stewart, Intellectual Capital, Doubleday/Currency (New York: 1997), p. 13.
3. Verna Allee, The Knowledge Evolution: Expanding Organizational Intelligence, Butterworth-Heinemann (Boston: 1997), pp. 67–68.
4. W. Ross Ashby, An Introduction to Cybernetics, John Wiley and Sons (Science Editions), New York, 1956.
5. Stafford Beer, The Heart of Enterprise, John Wiley and Sons, Chichester, U.K., p. 97, 1979.
6. Ed Swanstrom, What is Knowledge Management, John Wiley and Sons, Chichester, U.K., p. 3, 1998.
7. Ibid., p. 3.
8. Mark McElroy, "Your Turn: 'Un-Managing' Knowledge in the Learning Organization," Leverage, Pegasus Communications, Waltham, MA: November 9, 1998.
9. Luca Pacioli, Summa de arithmetica, geometria, proportioni et proportionalita, Venice, 1494.
10. Thomas A. Stewart, Intellectual Capital, p. 59.
11. Berkshire Hathaway, Annual Report, 1998, p. 4.
12. Thomas Stewart, Intellectual Capital.
13. Ibid., p. 204.
LEGAL PLURALISM AND THE ADJUDICATION OF INTERNET DISPUTES*

ABSTRACT

No single entity—academic, corporate, governmental or non-profit—administers the Internet. (American Civil Liberties Union v Reno [E.D. Pa. 1996] 929 F. Supp. 824, 832)
The problems of regulation on the Internet are simply stated. First, it allows novel activities: e-mail, electronic discussion groups, simple transfer or viewing of text, images, sound, and video. These activities may fall foul of laws of obscenity or defamation in some or all of the jurisdictions in which they are available. Second, the Internet is a distributed system that straddles geographical and jurisdictional boundaries; the regulation of such activities is likely to fall within two or more national "legal" jurisdictions. It may therefore be difficult to choose an appropriate jurisdiction. Third, the inevitable need to choose a jurisdiction will mean that the values to be imposed upon the dispute will be the values of that jurisdiction, values that may be different from the values of those involved in the dispute. Much has been written on the first two problems, and significant developments have been made in the formulation of principles to be applied to the problem of choosing a jurisdiction. In this paper, I will begin to focus on the third problem, the problem of inappropriate values being imposed upon Internet behavior. The paper will develop the theme that the need for a single jurisdiction and, in consequence, the need for a single set of values to be imposed upon Internet activities is a fiction born out of centralist systems of western jurisprudence. The paper will review how courts have turned against pluralistic approaches in the past when dealing with clashes in cultural and religious values, particularly the clash in the English courts in the case of Salman Rushdie's "The Satanic Verses." Western courts have been dismissive of cultural and religious claims, either treating them as "repugnant" or contrary to public policy, or else questioning the validity of the motives of the applicants. It is evident from recent cases in the U.S. that judges will use similar techniques to impose their own values upon Internet activity. The concept of legal pluralism is not recognized within westernized systems of law. The paper will then consider whether a more pluralistic strategy would provide a more satisfactory approach to dealing with such disputes on the Internet: an approach that would enable the resolution of the conflict between different cultural and religious values.

* By Richard Jones, Law and Information Technology, Liverpool John Moores University, Rodney House, Mount Pleasant, Liverpool L3 5UX, U.K.; Email: [email protected]. Reproduced, with permission, from the International Review of Law, Computers & Technology, Vol. 13, Issue 1, 1999. Copyright © 1999 International Review of Law, Computers & Technology is the property of Taylor & Francis Ltd and its content may not be copied or Emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or Email articles for individual use.
INTRODUCTION

I have commented elsewhere that I believe the analysis of the issues of dispute resolution on the Internet is based upon three cumulative arguments.1 The first is that it is possible and desirable to regulate Internet activities by one set of principles; the second that, as such, a jurisdiction can be isolated to adjudicate over such activities; and the third that that jurisdiction will inevitably apply its own values to the resolution of the dispute. In other words, there is a concerted effort by those legal jurisdictions that are at present adjudicating on Internet disputes to impose their values onto these disputes through the process of claiming jurisdiction. The argument is circular: a single set of rules can prevail, jurisdiction can be claimed, local rules can apply. At this point in time, because the Internet is predominantly used by first world countries,2 the values imposed are those of those countries.
THE THREE-STAGE ARGUMENT

The developments in the adjudication of Internet disputes show this process. First, there are examples of attempts to claim the application of one set of rules. The U.S. Communications Decency Act 1996 was an attempt to establish a comprehensive set of rules to regulate obscenity on the Internet, derived from one set of values. The Act had provisions designed to prevent young people from accessing indecent material over computer networks. It was to be a criminal offence to engage in communication on computer networks that was either "indecent" or "patently offensive" if such communication could be viewed by a minor. It failed, the Act being found to be unconstitutional in American Civil Liberties Union v Reno.3 In what is being described as "Round 2," the American Civil Liberties Union (ACLU) is now challenging the validity of the Child Online Protection Act (COPA), which was to go into effect on 29 November 1998. In ACLU v Reno [1999], the District Court upheld the preliminary claim that this statute is also unconstitutional (http://aclu.org). The United States Congress assumed in these statutes that the way to deal with obscenity on the Internet was through existing federal "centralized" laws on obscenity and child pornography, extended and applied to the Internet.

In Minnesota v Granite Gate Resorts,4 the Minnesota Court of Appeals affirmed the lower court's determination that a non-resident defendant was subject to personal jurisdiction in Minnesota, based on Internet advertisements for an upcoming Internet gambling service. The appeals court noted that advertisements placed on the Internet were analogous to a broadcast and to direct mail solicitations; activities which, under the Minnesota long-arm statute, are sufficient for an exercise of personal jurisdiction. Moreover, the placement of Internet advertisements in this case indicated the defendant's clear intent to solicit business from
markets which included Minnesota. The argument begins with the assumption that there must exist a single, monolithic, unified set of rules flowing from the "State's" hierarchy, a set which has universal applicability to the case.5 In neither example was there consideration that alternative principles could apply.

The next key step in the argument is the need to claim jurisdiction over the dispute. The search to find an appropriate jurisdiction to adjudicate on the activity has come up against an immediate problem: as Internet activity is likely to span at least two competing jurisdictions, these are unlikely to agree on the standards to be imposed upon such activities. The result is a need to choose one jurisdiction over another: a choice of law debate. In U.S. v Thomas,6 the standards of the receiving state (Tennessee) were chosen over those of the "host" state (California). The court viewed the Internet merely as a transmission mechanism allowing the activity to happen between two jurisdictions.

How, then, have the courts decided between conflicting jurisdictional claims? Jurisdiction has been claimed by a number of mechanisms, traditional claims being based upon territoriality or connection.7 Under the U.S. Constitution, a court cannot assert jurisdiction over a potential defendant unless the defendant has sufficient "minimum contacts" with the forum so as to satisfy traditional notions of fair play and substantial justice.8 These minimum contacts may be some type of systematic and continuous contact with the forum, or isolated or occasional contacts purposefully directed toward the forum.9 In the following cases, the courts found that there was sufficient contact in order to claim jurisdiction. In CompuServe v Patterson,10 an Internet user from Texas subscribed to a network service based in Ohio. The user "specifically targeted" Ohio by subscribing to the service and entering into a separate agreement with the service to sell his software over the Internet, and advertised his software through the service. He repeatedly sent his software to the service in Ohio. The court held that the user had "reached out" from Texas to Ohio and "originated and maintained" contacts with Ohio, so that jurisdiction vested in the Ohio court. Similar decisions were reached in Panavision v Toeppen and EDIAS Software v BASIS.11

However, in a contrary line of cases, jurisdiction has not been held to extend to defendants merely because their web sites are accessible to those from the state claiming jurisdiction. In Hearst v Goldberger,12 the federal court held that the New York long-arm statute did not permit a federal court to exercise personal jurisdiction over an out-of-state defendant whose web site was accessible to, and had been electronically "visited" by, computer users in New York. The court thereby followed the decision of the Court of Appeals in Bensusan Restaurant v King.13 That case involved a trademark infringement suit brought by the owner of the famous New York jazz club (and of the federally registered trademark) "The Blue Note" against the owner of a small Missouri jazz club with the same name, based on alleged infringement on the defendant's Internet web site. The Court of Appeals affirmed the district court's finding that King was not subject to personal jurisdiction in New York based on the use of his Internet web site.
Without resorting to a due process analysis, the court determined that Bensusan had failed to allege that King had committed a tortious act in New York, an act which is required to exercise personal jurisdiction over a non-resident defendant. The court stated that King was neither present in the state when the allegedly tortious act (posting of the allegedly infringing trademark) occurred, nor did King reasonably expect that posting his web site would have consequences in New York.14

Other traditional mechanisms that may be used to claim jurisdiction, but not yet evident in the case law, include the fact that allegiance is owed by the defendant to the jurisdiction, that jurisdiction is required to protect citizens (in part evident in the Minnesota case), and universality (in part evident in the U.S. Communications Decency Act 1996). Other approaches have attempted to avoid the problem of choice of a jurisdiction by suggesting a collaborative approach to the jurisdiction of the Internet; others have suggested self-regulation of the Internet15 and the benefit of International Conventions. Efforts at overcoming the problems of jurisdiction have, in large part, met with only limited success, because of the cultural divide and differences in social values between (and sometimes within) different
countries. Quite simply, what is a crime in one country is not necessarily a crime in another; what is objectionable in one country is perfectly acceptable in another. To date it appears likely that many countries can, at most, agree on co-operating and overcoming traditional jurisdictional considerations only in those computer crime or misuse offences which may be common to the computer crime laws of Europe and North America. The author suggests that a possible solution to the impasse would be to blend self-regulation and mandatory rules: the nature of Internet content may be rated, through either self-rating or third-party rating, to internationally accepted ratings standards. Rating of content would be mandatory, while mis-rating would attract criminal liability.16
In another attempt to circumvent the choice of law issues, some writers have suggested that the Internet should itself be raised to the standard of a national jurisdiction where appropriate values and norms may be applied. Post and Johnson17 offer the following solution:

We believe that the most obvious answer to this question—existing territorial sovereigns—may well be wrong… The new science of complex systems gives us reason to hope that an overall system of governance of the net that reconnects rule-making for on-line spaces with those most affected by those rules—but that also allows on-line groups to make decentralised decisions that have some impact on others, and that therefore elicit disparate responsive strategies—will recreate a new form of "civic virtue."
I commented on this approach as follows:18

The assumption made is that the claiming of jurisdiction over a dispute inevitably means applying that jurisdiction's values to the adjudication of that dispute. This then leads them to an analysis based upon the need to establish a set of values, a jurisdiction in cyberspace, distinct from the national rules that may apply in a "choice of law" analysis. This merely replaces one state's values and norms with a set of values and norms for cyberspace, to be determined by some mechanism not yet defined, in the process raising the further problem of how the appropriate standards may be established for cyberspace. In looking for some consensus, this will lead to a raising of the level of tolerance to encompass the wishes of the most conservative nations. This analysis has added little to the problem, for now, instead of choosing between two separate national values, one is forced to choose between one nation's values and those of cyberspace.
The third and final part of the argument is that, having claimed jurisdiction, it is then necessary to apply the values of that local jurisdiction to the dispute. For example, the desire of the State of Minnesota to impose its values on gambling in Minnesota v Granite Gate Resorts19 and of Tennessee to impose its standards in U.S. v Thomas.20 It seems self-evident to courts that it is their standards that they should apply. In the context of the meaning of the phrase "what is in a child's best interests," Brennan stated in England:

in the absence of legal rules or a hierarchy of values the best interests approach depends upon the value system of the decision maker.21
Fixico22 comments in the context of differing value systems:

Native Americans and Anglo-Americans differ considerably in their value systems. The latter has established a system emphasising capitalistic individual gain and individual religious inclination. By contrast, American Indian values are holistic and community-oriented.
The difficulty is that the Internet is potentially a global phenomenon that spans many contrasting jurisdictions, where the values and cultures of the English common law system hold no sway. Where a case genuinely does not have a common heritage, such an approach will lead to one jurisdiction being favored over another. Cameron,23 in discussing the case of Playboy Enterprises v
Chuckleberry Publishing,24 highlights how the U.S. laws of infringement of copyright were applied to an Italian Internet site. He concludes, "Suppose… a British judge ordered that an American-based newsgroup should either close or become password protected so as to prevent browsing in Britain. A side effect would be to restrict the use of that site… I am sure that the American Internet community would be up in arms at such a restriction…" This result is, of course, less likely than the reverse, the restriction of non-U.S. newsgroups by U.S. courts. Therefore, the international community is open to the potentially dominant pressure and culture of the United States. The solution offered by Cameron25 is to judge the infringement by the role that the defendant chose to take. This then opens the possibility that courts will be faced with the challenge of applying differing values, i.e., of being pluralistic.
LEGAL PLURALISM

It is submitted that legal pluralism may offer a better methodology for the problem of the adjudication of Internet disputes. For legal pluralism, the law is not a single, monolithic, unified set of rules flowing from the State's hierarchy. Legal pluralism would challenge the arguments used to claim and impose values on Internet disputes. Its proponents argue that centralist theories suffer from the following inadequacies.

First, centralist theories have to rely on the existence of one set of rules and values. Griffiths26 unambiguously rejects the notion of centralist and monolithic legal norms. He sees such a system—"legal centralism" based on monistic ideas—as a myth, an ideal, a claim, an illusion, which obstructs the development of a democratic legal system and is a major obstruction to accurate observation of the customs and personal laws of minorities.27 This misconception of the superiority of the western notion of state laws is challenged by Marc Galanter:28

The view that the justice to which we seek access is a product that is produced or at least distributed exclusively by the state, a view which I shall for convenience label "legal centralism," is not an uncommon one among legal professionals.
Any centralist attempt, argues Galanter, would be similar to a program of converting all spoken language to a common written language. He continues:29

no one would deny the utility or importance of written language, but it does not invariably afford the best guidance about how to speak. We should be cautioned by the way that it is our tendency to visualise the "law in action" as a deviant or debased version of the higher law, the "law of the book."
Second is the suggestion that a jurisdiction can be isolated to adjudicate over such activities. Centralist systems espouse the notion that all justice should be dispensed through one system of courts, that is, the dominant State's courts and its arbitration system. This is impossible. The development of alternative dispute resolution techniques and informal justice gives the lie to the competence of one set of courts.30 Galanter points out that we have to "examine the courts in the context of their rivals and companions… we must put aside our historical perspective of legal centralism, a picture in which state agencies (and their learning) occupy the center of legal life and stand in a relation of hierarchic control to other, lesser normative orderings such as the family, the corporation, the business network."31

Third is the demand that those regulations be based upon one set of values and norms of western thinking. Centralist systems pay insufficient respect to customs or, in a colonial context, customary laws. The attempt to bolster the superiority of western values and laws is aided by an attack on the veracity of the culture and values of others. The literature is rife with claims by western jurists that customs in some cultures are underdeveloped, rigid, and inappropriate in developed societies.
VALUES ON THE INTERNET
Should not the courts, in adjudicating disputes on the Internet, be able to consider and apply the values and cultures of others? There is some evidence that courts are appreciative of differing principles in Internet disputes. In Mecklermedia Corp v DC Congress GmbH, M traded in the name "Internet World." M's licensee in Germany encountered problems with D, a German company which held a Federal trade mark registration for INTERNETWORLD. M's licensee was sued by D in Germany for trade mark infringement; M in turn sued D for passing off in England and Wales. D had brochures and a web site available in England. The English court recognized there was a different triable issue in each country. Commenting on the case, Lea concludes that the Internet "falls to be regulated by the laws of each and every jurisdiction." A recognition of several jurisdictions is, in part, a recognition of the pluralistic nature of the problem. Similarly, in Prince plc v Prince Sports Groups, Inc. [1997] (the court transcript is at Prince's web site, http://www.prince.com), Mr. Justice Neuberger recognized that "It is quite possible for an English court to reach the conclusion that the principle letter… is an unjustifiable threat, and for the U.S. court to reach the contrary conclusion…"

Away from the Internet, there are examples of courts considering values other than their own. In English courts, tolerance of other values was first shown in Cheni (otherwise Rodriguez) v Cheni,32 where the main issues were around "incest" and "polygamy" between uncle and niece of the Jewish faith. According to expert evidence presented at the hearing, such relationships, unlike "aunt–nephew relationships," do not affect the natural order of authority. The marriage in question was potentially polygamous at the date of the ceremony but monogamous at the date of the proceedings. However, the court showed a progressive approach, admitting the reality that it was no longer advisable to assess these cultural arrangements in terms of Christianity alone. It was further emphasized that there was no justification in condemning these cultural practices. It was stated:33

It is now clear that English courts will recognise for most purposes the validity of polygamous marriages, notwithstanding that they are prohibited by Christianity.
Sir Jocelyn Simon P. said that even though the marriage appeared to be offensive to the conscience of English norms, the court will seek to exercise common sense, good manners, and a reasonable tolerance. It was further said:34

Whatever test is adopted, the marriage in the present case is, in my judgement, valid. I do not consider that a marriage which may be the subject of papal dispensation and will then be acknowledged as valid by all Roman Catholics, which without any such qualification is acceptable to all Lutherans, can reasonably be said to be contrary to the general consent of Christendom… As counsel for the husband observed, Egypt, where these people lived and where the marriage took place, is itself a civilised country. If public policy were the test, it seems to me that the arguments of the husband, founded on such inferences as one can draw from the scope of the English criminal law, prevail. Moreover, they weigh with me when I come to apply what I believe to be the true test, namely, whether the marriage is so offensive to the conscience of the English court that it should refuse to recognise and give effect to the proper foreign law. In deciding that question the court will seek to exercise common sense, good manners and a reasonable tolerance. In my view, it would be altogether too queasy a judicial conscience which would recoil from a marriage acceptable to many peoples of deep religious convictions, lofty ethical standards and high civilisation. Nor do I think that I am bound to consider such marriages merely as a generality. On the contrary, I must have regard to this particular marriage, which has stood, valid by the religious law of the parties' common faith and the municipal law of their common domicile, unquestioned for thirty-five years… In my judgement, injustice would be perpetrated and conscience would be affronted if the English courts were not to recognise and give effect to the law of the domicile in the present case. I therefore reject the prayer of the petition asking that this marriage be declared null and void.
In an area of potential conflict on the Internet, intellectual property rights, the English Parliament has gone some way toward encouraging the application of other values. English courts have been urged to be more willing to accept actions for the infringement of foreign intellectual property rights where the country is a member of the EEA, and the courts have taken this approach in Pearce v Ove Arup Partnership.35 Dutch copyright law could be applied by the English courts to a defendant domiciled in England. This principle is now extended by s.11 Private International Law (Miscellaneous Provisions) Act 1995.36

Unfortunately, these are isolated examples, and it is more common for courts to refuse to contemplate the application of other values. In refusing to acknowledge the claims of those who wish to adopt different values, the courts have given a variety of explanations. First, there are those cases where the judiciary use the concept of a set of Christian values to vilify customs, classify them as repugnant, and in consequence fail to recognize them. The refusal is often accompanied by the bolstering of the superiority of their values and law courts, and an attack on the veracity of the culture and values of others. Western jurists claim that customs in some cultures are underdeveloped, rigid, and inappropriate in developed societies. Customs are often referred to derogatorily as being "ancient."37 For example, Alf Ross wrote:38

The transition from customary law to legislation is immensely important in the evolution of any society. Customary law is conservative; it relies on traditional and static patterns of behaviour. Those bound by it act as their fathers did. This does not mean that customs are unchangeable, for they may be adapted to changing conditions; this adaptation is slow and unplanned, lacking calculation and rational understanding of the requirement of a change in conditions.
The dismissal of customs because of repugnance was first used in the former colonies of the British and French empires.39 Its basis was that the non-white peoples of the colonies were barbaric, and that their traditions, values, and personal laws were arbitrary, discriminatory, monstrous, and harmful to western civilization. Indigenous laws, therefore, were allowed "as long as they were not repugnant to natural justice, equity, and good conscience or inconsistent with any written law."40 The same model was also used as a strategy to outlaw indigenous laws.41

Second, there are cases where the judiciary question the motives of the applicants, often assuming that the claimants merely wish to evade the rules of the domestic law. Finally, there is a group of cases where the courts have justified their approach by attempting to avoid apparent injustice by employing public policy as a yardstick. Foreign customs or the personal laws of others are assessed against the westernized concept of "justice." Judgments of foreign courts were not recognized in cases said to be offensive to "English notions of substantial justice."42 Lindley held that "the courts of this country are not compelled to recognize the decree of the court of another country when it offends against our ideas of justice."43 A divorce obtained by way of a talaq in Malaysia was held contrary to ideas of "substantial justice."44
CASE STUDY

The following case study shows in detail how the English courts were unable and unwilling to adopt the cultural values of others. It is used to illustrate the concern that, if such attitudes continue to be applied in relation to Internet disputes, then adjudication will operate only according to the values of the first world. The case study concerns the publication of a book; the growth of the Internet will mean that such publications may soon be available on the Internet and, as such, be the subject of adjudication on the Internet.

The dispute arose from the publication of Salman Rushdie's "The Satanic Verses." The case study concerns a dispute between English and Islamic Shari'a law. There are two key differences between the two systems: character and scope. In character, English law is secular, although some principles of common law have been shaped by the Christian ethos. The Queen in Parliament enacts
the law; in reality, it is the members of the House of Commons and the House of Lords who participate in the law-making process. The law may change and adapt. The influence of the dominant religion is less direct; its protagonists persuade and pressure within the parliamentary process of law making.

The contrast with Shari'a law could not be greater. All Shari'a law derives from God; therefore, administrators, rulers, or members of State assemblies are not, in the eyes of orthodox Islamic law, empowered to make laws. The law is immutable; the process of interpretation and adaptation is held to have been completed in the past. One Muslim visitor to the British parliament in the nineteenth century was said to be surprised at seeing laymen shouting like "a pack of parrots," engaged in enacting laws without the divine blessing of God. He wrote that, unlike in Muslim countries, the British "did not accept any divinely revealed holy law to guide them and regulate their civil, criminal, ritual and dietary matters." Instead, they pass laws "in accordance with the exigencies of the time, their own dispositions, and the experience of their judges."45

Views similar to these are still voiced by Muslim jurists. Brohi46 identified and analyzed the law in a secular society as paradoxical and confused. The origin of law without divine intervention is, in his view, illogical: how can a State create law if a State is a creation of law? The late Ayatollah Khumeini, a Shi'a Imam, bluntly rejected any law-making role for legislative assemblies. Law-making activities by parliamentarians were seen as unnecessary. In his view, parliamentary functions should be limited to overseeing implementation; "laws themselves are divine or deducible from the Qur'an and the hadith."47

Shari'a law is then a combination of revealed laws, the sunna of the Prophet Mohammed (practices of the Prophet), customs, and interpretations. Customs and interpretations are, in cases of conflict, subject to the revealed laws. In addition, there may be certain laws enacted by rulers in Islamic States (Daru L'Islam); these are referred to as regulations. As they are enacted by rulers or a legislative assembly, they are inferior to Shari'a law. In the event of a conflict between "regulations" and Shari'a law, it is always the latter that prevails. In classical Qur'anic law, rulers are simply believers who have been given delegated authority by God to introduce rules and regulations subject to limitations imposed by holy writs and the sunna of the Prophet Mohammed.48 The purpose of delegating such powers is to allow believers to administer their duty and State activities to establish an order on earth. As good believers, they are expected to uphold the rule of law in terms of Shari'a. Every individual is supposed to abide by Shari'a law and submit to the authority of Qur'anic laws.

The scope of Shari'a law is again in marked contrast to English law. English law governs one's relationship to the state and to other human beings; Shari'a law governs one's relationship with God and conscience, in addition to the state and to fellow human beings. As Rippen comments:49

Law… is a far broader concept than that generally perceived in the English world.
Included in it are not only the details of conduct in the narrower legal sense, but also minute matters of behaviour, what might even be formed “manners,” as well as issues related to worship and ritual; furthermore, the entire body of law is traditionally viewed as the “revealed will of God,” subject neither to history nor to change.
The law is thus “Islamic” through and through.50 Its divine origin makes it a reference point and main source of law. In brief, the law giver is no other than God, “he alone is the ruler and the real legislature. He will alone sanction the law.”51 The law is simply not, in the view of traditional Muslim jurists, the business of human beings. Divine laws must be used by rulers to shape and mould the behavior of individuals and structure of Muslim States. It is for all mankind, since God’s revelations are common to every one irrespective of whether or not he is a Muslim. Therefore, there is a divine authority behind every legal principle; they are a universal truth. This divine power makes Shari’a law extremely powerful and authoritative. Theoretically, its authority seeps into every nook and cranny of Islamic society. Islam without law or law without Islam is a hypothesis that is unimaginable in orthodox Muslim societies. The law in all its detail is divine, not human, revealed not enacted, and cannot therefore be repealed or abrogated, supplemented, or amended.
In publishing “The Satanic Verses,” Salman Rushdie was alleged, according to Shari’a law, to have committed recantation, blasphemy, and treason. The matter was considered by Ayatollah Khumeini under verse 36 of sura (chapter) V of the Qur’an, which lists three offences: (a) declaring war against God and his disciples (treason); (b) becoming a Murtadd (apostate), a born Muslim who has abandoned his faith and crossed over to the enemies of Islam (recantation); and (c) creating “mischief through the land,” that is, engaging in “fasad” (corruption). On this view, the religious and legal authority of the Prophet Mohammed and of the holy book, the Qur’an, should not and cannot be questioned. Most Muslims are prepared to be broad-minded about most things, but never about anything that even remotely touches on their faith. “Better that I be dead than see Islam insulted,” said Ayatollah Majlisi in the last century; an Arab proverb says, “Kill me, do not mock my faith.”52 If someone slandered or vilified the Prophet Mohammed, the Qur’an, or Islam in public, they would be punished for the offence of treason or high treason. The allegation against Rushdie was that he had committed treason by engaging in fasad, creating “mischief through the land,” which in turn amounted to a declaration of war upon God, the Prophet Mohammed, and the Holy Qur’an. Support for the fatwa formed only one plank in the campaign against “The Satanic Verses.” Demonstrations were arranged in late 1988 and early 1989; a Penguin bookshop in London was bombed, allegedly by Muslims; and Salman Rushdie’s books were burnt in major cities in December 1988 and January 1989. There remained only the legal option of attempting to have the book banned and to have Rushdie and Penguin prosecuted. Since the Public Prosecutor and the police had refused to take legal action against Rushdie and his publisher, the campaigners against “The Satanic Verses” decided to launch a two-pronged attack. Two separate actions were initiated, one before the Chief Metropolitan Stipendiary Magistrate at Bow Street and the other before the Horseferry Road Metropolitan Stipendiary Magistrate; the parties in both cases appeared to have acted in concert, aware of each other’s positions in advance. The first case, instituted by Mr. Abdul Hussain Choudhury on behalf of the Muslim Action Front, tried to establish that the law of blasphemy at common law covers all three major religions: Islam, Christianity, and Judaism. The second case, initiated by Sayid Mehdie Siadatan, an Iranian living in Britain, argued that if “The Satanic Verses” were allowed to be distributed, it would provoke unlawful violence contrary to s.4 (1) of the Public Order Act, 1986. We turn now to the arguments in both these cases. In the first, Mr. Abdul Hussain Choudhury, a Muslim living in Britain, applied to the Chief Metropolitan Stipendiary Magistrates’ Court at Bow Street for a summons for the criminal prosecution of Salman Rushdie and the Viking Publishing Company for publishing “The Satanic Verses,” on the grounds that the publication was a blasphemous libel against God (Allah), Islam, Abraham, the Prophet Mohammed, his wives, and his companions. The action was an attempt to revive the pre-seventeenth-century interpretation of the English blasphemy laws, and it raised the issue of whether it is justifiable to exclude the religions of others from the protection of the law of blasphemy. The law of blasphemy in the English common law system was taken to apply only to Anglican Christianity. This was confirmed in R v Gathercole,53 where Alderson, addressing the jury, expressed the position of English law as follows:

A person may, without being liable to prosecution for it, attack Judaism or Mahomedanism, or even any sect of the Christian religion [save the established religion of the country]: and the only reason why the latter is in a different situation from the others is, because it is the form established by law, and is therefore a part of the constitution of the country. In like manner, and for the same reason, any general attack on Christianity is the subject of criminal prosecution, because Christianity is the established religion of the country.
The “public importance of the Christian religion is so great that no one is allowed to deny its truth,”54 nor that of the established church.55 Christianity is the “religion of the land.”56 Any scurrilous attack on Christianity is an attack on the establishment, because Christianity is part and parcel of the laws of England. In R v Williams,57 Ashurst held that:

Indeed, all offences of this kind are not only offences to God, but crimes against the law of the land, and are punishable as such, inasmuch as they tend to destroy those obligations whereby civil society is bound together; and it is upon this ground that the Christian religion constitutes part of the law of England.

In R v Woolston,58 it was held that the offence would be committed only when the “very root of Christianity itself is struck at.” This is taken to include attacks on both the Old and New Testaments. In R v Hetherington,59 Patterson said that “the protection is not confined to the New Testament … a man who attacks the Old Testament … in effect attacks the New. It is an attack on the religion of the country.” The offence of blasphemy also requires that the statements in some way affect the stability of the state. If the attack was on Christianity, it was assumed to involve an attack on the integrity and stability of the state. In R v Taylor,60 the oldest common law case in the area of blasphemy, Hale observed that “blasphemous words were not only an offence to God and religion, but a crime against the laws, state, and government and therefore punishable in the court.” By the mid-nineteenth century, the mood had begun to change. In R v Hetherington61 and R v Ramsay and Foote,62 sober, rational, and serious discussions of religious beliefs and traditions were found not to be blasphemous. For a successful prosecution, it was now necessary to show that the attack on the religion was such that it amounted to an attack on the stability of society; no automatic presumption would be made. As Lord Sumner63 stated in Bowman v Secular Society, the judiciary is more concerned with whether an act of blasphemy would “shake the fabric of society generally.” Lord Scarman, in R v Lemon,64 summarizing the present position, said that “the offence belonged to a group of criminal offences designed to safeguard the internal tranquillity of the kingdom.” An attack on a religion that merely affects the sensibilities of individuals or groups of individuals is, as such, insufficient. Within this context, Mr. Azhar, the barrister representing Mr. Abdul Hussain Choudhury and the Muslim Action Front, faced the task of showing not only that, in publishing “The Satanic Verses,” Rushdie and his publisher had attacked Christianity, but also that such an attack had affected the stability of society.65 To found the first part of the argument, he argued that the book had vilified and insulted God and the Prophet Abraham. Azhar’s argument was partly based on R v Williams,66 where the defendant was found guilty of attacking the Old Testament: the New Testament is based on the Old Testament; therefore, any attack on the latter amounts to an attack on Christianity. Could an attack on Islam be taken as an attack on the Old Testament? This line of argument failed, the court finding that Islam is based on the Qur’an, not on the Old Testament. Mr. Azhar then attempted to challenge directly the requirement that Christianity be the subject of the attack. He argued that the court should interpret common law offences in the light of the changing demography of contemporary British society: an attack on any religion that led to social instability should amount to blasphemy. Relying on Avory’s statement in R v Gott,67 he argued that an indecent and offensive attack on the scriptures or sacred persons or objects, with a view to injuring the feelings of the general body of the community, would amount to both blasphemy and sedition. The word “scriptures,” argued Mr. Azhar, goes beyond Christianity and would undoubtedly cover both Judaism and Islam. If this were accepted, there was strong evidence that “The Satanic Verses” had already caused much damage to communities and property; people had lost their lives in many parts of the world while taking part in protests and demonstrations against the book. However, Watkins rejected his argument, indicating that Mr. Azhar
had misunderstood the ratio decidendi of Williams: the law of blasphemy is in effect the law of England and, as such, the English values of Christianity apply. Mr. Azhar then argued that if no judicial remedy was available in the area of blasphemy for religious groups other than Anglican Christians, then the judiciary should correct such irregularities by extending the law of blasphemy to other religions. He emphasized that “it is anomalous and unjust to discriminate in favor of one religion,” thus questioning the legitimacy of preferential treatment for a State’s religion.68 This was the first time in British history that non-Christians questioned and challenged the legitimacy of the State’s religion.69 In support of this contention, Mr. Azhar turned to arts 9, 10, and 14 of the European Convention on Human Rights (ECHR) 1950. Article 14 of the ECHR seems particularly relevant. It reads:

The enjoyment of the rights and freedoms set forth in this convention shall be secured without discrimination on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status.

Since the U.K. is a signatory to the ECHR, he argued, there should be a mechanism in English law to ensure that individuals are protected on an equal footing, irrespective of race or religious differences. “If the law of blasphemy,” argued Azhar, “is designed to protect Christianity alone, it means that other religions have been left unprotected ever since the Convention was signed in 1950.”70 Such a position is in violation of the Convention obligation. Interpreting article 14 together with article 9, Mr. Azhar argued that if the right to freedom of religion for Muslims is not protected from sacrilegious attack and blasphemous libel, then Muslims are denied the rights guaranteed by both arts 9 and 14. This is a compelling argument: here is a case where British law operates differently in respect of Christians and Muslims, the latter in effect being discriminated against on the grounds of their religion. Preferential treatment of one set of individuals is not prohibited by international law if it is designed to correct past injustices and is applied only on a temporary basis; that was not so in this case, for it was the Muslims who were alleged to have been discriminated against. Anthony Lester QC, appearing for the publishing company, Viking Penguin, stressed that the U.K. was not in breach of the ECHR. Answering the questions raised by the applicant, Lester admitted that “the obligations imposed on the United Kingdom by the Convention are a relevant source of public policy where the common law is uncertain.” He maintained, however, that the “common law of blasphemy is, without doubt, certain.”71 Therefore, it was not necessary to pay any regard to the Convention in the given case. Lester argued further that if the court decided to convict Rushdie and his publisher, it would violate his rights guaranteed by articles 7 and 10 of the ECHR. Article 7 deals with retroactive offences. It provides:

(1) No one shall be held guilty of any criminal offence on account of any act or omission that did not constitute a criminal offence under national or international law at the time it was committed. Nor shall a heavier penalty be imposed than the one that was applicable at the time the criminal offence was committed.
(2) This article shall not prejudice the trial and punishment of any person for any act or omission which, at the time when it was committed, was criminal according to the general principles of law recognized by civilized nations.

Clearly, in the light of article 7, Rushdie and his publishers could not be punished for the alleged offence, because it was not a criminal offence when “The Satanic Verses” was published. Salman Rushdie, in the eyes of English law, did not attack Christianity or any of the institutions of the British government, nor did he try to incite individuals against Her Majesty’s government. On the contrary, as Lester argued,72 Rushdie had exercised his freedom of expression, as guaranteed by article
10 of the ECHR, by publishing “The Satanic Verses,” which was, as mentioned earlier, acclaimed by the literary world as one of the most distinguished novels produced in recent times. Article 10 of the ECHR reads as follows:

(1) Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. This article shall not prevent States from requiring the licensing of broadcasting, television or cinema enterprises.
(2) The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.

Lester argued that the freedom of expression guaranteed by article 10 of the ECHR prevented the court from restricting Salman Rushdie’s right to hold opinions and to receive and impart information and ideas.73 These rights, in the view of the defence, must be guaranteed without interference by public authority. Even though article 10 (2) allows restrictions to be placed on them, Lester’s opinion was that, in the given case, Rushdie and Viking Press had committed no offence falling within the exceptions contained in article 10. Therefore, he proposed, neither the British government nor the judiciary could interfere with Rushdie’s rights. Lester’s arguments won the day. The court refused to fill a vacuum by creating new legal remedies or to encroach upon the territory of the legislature. The decision matched the mood of both western liberal intellectuals and human rights lawyers, who had been shocked to hear of demands to restrict freedom of expression. Richard Webster writes that freedom of expression is “as precious to the West, almost, as the Koran itself is to Islam.”74 UNESCO’s Director General expressed his deep anxiety about the campaign against “The Satanic Verses.”75 Anthony Lester, one of the most distinguished and respected human rights lawyers in Britain, emotionally protested the claims of Azhar: “what the applicant seeks to do is to interfere with a well founded right to freedom of expression, a kind of interference never at any time foreshadowed by the common law of this country.”76 The campaign, the militant and violent demonstrations, and the “book burning incidents” seem to have alarmed and shocked members of the judiciary. Granting leave to appeal against the decision of the Stipendiary Magistrate at Bow Street, Nolan took the opportunity to remind the Muslim activists unambiguously as follows:

They are not concerned with the question of whether the proposed defendants are blasphemous according to Muslim law. They are concerned to establish the scope of the English criminal law. Whatever the outcome of these proceedings may be, the fundamental rule of English law is that the peace must be preserved. I know that this is fully understood by your own very responsible clients. It would be a great tragedy if the continuation of this argument in court were taken by others as a sign that demonstrations and the like, which might lead to breaches of the law, would give assistance; in fact they will be counterproductive.77

In the second case,78 the argument focused on a different issue: whether publishing and distributing “The Satanic Verses” would result in “immediate violence” within the meaning of s.4 (1) of the Public Order Act, 1986. This section states:

A person is guilty of an offence if he [a] uses towards another person threatening, abusive or insulting words or behaviour, or [b] distributes or displays to another person any writing, sign or other visible representation which is threatening, abusive or insulting, with intent to cause that person to believe that
immediate unlawful violence will be used against him or another by any person, or to provoke the immediate use of unlawful violence by that person or another, or whereby that person is likely to believe that such violence will be used or it is likely that such violence will be provoked.
Watkins, who had delivered the main judgement in ex parte Choudhury, concentrated on technical aspects of the phrases “such violence” and “immediate unlawful violence.” Siadatan’s application appears to have been based on stronger and sounder legal arguments than ex parte Choudhury. The applicant had to prove that “The Satanic Verses” fell within the meaning of s.4 (1) of the Public Order Act. Siadatan, laying an information before the Horseferry Road Metropolitan Stipendiary Magistrate, complained that the distribution by Viking Penguin to bookshops of books entitled “The Satanic Verses” was an “act whereby it was likely that unlawful violence would be provoked.”79 The magistrate’s view was that the applicant had failed to allege in the information (charge sheet) that immediate violence would occur if the distribution of “The Satanic Verses” were allowed. Counsel for the applicant, Mr. Geoffrey Nice, appearing before the Queen’s Bench Division, argued that s.4 (1) should be read in conjunction with s.6 (3) of the Public Order Act, 1986 to determine whether Salman Rushdie published “The Satanic Verses” knowing his words or behavior, or the writing, to be threatening, abusive or insulting, or being aware that it might be threatening, abusive or insulting. His principal argument was that Salman Rushdie intentionally or maliciously published materials prohibited by this provision, knowing that they would provoke violence; therefore, Rushdie “should not escape criminal liability under s.4 (1) simply because the violence which the written words are likely to provoke will not be immediate.”80 Mr. Nice also stressed that giving a narrower interpretation to s.4 (1) of the Act would defeat what Parliament intended to achieve when passing the Public Order Act, 1986, i.e., to protect racial groups from racially motivated attacks or from the publication of materials which insult racial groups or invite violence against them. He further argued that an individual’s right to freedom of expression is limited by s.4 (1) and s.6 (3): “Such rights,” it was argued, “do not include a freedom to insult or abuse other persons in such a way that it is likely that violence will be provoked.”81 Watkins did admit that the arguments appeared “intricate and persuasive” but rejected the claim for judicial review.82 Unsurprisingly, the English courts had no difficulty in finding that no offence had been committed. The response of the judiciary was that it was not for them to fill a vacuum by creating new legal remedies, nor, in any event, did they want to encroach upon the territory of the legislature. The court found as follows:

We have no doubt that as the law now stands it does not extend to religions other than Christianity. Can it in the light of the present condition of society be extended by the courts to cover other religions? Mr. Azhar submits that it can and should be on the grounds that it is anomalous and unjust to discriminate in favour of one religion. In our judgement where the law is clear it is not the proper function of this court to extend it; particularly is this so in criminal cases where offences cannot be retrospectively created. It is in those circumstances the function of Parliament alone to change the law … The mere fact that the law is anomalous or even unjust does not, in our view, justify the court in changing it, if it is clear.
If the law is uncertain, in interpreting and declaring the law the judges will do so in accordance with justice and to avoid anomaly or discrimination against certain classes of citizens; but taking that course is not open to us, even though we may think justice demands it, for the law is not, we think, uncertain.83
CONCLUSION

Through a mixture of techniques, including the vilifying of the customs of others, the courts have reinforced the supposed supremacy of the Christian religion and its values. This has, in turn, significantly slowed developments in religious tolerance and understanding. It is most evident in the failure of the judiciary to take positive steps to enhance the scope of the law of blasphemy in the case of Salman Rushdie. It is normally the case that unlimited rights to freedom
of expression may have all the ingredients needed to disturb the coherence between religious communities. Freedom of expression should be enjoyed with restraint. No one is advocating unlimited rights; rights bring with them the duty to pay due regard to the effects of those rights on other members of the community. We have seen how the rights of freedom of thought, conscience, and religion are guaranteed by article 9 of the European Convention on Human Rights (ECHR), and how both articles 9 and 10 are subject to derogation: the duties and responsibilities they carry are subject to such formalities, conditions, restrictions, or penalties as are prescribed by law and are necessary in a democratic society in the interest of, inter alia, morals and the reputation of others [article 10 (2) of the ECHR]. Freedom of expression is not an absolute right that the individual can enjoy at his whim and pleasure. As such, it is difficult to see why the common law, whatever the common law of blasphemy may be, should allow one section of the population or their religious faith to be attacked by an irresponsible person; such a position cannot find any justification in the context of human rights treaty law. Feldman, referring to the existing form of the blasphemy laws, stated that the current position is hard to justify in a pluralist society such as Britain.84 Existing laws are concerned only with the sensitivities of Anglican Christians. If a section of the population feel that they are subjected to discrimination with the connivance of the judiciary and the law makers, hatred among various groups may be exacerbated; the danger of such a scenario recalls the recent killings in Bosnia-Herzegovina. The current policy that “you can discriminate for or against Roman Catholics as much as you like”85 should not be the formula upheld in British society today. Such a policy leads communities nowhere and may even endanger internal peace and harmony between different religious communities. Nor is it logical or sensible to continue to advocate the present form of the law of blasphemy if we genuinely want to create better race relations and to maintain social harmony between different communities. Lord Scarman stated:86

In an increasingly plural society such as that of modern Britain it is necessary not only to respect the differing religious beliefs, feelings and practices of all but also to protect them from scurrility, vilification, ridicule and contempt.
At present, the law fails to prevent the publication of works on the Internet even when they are deeply offensive to large numbers of people. The current position in English law, which attends only to the sensitivities of Anglican Christians, is hard to justify and would continue to exclude large populations from the Internet. The Internet is a pluralist society; legislators and the judiciary must come to terms with its wide range of cultural and religious views. If the Salman Rushdie dispute were transferred to the Internet, it is likely that English courts would claim jurisdiction and recognize only the views of the Christian faith. The law should not allow the ridiculing of the faiths of other religious groups. In adjudicating an Internet dispute, the court should take cognizance of the values of others and at least apply principles that respect and honor the differing religious beliefs and practices of all religious groups, and it should encourage a social and public policy that protects the beliefs most sacred to those groups from scurrility, vilification, ridicule, and contempt. Rushdie should be judged not by the most obvious form of control, the English blasphemy laws, but by the role that he chose to take. Publication on the Internet of such materials as “The Satanic Verses” could be viewed as an attempt to ridicule large populations of Muslims. It is submitted that in such a case the scope of the law of blasphemy in England should be extended to protect recognized religions, with a view to strengthening the fundamental institutions of pluralist societies.
NOTES AND REFERENCES

1. Jones, R., The Internet, legal regulation and legal pluralism, 13th BILETA Conference, 1998, http://www.bileta.ac.uk
2. Wresch, W., Disconnected: Haves and Have-nots in the Information Age, Rutgers University Press, New Jersey, 1996.
3. American Civil Liberties Union v Reno, 929 F. Supp. 824, 832 (E.D. Pa.), 1996.
4. Minnesota v Granite Gate Resorts, Inc., 568 N.W.2d 715 (Minn. Ct. App. 5 September 1997), 1997.
5. Chiba, M., ed., Asian Indigenous Law: An Interaction with Received Law, KPI, London and New York, p. 197, 1993.
6. U.S. v Thomas, 74 F.3d 701 (6th Cir), 1995.
7. Dutson, S., The Internet, the conflict of laws, Journal of Business Law, 495, 1997.
8. International Shoe Co. v Washington, 326 U.S. 310, 1945.
9. Helicopteros Nacionales de Colombia, S.A. v Hall, 466 U.S. 408, 1984.
10. CompuServe v Patterson, 89 F.3d 1257 (6th Cir), 1996.
11. Panavision International, L.P. v Toeppen, 938 F. Supp. 616 (C.D. Cal.), 1996; confirmed U.S. Ct. of App. 1998, 141 F.3d 1316; and EDIAS Software International, L.L.C. v BASIS International Ltd., 947 F. Supp. 413 (D. Ariz.), 1996.
12. Hearst v Goldberger, WL 97097, U.S. Dist. Lexis 2065 (S.D.N.Y. 26 February 1997), 1997.
13. Bensusan Restaurant v King, WL 560048 [2nd Cir. (N.Y.)] (10 September 1997), 1997.
14. See also McDonough v Fallon McElligott, Inc., U.S. Dist. LEXIS 15139, No. 95-4037, slip op. (S.D. Cal. 6 August 1996), where the existence of a web site was not a sufficient connection; and also American Civil Liberties Union v Reno, 929 F. Supp. 824, 832 (E.D. Pa.), 1996.
15. Waelde, C. and Edwards, L., Defamation and the Internet: a case study of anomalies and difficulties in the information age, International Review of Law Computers & Technology, 10, 263, 1996.
16. Cannataci, J., Linking illegal & harmful content on the Internet: can a blend of self-regulation and international agreement resolve the impasse? 14th BILETA Conference, http://www.bileta.ac.uk, 1999.
17. Post and Johnson, The new civic virtue of the net: lessons from models of complex systems for the governance of cyberspace, http://www.cli.org/paper4.htm, 1997.
18. Jones, R., op. cit.
19. Minnesota v Granite Gate Resorts, Inc., 568 N.W.2d 715 (Minn. Ct. App. 5 September 1997), 1997.
20. U.S. v Thomas, op. cit.
21. Secretary, Department of Health and Community Services v JMB and SMB, FLC 92-3 at 79, 191, 1992.
22. Fixico, The struggle for our homes, In Defending Mother Earth: Native American Perspectives on Environmental Justice, Weaver, J., ed., Orbis, p. 30, 1996.
23. Cameron, E. D., Netcom and Playboy: who does run the Internet?, International Review of Law Computers and Technology, 11(1), 155–164, 1997.
24. Playboy Enterprises v Chuckleberry Publishing, Inc. (United States District Court for the Southern District of New York), U.S. Dist. Lexis 8435 and 9865, 1996.
25. Cameron, E. D., op. cit., 154.
26. Griffiths, J., What is legal pluralism?, Journal of Legal Pluralism and Unofficial Law, 1–56, 1986.
27. Galanter, M., ibid., p. 4; Galanter also agrees: Justice in many rooms: courts, private ordering, and indigenous law, Journal of Legal Pluralism and Unofficial Law, 19, pp. 4, 18, 1981; see also Merry, S. E., Legal pluralism, Law and Society, XXII, part 5, p. 871, 1988.
28. Galanter, M., ibid., p. 1.
29. Ibid., p. 5.
30. Merry, S. E., op. cit., 874; McLennan, G., Pluralism, Open University Press, Buckingham, 1995.
31. Galanter, M., op. cit., 17.
32. Cheni (otherwise Rodriguez) v Cheni, 3 All ER 873, 1962.
33. Ibid., p. 879.
34. Ibid., p. 883.
35. Pearce v Ove Arup Partnership Ltd, 2 WLR 779, 1997.
36. Dutson, S., op. cit.
37. For details, see Allott, A., The Limits of Law, Butterworths, London, p. 60, 1980, and Chiba, op. cit., p. 198.
38. Ross, cited in Allott, A., ibid., 1968.
39. Fawcett, J. J., Evasion of law and mandatory rules in private international law, Cambridge Law Journal, March, 44–47, 1990.
40. Merry, S. E., op. cit., 870.
41. Ibid.
42. See Pemberton v Hughes, 1 Ch. 781, 1899.
43. Ibid., p. 790.
44. Viswasingham v Viswasingham, 1 FLR 15, 1980.
45. Lewis, B., In Muslims in Europe, Lewis, B. and Schnapper, D., Eds., Pinter Publishers, London and New York, p. 1, 1994.
46. Brohi, A. K., Islam, its politics and legal principles, In Islam and Contemporary Society, Azzam, S., Ed., Islamic Council of Europe, Longman, London and New York, pp. 62–100, 63–66, 1982.
47. Fischer, In Voices of Resurgent Islam, Esposito, J. L., Ed., Oxford University Press, New York and Oxford, p. 169, 1983.
48. Brohi, A. K., op. cit., p. 66.
49. Rippin, A., Muslims: Their Religious Beliefs and Practices, Vol. 1, Routledge, London, p. 74, 1991.
50. Rippin, A., ibid., p. 77; Lewis, B. and Schnapper, D., op. cit., p. 1.
51. Brohi, A. K., op. cit., p. 66.
52. Taheri, In The Rushdie File, Appignanesi, L. and Maitland, S., Eds., Fourth Estate, London, p. 92, 1989.
53. R v Gathercole, 2 Lew CC 237, 168 ER 1140, 1838.
54. Smith, J. and Hogan, B., Criminal Law, 8th ed., Butterworths, London, p. 737, 1996.
55. R v Gathercole, op. cit.
56. R v Williams, 26 St. Tr 653, 1797.
57. Ibid., Tr 654: 714.
58. R v Woolston, 1 Barn KB 162, 94 ER 112, 1729.
59. R v Hetherington, 4 St. Tr NS 563, 1841.
60. R v Taylor, 1 Vent. 293, 86 ER 189, 1676.
61. R v Hetherington, op. cit.
62. R v Ramsay and Foote, 15 Cox CC 231, 1883.
63. Bowman v Secular Society Ltd., AC 406, 1917.
64. R v Lemon, AC 617: 658, 1979.
65. R v Chief Metropolitan Stipendiary Magistrate, ex parte Choudhury, 1 All ER 306, 1991.
66. R v Williams, op. cit., Tr 654.
67. R v Gott, 16 Cr App R 87, 1922.
68. R v ex parte Choudhury, 1 All ER 306, p. 318, 1991.
69. Watkins, L. J., R v ex parte Choudhury, ibid., p. 311.
70. Ibid., p. 319.
71. Ibid., p. 320.
72. Lester, A., English judges as law makers, Public Law Journal, 269–290, 320–324, 1993.
73. See ex parte Choudhury, op. cit., pp. 320–322.
74. Webster, R., A Brief History of Blasphemy, The Orwell Press, Southwold, p. 45, 1990.
75. Appignanesi, L. and Maitland, S., eds., op. cit., p. 125, 1989.
76. Ex parte Choudhury, op. cit., p. 321.
77. Ex parte Choudhury, op. cit., p. 309.
78. R v Horseferry Road Magistrate, ex parte Siadatan, 1 All ER 324, 1991.
79. Ibid., p. 326.
80. Ibid., p. 327.
81. Ibid.
82. Ibid.
83. Ex parte Choudhury, op. cit., p. 318.
84. Feldman, p. 690, 1993.
85. Lord Denning, Mandla v Dowell Lee, 3 All ER 1108, p. 1111, 1982.
86. R v Lemon, op. cit.
FURTHER READING

Bainham, A., Family law in a pluralistic society, Journal of Law and Society, 22(2), June, 234–247, 1995.
Diamond, S., The rule of law versus the order of customs, In Comparative Legal Cultures, Varga, C., Ed., New York University Press, New York, pp. 193–223, 1973.
Furlong, J., From the superhighway to the main street, The Law Society of Ireland Technology Committee Conference, Lawyers and the Internet, 1995.
Hamilton, C., Family, Law and Religion, Sweet and Maxwell, London, 1995.
Pearl, D., A Text Book on Muslim Personal Law, Croom Helm, London, 1987.
Pospisil, L., Anthropology of Law: A Comparative Theory, Harper and Row, New York, 1971.
Poulter, S. M., English Law and Ethnic Minority Customs, Butterworths, London, 1986.
Poulter, S. M., Ethnic minority customs, English law and human rights, ICLQ, 36, 589–615, 1987.
Poulter, S. M., Asian Traditions and English Law: A Handbook, The Runnymede Trust with Trentham Books, 1990a.
Poulter, S. M., The claim to a separate Islamic system of personal law for British Muslims, In Islamic Family Law, Mallat, C. and Connors, J., Eds., Graham and Trotman, London, 1990b.
Raz, J., Multi-culturalism: a liberal perspective, Dissent, 67, 1994.
Van den Bergh, G. C. J. J., Legal pluralism in Roman law, In Comparative Legal Cultures, Varga, C., Ed., New York University Press, New York, pp. 451–465, 1992.
Index 3G. See third generation mobile wireless, 247–252
A Access engineering, cyber-management and, 222–223 Accident, concept of, 419–420 Accountability demands, cyber-management and, 226–227 Adjudication internet disputes and, 828–841 case study, 834–840 legal pluralism and, 832 Advanced Technology Program (ATP), 43, 48–49, 167 science and technology, public sector current issues and, 63 Aeronautics R & D, science and technology, public sector current issues and, 71 Analytic methodology, technology leadership forecasting (TLF) and warning, 465–479 Apollo Root Cause Analysis, 426 Arms control, U.S. Government and China, 272–273 ATP. See Advanced Technology Program. Aviation Security, science and technology, public sector current issues and, 62
B Balanced portfolio, 152–158 Ballistic missiles India and, 640 Pakistan and, 654–655 Baumol’s economic disorder, 547 Ben and Jerry’s ice cream, ecopreneurs and, 292 Biological weapons program India and, 639 procurement of, 646–647 Pakistan and, 653 procurement of, 658–659 Biotechnology ethics, 68–69 patents, 68–69 privacy, 68–69 science and technology, 68–69 technology and, evolution of, 11 Broadband internet access, science and technology, public sector current issues and, 65
C CALEA, Communications Assistance for Law Assistance Act, 597 California courts, technology implementation and, 564–570 Capital markets, U.S. Government and China, 270–271 Categories matrix, 464–465 Causation chain of events model, 411 factorial model, 411–412 other concepts, 412–415 Management Oversight and Risk Tree (MORT) analysis, 412–413 root cause analysis, 413–415 single-event model, 411 traditional views of, 411–412 CGWIC. See China Great Wall Industry Corporation, 175–176 Chain of events causation model, 411 Change, understanding of, 434–450 Chemicals sale of, China and Iran, 263–264 weapons program India and, 639 procurement of, 645–646 Pakistan and, 653 procurement of, 658 Chief Information Officer, Federal, 67–68 Children’s television viewing current rating system, 284 rating system, 283–284 supervision of, 282–287 V-chip control, 283 China Great Wall Industry Corporation (CGWIC), 175–176 China chemicals, sale of, Iran, 263–264 entities sanctioned for weapons proliferation, 253–254 missile technology transfer and action regarding, 173–200 chronology of major events, 200–213 military benefit, 182–184 policy issues, 174–200 security concerns, 175–176 U.S. Congress, reaction to, 186–187 U.S. Government reaction to, 184–187 845
DK3654—INDEX——4/10/2006—18:26—SRIDHAR—XML MODEL C – pp. 845–859
846
Handbook of Technology Management in Public Administration
missile technology sales to Iran, 261–263 Libya, 266 North Korea, 264–266 Pakistan, 258–259 Syria, 266 nuclear cooperation, Pakistan, 257–258 technology sales, ring magnets, Pakistan, 256–258 weapons, sales to North Korea, 264–266 U.S. Government defense policy and, 267–269 economic controls and, 269–273 capital markets, 270–271 export controls, 272 exportation of satellites, 269 import controls, 271–272 nuclear cooperation agreement, 271 sanctions, 269–270 foreign policy and, 267–269 counter-terrorism campaign, 267–268 export control assistance, 268 summit meetings, 267 Taiwan links issues, 268–269 policy and, arms control, 272–273 nonproliferation agreements, 272–273 weapons of mass destruction nuclear technology sales Iran, 259–261 Pakistan, 256–258 policy issues, 252–273 proliferation of, 255–256 Clinton administration, online privacy protection and, 304–305 Collapse, civilization and creation of shadow systems, 32–33 end of global management, 32 forces of internal breakdown, 27–31 Tainter, Joseph, 26 technology and, 25–31 Commercial satellite exports, 73 Commercialization, federally funded R & D and, 50–53 Communications and information technologies (ICT), 616–617 death of distance, 616–617 distance, impact of, 618–622 impact of, 629–632 Communications Assistance for Law Assistance Act. See CALEA. Communications issues, cyber-management and, 223 Compensation practices, information technology and, 798–801
Computational economics, negotiating and, 726–727 exploration, 733–735 simulation, 733–735 Computer back-ups, strategic information and, 580 Conferences, genetically modified organisms (GMOs) and, 96 Confidentiality, Human Genome Project, screening, 327–328 Congressional Research Service (CRS) Reports Homeland Security, 74 information technology, 75–77 products, internet statistics and, 503–504 R & D budgets and policy, 73–74 science and technology current issues, 73–77 technology development, 74–75 telecommunications, 75–77 Control/direction orientation, state and, 84 Cooperative research and development agreements (CRADAs), 51 Counter-terrorism campaign, U.S. Government and China, 267–268 Court systems, needs and implementation tactics, knowledge of fundamentals, 552, 554–555 projects vs. routine operations, 552, 556–557 technology management, 552, 555–556 vision and leadership, 551, 553–554 records management digital storage, 559–561 long-term storage, 561–562 microfilm, 561–562 role and responsibilities, 557–559 technology implementation and, California, 564–570 management and, 549–550 records management, 557–562 CRADAs. See cooperative research and development agreements. CRS. See Congressional Research Service. Cruise missiles India and, 639 Pakistan and, 654 Culture organizational, negotiation support systems and, 763–764 Silicon Valley models and, 88–89 technology and, 13–33 new human context, 15–16 Current assessment methodology, technology leadership forecasting (TLF) and, 462–464 Current issues, public sector perspectives, science and technology and, 55–77 Curriculum development, technology and public schools, 125–126
DK3654—INDEX——4/10/2006—18:26—SRIDHAR—XML MODEL C – pp. 845–859
Index Cyber-management, access engineering, 222–223 Cyber-management accountability demands of, 226–227 communications issues, 223 decision-making support, 225 dilemmas of, 227–229 informal communications, 225–226 public management, challenge to, 229–230 record keeping, 223–225 technology transfer and public sector, 222–231 Cyber-terrorism actual attacks, 581–583 on China, 582–583 national security and, 580–583
D Data analysis, negotiation support systems and, 759–762 DBK. See Dominant Battlespace Knowledge Death of distance, concept of, 616–617 Decision-making support, cyber-management and, 225 Defense policy, U.S. Government and China, 267–269 Department of Defense national security, technology management and, 577–580 science and technology, public sector current issues and, 56–57 court systems record management and, 559–561 Discretionary decision-making, understanding change, 440–449 Disruptive external medical technology, 546 Distance education, European Union, 112–116 evolution of, 115–116 popularity in, 114–115 Distance costs of, 622–625 economic interactions, 618–619 impact on, communications and information technologies (ICT), 616–617 monitoring and management of, 625–626 real income, 619–622 transit time cost, 626–629 Dominant Battlespace Knowledge (DBK), 462
E eBusiness, technology management and, 536 Echelon Interception system, 583–584 committee to investigate, 347–364 confirmed existence of, 364–383 European Parliament Report, 338–347 summary of intelligence agencies, 385–399
847 Economic controls, U.S. Government and China, 269–273 electronic espionage examples, 594–595 national security issues and, 583–600 protection from, 595–596 interactions, distance and, 618–619 Ecopreneurs Ben and Jerry’s ice cream, 292 commercial vs. social, 293 counter-culture of, 294 creation of, 290–298 private-sector initiatives, 295–297 public-sector initiatives, 297–298 The Body Shop, 292 Education, science and technology, public sector current issues and, 60–61 Education/training, information technology labor shortages and, 242–243 E-Government, science and technology, public sector current issues and, 66–67 Electronic mail cyber-management and, 225–226 internet privacy and employers, 321–322 monitoring, internet privacy and law enforcement, 321–322 signatures, 333–338 commercial standards, Federal use of, 336 keyword definitions, 333 legislation, 336–337 paperwork Elimination Act, 335 privacy, 337 technologies in use, 333–334 U.S. Congress interest in, 334–335 technology and, impact on, 8–9 E-market protocol, negotiating and, 723–724 case study, OTC derivatives matching mechanism, 728–735 computational economics, 726–727 exploration, 733–735 simulation, 733–735 game-theoretic analysis, 724–725, 732–733 laboratory experimentation, 735–736 mechanism design theory, 725–726 simulation modeling, 726–727 Emotions information and communication technology (ICT) implementation method used, 520 research on, 518–520 technology implementation and, 518–527 Employers, e-mail monitoring and, 321–322 E-Rate program, 130–131
DK3654—INDEX——4/10/2006—18:26—SRIDHAR—XML MODEL C – pp. 845–859
848
Handbook of Technology Management in Public Administration
Espionage, economic, national security issues and, 583–600 Ethical issues and concerns, 275–399 control of technology, impact of surprises, 287–290 ecopreneur, creation of, 290–298 electronic signatures, 333–338 Human Genome Project, genetic hypersusceptibility screening, 323–332 internet privacy, 312–323 knowledge management in medicine, 279–282 online privacy protection, 299–309 supervision of children’s television viewing, 282–287 Ethics biotechnology and, 68–69 technology and, 6–8 European Economic Community, genetically modified organisms (GMOs) and, 97–98 European Parliament Report, Echelon Interception System and, 338–347 European Union Directives, online privacy protection and, 307–309 European Union, distance education and, 112–116 evolution of, 115–116 popularity in, 114–115 Export controls assistance, U.S. Government and China, 268 legislation to revise, 194–200 missile technology transfer and, 193–200 national security issues and, 575–576 U.S. Government and China, 272 External medical technology, 545–548 disruptive, 546 economics of, 547–548 Baumol’s economic disorder, 547 wrapping of care, 547–548 fisherman’s paradox, 548 research, 546–547 sustaining, 545–546
F Factorial causation model, 411–412 Failure avoidance, 432–433 concept of, 417–418 technology and, 407–427 Fair information practices, FTC and internet privacy, 315 FCC. See Federal Communications Commission. Federal Chief Information officer, science and technology, public sector current issues and, 67–68
Federal Communications Commission E-Rate Program, 130–131 Federal support of technology in public schools, 130–131 Federal Government policy National Science and Technology Council (NSTC), 45 internet privacy practices and, 320–321 promotion efforts, technology transfer and, 164 technological advancement and, 44–46 current legislation, 54–55 current programs, 47 federally funded R & D, 50–53 industry-university efforts, 40–50 joint industrial research, 50 legislatives initiatives, 47 R & D spending, 47–49 transfer, 162–163 patents, 170–171 small businesses, 171 state government, 164 Federal Laboratory Consortium (FLC) Omnibus Trade and Competitiveness Act, 167–169 Stevenson-Wydler Technology Innovation Act, 165–167 technology transfer by, 164–165 Federal role, technology and public schools, 131–133 Federal support, technology and public schools, 116, 126–131 Federal Communications Commission, 130–131 misc. programs, 131 National Aeronautics and Space Administration (NASA), 126–129 National Science Foundation, 130 program listing, 126–131 U.S. Department of Education, 126–129 U.S. Department of Agriculture, 129 U.S. Department of Commerce, 129 Federal Trade Commission, online privacy protection and, 305–307 Federally funded R & D commercialization of, 50–53 Stevenson-Wydler Technology Innovation Act, 51 technological advancement and, 50–53 Firms, Silicon Valley models and, 87–88 Fisherman’s paradox, 548 FLC. See Federal Laboratory Consortium Foreign policy, U.S. Government and China, 267–269 counter-terrorism campaign, 267–268 export control assistance, 268 summit meetings, 267 Taiwan links issues, 268–269
DK3654—INDEX——4/10/2006—18:26—SRIDHAR—XML MODEL C – pp. 845–859
Index Foresight countries involved with, 106–107 move to, 105–106 risk National Health Service (NHS), 108–110 society, 101–111 tensions with, 107–108 FTC activities, internet privacy and, 315–316 fair information practices, 315 Functional areas, 464–465
G Game-theoretic analysis, 724–725, 732–733 Geeks growth of, 21–25 rise of, 21–25 Generic indicator directory, 481–491 Genetic hypersusceptibility screening, 323–332 Genetically modified organisms (GMOs) advantages of, 93 conferences on, 96 European Economic Community, 97–98 human implications, 93–94 international objections to, 95–96 regulation of, 96–97 patents on, 99 scientific testing of, 95 technology and, 93–100 testing for presence of, 94–95 United States regulations, 98–99 World Trade Organization, 98 Global climate change, science and technology, public sector current issues and, 69–71 management, end of, civilization’s collapse and, 32 GMOs. See Genetically modified organisms. GOCOs. See Government-owned, contractoroperator laboratories. Government Performance and Results Act (GPRA), science and technology, public sector current issues and, 59–60 Government policy industrial competitiveness, 42–55 technological advancement, 42–55 technology, public sector and, 42–55 Government R & D portfolios management of approaches to, 140–142 constrained approach, 140 Research Value Mapping (RVM), 142–158 technology, management of, 137–160 Government-owned, contractor-operator laboratories (GOCOs), 51
849
H Homeland Security Congressional Research Service Reports (CRS) and, 74 science and technology, public sector current issues and, 61–62 Hughes, investigation, missile technology transfer and, 180–181 Human context, technology, culture and, 15–16 Human Genome Project genetic hypersusceptibility screening, 323–332 reasons to screen, 326 risk issues, 324–325 screening, 325 confidentiality, 327–328 criteria for, 325 discrimination, 328–329 legal basis, 327 screening, privacy, 327
I ICT. See information and communication technology Identity theft, 322–323 Import controls, U.S. Government and China, 271–272 India, weapons of mass destruction and, 636–663 ballistic missiles, 640 biological, 639 chemical, 639 cruise missiles, 640 nuclear program, 638–639 procurement of, 641–650 biological weapons, 641–646 chemical weapons, 645–646 delivery systems, 641–645 missile defenses, 650 summary of, 636–638 Industrial competitiveness background and analysis, 43–55 government policy and, 42–55 Industry-university efforts, technological advancement and, 49–50 Informal communication, electronic mail, 225–226 Information and communication technology (ICT) implementation emotion method used, 520 research on, 518–520 expressions of emotions, 524–527 organization background, 520–521 training sessions, 521–524
DK3654—INDEX——4/10/2006—18:26—SRIDHAR—XML MODEL C – pp. 845–859
850
Handbook of Technology Management in Public Administration
Information systems infrastructure, science and technology, public sector current issues and, 62–63 Information technology compensation practices, 798–801 Congressional Research Service Reports (CRS) and, 75–77 labor shortages Congressional assistance, 239–242 Congressional legislation, 232–246 education/training, 242–243 governmental studies of, 239 labor market conditions, 235–236 legislation for, 239–243 older workers, 243 tax incentives, 242 technology management and, 232–246 unemployment rate, 236–239 wage increases, 238–239 worker supply and demand, 233–235 measuring performance, effect on professional service organizations, 506–516 organizational learning curves, 507–509 test results, 514–515 R & D, science and technology, public sector current issues and, 68 services, outsourcing of negotiating and, 706–718 process-oriented effects, 714–717 suggested framework, 709–710 task-oriented effects, 710–712 team-interaction effects, 712–714 science and technology, public sector current issues and, 64 types, 532–533 Information utilization data in motion, 542 static data, 541 technology management and, 541–543 Integrated “technospace” awareness Dominant Battlespace Knowledge (DBK), 462 technology leadership forecasting (TLF) and, 462 Intelligence agencies, summary of, 385–399 Internal medical technology, 544–545 International negotiations actual negotiating period, 681–682 case study, sea conference negotiations, 682–686 factors in, 671–673 nonbinding forums, 678–679 pre-negotiating, 673–674 pace, 679–681 technical information delivery of, 691 role of, 688
technology issues and, 671–693 timing of, 675–678 Internet access, technology in public schools, 121–122 addiction, study of, 785–797 disputes, adjudication of, 828–841 case study, 834–840 legal pluralism, 832 privacy, 312–323 commercial practices, 314 e-mail monitoring, employers, 321–322 e-mail monitoring, law enforcement, 321–322 Federal Government practices, 320–321 Federal Trade Commission, activities, 315–316 privacy identity theft, 322–323 legislation for, 313–320 protection of Social Security numbers, 322–323 science and technology, public sector current issues and, 66 self-regulation advocates, 315 spyware, 321 statistics Congressional Research Service (CRS) products, 503–504 estimated size of internet, 499 impact of, medicine knowledge and, 280–282 invisible web, 502–503 measuring performance and, 497–504 measuring usage, difficulties, 497–498 number of users, 498–499 of web hosts, 499–501 of web pages, 501–502 of web sites, 499 significance of, 497 Investigational aids, technological failure and Apollo Root Cause Analysis, 426 Multilinear Events Sequencing, 426 Why-Because analysis, 427 Investing, technology management and, 533–534 Invisible web, internet statistics and, 502–503 Iran chemicals and China’s sale to, 263–264 missile technology and China’s sale to, 261–263 nuclear technology and China’s sales to, 259–156
J Joint industrial research National Cooperative Research Act, 50 technological advancement and, 50
K Knowledge management, 819–828
DK3654—INDEX——4/10/2006—18:26—SRIDHAR—XML MODEL C – pp. 845–859
Index
L Labor market conditions, 235–236 shortages, information technology workers and, 232–246 Laboratory experimentation, 735–736 Law enforcement e-mail monitoring and, 321–322 surveillance technology and, 597–600 CALEA, 597–598 Leadership business, organizational functions and, 807–809 Leadership vistas remote sensing leadership, 444 – 449 techno-teams, 444 – 449 virtual leadership, 444 – 449 Legal basis, screening, Human Genome Project and, 327 framework, online privacy protection and, 302–304 Legislation current, technological advancement, Federal Government policy and, 54–55 electronic signatures and, 336–337 internet privacy and, 313–320 technology transfer and, 173 Legislative initiatives, Federal Government technological advancement and, 47 Libya, missile technology and China’s sale to, 266 Life-cycle costs, technology and, 7–8 Lockheed Martin, investigation, missile technology transfer and, 181–182 Long-term storage, court systems record management and, 561–562 Loral case, missile technology transfer and, 177–180 implications of, 179–180 investigation of, 177–179
M Maladaptive business, organizational functions, 809–811 Management Oversight and Risk Tree (MORT) analysis, 412–413 Managerial flux, organizational transformation and, 441 Managing change, 401–527 avoiding failure, 432–433 information and communication technology (ICT) implementation, emotionality of, 518–527 understanding change, 434–450 Manufacturing Extension Partnership (MEP), 43
851 Mathematical evaluation procedure, Technology leadership forecasting (TLF) and, 491–496 Measuring performance, 401–527 information technology effect on professional service organizations, 506–516 organizational learning curves, 507 internet statistics, 497–504 Mechanism design theory, 725–726 Medicine knowledge management ethical issues and concerns, 279–282 impact of internet on, 280–282 technology and, 544–548 external, 545–548 internal, 544–545 MEP. See Manufacturing Extension Partnership. Microelectronics, technology and, impact on, 8–9 Microfilm storage, court systems record management and, 561–562 Middle market phase, organizational functions and, 806–807 Missile defense program, India’s procurement of, 650 Missile technology sale of, China and Iran, 261–263 and Libya, 266 and North Korea, 264–266 and Pakistan, 258–259 transfer China action regarding, 173–200 chronology of major events, 200–213 policy issues, 174–200 security concerns, 175–176 U.S. Congress, reaction to, 186–187 U.S. Government reaction to, 184–187 export controls, 193–200 military benefit to China, 182–184 security concerns, China Great Wall Industry Corporation (CGWIC), 175–176 China’s ballistic missiles, 176–184 Hughes, investigation of, 180–181 Lockheed Martin, investigation of, 181–182 Loral case implications of, 179–180 investigation of, 177–179 Motorola, investigation of, 180 Mobile wireless, third generation (3G), technology transfer and, 247–252 Modeling, Silicon Valley and, 79 alternative types, 81–83 Southeast Asia
    culture, 88–89
    firms, 87–88
    universities, 85–87
  universities, need for, 85–87
Moore’s law, 8–9
Motorola, investigation of, missile technology transfer and, 180
Multilinear Events Sequencing, 426
N
Nanotechnology, technology and, evolution of, 11
NASA. See National Aeronautics and Space Administration.
National Aeronautics and Space Administration (NASA), Federal support of technology in public schools and, 130
National Cooperative Research Act, 50
National Health Service (NHS), Foresight and, 108–110
National Institutes of Health (NIH), current issues with, 56–57
National Institute of Standards and Technology (NIST), 167–169
National Science and Technology Council (NSTC), 45
National Science Foundation, Federal support of technology in public schools, 130
National security
  cyber-terrorism, 580–583
  issues, 573–663
    communications and information technologies (ICT), 616–617
    economic risks due to economic electronic espionage, study of, 583–600
    emerging technology adoption, 576
    export controls, 575–576
    positive/negative technology lists, 576–577
    supercomputers, virtual bomb, 602–614
    surveillance technology and tools, 590–591
    technology’s economic impact, 615–633
  strategic information and, importance of back-ups, 580
  technology management, Department of Defense view of, 577–580
  weapons of mass destruction, India and Pakistan, 636–663
Negotiating
  e-market protocol
    computational economics, 726–727
    design methodologies, 723–724
    game-theoretic analysis, 724–725
    mechanism design theory, 725–726
    simulation modeling, 726–727
  information technology services, outsourcing of, 706–718
    process-oriented effects, 714–717
    suggested framework for, 709–710
  support system
    evaluation of, 740–741
    existing, 740–741
    NEGOTIATION ASSISTANT software, 738–753
  task-oriented effects, 710–712
  team-interaction effects, 710–712
  technology issues and
    international, 671–693
    outsourcing transactions, 669–671
NEGOTIATION ASSISTANT software
  design and operation, 742–745
  effectiveness of, 752–753
  experiment and results, 745–752
  review of, 738–753
Negotiations
  e-market protocol
    computational exploration, 733–735
    computational simulation, 733–735
    game-theoretic analyses, 732–733
    laboratory experimentation, 735–736
    OTC derivatives matching mechanism, case study of, 728–735
  support systems, adoption of, 757–767
    conceptual framework, 763–764
    data analysis, 759–762
    industry characteristics, 765–766
    models, 758–759
    organizational culture, 763–764
    Technology Acceptance Model, 757–758
    Theory of Planned Behavior, 757–758
  technical issues and, custom software for, 738–753
  technology issues and, 665–766
    e-market protocol, 722–737
NHS. See National Health Service.
NIH. See National Institutes of Health.
NIST. See National Institute of Standards and Technology.
Nonbinding forums, international negotiations and, 678–679
Nonproliferation agreements, U.S. Government and China, 272–273
Non-technologists
  teaching science and technology to, 780–782
  technology and, 775–784
North Korea
  missile technology and China’s sale to, 264–266
  nuclear weapons and China’s sale to, 264–266
NSTC. See National Science and Technology Council.
Nuclear
  accidents, impact of surprises on technology, 288–290
  cooperation agreement
    U.S. Government and China, 271
    China and Pakistan, 257–258
  program
    India and, 638–639
    Pakistan and, 652–653
  technology sales, ring magnets, China to Pakistan, 256–258
  weapons program
    China and North Korea, 264–266
    Pakistan, 659–663
O
Office of Basic Energy Sciences, case study
  portfolio one, 143–152
  Research Value Mapping (RVM) project, 142–158
Older workers, information technology labor shortages and, 243
Omnibus Trade and Competitiveness Act, 167–169
  Advanced Technology Program (ATP), 167
  National Institute of Standards and Technology (NIST), 167–169
Online privacy protection, 299–309
  background of, 301–302
  Clinton administration and, 304–305
  European Union directives, 307–309
  Federal Trade Commission, 305–307
  legal framework, 302–304
Organizational
  culture, negotiation support systems and implementation of, 763–764
  evolution
    start-up phase, 802–804
    technology and, 801–812
    transition functions, 802–804
  functions
    leadership business, 807–809
    maladaptive business, 809–811
    middle market phase, 806–807
    small business phase, 804–805
  goals
    good tech vs. high tech, 532
    information technology types, 532–533
    management approaches, 533
    technology management and, 531–543
      aligning of, 534–537
      eBusiness, 536
      standardization, 535–536
  learning curve
    models, for service organizations, 507–508
    information technology and, test results, 514–515
    measuring performance and, 507–509
    test results, 513–514
  transformation
    leadership vistas, 444–449
    managerial flux, 441
    structural redesign, 442–443
    understanding change and, 440
Output maximization, 143–152
Output portfolio, 143–149
Outsourcing
  information technology services, negotiating of, 706–718
    process-oriented effects, 714–717
    suggested framework, 709–710
    task-oriented effects, 710–712
    team-interaction effects, 710–712
  transactions, negotiating of, 669–671
P
Pakistan
  China and
    missile technology, 258–259
    nuclear cooperation, 257–258
    nuclear technology sales to, 256–258
    sales of ring magnets, 256–258
  weapons of mass destruction and, 636–663
    ballistic missiles, 654–655
    biological, 653
    chemical, 653
    cruise missiles, 654
    nuclear program, 652–653
    procurement of, 655–663
      biological weapons, 658–659
      chemical weapons, 658
      delivery systems, 655–658
      nuclear weapons, 659–663
    summary of, 650–651
Paperwork Elimination Act, 335
Patents, 170–171
  biotechnology and, 68–69
  genetically modified organisms (GMOs) and, 99
Pharmaceutical drugs
  availability, 63–64
  costs, 63–64
  R & D, 63–64
  science and technology, public sector current issues and, 63–64
Phone slamming, 64–65
Policy, in transition, 101–111
Politics, technology and, impact on, 10–11
Portfolio one
  output maximization, 143–152
  output portfolio, 143–149
Portfolio two, balanced portfolio, 152–158
President’s Management Agenda, 59–60
Privacy
  biotechnology and, 68–69
  electronic signatures and, 337
  internet and, 66
  screening, Human Genome Project and, 327
Private sector initiatives, ecopreneurs and, 295–297
Probable cause, concept history, 408–411
Process-oriented effects, 714–717
Professional service organizations
  measuring information technology performance on, 506–516
  organizational learning curves, test results, 513–514
Professions, technology and, 773–841
  organizational evolution, 801–812
Promotion efforts, Federal government technology transfer and, 164
Protection, economic electronic espionage and, 595–596
Public management, cyber-management, challenger to, 229–230
Public schools, technology and
  access to, 120–121
  cost of, 120
  curriculum development, 125–126
  Federal role in, 131–133
  Federal support of, 116, 126–131
    Federal Communications Commission, 130–131
    Federal programs, other types, 131
    National Aeronautics and Space Administration (NASA), 130
    National Science Foundation, 130
    U.S. Department of Agriculture, 129
    U.S. Department of Commerce, 129
    U.S. Department of Education, 126–129
  impact of, 119–120
  in use, 122–124
  interest in, 117–118
  internet access and, 121–122
  major issues, 118
  status of, 116, 118
  training, 124–125
Public sector
  current issues, R & D budgets, 56–57
  cyber-management, technology transfer and, 222–231
  initiatives, ecopreneurs and, 297–298
  perspectives
    science and technology, current issues, 55–77
    technology and
      genetically modified organisms (GMOs), 93–100
      government R & D portfolios, management of, 137–160
      Silicon Valley, development of, 77–89
  science and technology
    Advanced Technology Program (ATP), 63
    aeronautics R & D, 71
    aviation security technologies, 62
    biotechnology, 68–69
    broadband internet access, 65
    commercial satellite exports, 73
    Congressional Research Service (CRS) Reports, 73–77
    critical information systems infrastructure, 62–63
    Department of Defense, 56–57
    educational issues, 60–61
    e-government, 66–67
    Federal Chief Information Officer, 67–68
    global climate change, 69–71
    Government Performance and Results Act (GPRA), 59–60
    homeland security, 61–62
    information technology, 64
      R & D, 68
    internet privacy, 66
    National Institutes of Health (NIH), 56–57
    pharmaceutical drugs, 63–64
    phone slamming, 64–65
    President’s Management Agenda, 59–60
    public access to R & D data, 58
    quality of R & D data, 58–59
    radio spectrum management, 65–66
    space programs, 71–73
    technology transfer, 63
    telecommunications, 64
    voting technology, 67
  technology and, 35–217
    government policy, 42–55
Q
Quality
  organizational goals, 531–543
  technology and, 530–570
    organizational goals, 531–543
R
R & D. See research and development.
Radio spectrum management, science and technology, public sector current issues and, 65–66
Real income, distance and, 619–622
Record keeping, cyber-management and, 222–225
Records management
  court systems and, 557–562
    digital storage, 559–561
    microfilm, 561–562
    role and responsibilities, 557–559
Regulation, genetically modified organisms (GMOs) and, 96–97
Regulatory issues, third generation (3G) mobile wireless and, 249–251
Remote sensing leadership, 444–449
Research and development
  aeronautics, 71
  budgets and policy
    Congressional Research Service (CRS) Reports, 73–74
    public sector current issues and, 56–57
  data
    public access to, science and technology, public sector current issues and, 58
    quality of, science and technology, public sector current issues and, 58–59
  information technology, 68
  pharmaceutical drugs and, 63–64
  policy, science and technology, public sector current issues and, 56–57
  portfolios, government, management of, 137–160
  spending
    Advanced Technology Program (ATP), 48–49
    Federal Government technological advancement and, 47–49
    tax reform acts, 47–49
Research, external medical technology, 546–547
Research Value Mapping (RVM) project, 142–158
  Office of Basic Energy Sciences, case study
    conclusions of, 158–160
    portfolio one, 143–152
      output portfolio, 143–149
    portfolio two, balanced portfolio, 152–158
Ring magnets, sales of, 256–258
Risk
  Foresight, National Health Service (NHS), 108–110
  issues, Human Genome Project and, 324–325
  society, 101–111
    implications for, 102–105
Root cause analysis, 413–415
Russia, supercomputers and, weapon development, 602–605
S
Sanctions, U.S. Government and China, 269–270
Satellites
  exportation of, 73, 269
  public sector current issues and, 73
Science and technology
  non-technologists and, 775–784
  perspectives, current issues, 55–77
  public sector current issues
    Advanced Technology Program (ATP), 63
    aeronautics R & D, 71
    aviation security technologies, 62
    biotechnology, 68–69
    broadband internet access, 65
    commercial satellite exports, 73
    Congressional Research Service (CRS) Reports, 73–77
    critical infrastructure, 62–63
    Department of Defense, 57–58
    educational issues, 60–61
    e-government, 66–67
    Federal Chief Information Officer, 67–68
    global climate change, 69–71
    Government Performance and Results Act (GPRA), 59–60
    homeland security, 61–62
    information technology, 64
      R & D, 68
    internet privacy, 66
    National Institutes of Health (NIH), 57
    pharmaceutical drugs, 63–64
    phone slamming, 64–65
    President’s Management Agenda, 59–60
    public access to R & D data, 58
    quality of R & D data, 58–59
    R & D budgets, 56–57
    R & D policy, 56–57
    radio spectrum management, 65–66
    space programs, 71–73
    technology transfer, 63
    telecommunications, 64
    voting technology, 67
    wireless technologies, 65–66
Science, technology vs., 2–5
Screening, Human Genome Project, 325–329
  confidentiality, 327–328
  criteria, 325
  discrimination, 328–329
  legal basis for, 327
  privacy, 327
  reasons to screen, 326
Sea conference negotiations, case study of international negotiations and, 682–686
  other issues, 686–688
Security concerns
  China Great Wall Industry Corporation (CGWIC), 175–176
  missile technology transfer, China and, 175–184
    ballistic missiles, 176–184
    Hughes, investigation of, 180–181
    Lockheed Martin, investigation of, 181–182
    Loral case
      implications of, 179–180
      investigation of, 177–179
    Motorola, investigation of, 180
Self-regulation, internet privacy and, 315
Service organizations, learning curve models and, 507–508
Silicon Valley
  development of, technology and public sector, 77–89
  modeling
    alternative types, 81–82
    conventional, 79–80
    problems with, 80–81
    Southeast Asia, 81
      culture, 88–89
      firms, 87–88
      universities, need for, 85–87
  Southeast Asia, replication for, 81–89
  state concept, 82–85
Simulation modeling, 726–727
Single-event causation model, 411
Slamming, phone, 64–65
Small businesses
  Federal Government technology transfer and, 171
  phases, organizational functions and, 804–805
Social Security numbers, internet privacy and, 322–323
Software
  NEGOTIATION ASSISTANT, 738–753
  utilization, technology management and, 538–539
Southeast Asia, Silicon Valley and
  models and
    culture, 88–89
    firms, 87–88
    universities, 85–87
  replication of, state, importance of, 81–89
Space programs, science and technology, public sector current issues and, 71–73
Spectrum band oversight issues, third generation (3G) mobile wireless and, 249–251
Spyware, 321
Standardization, technology and, 535–536
Start-up phase, organizational evolution and, 802–804
State government, Federal Government, technology transfer from, 164
State
  control/direction orientation, 84
  outcomes driven, 84–85
  problematic effects, 83–84
  short-term focus, 85
  Silicon Valley and, concept of, 82–85
  Southeast Asia and, 82–85
Static data, 541
Stevenson-Wydler Technology Innovation Act, 51, 165–167
  cooperative research and development agreements (CRADAs), 51
Strategic information, national security and, importance of back-ups, 580
Summit meetings, U.S. Government and China, 267
Supercomputers
  sales to Russia, weapon development, 602–605
  virtual bomb, 602–614
Supply and demand, information technology workers and, 233–235
Surveillance technology
  intercepted economic material, types, 593
  law enforcement and, 597–600
    CALEA, 597–598
  legal context, 596–597
  methods and tools, 590–591
  regulatory context, 596–597
  systems in use, 591–592
Sustaining external medical technology, 545–546
Syria, missile technology and China’s sale to, 266
T
Tainter, Joseph, civilization’s collapse and, 26
Taiwan, U.S. Government and China, 268–269
Task-oriented effects, 710–712
Tax incentives, information technology labor shortages and, 242
Team-interaction effects, 710–712
Technical
  information
    delivery of, international negotiations and, 691
    international negotiations and, 688
  issues, negotiation support system, custom software, 738–753
Technological
  advancement
    Advanced Technology Program (ATP), 43
    background and analysis, 43–55
    Federal Government policy
      current legislation, 54–55
      current programs, 47
      Federally funded R & D, 50–53
      industry-university efforts, 49–50
      joint industrial research, 50
      legislative initiatives, 47
      National Science and Technology Council (NSTC), 45
      R & D spending, 47–49
      role in, 44–46
    government policy and, 42–55
    Manufacturing Extension Partnership (MEP), 43
  failure
    accident, concept of, 419–420
    concept of, 417–418
    correct response to, 422–426
    defining the problem, 416–417
    investigational aids, 426–427
      Apollo Root Cause Analysis, 426
      Multilinear Events Sequencing, 426
      Why-Because analysis, 427
    responding to, 415–417
    wrong responses, 415–422, 427
Technologies, definitions of, 543
Technology
  Acceptance Model, 757–758
  access to, public schools and, 120–121
  adaptation to, 9–10
  adoption, national security issues and, 576
  civilization and, collapse of, 25–31
  contribution to quality, 530–570
  control of, 4–5
    impact of surprises, 287–290
      nuclear accidents, 288–290
  cost of, public schools and, 120
  courts, California, 564–570
  culture of, 13–33
    new human context, 15–16
  definition of, 2–4
  development, Congressional Research Service (CRS) Reports and, 74–75
  electronic signatures, 333–334
  ethical
    considerations in, 6–8
    issues and concerns, 275–399
  evolution of
    biotechnology, 11
    nanotechnology, 11
  failure
    causation, traditional views of, 411–412
    designing a response to, 407–427
    historical background, 407–408
    managing and responding to, 407–427
    probable cause, concept history, 408–411
  future uses of, 11–12
  geeks
    growth of, 21–25
    managing of, 21–25
  government R & D portfolios, management of, 137–160
  impact on
    microelectronics, 8–9
    public schools, 119–120
  implementation, emotionality of, 518–527
  improving literacy of, 813–819
  information technology, compensation practices, 798–801
  internet
    access, public schools and, 121–122
    addiction, study of, 785–797
    disputes, adjudication of, 828–841
  issues, negotiating, 665–766
    e-market protocol, 722–737
    international, 671–693
    outsourcing transactions, 669–671
  knowledge management of, 819–828
  leadership forecasting (TLF)
    analytic philosophy of, 461
    background of, 460–496
    categories matrix, 464–465
    current assessment methodology, 462–464
    current baseline capability questionnaire, 479–481
    functional areas, 464–465
    generic indicator directory, 481–491
    integrated “technospace” awareness, 462
    mathematical evaluation procedure, 491–496
    matrix of, 465
    methodology, 453–496
      features of, 456
    participants, 461
    purpose of, 460
    requirements, 460–461
    warning, analytic methodology of, 465–479
  life-cycle costs, 7–8
  management
    approaches to, 533
    court systems, 549–550
      needs and implementation tactics, 551–557
      records management, 557–562
    information technology labor shortages, 232–246
    information utilization, 541–543
    investing in, 533–534
    national security
      cyber-terrorism, 580–583
      Department of Defense view of, 577–580
      strategic information, 580
    organizational goals
      aligning of, 534–541
      eBusiness, 536
      good tech vs. high tech, 532
      information technology types, 532–533
      management approaches, 533
      standardization, 535–536
    oversight of, 537–541
      platform minimization, 538
      software utilization, 538–539
  managing change of, 12
  Moore’s law, 8–9
  persuasion in using, 16–20
  platform minimization, 538
  politics and, impact on, 10–11
  professions and, 773–841
    organizational evolution, 801–812
  public schools
    curriculum development, 125–126
    Federal role in, 131–133
    Federal support of, 116, 126–131
      Federal Communications Commission, 130–131
      misc. programs, 131
      National Aeronautics and Space Administration (NASA), 130
      National Science Foundation, 130
      U.S. Department of Agriculture, 129
      U.S. Department of Commerce, 129
      U.S. Department of Education, 126–129
    interest in, 117–118
    major issues, 118
    status of, 116, 118
  public sector and, 35–217
    government policy, 42–55
    perspectives
      distance education, 112–116
      European Union, 112–116
      genetically modified organisms (GMOs), 93–100
    R & D portfolios, management of, 137–160
    Silicon Valley, development of, 77–89
  quality of, medicine and, 544–548
  science vs., 2–5
  standardization, 535–536
  training, public schools, 124–125
  transfer
    Federal Government and
      interest in, 162–163
      patents, 170–171
      promotion efforts, 164
      small businesses, 171
      state government, 164
    Federal Laboratory Consortium, 164–165
    government to private sector, 161–173
    missile, China, 173–213
    mobile wireless, third generation (3G), 247–252
    public sector, cyber-management of, 222–231
    science and technology, public sector current issues and, 63
    weapons of mass destruction (WMD), policy issues, 252–273
  transition of, 101–111
  types in use, public schools and, 122–124
  understanding of, 5–6
Technology and science
  non-technologists and, 775–784
  public’s knowledge of, 775–780
  teaching to non-technologists, 780–782
    principles to follow, 782–784
Techno-teams, 444–449
Telecommunications
  Congressional Research Service (CRS) Reports and, 75–77
  science and technology, public sector current issues and, 64
Television, children’s viewing and supervision of, 282–287
Testing, genetically modified organisms (GMOs) and, 94–95
The Body Shop, ecopreneurs and, 292
Theory of Planned Behavior, 757–758
Third generation (3G) mobile wireless, 247–252
  definition of, 247
  oversight issues, 249–251
  regulatory issues, 249–251
  standards of, 247–248
  technology standards, 248–249
TLF. See Technology, leadership forecasting (TLF).
Training
  information technology labor shortages and, 242–243
  technology, public schools, 124–125
Transfer of technology
  Federal government to state government, 164
  government to private sector, 161–173
Transit time cost, distance and, 626–629
Transition functions, organizational evolution and, 802–804
TV rating system
  children’s television viewing and, 283–284
  Congressional involvement, 285–286
  current system, 284
  revised, 284–285
U
U.S. Congress
  assistance for, information technology labor shortages, 239–242
  electronic signatures and, 334–335
  involvement in TV rating system, 285–286
  legislation for, information technology labor shortages and, 232–246
  missile technology transfer, China, reaction of, 186–187
U.S. Department of Agriculture, Federal support of technology in public schools, 129
U.S. Department of Commerce, Federal support of technology in public schools, 129
U.S. Department of Education, Federal support of technology in public schools, 126–129
U.S. Government
  defense policy, China, 267–269
  economic controls, China, 269–273
    capital markets, 270–271
    export controls, 272
    exportation of satellites, 269
    import controls, 271–272
    nuclear cooperation agreement, 271
    sanctions, 269–270
  electronic signatures and, commercial standards, 336
  foreign policy, China, 267–269
    counter-terrorism campaign, 267–268
    export control assistance, 268
    summit meetings, 267
    Taiwan links issues, 268–269
  policy issues, China
    arms control, 272–273
    nonproliferation agreements, 272–273
    weapons proliferation and, 266–267
  reaction of, missile technology transfer, China, 184–187
Understanding change
  administrative IT culture and, 434–436
  discretionary decision-making, 440–449
  logic of change and context, 438–440
  organizational transformation, 440
  traditional leadership thinking, critique of, 436–438
Unemployment rate, information technology labor shortages and, 236–239
United States, genetically modified organisms (GMOs), regulations, 98–99
Universities, Silicon Valley models and, 85–87
University-industry efforts, technological advancement and, 49–50
V
V-chip control
  children’s television viewing and, 283
  foreign countries, 286
Virtual bomb, 605–614
  modeling nuclear explosions, 606–608
  verification of, 608–611
Virtual leadership, 444–449
Voting technology, science and technology, public sector current issues and, 67
W
Wage increases, information technology labor shortages and, 238–239
Weapons of mass destruction
  China
    entities sanctioned for weapons proliferation, 253–254
    nuclear technology sales
      Iran, 259–261
      Pakistan, 256–258
    proliferation of, 255–256
  India and Pakistan, 636–663
  India, summary of, 636–638
  Pakistan and India, 636–663
  Pakistan, summary of, 650–651
  policy issues, 252–273
Weapons proliferation
  China, entities sanctioned for, 253–254
  U.S. Government
    policy issues, 266–267
    relations with China, 267–269
Web
  hosts, number of, 499–501
  pages, number of, 501–502
  sites, number of, 499
Why-Because analysis, 427
Wireless
  standards, third generation (3G), 248–249
  technologies, science and technology, public sector current issues and, 65–66
World Trade Organization, genetically modified organisms (GMOs) and, 98
Wrapping of care, economics of external medical technology and, 547–548