The DIGITAL HAND
JAMES W. CORTADA
The DIGITAL HAND
Volume III: How Computers Changed the Work of American Public Sector Industries
2008
Oxford University Press, Inc., publishes works that further Oxford University's objective of excellence in research, scholarship, and education. Oxford New York Auckland Cape Town Dar es Salaam Hong Kong Karachi Kuala Lumpur Madrid Melbourne Mexico City Nairobi New Delhi Shanghai Taipei Toronto With offices in Argentina Austria Brazil Chile Czech Republic France Greece Guatemala Hungary Italy Japan Poland Portugal Singapore South Korea Switzerland Thailand Turkey Ukraine Vietnam
Copyright © 2008 by Oxford University Press, Inc. Published by Oxford University Press, Inc. 198 Madison Avenue, New York, New York 10016 www.oup.com Oxford is a registered trademark of Oxford University Press All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of Oxford University Press. Library of Congress Cataloging-in-Publication Data Cortada, James W. The digital hand. Volume 3, How computers changed the work of American public sector industries / James W. Cortada. p. cm. Includes bibliographical references and index. ISBN 978-0-19-516586-9 1. Information technology—Economic aspects—United States—Case studies. 2. Technological innovations— Economic aspects—United States—Case studies. 3. Business—Data processing—Case studies. I. Title: How computers changed the work of American public sector industries. II. Title. HC110.I55C67 2005 338'.064'0973—dc22 2004030363
9 8 7 6 5 4 3 2 1 Printed in the United States on acid-free paper
To three visionaries who have done so much to help young historians study the history of information technology: Erwin Tomash Arthur L. Norberg William Aspray
PREFACE
The object of government is the welfare of the people. —Theodore Roosevelt, 1910
The literature on how organizations in the private sector go about their day-to-day work is almost always silent about what workers who are employed by government agencies do. Business and economic historians, professors of management, and consultants who comment on government treat the public sector of the economy as "different"; hence they often bypass it in their studies of how work is done. Government officials and their employees contribute to this situation by reinforcing the notion that the public sector does things differently and plays a unique role in society. Agencies are not in the business of making profits but, rather, of facilitating and protecting the welfare of the nation. But as a result of these attitudes, they all ignore some basic realities, not the least of which is that the daily work of public officials is often just the same as in the private sector. Because this simple truth is so often overlooked, we have a paucity of research on how the operational practices of day-to-day work in governments and other public institutions compare to those simultaneously in use in the private sector. We also face the problem that studies of work practices in the public sector are far fewer in number than those of the private sector. In addition, government agencies prefer to report more frequently and thoroughly on the activities of other industries, and of the economy as a whole, than about themselves. Yet as the first chapter of this book demonstrates, the public sector is large, and depending on how public workers are counted or their budgets tabulated, the largest within the American economy. Does the paucity of studies about the role of the public sector in modern society mean that we are left with the interesting possibility that a massive portion of the American economy functions differently from the private sector? The question is an intriguing one that we cannot fully answer in this book, but by looking at the day-to-day activities of a variety of government agencies,
we can feed the dialogue with a considerable amount of new details. Why do we care, one might ask? Simply put, the public sector today consumes between 19 and 21 percent of the Gross Domestic Product (GDP) and there are probably only a handful of citizens who believe they are getting as much value from their taxes as a similar amount might yield in the private sector. However, by examining the role of computing and telecommunications in the daily operations of various government agencies and public institutions, we can specifically identify how the work of officials and public employees plays out in the economy. We will see that many of the operational practices, and uses of computing, were for the same applications and reasons as in the private sector. As I did with so many other industries reviewed in the first two volumes of The Digital Hand, in this book I identify patterns of operations in the public sector that are remarkably consistent with the experiences in the private sector. For the truth is, public officials sought to use the new technologies of the second half of the twentieth century for many of the same reasons as managers in the private sector: to reduce operating costs, to increase throughput, to perform new work, and to provide new services. To be sure, their missions varied; they set policies and implemented them, regulated parts of the economy and specific industries, and played a vital role in nurturing the development of new technologies in support of the national economy, such as development of the computer itself. In the case of the computer, no government in the world did as much to support the rapid development of this technology over such a long period of time as did a combined collection of American government agencies. But our story is going to be about government as a user of the technology, not its inventor. We look at how public institutions acquired technologies and for what purposes. I describe the applications to which these were put and the expectations officials desired. When possible, results and consequences are explored. To be quite formal about the scope, however, the story begins in about 1950, when computers were coming to the attention of public officials, and concludes with discussions of trends and events occurring in 2007. There are some fundamental findings to report. First, while we can quibble about when a public institution adopted a specific use of computing, or how that compared to when the same occurred in the private sector, as a general statement public institutions embraced specific uses of computers at roughly the same time as did the private sector. Of course, there were exceptions, and those are pointed out quite explicitly. But the general finding I consider proof positive that management in the public sector learned about the continually evolving technology, and of its possible uses, at approximately the same time as the private sector. Furthermore, at approximately the same time, and through a similar process, they, too, made decisions to acquire, use, and replace technologies. In short, they were very much as plugged into the flow of information about the technology as any other part of the economy and were just as enamored with the managerial practices of their day as their counterparts in corporations and small companies. Patterns of uses of computing paralleled those evident in other industries. Four in particular are significant. First, initial uses involved shrinking or
containing costs of labor and served largely to manage more efficiently accounting functions. As new uses of computing became possible, similar applications appeared in governments, schools, and health facilities to do data entry (1960s), word processing (1970s), database management and to use bar codes (1980s) and the Internet to reach the public in the 1990s and beyond. Second, as in private sector industries, applications specific to a mission of a public industry also stimulated use of this technology. Just as ATMs were created for the specific use of the Banking Industry, specific computer devices were developed for the public sector, such as command-and-control software and hardware embedded in weapons for the military and distance learning for education. Because of the massive purchasing power of public institutions, they could impose technological standards that filtered into the private sector and, in time, became dominant and ubiquitous. As this book goes to press in late 2007, the current example is the deployment of RFID technology in business supply chains, because the U.S. Department of Defense mandated application of the technology early in the new century. This is no different from what General Motors did to its thousands of suppliers in the 1970s and 1980s when it set standards for electronic sharing of information or the insistence of Wal-Mart that its suppliers conform to specific technical standards and use applications in support of the large firm’s mission in the 1980s and 1990s. Third, public institutions proved as aggressive as private industries in embracing digital technologies and the constantly arriving new tools. To be sure, there were exceptions, where adoption proved slow, or slower than other industries; but even here it was for the same reason: a specific technology or application either did not yet function well enough, was simply not cost effective, or the budgeting process proved slower moving than in a corporation. In short, I found no intrinsic feature of governmental operations that constrained the desire to use the technology. Budgets always served as a major influence in setting the pace of adoption and change, but as a general statement, that, too, has always been the case in the private sector as well. Because public institutions can have such a pervasive effect on many industries (if for no other reason than size), these agencies could and did push ahead of the private sector in the use of communications and digital technologies, sometimes leading the way. For example, the early versions of what eventually became known as the Internet were used almost exclusively by universities and government agencies for over a decade before the private sector began to integrate this new form of communications into the fabric of their work and business strategies. Fourth, the extent of use of the digital and communications technologies proved substantial. We know a great deal about rates of adoption. Often these equaled or exceeded those of many industries in the private sector. When I began working on this book, I hypothesized that public institutions were really far more different from private ones, and to be sure there were and continue to be important differences between them. However, the evidence led me to conclude that when it came to using information technologies (IT) over the past half century, they were more similar than different. I address the reasons for that, and
the implications and lessons derived, more thoroughly in the final chapter. But one practical managerial lesson can immediately be called out: private sector industries can and should learn how to innovate their use of their information technologies from the public sector. Conversely, public sector managers can continue to do that as well. I say can continue because the historic pattern is that they have been more willing to learn from the private sector how best to use computing than have company officials from government cohorts. Finally, and perhaps most important, the continuous, unrelenting, and ubiquitous adoption of computing proved so massive that by the end of the century how people did their work in public sector industries had fundamentally changed in both their look-and-feel and in what they actually did. Old services and functions were performed differently while new activities took place and, in the process, led to a new style of doing things. In the two earlier volumes of this study, I describe in detail the features of the new digital style, but essentially my argument has been that the changes were as substantial as when, for instance, manufacturing companies went from making things with craftsmen to mass production, in part facilitated by the availability of electricity and electrical motors. Today, professors in many fields cannot do their research and writing without the use of computers; Internet crime could not be committed until the wide availability of this new communications tool; terrorists could not coordinate activities without benefit of cell phones and laptops; audits of tax returns for larger percentages of today’s population would not be possible without use of computers. In the Gulf War of 1991, the United States’ military headquarters was not in Basra, Iraq, or even in Kuwait; it was in Florida, again made possible by advances in communications and computers not available decades earlier. Quietly, in an evolutionary way, technologies seeped into every corner of the public sector of the economy; and while the changes were incremental, they had such a cumulative effect that by the end of the century we could understand why so much of the hyperbole about “computer revolutions” could be articulated by observers of the American scene. The findings are derived from the study of some, but not all, public institutions. So, we need to understand the scope of the book you are holding. Slices of like-missioned portions of the public sector act very much as if they were independent well-defined industries, much as companies in similar markets relate to each other within an industry. Because of that common behavior, it became quite easy and practical to discuss similar uses of computers and mutually reinforcing behaviors in the public sector as we might in the private sector. For that reason, therefore, half the chapters in this book are organized much like those in the prior two volumes of The Digital Hand. Thus, there is a chapter on higher education, since colleges and universities identify with each other and have a shared set of uses and experiences, including with computing and communications. The same applies to kindergarten through high school (K–12) and their school districts. Local governments (towns, cities, counties, and states) identify with each other and, as K–12 and higher education, have their own industry publications, conventions, associations, and so forth. So a chapter is devoted to local governments.
Other chapters, however, are organized around common activities that transcend local, state, and federal organizations; the individuals and agencies that work within these lines of shared activity cooperate so closely that it made more sense to discuss together the uses of computing they shared. Law enforcement, the military, and tax collection are examples and thus are treated individually. In the case of law enforcement, that means discussing together the work of city police, county sheriffs, state police, and the Federal Bureau of Investigation (FBI), rather than providing individual histories of law enforcement in specific agencies. As with any industry, there is a common identity among law enforcement professionals or tax collectors, complete with their own industry-centric conventions and publications and so forth. Finally, there is the proverbial "everything else," which I have chosen to ignore, because those activities either do not add significantly to the story I am telling or their uses of computing are more commonly evident across all agencies. Legislatures, for example, are today extensive users of e-mail and online applications that access data about legislation in process or files on constituents. So, I include a chapter that begins to document the cross-agency types of applications, because as a group they illustrate the extent of deployment of computing and communications technologies in the public sector.

My approach is a happy compromise between my desire to write histories of such organizations as the U.S. Department of State or the State of California and the need to describe enough of the public sector landscape so that at the end of this book you and I have confidence that we have a reliable enough understanding of the patterns of deployment and the effects of computing on the daily work of public officials to draw some conclusions about such matters as patterns of adoption of various applications and their effects on organizations. This hybrid approach stands in sharp contrast to how government officials look at themselves. Go to any agency report, Web site, or history and the first thing you will notice is that all accounts are organized as studies or histories of a specific agency or department. This would be tantamount in the private sector to writing accounts of a specific company rather than of an industry, or of a division within a large corporation. So, for some readers in the public sector, my approach may seem unnatural—and it is—but perhaps also enlightening, particularly in light of the fact that as this book goes to press, government agencies are outsourcing their work to other agencies and to the private sector, which, in the process, forces one to view the work of a public sector organization differently, more in line with the supply chain- or value chain-centric perspectives so widespread in the private sector. The Defense Department (DoD) has outsourced a vast amount of its nonmilitary work and IT; the U.S. Postal Service (USPS) formed a partnership with the private sector delivery service UPS in 2004; other cases could be cited, such as private firms doing trash pickup or snow plowing for a city. The point is that practices so evident in the private sector are seeping into the public sector, a key finding regarding the use of computing as well. In earlier volumes, I noted that practices in the public sector came into the private sector. In this book, we see occasionally the movement of ideas and practices from private to public sector.
Finally, I had to decide what to do about healthcare. That entire arena of economic activity consists of government regulators (e.g., Food and Drug Administration), providers of services (e.g., state hospitals and U.S. Department of Veterans Affairs’ hospitals), payers (e.g., Medicare and Social Security Administration, and even the U.S. Treasury Department that handles the flows of payments, such as checks). At the same time, private health insurance companies, doctors, clinics, hospitals, and benefits departments of corporations are involved. Universities (most of which are public institutions and yet others private) also do research and staff clinics and hospitals. In short, healthcare is a messy space in the American economy, ill defined, and one that flows across various private and public sectors. After carefully weighing the criteria for what was “in” and “out” of scope for this volume, I concluded that healthcare in America straddled many of the industries and agencies already described in the three volumes, so there would be little gained in devoting a full chapter to the subject at this time. The federal, state, and local agencies selected for inclusion in this book were portions of governments that were extensive users of IT, or otherwise influential in the use of computing by other public and private sector organizations. Thus, there is much discussion about military uses of computing, the role of the FBI, and so forth, but near silence about many parts of the U.S. government, such as the intelligence community (e.g., CIA and NSA) and the Department of Interior. At the state level, examples and trends are drawn from many states, but not all states are discussed. The same applies to schools, colleges, towns, and cities. Any one of these other entities could easily be the subject of their own article, chapter, or book, and I would encourage others to do research on the role of technology in specific departments and agencies so as to enrich our understanding of the role of the digital hand in the public sector. So as not to write a 3,000-page book, I chose to highlight agencies and examples that illustrated patterns rather than to strive for completeness. Nobody will read, let alone publish, the size of book the story would otherwise require. I chose the title The Digital Hand primarily to call attention to the insufficiently recognized influence of computing on how organizations, companies, industries, and national economies changed the way they worked. Changes came about as a result of this new class of technologies that entered the lives of people during the second half of the twentieth century. I will be the first to admit that many factors influenced the way work changed and organizations evolved in this period; the full story is not just about the consequences of computing or telecommunications. However, to draw our attention to the important role of technology I have chosen an overt title. The title is also a tip of the hat to the work of distinguished historian Alfred D. Chandler, Jr., whose analysis of the rise of the managerial class in business—his Visible Hand—so expertly described the rise of corporations and the new nature of work since the mid–nineteenth century. My work is not intended to replace his—Chandler worked on a larger canvas than I—rather to pile on to his suggested lines of thinking by considering only one facet: the role of IT in the professionalization of work and management, and the evolution of strategy, structures, and scope of organizations. Once we
understand this source of important influence on the work of industries, then future historians can place IT into a more proportional context that takes into account such other important influences as monetary policies, regulatory practices, politics, war, and environmental effects. I point this out because although in the two prior volumes of the Digital Hand I so noted my intention, it is easy to forget that other influences were at work; I just do not have the space in this book to discuss those and explore the role of IT at the level of detail required to get a realistic understanding of its influence. So I will brave the risk of a critic arguing that I am too narrowly focused. In the spirit of true candor, I should note that Professor Chandler’s ideas influenced enormously my views on business history; we also were friends and have collaborated in examining how computing and other information technologies came into American society over the course of more than two centuries and influenced modern business. In the small world of business historians, scholars take sides either in support of his views or critical of them. So there is no doubt about my worldview: I am a Chandlerian. Also in the spirit of candor, I have not asked him to critique or bless this project; this is a work of my own doing and the design of its organization and content a result of what my own research has suggested made sense. While this book was in production, the sad news came in that Professor Chandler had died at the age of eighty-eight. His voice has been silenced, but I hope that his influence will continue to energize historians and economists. For the same reasons of controlling scope when I looked at over thirty industries in the private sector, I focused only on the United States, clearly the earliest and most extensive user of computers in the world (a situation rapidly beginning to change as computing spreads around the world). So, this historical example has much to teach those who ponder the role of computing in Asia, the greater Europe (not just the European Union), Latin America, and parts of Africa. For the research methodology, I direct the reader to the first appendix in volume one that studied manufacturing and retail industries as it is the same one used in this book. I want to explore in this book some of the same issues analyzed in the two prior volumes. I want to know when computing came to the public sector and for what uses. How did that happen and why? We know, for example, that for the entire period federal agencies spent more on computing than many private sector industries. Why? How? To what extent did industry-centric applications influence the types and rate of adoption of computing from one industry to another? How much influence did practices in one industry have on another, for example, computing in banking on government, or the military on manufacturing firms? In manufacturing, we found that over time computing and telecommunications linked suppliers to large manufacturers in tight codependent relationships that today would be impossible to imagine breaking. Has that been happening in any of the public communities studied in this third book? A series of questions can also be asked regarding how computing affected economic activities in the public sector as compared to influences in the manufacturing and services sectors. The immediate answer is that the digital affected public sector industries in similar
ways to manufacturing, retailing, and services, and simultaneously in different ways. Finally, what are the implications, particularly for management in the fast-growing parts of the public sector, for the look and feel of contemporary American society? The endnotes are rich in the detail required to document my sources. Where the literature on a subject was not so well known, as in the case of the Internet, I have added material to help those who wish to explore further themes discussed in the book. The subject of this book is vast, and there is a growing and large body of contemporary material on the subject that can be examined. I hope that others can flesh out details I could not address in the trilogy. As with the previous volumes, I have had to be almost arbitrary in selecting materials and portions of the public sector to examine, and to limit the length of the discussion to keep it within the confines of one book, because I have had to write about a very broad set of issues. In many instances, I have had to generalize without fully developing explanations. For this I ask the reader to forgive me, or at least to understand. The views I express in this book, and the weaknesses you encounter, are not those of the good people who helped me, my employer (IBM), or my publisher.

This book would not have been possible to write without the help of many people. Nancy Mulhern, an expert on government publications and librarian at the Wisconsin Historical Society and University of Wisconsin, did more to shape my research agenda, and the research strategy underpinning the project, than I can ever fully describe. Most historians stand in terror in front of hundreds of shelves of obscure government publications; she made those reports my friends and allies. James Howard, an archivist for the U.S. Air Force, introduced me to the study of the history of the modern military establishment, while historian Alfred Goldberg, also of the Department of Defense, proved helpful in teaching about how the department had evolved. At the Bureau of the Census, I received help from David M. Pemberton and William Maury, while at the Social Security Administration Larry DeWitt did the same. Megaera (Meg) M. Ausam and James Golden, both at the United States Postal Service, shared information, insights, and then critiqued my discussion of the USPS, all to the betterment of the book. At the IRS Terry Lutes was of extraordinary help in ensuring that I got my facts right, and a highly experienced private sector tax preparer, Linda Horton, advised me on the role of computing in tax collection and preparation. Dr. William J. Reese, at the University of Wisconsin-Madison, provided advice on my work on education in America. Dr. Richard N. Katz, vice president of EDUCAUSE, was very helpful in advising me on how to deal with the very large issue of computing in higher education. I received constant help about the role of the Internet from Professor Shane Greenstein at the Kellogg School at Northwestern University, while it was historian Walter Friedman at the Harvard Business School who gave me various opportunities to articulate my thoughts and findings, valuable tests before solidifying them in this book. Paul Lasewicz, IBM's archivist, and his staff kept sending me materials for years that simply were of enormous value and unavailable elsewhere. The staff at the Charles Babbage Institute, the world's largest research center and archive
devoted to the history of computing, did the same. To put things in perspective, each archive provided me with several thousand pages of archival material and had tens of thousands of pages of additional records I simply could not find time to read. Many agencies have archival materials relevant to the kind of study I did that are not catalogued but that they are willing to allow historians to study; to so many of them, my thanks. Finally, I want to publicly thank the editorial and production team at Oxford University Press. You know you are working with a world-class organization when it can handle such a large project as if it were a routine event and yet make the author feel, as I did, that he or she was its sole contributor. This project represented an enormous investment of time, staff, and budget on the part of the press; it bet that a business manager could write a large and useful history consistent with the press’s standards and mission. And like my wife Dora, who had to tolerate my discussions about the digital hand for over fifteen years, editors and the production staff quietly kept me grounded and focused on the project. Despite the help of so many people in various forms, this book may still have weaknesses and failings, and they remain my responsibility. Finally, I want to say something about the dedication. Erwin Tomash was an executive in the world of information processing in its formative years and was the first individual to recognize that the history of computing needed to be preserved and told professionally. He established both the Charles Babbage Foundation (CBF) and the Charles Babbage Institute (CBI) at the University of Minnesota, which has helped dozens of historians learn about the history of computing and equip them with the skills to conduct research on the subject. Professor Arthur L. Norberg, of the University of Minnesota, devoted his career to the study of the history of computing, modeling the way for many of us, and ran CBI for many years in what clearly was a brilliant manner. Finally, I want to publicly recognize Professor William Aspray, of Indiana University, who also is a distinguished historian of computing but who has perhaps done more behind the scenes than anyone in the world to mentor young historians, advise senior ones, and, in the process, fundamentally shape the research agenda of a generation of historians of computing.
CONTENTS
1. Presence of the Public Sector in the American Economy
2. Digital Applications in Tax and Financial Operations
3. Digital Applications in Defense of the Nation
4. Digital Applications in Law Enforcement
5. Digital Applications in the Federal Government: The Social Security Administration, the Bureau of the Census, and the U.S. Postal Service
6. Role, Presence, and Trends in the Use of Information Technology by the Federal Government
7. Digital Applications in State, County, and Local Governments
8. Digital Applications in Schools
9. Digital Applications in Higher Education
10. Conclusions: Patterns, Practices, and Implications
Notes
Bibliographic Essay
Index
The DIGITAL HAND
1
Presence of the Public Sector in the American Economy

No single technological advance has contributed more to efficiency and economy in government operations than the development of automatic data processing equipment. —Howard D. Taylor, 1965

The very fact that the services of general government are not sold means that there is no market valuation in the conventional sense and no prices whereby the estimated value of output might be deflated. —John W. Kendrick, 1961
John W. Kendrick published one of the earliest comprehensive studies about productivity in the American economy in 1961. Like so many economists who have studied the role of government and other sectors, he faced difficulties in measuring its performance. The above quote reflected a specific problem he faced, yet one that remains with us today. He recognized the importance of understanding the public sector by using techniques similar to those deployed by economists to observe the private sector. However, Kendrick also concluded that alternative means of understanding the economic impact of the public sector was possible; his message permeates this book, beginning with this chapter.1 The importance was reinforced by Howard D. Taylor in the quote above because computing was seen by public officials as an important tool for the improvement of productivity in the public sector since the 1950s, and continues to be so
viewed at the dawn of the new millennium. At the time of his statement (1965), Taylor was the Regional Director of the U.S. Internal Revenue Service in New York and thus was one of those public officials. He further noted that “today’s largest—and first—user of ADP for business management purposes” was the federal government.2 At the dawn of the twenty-first century, the combined total of the nation’s GDP from town, city, state, and federal governmental portions of the public sector totaled approximately 19 percent (state and local was about 12 percent, federal another 7 percent). Over the decades, the percents went up or down a percentage or two, often as a result of whether the nation was at war or not. Those elements of the economy generally employed about 16 percent of the nation’s total workforce, making it the second largest component of the economy after all of services and surpassing manufacturing. While the public sector had grown during the second half of the twentieth century, even in 1950 it already was a major component of the economy, the result of New Deal agencies of the 1930s expanding in size, along with the growth of the federal government during World War II and in the early days of the Cold War. The GI Bill also pumped millions of dollars into higher education, beginning in the late 1940s. That infusion of funds was augmented by additional billions in support of research conducted by higher education during the 1950s through the 1980s, largely in direct response to Cold War requirements. State, county, city and town governments also expanded during the entire period for myriad reasons. The volume of taxes collected grew over the entire period as a direct result of an expanding, generally prosperous economy and in response to the need for funding in support of government’s increasing activities, most notably in support of the Cold War and later the “War on Terrorism.” The percent of the total economy dedicated to taxes (called receipts in government parlance) did not vary wildly across the half century, but expenditures did as various governments and administrations expanded or shrank their use of deficit spending. Public officials intertwined deployment of every major information technology in the creation and operation of the functions of government and in expenditure of funds over the half century. Early on (1940s–1950s) deployment of IT focused on building out use of calculators and adding machines—the existing “high tech” devices of the day—for applications identified as useful before World War II. Governments and educational institutions were already extensive users of these kinds of information-handling equipment. In the case of IBM and its predecessor companies, for example, the U.S. government had been their largest customer since the 1890s.3 State and local governments started using adding machines and typewriters before World War I. For most suppliers of “office appliances,” and later computers, software, and services, the same held for them; namely, public sector organizations were typically their largest customers or, at least, one of their biggest market segments. The larger an institution was, the more that proved to be the case. So, federal agencies remained the most important, while individual schools and their districts were the least significant, as measured by volume of sales.
This pattern of size affecting extent of deployment of IT mimicked what occurred across the economy at the same time, particularly in large organizations and companies. For while I have demonstrated elsewhere that use of IT permeated all sizes of organizations, the fact remains that the largest enterprises and government agencies tended to be the earliest and most extensive users of all forms of IT equipment throughout the twentieth century.4 Since the public sector had some of the largest organizations in the economy, it should be of no surprise, therefore, that it would absorb a substantial quantity of every new form of IT to come along during the second half of the century and, in particular, computers and telecommunications. Their large size simultaneously created economic incentives to look for ways to control costs, while providing additional quantities of services to the expanding population in the United States, and even new ones. Operating within the context of a growing, prosperous economy also added other influences as well.5 Thus, to begin understanding the kinds of uses to which governments and other public institutions put IT and telecommunications, we need to appreciate how big this sector was and how the various components comprised its grand total. To simplify the exercise, one can look at the number of employees in the sector, tax revenues, and expenditures over time. Those data points collectively give us a quick snapshot of some macro trends that over time proved influential in the adoption of IT in this sector. Subsequent chapters describe specific uses of various technologies, rationale for their adoption, and where possible, extent of deployment. One trend that each chapter spells out is how uses of computing came in incremental waves, one washing over the other as new generations of technology appeared on the market or the consequences of prior uses manifested themselves, contributing to the gradual evolution of many work functions over time. Eventually, the presence of computing technology embedded in the activities of various agencies contributed to the extent of growth or shrinkage of an organization, to how it was organized, what its budgets could be, and what services it might provide. So, we will see agencies initially install computers and other IT to drive down costs and improve efficiencies and eventually arrive at a point where how government organized itself was driven in important ways by the capabilities and effects of the technology. It is a process still under way, unfolding as this book was being written. One quick example will have to suffice to illustrate the point. When the 9/11 Commission studied the causes of the terrorist attacks of September 2001 and made its recommendations about what to do going forward, it focused largely on the failure of various agencies to share information. One of its most important recommendations to the president was to consolidate various intelligence and law enforcement agencies so that they could share information and gain access to each other’s databases. The commission’s final report read as much like an IT organization strategy as it did a blueprint for responding to terrorists. Several quotes from the report’s key findings make the point: The U.S. government has access to a vast amount of information. When databases not usually thought of as ‘intelligence,’ such as customs or
immigration information, are included, the storehouse is immense. But the U.S. government has a weak system for processing and using what it has. Recommendation: Information procedures should provide incentives for sharing to restore a better balance between security and shared knowledge. We propose that information be shared horizontally, across new networks that transcend individual agencies. A decentralized network model, the concept behind much of the information revolution, shares data horizontally too. Agencies would still have their own databases, but those databases would be searchable across agency lines.6
How public officials could reach the juncture of being so dependent on IT is a story that has its origins decades earlier, when computers first became available. But before we tell that story, we need to know what made up the public sector.
The Federal Presence

The federal presence in the American economy grew all through the period. From 1950 through the early 1990s, the number of civilian and uniformed military employees grew, as illustrated in table 1.1. During the entire period, however, the number of federal employees per 1,000 employees in all sectors of the American economy actually declined. In the early 1960s, for example, there were about 13 federal employees for every 1,000 employees across the economy, a figure that peaked at 14 in 1970, then declined to between 12.4 and 12 in the 1980s, before dropping to 10 in the 1990s, and in the new century to numbers ranging from 9.1 to 9.4. Yet, if we add in state and local government personnel, who constituted roughly half the employment base for all governments, the number per thousand workers employed essentially doubled. For most of the period, military personnel made up about half of the federal government's total. The reason for so many military employees grew largely out of a combination of Cold War requirements, needs of the Vietnam War, and later conflicts in the Middle East and Afghanistan. There were more people in uniform in the 1960s (over 3 million)
Table 1.1 Federal Government Employees, 1950–2005 (thousands)

Year        1950   1960   1970   1980   1990   2000   2003   2005
Employees  3,421  4,875  6,085  4,965  5,234  4,129  4,210  4,196

Note: All annual figures include uniformed and civilian employees. All tables in this chapter include data for 2005, the latest year for which information was available.
Source: U.S. Department of Commerce, Bureau of the Census, Historical Statistics of the United States: Colonial Times to 1970 (Washington, D.C.: U.S. Government Printing Office, 1975): Part 2, pp. 1102, 1141; U.S. Government, Fiscal Year 2005 Historical Tables of the U.S. Government (Washington, D.C.: U.S. Government Printing Office, 2004): 301; "U.S. Employment and Labor Force," http://www.dod.mil/comptroller/defbudgt/fy2005/fy2005_greenbook.pdf (last accessed 3/06/2006).
than in the 1980s (averaging about 2.1 million); in the 1990s that number dropped (1.4–1.5 million).7 While analyzing population data can be a tricky proposition, the general trend is clear: the proportion of the total workforce in America in the employment of the U.S. government remained relatively constant, although the absolute numbers were high, ranging from over 3 million to just in excess of 6 million people.

A second way to understand the scope of the federal government's presence is by looking at taxes and other receipts over the period. Table 1.2 summarizes the totals over time. First, what becomes quickly obvious is that the absolute number of dollars coming into the federal coffers from taxes, fees, and fines is enormous and grew in volume over time. Second, and more telling, as a percent of Gross Domestic Product (GDP), these increased by some 50 percent from a low of 14.4 percent of the economy's output to nearly 21 percent, then declined as tax reductions initiated at the turn of the century took hold. Later in this chapter, we review similar kinds of statistics for state and local government, which also show a growing percent of GDP, from just over 5 percent of the total to an average of nearly 11 percent late in the century. In short, as measured by income, the federal government again appears as a major player in the economy.8 If we move to the other side of the balance sheet—to expenditures, which include deficit financing and hence can exceed income—we gain another view of how large a role was played by this government. The key data are displayed in table 1.3.

Table 1.2 Federal Receipts, 1950–2003

Fiscal Year    Total Government Receipts ($ Billion)    As Percentages of GDP
1950              56.6                                    14.4
1960             131.3                                    17.8
1970             288.9                                    19.0
1980             776.1                                    19.0
1990           1,032.0                                    18.0
2000           3,084.9                                    20.9
2003           2,927.6                                    16.5

Source: U.S. Government, Fiscal Year 2005 Historical Tables of the U.S. Government (Washington, D.C.: U.S. Government Printing Office, 2004): 288.
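As a rough consistency check on how the percentages in table 1.2 are derived (using the approximate 1990 GDP of $5.8 trillion cited later in this chapter, a figure that is not part of the table itself), the 1990 row works out as follows:

\[
\frac{\$1{,}032.0\ \text{billion}}{\$5{,}800\ \text{billion}} \approx 0.178, \quad \text{or roughly 18 percent of GDP.}
\]

Small differences from the published 18.0 percent reflect rounding; the official shares are computed from more precise GDP estimates.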
Table 1.3 Federal Expenditures, 1950–2003 (billions of dollars)

1950      42.6
1960      92.2
1970     195.6
1980     590.9
1990   1,253.2
2000   1,788.8
2003   2,157.8

Source: U.S. Government, Fiscal Year 2005 Historical Tables of the U.S. Government (Washington, D.C.: U.S. Government Printing Office, 2004): 289.
As impressive as are the raw statistics on number of employees and dollars taken in and spent, they do not tell the whole story, a tale that we will not discuss in this book. But, one should keep in mind that in some instances over the years, public funds went toward stimulating development of new technologies and science. One fundamental reason that, for example, computing advanced so rapidly in its technical evolution in the 1940s through the 1960s can be attributed to the millions of dollars spent by the U.S. government in support of early computing projects, and to outfit the military branches with computers and microelectronics embedded in weapons systems and avionics.10 That story has been told by many, so it is enough here to acknowledge a first and second order effect of expenditures on computing in America. Almost every computer development project of the 1940s and through the mid-1950s was funded almost entirely by the U.S. government. Many of the early components that comprised the Internet, before the availability of Web browsers, were also the result of federal largesse.11
State and Local Governments

State and local governments also held a prominent position in the economy during the second half of the twentieth century. As table 1.4 demonstrates, the number of public employees at the state and local level, from governors to teachers, police and park personnel, city workers and county game wardens, and all other government workers, remained high during the entire period. Furthermore, they substantially outnumbered federal employees, increasingly so as time passed (see table 1.1). While there were many reasons for this trend, not the least of which was the conscious effort of the federal government to transfer responsibilities for various programs to the states (which in turn often also moved duties and obligations to local government), the numbers were impressive. Now let us examine revenues and expenditures to round out our view of the size of state and local government presence in the economy. As table 1.5 indicates, the volume of tax receipts taken in during the period proved enormous. As a percent of GDP, income doubled over the half century. Why did the share of GDP double while dollar flows grew by nearly two orders of magnitude? All through the half century, the U.S. economy also grew enormously, as measured
Table 1.4 State and Local Government Employees, 1950–2005 (thousands)

Year         1950   1960   1970    1980    1990    2000    2003    2005
Employees   4,098  6,083  9,822  13,375  15,219  17,925  18,745  19,000

Note: The data also includes school employees.
Source: U.S. Department of Commerce, Bureau of the Census, Historical Statistics of the United States: Colonial Times to 1970 (Washington, D.C.: U.S. Government Printing Office, 1975): Part 2, p. 1104; U.S. Government, Fiscal Year 2005 Historical Tables of the U.S. Government (Washington, D.C.: U.S. Government Printing Office, 2004): 301; "U.S. Employment and Labor Force," http://www.dod.mil/comptroller/defbudgt/fy2005/fy2005_greenbook.pdf (last accessed 3/6/2006).
Table 1.5 State and Local Government Tax Receipts, 1950–2003

Fiscal Year    Total Government Receipts ($ Billion)    As Percentages of GDP
1950              17.1                                     6.3
1960              38.8                                     7.5
1970              96.1                                     9.5
1980             259.0                                     9.5
1990             615.0                                    10.7
2000           1,059.7                                    10.9
2003           1,145.3                                    10.6

Source: U.S. Government, Fiscal Year 2005 Historical Tables of the U.S. Government (Washington, D.C.: U.S. Government Printing Office, 2004): 288.
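A rough illustration of the arithmetic behind the question posed just before tables 1.4 and 1.5, namely why the GDP share of state and local receipts only roughly doubled while the dollar flows grew by nearly two orders of magnitude (the implied GDP growth factor below is inferred, not a figure drawn from the table):

\[
\frac{1{,}145.3}{17.1} \approx 67\text{-fold growth in receipts}, \qquad \frac{10.6}{6.3} \approx 1.7\text{-fold growth in the GDP share}, \qquad \frac{67}{1.7} \approx 39\text{-fold implied growth in nominal GDP}.
\]

That implied factor is consistent with nominal GDP rising from roughly $0.3 trillion in 1950 to about $11 trillion in 2003, approximate outside figures rather than data taken from the table.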
by Gross National Product (GNP) from $284.8 billion in 1950 to $503.7 billion in 1960 (in 1958 dollars), during years of early and limited adoption of computing in the public sector of the economy.12 Next, the economy as a whole expanded all through the years in which extensive use of computing occurred. If we switch to the preferable measure of GDP, in 1970 that totaled just over $1 trillion, grew to $2.8 trillion in 1980, then to $5.8 trillion in 1990, and in 2000, reached nearly $9.9 trillion. Even in the recession year of 2001, GDP climbed to $10.2 trillion. In short, the economy was large, expansive, and could comfortably afford to support a growing public sector.13 How much did state and local governments spend, that is to say, how much did they inject back into the economy, such as for the purchase of computers, software, salaries in addition to its myriad other obligations? Table 1.6 indicates the volumes and trends. For the most part, state and local governments spent what they received, often more as a by-product of laws and state constitutional
Table 1.6 State and Local Expenditures, 1950–2003 (billions of dollars)

1950      16.0
1960      34.6
1970      88.0
1980     250.2
1990     605.3
2000   1,004.8
2003   1,156.8

Source: U.S. Government, Fiscal Year 2005 Historical Tables of the U.S. Government (Washington, D.C.: U.S. Government Printing Office, 2004): 289.
Table 1.7 Number of Governmental Units by Type, 1952–2002

Type of Government      1952     1962     1972     1982     1992     2002
Federal                    1        1        1        1        1        1
State                     48       50       50       50       50       50
County                 3,052    3,043    3,044    3,041    3,043    3,034
Municipal             16,807   18,000   18,517   19,076   19,279   19,431
Towns & townships     17,202   17,142   16,991   16,734   16,656   16,506

Source: U.S. Department of Commerce, Statistical Abstract of the United States: 2002: The National Data Book (Washington, D.C.: U.S. Government Printing Office, 2002): 260.
requirements that they live within their means than because of some particularly disciplined approach to fiscal management. It is a good point to keep in mind because state and local governments did not have as much flexibility as the national government to spend more and thus felt a continuous, even greater pressure than federal agencies to find ways to slow or contain expenses. To that end, they turned frequently to IT for assistance. Since budgets for such things as computers and communications networks reside in departments and agencies, rather than in the hands of individual employees, it is useful to understand how many such organizations exist, just as it is useful to know how many companies there are in an industry. The data in table 1.7 was prepared by a U.S. government agency and documents how many such agencies there were at the local level. The information is also terribly misleading when it comes to state and federal agencies since there are literally scores in every state and hundreds in the federal government. Furthermore, the data leave out such other governmental units as the commonwealth government of Puerto Rico and other possessions of the United States in the Pacific. In subsequent chapters, more granular information is provided (such as the number of law enforcement agencies by type). That said, however, note the large number of county, town, and municipal entities (listed in table 1.7); over time they all became users of computers and telecommunications. Add in the various state government agencies, and one begins to see that the user base for computing throughout the entire period grew enormous. If we add in school districts and
Table 1.8 Total Number of Local Governmental Units, 1952–2002

Year      1952     1962     1972     1982     1992     2002
Total  116,756   91,186   78,218   81,780   84,955   87,849

Source: U.S. Department of Commerce, Statistical Abstract of the United States: 2002: The National Data Book (Washington, D.C.: U.S. Government Printing Office, 2002): 260.
other specialized districts and entities, the numbers remain quite high for the entire period. Table 1.8 documents that pattern. Note also that a combination of consolidations and closures of agencies occurred during the period; one can reasonably assume that agencies adding to their responsibilities those of a closed organization would have grown in size over time, putting them in a better position to want and afford digital tools. School districts consolidated the most during the half century into ever larger governmental units while the number of "special districts" increased as well. This latter category included such organizations as public housing authorities, and others that managed irrigation and power, often generating their revenues from rents and fees.
K–12 and Higher Education

The majority of schools in the United States during the second half of the twentieth century were publicly funded and run. Because computers were not used in any substantial quantity in schools until the early 1980s, when personal computers became widely available, it is more important to understand how many schools there were in the last two decades of the century than in prior decades. More important are the school districts, which, by virtue of being responsible for managing multiple schools, had the size and economic wherewithal to spend money on computers to handle such accounting and administrative functions as payroll, class assignments, and grades. Table 1.9 catalogs the number of elementary and secondary schools. The number became especially important in the 1990s when the Clinton administration launched its initiative to connect every school in the country to the Internet, over 80,000 of them, and their school districts, which added another 14,000 organizations into the mix (see table 1.10). Note that over time school districts consolidated into larger organizational units, even as the number of students rose from over 48 million in 1980 (as PCs were just starting to spread across the economy) to over 52 million in the 1990s.14 The number of teachers grew from just over 2.2 million in 1980 to nearly 2.8 million in 2000.15 Higher education experienced an extraordinary period of growth and prosperity throughout the half century. Its institutions became extensive users of all types of computing across administrative and research functions, and to a limited
Table 1.9 Number of Elementary and Secondary Public and Private Schools in the United States, 1950–2002

Year          1950      1960     1970     1980     1990     2000     2002
Elementary  138,600   105,427   80,172   61,069   61,340   64,131   70,516
Secondary    27,873    29,845   29,122   24,362   23,460   22,365   27,468

Source: For 1950–1970, U.S. Department of Commerce, Bureau of the Census, Historical Statistics of the United States: Colonial Times to 1970 (Washington, D.C.: U.S. Government Printing Office, 1975): Part 1, p. 368; for 2000, U.S. Department of Commerce, Statistical Abstract of the United States: 2002: The National Data Book (Washington, D.C.: U.S. Government Printing Office, 2002): 147; "Digest of Education Statistics, 2003," http://nces.ed.gov/programs/digest/d03/tables/dt085.asp (last accessed 3/06/2006).
Table 1.10 Number of School Districts in the United States, 1952–2002 1952 1962
67,355 34,678
1972 1982
15,781 14,851
1992 2002
14,422 13,522
Source: U.S. Department of Commerce, Statistical Abstract of the United States: 2002: The National Data Book (Washington, D.C.: U.S. Government Printing Office, 2002): 260.
extent in teaching. Table 1.11 documents the number of public and private chartered institutions and includes junior and four-year colleges and universities. I include public and private universities because even private schools were the beneficiaries of extensive funding by the public sector, particularly for research and even library applications. In 1950, there were approximately 247,000 faculty members. In 1970, that population had swelled to 729,000 and, by the end of the century, to over a million.16 The army of students they taught also grew throughout the period, from 2.2 million in 1950 to 8.5 million in 1970. During the 1970s, enrollments rose massively, largely due to the baby boomers going to college, with enrollments exceeding 12 million in 1980, and rising to over 13.8 million in 1990 as their children also enrolled in postsecondary education programs. The surge in enrollments extended right to the end of the century, with the number of students approaching 14.8 million in 1999/2000.17
Table 1.11 Number of Institutions of Higher Education, 1950–1999

Year    Institutions
1950    1,863
1960    1,959
1970    2,556
1980    3,231
1990    3,559
1999    4,084

Source: For pre-1970, U.S. Department of Commerce, Bureau of the Census, Historical Statistics of the United States: Colonial Times to 1970 (Washington, D.C.: U.S. Government Printing Office, 1975): Part 1, p. 382; for post-1970, U.S. Department of Commerce, Statistical Abstract of the United States: 2002: The National Data Book (Washington, D.C.: U.S. Government Printing Office, 2002): 165. Note that these statistics are for public and private nonprofit institutions. In chapter 9 data is presented that suggests that in the early 2000s there were over 9,000, which includes corporate and for-profit postsecondary institutions.
If we total K–12 and higher education, we begin to understand how large the Education Industry was in the United States. Taking 1980 as our starting point— by which time computers were in wide use in higher education, were being broadly deployed in school districts, and were just starting to enter elementary and secondary schools—there were 60 million people going to school, taught by several million other individuals. By the end of the 1990s, approximately 65.8 million were in school, while an additional 3.8 million were teaching.
Summary

By collecting together the summary data for the number of public sector employees and comparing them to the total workforce, we can further clarify the size of government in the economy. Table 1.12 does so, using employment figures from the mid-1960s forward, when uses of computing became substantial in the sector.

Table 1.12 Comparison of Federal, State, and Local Employees to Total U.S. Workforce, 1965–1995 (thousands)

Year    Total Federal    Total State & Local    All Government    Total U.S. Workforce    % of Total U.S. Workforce
1965    5,215            7,696                  12,911            77,178                  16.7
1975    5,061            11,937                 16,998            94,793                  17.8
1985    5,256            13,519                 18,775            115,461                 17.6
1995    4,475            16,484                 20,959            132,304                 15.0

Note: U.S. workforce only includes civilian employees.
Source: For pre-1970, U.S. Department of Commerce, Bureau of the Census, Historical Statistics of the United States: Colonial Times to 1970 (Washington, D.C.: U.S. Government Printing Office, 1975): Part 1, p. 127; for post-1970, U.S. Department of Commerce, Statistical Abstract of the United States: 2002: The National Data Book (Washington, D.C.: U.S. Government Printing Office, 2002): 367; and for 1975 from ibid., 1974 edition, 356; and U.S. Government, Fiscal Year 2005 Historical Tables of the U.S. Government (Washington, D.C.: U.S. Government Printing Office, 2004): 301.

Earlier, we had already established the contribution to GDP made by the sector in total. This table only shows employees working for a governmental institution and includes, for example, teachers and professors at public schools and universities. By setting these figures against the total American workforce (public and private sectors combined), we see that public employees made up roughly 15–17 percent of all workers, a fairly constant proportion. To be sure, the data has all kinds of problems. For example, we know that governments outsourced a great deal of work in the 1990s, particularly the federal government, which otherwise would have had more workers, resulting in a larger share of the total workforce falling in the public sector. Totals from one calculation or one agency to another also do not always match. However, as noted earlier, the trends are consistent enough to make the data usable. This applies to all three categories of information: employment, revenues (taxes), and expenditures.

The data raises intriguing questions tied to Kendrick's original interest in measuring productivity and related to the use of computers and telecommunications. For one thing, we would expect that as the amount of computing agencies
deployed increased, their productivity would also improve. However, since almost all industries were also installing computers for similar applications and at approximately the same time, it should become difficult to differentiate public from private sector productivity directly attributable to computing, applying the old saw that “rising waters raise all boats.” We will have other occasions to discuss the “productivity paradox” in this book, but suffice it to say that the issue is more than just about productivity. The extent of deployment of computing in public institutions proved so extensive that fundamental ways of doing work had been altered to such a point that prior methods were no longer possible to use (e.g., tracking aircraft flying toward airports without use of computers). Furthermore, there now were new and different services that both the public at large and officials in agencies had come to see as part of what they wanted and did (e.g., communicating with citizens and each other via e-mail). That change in style in how this sector went about its business is the more important story. That said, however, we need to acknowledge that productivity measures in government have long intrigued observers and economists.18 In recent years, economists have engaged in an extensive debate about what kinds of data should be collected by the U.S. government regarding productivity in the services sector, because it is increasingly becoming quite clear that current data collections are not adequate.19 There is the additional problem that such data sets do not take into account the short- and long-term structural changes in productivity that grew directly out of the extensive use of IT.20 To make matters worse, we are left with woefully insufficient data about productivity in the public sector, but with an abundance of such information about manufacturing, and a slowly growing body of data regarding services.
By looking at the fundamental “street level” uses of computing in the public sector, the whole discussion of productivity and effectiveness in government can be informed, suggesting possible avenues for documenting efficiencies in the public sector. Moving from just a discussion of comparative statistics—a favorite and useful exercise of economists—to the less abstract historical narrative that describes how this important collection of technologies was used, moves the whole discussion to a different plane. This holds out the hope that behind “the numbers” there is a more three-dimensional human understanding of how public institutions functioned in recent decades that contributes to our appreciation of how they go about their work today. For public officials, this exercise moves insights from statistical abstracts to managerial “best practices” and identifies patterns of adoption and deployment that can be applied to future decisions on acquiring and using IT. The road ahead is an exercise in business history, with an injection of economic data (but not necessarily theory), and a fairly disciplined avoidance of technical history of computing and telecommunications. The chapters describe when computing came into use, why, for what applications, and when available, document results, and certainly expectations. This will not be a debate about what policies were implemented or why, unless they directly concerned use of IT. Rather, our main concerns will be managerial, and in the process we will see that many practices evident in the public sector were quite similar to what existed simultaneously in the private sector, providing yet another line of evidence that the economy of the United States evolved broadly into new forms across so many industries—a basic theme of the trilogy of The Digital Hand. Because taxation, defense, and law enforcement transcend all manner of local, state, and federal agencies in one form or another, we will look at those first. Next we move away from heavily federally centered themes to deeply local ones, by examining state, county, and municipal governments and education. But it always seems that one cannot begin any discussion about government without raising the issue of taxes. For that reason, the next chapter is devoted to this topic and to the extensive use of computers deployed in their collection and expenditure.
2
Digital Applications in Tax and Financial Operations

It is not an understatement to say that every aspect of the IRS's mission has been and will continue to be affected by the technology revolution. For the IRS, it is impossible to look at even our recent past without seeing the enormous impact of information technology (IT).
—Tim Brown, 1990
The most widely recognized agency within the U.S. government lies inside the Department of the Treasury: the Internal Revenue Service (IRS). It was also one of the nation's largest users of computers throughout the second half of the twentieth century. Tim Brown (quoted above), the assistant commissioner for collections at the IRS, had witnessed for years the continuous growth in the use of computing in this large corner of the U.S. government. He commented to a case writer for the Harvard Business School that "from the way tax returns are processed to the way employees communicate and use office equipment, IT has continually changed the way we do business."1 His comment could just as easily have been made about the entire federal government, about most states' financial and tax collection agencies as the century progressed, and even about local governments. One can envision a virtual wave of technology sweeping across the financial and tax collection agencies of government, beginning earliest and most extensively with the largest organizations, such as the IRS and the states of New York and California, followed by smaller states, cities, counties, and towns. By the time midsized states were automating steps in their accounting and tax applications with computers, large corporate tax departments already had, and
large independent accounting firms were busily moving into the world of IT. Next in the 1980s, individual taxpayers began using software to prepare their returns, using personal computers. By the end of the century, all the major participants in public accounting and tax preparation, payment and collection were extensive users of information technology. In the very early years of the new century, over half of individual taxpayers were filing online; a higher percentage of companies and corporations did the same. In short, all manner of tax accounting had been highly digitized.2 Reasons for this wave of automation splashing across the economy from large taxing agencies to individual taxpayers are not hard to find. Howard D. Taylor, the regional IRS commissioner in New York in the early 1960s, stated categorically that computers “contributed more to efficiency and economy in government operations” than any other practice or tool.3 Computers from the 1950s to the present could store and move large quantities of data, and this technology could also calculate and compare information faster and cheaper than people or prior information processing equipment. These twin reasons— efficiency and economy—remained the primary sources of interest in computing in both accounting departments at large organizations and in all areas of tax accounting for over a half century. The same held true for state and local agencies and for tax accountants and taxpayers. Over time, however, in addition to doing the normal work of preparing, collecting, and accounting for taxes, computing made other reasons equally attractive. IT projects in 2004 at the IRS, for example, were motivated by “improving service or performance,” “increasing tax compliance,” “detecting fraud, waste, and abuse,” and “detecting criminal activities or patterns,” all classic examples of data mining that only became possible after vast quantities of tax information resided in computers and software that could analyze large bodies of data became available.4 Many of these applications first appeared in the 1960s, particularly those related to compliance. In short, the technology lent itself to the twin reasons for deploying every new form of IT that had come along in the past century to accounting and tax activities. Implicit, perhaps obvious, was the fact that tax activities were enormously paper intensive throughout the century. Individual tax returns in the 1930s and 1940s were already two to four pages in length, while your author’s combined federal and state tax returns for 2005 exceeded ten forms, each at least two pages long, not including backup files and hundreds of receipts and canceled paper checks. Corporate and small business returns were always more voluminous, and even today there are a number of corporations whose tax return, if they were to be filed on paper, would exceed 50,000 pages. In short, tax preparations and management were some of the most paper-intensive activities in the economy. The lion’s share of this chapter reviews the introduction and use of computing in tax applications, with a tip of the hat to accounting and financial applications in general, since all three activities essentially called for the same use of computers: to collect data, calculate taxes and receipts, report results, and perform analysis of large bodies of information to identify patterns worthy of audits, criminal investigations, and so forth. Accounting applications remained some of
the largest and most pervasive uses of computers by all governments across the American economy. While the fundamental missions and tasks did not change over time, how the work of accounting, financial, and tax departments was accomplished did. How that happened is the story of this chapter.
U.S. Internal Revenue Service (IRS)

It would be impossible to survey all the accounting and financial applications in use by the federal government in this book and still look at the broad array of other uses across the public sector at large. Observing the experience of the IRS, however, does give us the opportunity to understand uses and patterns of adoption of computing by the federal government and to begin understanding the challenges and consequences of adopting IT, particularly for large digital applications. The agency was always large and interacted with every business, government agency, and taxpaying resident in the nation. In short, its activities were pervasive, massive, and often emblematic of general patterns of use and deployment across the economy. Its role as the government's major fund raiser cannot be overemphasized. In the nineteenth century, the majority of federal income came from customs fees charged for goods coming into the nation; in the twentieth century, the lion's share came from individual, corporate, excise, and social insurance receipts. By the late years of the century, customs (excise) taxes accounted for only 4 percent of total federal income, while individual income taxes reached 46 percent in 1999, for instance. That same year social insurance receipts accounted for 34 percent of total tax revenues. Corporate taxes provided an additional 11 percent.5 The IRS relied on computer-based applications designed to reflect these realities.

A second influence on applications was the physical location of the IRS. While the number of offices varied over the years, its organization remained remarkably constant. The IRS maintained its headquarters in Washington, D.C., many hundreds of offices all over the nation (although shrinking in number in the 1990s), seven regional offices, approximately sixty district offices, and seven to ten service centers (depending on which year we look at). In 1994—the year just before the massive increase in use of the Internet across the United States—the IRS employed 115,000 people.

Figure 2.1 First computer installed at the IRS computer center, 1964. (Courtesy IBM Archives)

In 1955, the IRS took its first step into the world of computing when it established a service center in Kansas City to begin consolidated processing of tax returns. Over the next few years, IRS officials converted manual, and partially automated, accounting functions to computer-based applications. These included the same applications migrating to computers in the private sector: payroll, general accounting, and some tax accounting. By the late 1960s, the IRS had over 1,000 data-processing personnel and some of the most advanced and largest systems installed in any industry or enterprise within the American economy. Early tax-specific applications implemented in the late 1950s and early 1960s included systems for maintaining taxpayers' records, audit of returns, and notices of tax due or refunds.6 Deployment occurred in waves,
beginning with projects in Kansas City and in New York City, followed by establishment of the first fully functional service center to be enabled with major automation at Chamblee, Georgia, in 1962 to process business tax returns from seven southern states. The IRS located its first—and only—National Computer Center in Martinsburg, West Virginia, in 1961 with five employees. By the late 1960s, it had hardware and software. By then, the overarching collection of applications had been defined, and the IRS was on its way to implementing them. These included a master file of all business taxpayers, later others, such as for individuals (called the individual master file or IMF); assignment of a permanent tax identification number to all taxpaying entities so that in the future data could be extracted from systems, using much the same method as inventory control or product numbers deployed in the private sector; and increased centralization of computing to take advantage of the rapidly expanding capabilities of mainframes and economies of scale. Taxpayers submitted returns, the IRS deposited tax receipts, and returns were sent to regional service centers for processing. An IRS official described what happened next: In the regional service center the tax returns are sorted, batched, edited and placed under control for processing. The taxpayer’s name and identifying number, as well as the line items of data from which the tax due is computed, are transcribed to punched cards exactly as they appear on the return forms. The punched cards are then converted to magnetic tape and the arithmetic is verified by the computer.7
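The "arithmetic is verified by the computer" step amounted to recomputing each return's tax from the transcribed line items and flagging any record whose result disagreed with the figure the taxpayer reported, so that it could be routed for correction. A minimal sketch of that kind of batch check follows, written in Python with hypothetical field names and a made-up flat-rate tax rule; it illustrates only the logic, not the record layout or tax computation the IRS's tape-based mainframe jobs actually used.

```python
# Illustrative sketch of a batch arithmetic check on transcribed returns.
# Field names and the flat-rate tax rule are hypothetical, not the IRS's actual layout.

from dataclasses import dataclass

@dataclass
class TranscribedReturn:
    taxpayer_id: str      # permanent identification number keyed to the master file
    gross_income: float   # line items transcribed from the paper return
    deductions: float
    reported_tax: float   # the tax figure the taxpayer computed

def compute_tax(gross_income: float, deductions: float) -> float:
    """Recompute tax due from the transcribed line items (illustrative 20 percent flat rate)."""
    taxable = max(gross_income - deductions, 0.0)
    return round(taxable * 0.20, 2)

def verify_batch(batch: list[TranscribedReturn], tolerance: float = 0.50):
    """Split a batch into verified records and records routed for error correction."""
    verified, errors = [], []
    for rec in batch:
        expected = compute_tax(rec.gross_income, rec.deductions)
        if abs(expected - rec.reported_tax) <= tolerance:
            verified.append(rec)
        else:
            errors.append((rec.taxpayer_id, rec.reported_tax, expected))
    return verified, errors

if __name__ == "__main__":
    batch = [
        TranscribedReturn("001-01-0001", 8_000.00, 1_000.00, 1_400.00),
        TranscribedReturn("001-01-0002", 9_500.00, 2_000.00, 1_200.00),  # arithmetic error
    ]
    ok, errs = verify_batch(batch)
    print(f"{len(ok)} verified, {len(errs)} routed for correction: {errs}")
```

Only records that cleared a check of this kind moved on to the master-file update described next.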
After errors were corrected, the data transcribed from returns loaded on magnetic tape were shipped to the National Computer Center where they were sorted and edited. Staff updated master records and prepared tapes to generate refunds and other accounting records. During the mid-1960s, the IRS slowly implemented this system across the nation; the New York office began, for example, in 1965 with limited batch input for key forms filed by taxpayers, including Form 940 (Employers’ Annual Federal Unemployment Tax Returns), Form 1120 (U.S. Corporation Income Tax Returns), and other related documents, and in time additional forms.8 However, the real experimentation, and later production work, took place at the Chamblee computer center and later at the Philadelphia office, both in the late 1960s. These applications became the core uses of computing at the IRS for many years. They reflected the capabilities of the technology of the early 1960s: tape-based, batch, and run on large mainframe systems. By the late 1960s, advances in technology made it clear that it was time to start thinking about replacing these earliest systems to leverage new technological advances and to handle growing demands for capacity and function. All through the 1970s, the IRS and other government agencies (including congressional committees and the Government Accounting Office) worked with this issue. It proved to be a very slow process, however. Not until the mid-1970s did the IRS complete formulating a concept for its replacement, called the Tax Administration System, touted at the time as the largest data-processing project that would be undertaken by the federal government, with a proposed budget of $1 billion. That, however, did not happen for various administrative and political reasons. Specifically, administrative concerns centered on cost/benefits and political issues related to fears that the IRS was inspecting the public too intrusively. Recall that this was the period immediately following Watergate when public concerns about the ethics of government were at a heightened state of alert. In fact, public mistrust of “big brother” watching the public slowed a number of other data-processing projects at the CIA and FBI as well. Instead, all through the 1970s and most of the 1980s, incremental changes were made to existing batch systems that had previously not been connected to each other. Meanwhile, service levels declined, as did responsiveness to taxpayers. In 1982, the IRS tried again to get funding for a fundamental change, this time to launch the Tax System Redesign Project. It called on private industry to design the new system, have a private firm lead the project, and upgrade the master taxpayer file using IRS employees. According to the GAO, the IRS failed to accomplish this task because of constant changes in management, insufficient clarity in defining the roles and responsibilities of key officials, and lack of sufficient technical expertise at the agency. Auditors at the GAO observed that technicians in the IRS drove the project, rather than general line management. Without appropriate senior management pushing forward the initiative, the systems installed in the 1960s remained in use.9 The press and members of Congress criticized the IRS for its antiquated computing, in sharp contrast to the progressive image it had enjoyed in the early
1960s. In 1985, the entire system went into crisis. The IRS had installed a new system with inadequate capacity or staff, with the result that by April 5, with 60 million returns already received and the tax filing season not yet over, it had only processed 60 percent of the returns already in-house. All the work was delayed, such as late refunds, but also some inaccurate dunning notices related to filed returns were mailed to taxpayers, and other errors occurred. Reports circulated that IRS employees were simply throwing away tax returns that they did not have time to process. Disciplinary actions were taken against a number of employees and managers as a result and a service center director and regional commissioner retired shortly after the incident; careers of some senior officials were also damaged in the wake of the crisis. The agency, which was so profoundly reliant on computers, had reached a point where it could not manage its own systems.10 However, the crisis had been brewing for years. An audit report in 1976 had expressed concern “about potential software problems because data base management software needed for the proposed system is not commercially available,” while hardware and other software would have to be tailored. All these actions would increase the expense, complexity, and time required to implement some of the largest systems in existence.11 A string of similar GAO reports issued throughout the 1970s and 1980s reviewed similar issues, expressing concerns about the IRS’s unrealistic optimism, lack of sufficient skills, and rising costs for implementing new systems. Piecemeal improvements intended to increase capacity had been the only major changes made to the systems during the 1970s and 1980s. Yet, the IRS soldiered on with its Tax System Modernization (TSM) project, one that it estimated would cost $21 billion by the end of the century to implement. The project grew in scope, and by 1990 the IRS had expended $120 billion on it since 1986. The story of bad project management, lack of focus and controls, and so forth, became the subject of numerous audits and investigations in the 1980s and 1990s and is a story that itself could fill a book.12 Furthermore, tax fraud became increasingly possible, both manually and electronically, with audits in the early 1990s suggesting that losses just from electronic fraud were reaching $5 billion per year.13 The issue of fraud was one faced by all agencies and companies at the time moving to digital transactions, an issue that would be partially alleviated in the late 1990s by the availability of new software tools that could minimize fraudulent activities. The IRS, like other users of the digital, realized that different tactics were needed to prevent fraudulent behavior from those long in use with paper-based transactions. Meanwhile, massive amounts of resources were still deployed to maintain the older systems at the same time that tax returns were increasing in volume. These applications kept changing and becoming more complex as Congress made intermittent and sometimes substantive changes to tax laws every year.14 Each new administration, however, kept investing in the development of TSM right through the 1990s, although with increasingly smaller budgets and with loud declarations of impatience with the leadership at the IRS. This project had become one of the noisiest and most publicly negative IT implementations in the
history of the nation. Yet, Congress and multiple administrations saw enough value in its implementation to continue funding it over the years. One observer of the situation at the IRS described the circumstance it faced in the late 1990s: The IRS struggles with the dilemmas of document processing and electronic filing, causing them to enter a spiral of innovation as computerized collection methods engender computerized tax fraud, which requires yet more sophisticated tax collection methods. The day-to-day operations of the IRS are still jeopardized by the crumbling systems in place. Meanwhile Tax Systems Modernization has brought conflict to the organization of the Internal Revenue Service, between the technical and business sides of the organization, exacerbated with every year that TSM failed to be implemented, causing additional problems for any projects in the future.15
In his memoirs written after being the IRS Commissioner from 1997 to 2002, Charles O. Rossotti described the situation he faced in that same period: “While there was little doubt in any quarter that the ever growing tax system could not continue to depend on computer systems developed in the 1960s and 1970s, there was also increasing concern about the IRS’s ability to manage its technology program.”16 Rossotti conducted a survey of the extent of the problem, mainly to understand what had to be accomplished to get past the Y2K problem. A few statistics suggested the massive size of the problem faced by the IRS: We found no fewer than 130 separate computer systems essential for the functioning of the tax system, running on 1,500 mainframe and midrange computers from twenty-seven vendors and comprising about eighteen thousand vendor-supplied software products and fifty million lines of custom computer code. These were connected through three wide-area communications networks, many stand-alone dedicated circuits, and 1,182 local-area networks. Although the IRS employed about 120,000 people at peak season, the agency had in its inventory over 200,000 end-user computers, partly because many users needed more than one computer to access the numerous incompatible systems and databases.17
In public, officials at the IRS noted in their defense that despite these difficulties, they were doing the work of collecting taxes—in fact over $1 trillion each year during the Clinton administration—processing over 200 million returns and some 85 million refunds. Michael P. Dolan, a deputy commissioner at the IRS, reported to Congress in March 1997 that “over the past few years, we have been trying to shift taxpayers, and the IRS, from some paper transactions. We have made more and more information available via the telephone, computer, fax services, and CD-ROM.”18 In the mid-1990s, the IRS began making available over 700 forms and tax publications over the Internet, and in 1996, over 100 million visits to its Web site took place. It had started accepting filings electronically, about which more is said later in this chapter. In the late 1990s, it implemented an online Integrated Collection System (ICS), which provided online
access to current account data, while other online systems were upgraded, such as the Electronic Return Filing System, to reduce errors in operations. A set of applications that did not receive much attention in the press involved all the normal backroom accounting work that goes on in any organization. At the IRS, these uses of computers had been created over the years to handle personnel matters, payroll, accounting, and so forth, many of which were stand-alone applications. Following practices evident in corporate systems in the early 1990s, it consolidated or integrated many of these, such as the applications used for budgeting and accounting from eight major systems, into one financial control system, called the Automated Financial System.19 Before discussing the impact of the Internet, e-filing, and other applications that emerged late in the century, understanding the status of IT at the IRS at that time places in context the nature of its dependence on computing and the effectiveness of the agency’s operations. Using mid-1997 as a point of reference is useful because in that year a major review of the IRS generated considerable data on its performance. The audit report declared TSM a failure “because the IRS did not have a consistent long-term strategic vision to guide the project.”20 A national commission reporting on the IRS noted that “one of the most significant problems with TSM was the failure of the IRS to tie technology objectives directly to business objectives, and to assess success based on those objectives.”21 The failed TSM initiative had, in conjunction with other operational challenges, blemished the image of the IRS. A few statistics give us a broader measure of performance. In that year, the IRS processed 205 million tax returns, of which 120 million came from individuals. Approximately 10 percent of all filings had processing errors caused by the IRS’s people and systems, while another 10 percent of all returns had mistakes in calculations and data created by tax filers. Roughly one half of all tax returns were created on PCs and other computers by citizens and companies, which filers printed out and mailed to the IRS. Then IRS staff reentered these into its IT systems.22 Even with this task, only 40 percent of the data from paper returns was entered into IRS systems that year, the rest of the data was processed manually or only partially through use of computing. Table 2.1 documents briefly how large a proportion of the agency’s total budget went to IT. Computing was consuming approximately 19 percent of its budget by 1997. The various data make it clear how important computer-based applications had become to the agency and the nation at large. As we see next with e-filing, the IRS’s systems had also become quite public. These were clearly the most visible of all government uses of digital tools in the entire sector. President Clinton’s IRS Commissioner years later commented first on the origins of the predicament that his agency found itself in, arguing that “sheer size” was a major issue, because it became “more difficult to design, implement, and operate” the agency.23 Second, he blamed the Congress for constantly changing the tax code, which created additional levels of complexity and work for the IRS. The agency reacted by adding more hardware and lines of software code to existing programs as a quick fix. However, unlike the GAO and Congress, Rossotti did not criticize the management at IRS.
Table 2.1 IRS Budgets for Fiscal 1996 and 1997 (billions of dollars)

Appropriation by Project                  1996     1997
Processing, assistance, & management      1.724    1.780
Tax-law enforcement                       4.097    4.104
Information systems                       1.527    1.323
Totals                                    7.348    7.206

Source: "Statement of Lynda D. Willis, Director, Tax Policy and Administration Issues, Testimony Before the Subcommittee on Oversight of the House Committee on Ways and Means, March 18, 1997," http://waysandmeans.house.gov/legacy/oversit/105cong/3-18-97/3-18will.htm (last accessed 1/2/2005).
E-filing emerged as a major new use of IT at the IRS and for the public at large in the 1990s. In time, this application made it possible for an individual or firm to submit electronically their tax returns directly to the IRS from their PC or company mainframe over telephone lines and later over the Internet. In turn, the IRS received these returns as digital data into its tax-processing systems, with the promise of faster processing, greater accuracy, lower costs, and faster turnaround on refunds. That was the promise and the hope, and like so many large applications at the IRS, it took many years to become a reality. In the beginning, individuals filed electronically only by using a professional tax return preparer; a situation that did not change until the mid-1990s, when the software needed had stabilized sufficiently to allow the public at large to use it. E-filing thus evolved into its current form, rather than emerging from some massive complex project done in one gesture, such as management wanted TSM to be. Yet, part of TSM had envisioned adoption of e-filing. The first step, and a part of TSM, was to make available online to taxpayers and their tax preparers more existing information, such as tax forms and instructions for filings. In 1990, an early pilot for electronic filing for this application attracted four million taxpayers. In a report that year, the IRS noted that “electronic filing is a new way of filing certain tax returns with the Internal Revenue Service. The return is not filed on paper; instead, it is transmitted to the IRS by modem over telephone lines.” Interest in such an application at that time was driven by “the growing number of tax preparation firms that use computers to prepare individual returns.” The IRS concluded that such an approach would reduce its own costs for processing returns, tracking compliance, while facilitating enforcement of the tax codes. It thus embraced the application. Initial implementation of e-filing had begun for individual returns as far back as 1986 for 1985’s taxes and for businesses in 1988. Over the years, additional types of existing tax forms were added that could be submitted electronically.24 Early in 1993, the IRS reported that five of its ten processing centers were handling electronic filings. It had also introduced its first major PC-based form called the 1040PC as a digital alternative to a traditional tax form, and that could be submitted electronically to the IRS.25
Table 2.2 Individual Federal Income Tax Returns Received in Paper and Electronic Formats, 1996 and 1997 (millions)

Type of Filing              As of March 7, 1996    As of March 7, 1997    % Change
Traditional paper           31.98                  28.10                  -12.3
1040PC*                      2.37                   2.75                   15.7
Total paper                 34.35                  30.80                  -10.3
Traditional electronic**     9.27                  10.92                   17.8
TeleFile                     2.28                   3.50                   53.0
Total electronic            11.56                  14.42                   24.7
Total all types             45.91                  45.22                   -1.5

*PC software was used to produce paper returns. Only lines where a taxpayer made an entry are included on this form.
**Returns filed through third parties, such as tax return preparers.
Note: In both years, this data reports on filings up to five weeks before the end of the filing season, so the final number of filings was actually larger for each year.
Source: "Statement of Lynda D. Willis, Director, Tax Policy and Administration Issues, Testimony Before the Subcommittee on Oversight of the House Committee on Ways and Means, March 18, 1997," http://waysandmeans.house.gov/legacy/oversite/105cong/3-18-97/3-18will.htm (last accessed 1/2/2005).
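The % Change column in table 2.2 is the ordinary year-over-year percentage difference between the two March snapshots; for the totals row, returns received fell from 45.91 million to 45.22 million, or roughly 1.5 percent. A one-line check of that arithmetic, using values taken from the table:

```python
# Percent-change arithmetic behind the last column of table 2.2 (totals row).
returns_1996, returns_1997 = 45.91, 45.22  # millions of returns received by March 7
pct_change = (returns_1997 - returns_1996) / returns_1996 * 100
print(f"{pct_change:.1f}%")  # prints -1.5%
```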
Three years later (in 1996), the IRS reported continued growth in the popularity of this application, with 685,000 taxpayers having used the 1040 via telephone and 41,000 businesses having made tax deposits via electronic funds transfer, a major enhancement to earlier online applications available to taxpayers.26 Usage grew as the decade passed, with nearly a 25 percent surge in 1997 over 1996, for example. By viewing table 2.2, one can see the relative proportion of electronic versus paper submissions; note how significant the volume of digital forms had become. What is particularly interesting is the evidence it presents, demonstrating that this application was evolving from paper to electronic, with both types in use, and, even in the case of 1040PC (a return generated using software, which when printed only contained data needed for processing, not a full tax return), still involved a combination of both paper and software, since it was the actual printed document that was still submitted.27 Because we are discussing an emerging and important application of IT, it is useful to see how the agency was spending its IT budget in 1997 to give a sense of how funds were distributed for computing. That distribution is reflected in table 2.3. Note that expenditures for TSM development continued, along with upgrades of telecommunications (both telephone systems for taxpayer inquiries and also other communications infrastructure common to large organizations). Congressional committees, some staff at the White House, and the IRS focused their attention on expanding online receipt and processing of tax returns during the middle years of the decade. The IRS added additional forms that
could be filed online and even experimented with helping state governments launch their own programs.28 Continuing into the early years of the new century, additional functions went online, such as filing for extensions over automated telephone systems (2001) and combined state/federal filing (2002). All the while volumes increased. In 2003, for example, over 150,000 tax preparers were authorized to file electronically on behalf of their clients. Taxpayers using PCs at home filed 11.6 million returns, of which 2.4 million qualified to do so at no cost to them. Some 6.7 million businesses also filed electronically that year. Adding in 4 million using TeleFile (which used software to handle the transactions over the telephone) brought the grand total to some 52.9 million electronic returns, of which 22.8 million were joint federal and state filings. Electronic funds transfer and credit cards were also used extensively. In 2004, the IRS reported that over 63 million returns had been filed electronically, of which 43 million had been submitted by tax preparers.29 Also in 2004, almost 2.4 million individuals made electronic payments totaling almost $3.3 billion.30

To be sure, the growth in e-filing was not simply the result of technology making the task easier. Crucial to the situation was that filing taxes had become increasingly complex during the last two decades of the twentieth century, motivating taxpayers and preparers to use software tools to help them accomplish the task. While the federal government had launched a paper reduction campaign during the Clinton administration, the volume of paper actually increased in the mid-1990s, and one internal audit cited the IRS for the lion's share of the increase: "Nearly 90 percent of the governmentwide increase during fiscal year 1999 was attributable to increases at IRS, which IRS said was primarily the result of new and existing statutory requirements."31

Table 2.3 IT Budget for the Internal Revenue Service, Fiscal Year 1997 (millions of dollars)

Project or Activity                        Budget
Legacy systems                             758.4
TSM operational systems                    206.2
TSM development and deployment             130.1
Program infrastructure                      83.4
"Stay-in-business" projects                 62.1
Staff downsizing                            61.0
Telecommunications network conversion       21.9

Source: "Statement of Lynda D. Willis, Director, Tax Policy and Administration Issues, Testimony Before the Subcommittee on Oversight of the House Committee on Ways and Means, March 18, 1997," http://waysandmeans.house.gov/legacy/oversit/105cong/3-18-97/3-18will.htm (last accessed 1/2/2005).

The government estimated each year how many hours of work it burdened the public with to comply with its demands, such as
the filing of tax returns. About 80 percent of the time people spent complying with government requests involved tax returns and other tax-related reporting and filings.32 Trying to control this paper growth, coupled with ongoing efforts to improve efficiencies and services through the use of information technology, remained a central activity at the agency in the early years of the new century. As this chapter was being written in 2007, the IRS was still implementing TSM, now called Business Systems Modernization (BSM), along with a broad array of more specialized applications. These included data mining to improve service and performance of accounting applications, abusive corporate tax shelter detection, other applications to detect various forms of tax evasion and fraud, and a specialized system dedicated to electronic fraud detection and other criminal activities.33 Additional automation of various corporate tax returns was planned but not started, such as a scoring system for partnerships submitting returns. Projects totaled over $200 million a year and did not include the costs of running existing systems, many implemented in the 1960s.34 Beginning in the 1990s, moving applications and creating various avenues of access for the public to all government agencies via the Internet had also become major initiatives, the e-government effort started by the Clinton administration and reinforced with the passage of the E-Government Act of 2002. The IRS participated extensively in this government-wide initiative. In the case of tax filing, however, the public’s move away from paper-based to electronic forms progressed slowly for various reasons. Use of tax planning programs, required in order to prepare and file digitally, remained the purview of high-income taxpayers already familiar with PCs, or they used professional tax preparers. Use of the 1040EZ forms, which were aimed at those with simple returns, was not the group using software tools the most. Nonetheless, the Bush administration attempted to promote use of electronic filing by allowing people to submit tax returns electronically. Over twenty-five states also had established free Web-enabled filing by 2004 as well. Use of software by those inclined to do so did not simplify filing either for the IRS or the public at large. The rate of e-filings remained slower than desired by the IRS, and, as discussed below, the speed of adoption was driven less by practices at the IRS than those of tax accountants and software companies selling tax planning products.35 Complexity throughout the tax-processing effort—from filing by taxpayers through to the myriad activities of the IRS—motivated various parties to use accounting tools. But there were other features of the process as well that motivated the IRS to use computers. However, as we saw with online systems in the Banking Industry, developers of new systems always seemed to anticipate faster adoption than proved to be the case. The IRS was no exception to a pattern evident in many industries at the time. Perhaps the most remarkable feature of the IRS throughout its history is the massive nature of its activities, which included extensive use of IT. While there have been some small IT projects at the IRS, those that historically have been needed to keep it current, “modern,” have been some of the largest available in
the American economy. This circumstance was as true in 2007 as it was in 1957. Recent examples follow historic patterns. A project to provide the IRS with integration, engineering, and telecommunications services had a budget of $900 million to be spent over five years, beginning in 1998. An even larger initiative, called the IRS Integration Services, represented a fifteen-year, minimum $8 billion effort.36 If we look at projects from the perspective of number of transactions, we see large writ over all these systems. For example, use of scanning technologies, such as optical scanners (OCR), called at the IRS the Service Center Recognition Processing System (SCRIPS), launched in 1995 at half of the IRS’s service centers. In any given week, it was used to scan 2.5 million documents for a total over 249 million forms between 1995 and early 1998.37 It was the large volume of transactions that led the IRS to move early and quickly into the Information Age. These quantities, however, made for complex systems. In turn, the IRS became one of those organizations in the American economy that pushed the mythical “envelope” of a technology’s capabilities and, perhaps most important, began illustrating the managerial lessons that would have to be learned regarding any implementation and operation of large IT systems.38 Auditors at the GAO frequently studied IT projects at the IRS and routinely found much to criticize. Their evidence was specific and detailed, and the IRS rarely mounted any vigorous defenses against these charges. The IRS found itself most frequently in a situation where the massive size of its applications made the agency a prisoner of systems that increasingly became unattractive for various reasons. Turnover in senior leadership, bad project management, technical complexity, aging technologies, Congress’s changing tax laws, massive volumes of transactions, changes in budgets, and the requirement of a decade or more to achieve significant changes in applications and systems all converged to make the situation at the IRS difficult, if not impossible to overcome.39 Insights can be gleaned from the historical record. Old applications can still work despite the fact that management wants to change them. The IRS was incrementally able to change and upgrade myriad hardware, software, and processes, despite the constant churn in its leadership. All the GAO surveys of the IRS pointed out that management was too optimistic about what it could accomplish, underestimated costs, overestimated benefits, and underestimated the time required to make changes. In fairness to the IRS, GAO often found much to criticize in other federal agencies and departments in how they, too, managed development of new systems. From the late 1960s to the present, the patterns GAO documented seemed to have been common, prevailing year after year, and made more troublesome by the fact that these received considerable political and media attention. The IRS also operated in alien territory when compared to the smaller applications that emerged all over the private sector across dozens of major industries. Other federal agencies engaged in massively large applications, such as the U.S. Department of Defense (DoD), and when we examine that department’s experiences in the next chapter, we will see similar effects of complexity at work. Understanding the need for fundamental change, and not simply for upgrading computer systems, the Clinton administration reorganized the functions of the
IRS so substantially that the changes could be compared to the work done to create the IRS in the first place in 1952, in the form it was to take for the second half of the century. While the history of that reorganization cannot be adequately described here, it is important to point out that it was the use of IT that compelled the agency to change the nature of its work. Read what the IRS Commissioner who directed the transformation later wrote of the effort: "For the IRS to be successful in modernizing its business operations and technology, the agency would first need to reorganize itself into fewer units, which could thus manage operations more consistently across the whole country. The overall IRS reorganization, implemented in October 2000, was therefore a critical step for the Business Systems Modernization Program as well."40 Rossotti also had the agency document all its basic processes, such as how it handled returns, provided taxpayer services, and carried out compliance activities (audits and collections), so as to reduce confusion for employees and taxpayers alike and so as "to take advantage of new technology."41 By the early years of the new century, the IRS employed a third fewer people than it had a half century earlier, partially as a result of needing fewer employees to process paper returns, a change brought about by electronic filing. As one student of the new application repeatedly noted, one would expect increasing amounts of productivity to accrue back to the IRS as the public used electronic services, and the cumulative effect on employment over a half century is part of that record.42
State Tax and Financial Applications

Uses of computers to process state taxes paralleled those of the IRS, yet they also differed. The basic nature of taxation was similar in that states taxed their citizens' income and thus required taxpayers to submit returns each year, which the taxing agency would either accept or contest. A state would either collect receipts or pay out refunds. Both federal and state governments also collected fees for services that had to be accounted for. In the federal government, such collections were done across many departments and then deposited in the U.S. Treasury Department. Both federal and state agencies had to play essentially the same role in collecting income taxes of business enterprises. However, there also were, and continue to be, some fundamental differences. For one thing, states imposed sales taxes, which the federal government did not, and they had to collect these from merchants. For another, because states were smaller agencies than the federal government (they had a smaller load of transactions than the IRS, such as fewer tax papers to work with), often their equivalents to the IRS were accounting and financial departments that had a tax collection suborganization within them. Thus, tax applications were often more intimately connected to the rest of the accounting systems, running, for example, on the same computer as other financial applications. Finally, we should acknowledge that for decades prior to the arrival of the computer, taxing authorities in state government had relied on all available accounting and office appliances of the day,
such as adding and calculating machinery and tabulating punched-card equipment. As at the IRS, the work flows needed to process taxes had held together, in an organized manner, the combined managerial, operational, and technological activities of the day, with the work process dominating and all equipment subservient to it. With computers, software tools (applications) increasingly became the thread through the processes and managerial practices that characterized the cadence of the tasks to be performed and, over time, the nature of that work. This transformation, while subtle, was well recognized in the 1950s and 1960s and by the 1970s had become the subject of much discussion in data processing and academic circles.43

Since all states collected taxes, their experience in doing so with computers is instructive about how states in general also worked with a digital hand. States became interested in using computers in the late 1950s for the same reasons as had the federal government a half decade earlier. The first known survey of deployment of such technology at the state level appeared in 1960, and while it perhaps overstated the numbers, the pattern was unmistakable. It reported that nearly 100 computer systems were installed in state governments and an equal number were on order (or planned) for future installation. Five years later, both numbers had increased, leading one census taker to report that 163 systems were in use by state governments.44 Nearly half the systems were used by highway departments (also called public works) to perform engineering calculations and, later, project management. By 1964, forty-eight out of fifty states had a computer. The other major collection of applications installed between the late 1950s and the end of the 1960s served administrative purposes, including tax work. Table 2.4 presents data from the mid-1960s, cataloguing the various uses of computers in states.

Table 2.4 Major Uses of Computers by U.S. State Governments, circa 1964–1965

Application                              Number of Systems
Public works                             55
Revenue administration                   29
Employment security                      19
Motor vehicle and driver licensing       16
Public welfare                            9
Education                                 7
Public health                             3
Other uses*                              25
Total                                   163

*Often these were also used for accounting, tax, and administrative work.
Source: Harry H. Fite, The Computer Challenge to Urban Planners and State Administrators (Washington, D.C.: Spartan Books, 1965): 4–6.

While most computers were used for
multiple purposes, the table documents how many systems had at least one major application identified with a system. Normally, where there was a second or third computer, public works, again mainly for engineering applications, provided the main reason for their use. Note in the table that “revenue administration,” which included tax applications, accounted for the second largest use after public works. According to the individual who conducted the survey, Harry H. Fite, after public works, the most extensive use of computers in state governments “lay in the functional field of finance, payroll, billing, accounting or tax work.”45 To be more precise, he reported that “29 state governments have adopted computer methods in the tax work,” such that “electronic data processing has become an accepted technique for revenue administration at the state level also.” He described the work done with computers: “Such applications as revenue accounting and reporting, taxpayer assessment and accounting, revenue refunds, collections, deposits and reporting, statistics-gathering and reporting, tax history records, descriptions and cross references, etc., have become as familiar as the proverbial ‘old shoe.’”46 Another survey, done in 1963, reported a similar trend with sixteen out of forty-three responding states reporting use of computers for corporate, individual, and sales tax accounting. This census also noted that the most widely deployed applications were for highway computation and accounting (thirty-eight out of forty-three states).47 In short, from the earliest days of computer usage at the state level, tax applications were deployed and linked to the larger accounting and financial work. Administration, finance, and tax departments early on acquired control over their own computer systems, much as did accounting departments in large corporations, and, like them, retained managerial and operational control over such systems until the 1980s, when more centralized cross-agency IT operations increasingly became the norm in state government. But almost from the earliest days of computing, each department wanted its own system, if it could cost justify it. One Louisiana state official, after conducting a census of existing systems in his state in 1965, reported that “our survey revealed that computer centers were cropping up throughout the State agencies like lilies after a rain,”48 driving up the cost of administration enough to alarm officials in this state to start centralizing computing, beginning in the 1960s. Other states did the same. Like the experience of the IRS, state governments developed their own data processing applications in support of tax work in the 1960s, and these systems essentially remained in use over the next several decades. To be sure, they were moved to newer computers, evolved from purely batch systems to online versions, particularly taking advantage of terminals to provide employees with access to ever larger disk-based files, as states moved away from punched-card and tape systems in the 1970s and 1980s. Their systems were not as massive as those at the IRS, which made it more easily possible to keep up with technological innovations, although they, too, evolved more slowly than what one saw in the private sector. As each new innovation came along in hardware, software, and telecommunications, public officials examined the merits of converting tax applications to some new form. For example, as early as 1965, when the data
processing world was discussing the merits of a "total systems approach" to integrated computer-based applications, state officials participated in the debate. At the heart of the issue was the case for going beyond simply automating precomputer financial processes. As one commentator in the mid-1960s explained, "it involves rethinking the work of a Revenue Department and designing a new system which takes full advantage of the capabilities of electronic devices on the one hand, and serves the total needs of managing the revenue function—not merely processing its paper work—on the other." These new procedures had to be "developed as part of a total integrated system in which all elements or parts are consciously interrelated with one another so that no unnecessary or duplicate work is performed, so that the output of one stage or step becomes input for another and so that machine work is maximized and human intervention is minimized."49 States had not heard such suggestions since the 1910s and 1920s, when the first systems approaches to work had facilitated the wide adoption of tabulating, calculating, and adding machines.50

The call for redesign of applications did not fall on deaf ears. In small evolutionary steps during the 1960s, 1970s, and 1980s, more functions moved from manual or precomputer technologies over to computers, and along the way, officials optimized processes and work streams. Like the IRS in the 1970s, however, tax authorities found their systems often straining under the workload of increasing numbers of taxpayers and changes in tax law.51 Also, as at the IRS, tax evasion remained a chronic problem that public officials expected computers to help solve.52 Many non-IT actions had been taken, such as implementation of amnesty programs, whereby individuals could come forward and pay back taxes without penalty or punishment, more comparative analysis of IRS data files with those of various state agencies, and, of course, a continuous stream of changes in tax laws.

For many states, the major new initiative of the late 1970s, one that continued throughout the 1980s, was finding ways to use computers to enforce compliance with tax laws by using software to compare various government agency records to identify individuals to go after. In addition, they used many non-IT approaches to increase compliance, such as publications, TV and radio spots, and expanded phone services that taxpayers could use to learn more about their state's tax laws and requirements. However, to help in auditing compliance, states used computers extensively. As one student of the process reported in 1988: "Computer technology suits perfectly well the work of tax agencies. Programs and hardware greatly enhance their capability to process, store, and retrieve vast amounts of information. A major use of the technology is to check income tax return information with data from other resources."53 These "other resources" included IRS income tax tapes (all states did this), IRS 1099 forms (about two-thirds of all states), and other IRS files. Sometimes they also used computer files of other states, records of Blue Cross/Blue Shield, partnership returns, employer withholding statements, and corporate and sales tax files. Comparing data files, or performing the newly emerging data mining searches made possible in the 1980s, allowed officials to identify anomalies that required investigation and enforcement, strengthening the hand of tax auditors.
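The matching technique described above is, at its core, a simple cross-file comparison. The sketch below is a minimal modern illustration of the idea in Python; the file layouts, field names, and the $500 tolerance are hypothetical assumptions for the example, not the formats or thresholds any tax agency actually used.

```python
# Hypothetical sketch of the cross-file matching idea described above.
# File layouts, field names, and the $500 tolerance are illustrative
# assumptions, not any agency's actual formats or thresholds.
import csv
from collections import defaultdict

def sum_by_taxpayer(path, amount_field):
    """Total the dollar amounts in a CSV file, keyed by taxpayer ID."""
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["taxpayer_id"]] += float(row[amount_field])
    return totals

def flag_discrepancies(reported, third_party, tolerance=500.0):
    """Compare what taxpayers reported against third-party records
    (e.g., 1099-style data) and flag likely cases for auditors."""
    flags = []
    for taxpayer_id, outside_total in third_party.items():
        declared = reported.get(taxpayer_id)
        if declared is None:
            flags.append((taxpayer_id, "no return on file", outside_total))
        elif outside_total - declared > tolerance:
            flags.append((taxpayer_id, "possible underreporting",
                          outside_total - declared))
    return flags

if __name__ == "__main__":
    returns = sum_by_taxpayer("state_returns.csv", "reported_income")
    info_returns = sum_by_taxpayer("irs_1099_extract.csv", "amount")
    for taxpayer_id, reason, amount in flag_discrepancies(returns, info_returns):
        print(f"{taxpayer_id}: {reason} (${amount:,.2f})")
```

The batch systems of the 1970s and 1980s expressed the same comparison as tape-to-tape matching runs rather than scripts, but the logic behind the audit flag was essentially this simple.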
One survey from the late 1980s pointed out that thirty-six out of fifty states had extensive computerized tax audit capabilities to perform such functions as checking for failure to file in earlier years, verifying mathematical accuracy, computing taxes owed and refunds due, identifying prior year delinquencies, presenting comprehensive views of a taxpayer's files, matching returns with other data files, classifying which returns to pursue, and creating various management reports. Almost all states used computers as well in support of their collection processes, such as to operate automated phone collection systems and to track accounts receivable. Management also equipped investigators with PCs to gain access to data.54

Thus, over time, states had moved from simply collecting data and income tax returns in machine-readable form (1950s–1970s) to also using computers to help in enforcing laws (compliance) and supporting collections (1970s–1980s). During this period, states began using computers as well to inform tax preparers and payers about tax laws. By the end of the 1980s, states were also able to use tax data to enforce other laws. For example, in 1987 Massachusetts established the Child Support Enforcement Division within the Department of Revenue, using its tax systems to intercept "federal income tax refunds of individuals that could then be used to pay for past-due child support debts."55 The effects generally were positive from the perspective of tax collectors in that they increasingly were able to enforce laws and discourage evasion.

Processing of returns and receipts tightened. One method that spread during the 1980s and 1990s was the use of electronic funds transfer (EFT), whereby taxpayers could transfer funds electronically from their bank accounts to their state tax department, while tax refunds could be moved electronically into a citizen's bank account. The same applied for business enterprises. The state of Florida, for example, began deploying such a process in 1990, but only for businesses paying taxes. Faster collection of taxes made it possible for the state to earn additional interest income, $21 million in that first year, for instance. As of 1987, only a few states used EFT; by the end of 1993, the number had grown to 45.56

As use of online systems expanded and the Internet gained wide acceptance as a tool for communicating and conducting business, government agencies in federal, state, and local governments began embracing the concept of e-government. The idea was that citizens and organizations could conduct their business with governments via the Internet or through other telecommunications networks, such as with highly automated voice response telephone systems. Most state governments began deploying information about their services and requirements online via the Internet only in 1995 and 1996 and, like the private sector, did not begin conducting transactions over the Net until data security software became available later in the decade.57 State governments and the Clinton administration embraced the notion that use of the Internet would drive down operating costs, make services to the public more responsive and faster to deliver, and create a positive image of being progressive. Some officials even attempted to leverage these themes for political and other reasons. For instance, the governor of Pennsylvania, Tom Ridge, suggested that his state
should no longer be nicknamed the Keystone State but rather the Keyboard State.58 While a major push to online systems focused on such areas as education at the K–12 levels, e-mail within agencies, and the establishment of Web sites for most agencies to communicate with the public, tax agencies participated as well. As citizens became increasingly comfortable using the Internet, they called on their local and state governments to do the same. In one survey on this theme (conducted in 2000), a third of the respondents wanted the capability of filing tax returns over the Internet, and 27 percent wanted to pay their taxes using credit cards or e-checks.59

As tax filing went online, however, a new question arose, namely, who should pay for e-filing? Although automating tax work helped eliminate some of the most labor-intensive activities conducted by any government, providing online filing, refunds, and remittance added IT costs that, at least initially, the agencies themselves had to absorb. There was considerable debate about whether to charge citizens and businesses for this service, in the belief that it made things easier for filers and allowed them to get refunds more quickly. States chose initially to charge for the service. Thus, for example, in 2000 New Jersey taxpayers could pay their taxes using an online system but had to pay a service fee calculated as a percentage of the bill. All over the United States, the public resisted using online systems of this type so long as there were fees. In the case of New Jersey, in 2001 the government made it possible for taxpayers to remit their taxes through EFT at no cost. Using credit cards posed a tangible problem since credit card processors charged a fee for each transaction that someone had to absorb, and the volumes of dollars involved were significant. To a large extent, state governments slowly opted to make taxpayers pick up those transaction fees. In the early 2000s, state governments experimented with credit cards, EFT approaches, and other ways of using the Internet to collect taxes and drive down costs for filing and paying.60

But the clear pattern was that most states were making it possible for citizens and businesses to file and pay online.61 By late 2002, 42 states allowed individuals to download tax forms, 38 states provided tax advice online, and 35 offered online tax filing in conjunction with mail filing. Just over half (29) had launched complete online filing applications for returns where refunds were expected, and 23 of those handled returns where payments by citizens were required. The same survey also noted that approximately 16 percent of all taxpayers had used online tax submission applications.62 Meanwhile, backroom automation continued in many tax agencies. In fact, by 2002, ten states had some 95 percent of their tax records stored in digital form, while a third had over half their files in electronic formats. One result of having so many digitized files was that in thirty-seven states citizens could now view the status of their tax filings using online systems.63

In 2004, e-filing continued to grow at the state level, as with federal tax forms. In fact, nearly 45 million people and businesses filed their state tax forms online. Individuals who filed their federal taxes electronically were inclined to do the same for their state returns, with state filings actually increasing more rapidly than federal filings.64 So, we can conclude from these various statistics that tax agencies were active in
the e-government movement of the 1990s and early 2000s, reflecting the growth in use of the Internet as this new digital tool spread across society.65 We can also be confident that a fundamental change in how citizens and businesses were filing and paying taxes had been under way for some time. These activities were becoming highly digitized.

There is one additional tax issue that surfaced at the dawn of the new century that has yet to be resolved but has become an important concern to state and local governments. It involves whether or not they could charge sales tax for purchases made over the Internet. In a nondigital transaction, a person making a purchase in a store pays a sales tax on that transaction, the amount determined by the state or community in which that store is physically located. With an Internet-based purchase, the order for a product could come from anywhere in the world, often from another state. Most Internet merchants never collected sales taxes, and so long as the volumes of transactions were minor—the situation in the late 1990s—states were not too concerned. The U.S. Supreme Court had ruled in 1992 that no state could force a company that did not have a presence in that state to collect sales taxes for Internet sales. During the Clinton administration, in an effort to encourage citizens to use the Internet for e-commerce, the Congress exempted Internet-based purchases from sales taxes. But as states and communities saw their traditional sales tax bases shrinking at a time when they were desperate for additional revenues, they became quite concerned and in the early years of the new century renewed their lobbying for the ability to tax such transactions. The amounts involved were substantial and make the renewed lobbying understandable. For example, one survey provided evidence that state and local governments had lost $18.9 billion in sales taxes and predicted that this number would nearly double the following year. In fact, that happened as online sales continued to grow year after year in the new century. As of this writing (2007), the U.S. Congress had yet to lift the moratorium, while online retailers kept complaining that collecting sales taxes was too complicated a process given the number of communities and state governments in the United States.66
Local Government Tax Applications

County, city, and town governments—called local governments in the United States—shared many common financial, accounting, and managerial practices that affected how and when they used digital technologies and telecommunications. The most obvious difference between these governmental entities and state or federal agencies is that they served fewer citizens and were themselves proportionately smaller public organizations. These organizations had smaller accounting and financial departments, where the same employees did a number of tasks that in state or federal agencies would be the work of specialized departments. Thus, for example, a county or town accounting department might write payroll checks, pay bills, and collect taxes. In some communities, towns and
counties pooled their resources, such that one of the entities might bill citizens for property taxes on behalf of both governmental units. Finally, we should acknowledge that in the case of taxes, while there were income taxes in large cities, sometimes in counties, the bulk of the tax base consisted of property taxes. These taxes were based on the assessment of the market value of someone’s home or business by the local government or a private contractor working on behalf of the tax authorities, followed by the process of billing the taxpayer for property taxes. Collection processes, however, were similar to what state and federal agencies practiced. As we will see with all agencies and all applications of the digital hand, scale and size mattered. The smaller the local government, the later it embraced computing, while the reverse also held true. As computing arrived at the local level beginning in the 1960s, deployment continued expanding as the cost of this technology dropped in the 1970s and 1980s. Exceptions proved the rule: cities like New York, Chicago, and Los Angeles were some of the first municipalities to set up large data centers, because they had the volume of work to justify these expenditures. The earliest inventories of the deployment of computers did not list local installations; not until around 1960 did such surveys begin identifying uses. Even then the accuracy of such data is partially questionable because some communities first sent their data processing work to a service bureau before acquiring their own system. Nonetheless, by 1965 there were about sixty-five cities and counties in the United States that had their own computer systems. Since schools often represented the largest budget item for a town or county (up to 80 percent in some instances), we should acknowledge their use of computing as well. In 1965, a rough count suggested that an additional twenty-two school districts also had their own systems (discussed in chapter 8). As at the state and federal levels, major applications supported accounting and financial work and were installed first, particularly in the early years.67 Table 2.5 catalogs some of the digital applications and other forms of predigital information-handling technologies, such as tabulating equipment used by cities in 1965. The purpose of showing this table is to reinforce the message that the bigger the entity, the more officials used computing. We can make a second observation as well: there existed already a large body of applications of office appliances prior to the arrival of the computer that gave public officials experience with IT, and that already made them dependent on mechanization of accounting and tax collections. The very largest counties experienced a similar pattern of use and adoption. The investigator who conducted the survey reflected in table 2.5 put things in perspective. He reported that all state and local computers totaled some 250 systems, while there were approximately 10,000 installed across the entire U.S. economy in 1963. So, while the presence of state and local government was proportionately higher in the economy, their use of computers was not, a situation that changed by the end of the 1980s.68 Because of the multiple roles local accounting and financial departments played, accounting and tax functions were highly integrated, if for no other reason than the same individuals performed all manner of accounting work. They
Table 2.5 Uses of Data Processing Equipment by U.S. Cities, circa 1965

Uses                    Cities over    Cities with    Cities with    Cities with    Cities with
                        250,000        100-250,000    50-100,000     25-50,000      <25,000
Utility bills           22             22             28             20             23
Utility accounting      19             13             15             8              12
Appropria. acctg.       17             11             13             10             11
Cost accounting         16             9              11             4              6
Tax billing             20             16             17             12             14
License records         14             11             8              5              4
Payroll                 29             30             27             12             12
Mgmt. reports           21             23             19             10             4
Personnel               15             30             10             3              2
Police                  25             14             4              2              0
Source: Adapted from Harry H. Fite, The Computer Challenge to Urban Planners and State Administrators (Washington, D.C.: Spartan Books, 1965): 6–7.
used a variety of accounting equipment, evolving upward in sophistication as new technologies came on stream. The case of one county illustrated the process. Its data processing director noted that for many years prior to the use of computers, "county tax rolls and tax bills were produced by a combined operation using billing and addressing machines." In 1956, as the volume of work kept increasing, the county started using small punched-card equipment "combined with addressing machines." Yet the work remained labor intensive until 1960 when a computer was first used to automate some of the tax processing. The data processing manager is worth quoting at length on the effect it had on the work of tax collecting:

This computer permitted more complex operations and speed of preparation, but even more speed was required to provide needed service for the county taxpayers. An (even more) modern computer was delivered to the county in 1961 to print the 1961 tax roll and tax bill. . . . The use of the new computer permitted the production of the 1961 tax roll and tax bills with a computer breakdown of county tax, flood control tax, and with the total tax computed into five discount amounts.69
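The quote gives a sense of what such a tax-roll run computed but does not spell out the rates or the discount schedule involved. Purely as an illustration of that kind of calculation, and with hypothetical mill rates and discount tiers since the county's actual figures are not given, a modern sketch of the arithmetic might look like this:

```python
# Illustrative sketch only: the county quoted above does not describe its
# actual rates or discount schedule, so the mill rates and discount tiers
# below are hypothetical placeholders.

COUNTY_MILL_RATE = 0.015         # $15 of county tax per $1,000 of assessed value
FLOOD_CONTROL_MILL_RATE = 0.002  # separate flood-control district levy
# Early-payment discounts, e.g., 4% in the first month, then 3%, 2%, 1%, none.
DISCOUNT_TIERS = [0.04, 0.03, 0.02, 0.01, 0.00]

def tax_bill(assessed_value):
    """Return the county tax, flood-control tax, total, and the total
    restated at each of the early-payment discount amounts."""
    county_tax = assessed_value * COUNTY_MILL_RATE
    flood_tax = assessed_value * FLOOD_CONTROL_MILL_RATE
    total = county_tax + flood_tax
    discounted = [round(total * (1 - d), 2) for d in DISCOUNT_TIERS]
    return {
        "county_tax": round(county_tax, 2),
        "flood_control_tax": round(flood_tax, 2),
        "total": round(total, 2),
        "discount_amounts": discounted,
    }

print(tax_bill(80_000))
# {'county_tax': 1200.0, 'flood_control_tax': 160.0, 'total': 1360.0,
#  'discount_amounts': [1305.6, 1319.2, 1332.8, 1346.4, 1360.0]}
```

A 1961 installation would have run comparable arithmetic as a batch job against punched-card records rather than as a script, but the calculation itself was the heart of the tax roll and tax bill run the county described.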
This pattern of ever evolving capabilities occurred in almost all towns, cities, and counties throughout the second half of the twentieth century. In the years immediately following World War II, accounting departments added to their inventory of accounting equipment, moving from adding and calculating machines to more complex billing and tabulating equipment to handle growing volumes and complexity of tax work. They faced the challenge of calculating taxes, billing citizens in a timely and accurate manner, collecting and
accounting for the funds received (or not received), and posting entries. Often these processes were a combination of intense manual labor supported at various steps by machinery. Communities acquired equipment to speed up work, minimize the number of additional employees who had to be hired, and sustain accuracy. Billing was of particular interest because many communities also invoiced citizens for various utilities, such as for water usage.70 So billing was a high-volume transaction activity perfectly suited for office equipment and, later, computers. All the cases reported on regarding use of IT in the 1950s emphasized speed, accuracy, and increased capacity. There was hardly a local government entity in the nation that did not use some form of office equipment to handle tax collections and billing.71

The scores of documented cases make it quite clear that in the 1950s local governments all over the nation evolved their tax collection processes into more formal, mechanized forms, moving from relying on simple accounting to more complex equipment. Even large cities that had resisted using various forms of automation before now fell in line. For example, the city of Philadelphia—the third largest city in the nation—which, for many political and managerial reasons, had barely used accounting equipment across its entire tax processing functions, did so in 1958, when it mechanized its 540,000 assessment records using punched-card equipment.72 So, by the time communities began turning to computers, they had accumulated a body of local experience in partially automating and mechanizing tax accounting.

A few communities began using computers to do this kind of work at the end of the 1950s. Some small communities rented time on a computer, such as Patchogue, New York (population 8,200, with 3,200 tax bills), which used a Univac 120 and 10 percent of an accountant's time to manage its tax process.73 Other cities, such as New York and St. Louis, acquired their own computer by the end of the decade.74 By the early 1960s, some communities were integrating various tax and accounting functions in more automated ways than before, particularly as they began using service bureau computers and then their own systems. Whereas in the 1950s individual steps in the process were either automated or assisted through mechanized actions, in the 1960s steps were linked together, made possible either by the use of more sophisticated accounting machines that updated various records from the same data entered into a system, or through increasing use of computers as time passed.75

In the 1970s, deployment of computers to process tax billing, receipts, and accounting spread across all fair-sized and large cities, many counties, and to smaller communities, either through direct use of such technology in-house or via a service bureau. Applications were normally batch, although some online query capabilities became available by the end of the decade. Pressures on local governments to keep down their costs led officials to continue automating all kinds of accounting work, not just tax processing, with the result that during that decade and into the 1980s, their data processing budgets kept growing. Service bureau work came in-house in ever increasing amounts in the late 1970s and early 1980s, for example, as the cost of having one's own system dropped and as internal skills in IT increased.
In a major survey done in 1983 on local government use of computers, with 743 responses, 82 percent reported having internal data processing
operations. The pattern of wide adoption was evident all over the nation and not restricted to one region or size of community. The largest communities continued to be the most extensive users of IT (88 percent of large cities, towns, and counties, 63 percent of townships). They used all the major computer products offered by IBM, Burroughs, NCR, Hewlett-Packard, Data General, Digital, Sperry-Univac, Wang, Honeywell, and others, solid proof that local governments used everything from large IBM mainframes to minicomputers from Digital, down to word processing and accounting equipment. If one combined in-house and service bureau uses of computing, then 63.4 percent of townships now used computers; municipalities reported 87.8 percent, while cities with populations of over 50,000 but less than 150,000 were at 97 percent; all larger cities used computers.76

What were local officials doing with all these computers, and how important was tax accounting in this mix of applications? By the early 1980s, the major categories of applications resident on computers were accounting, assessment, budgeting and management, voter registration, public safety, treasury and collections, utilities, purchasing and inventory control, planning and zoning, and sanitation management. Within treasury and collection applications, property tax records and billing done with computers ranged from 64.9 percent of all small communities to just over 50 percent of large cities. Similar proportions of use were evident for special assessments and for maintaining tax records, and just slightly less for property tax assessment processes. In short, within a period of a quarter of a century, local government had gone from no use of computers to over 50 percent in aggregate, and if we look at large communities of over 10,000 residents, then deployment had reached 90 percent or more.77

Because most local governments came to computing later than the IRS or state governments and also had to contend with smaller volumes, their use of more modern equipment and software allowed them to have more contemporary applications than either of those earlier users. That meant, for example, that local governments could upgrade systems more quickly to more cost-effective ones. The fact that major vendors included minicomputer manufacturers provided clear evidence of the greater use of more modern systems than could be found at the IRS. For example, in 1983, nearly 6 percent of all users of computers used H-P equipment, another 5.1 percent Data General's, and yet a further group of 4.7 percent Digital's. So, just in the area of minicomputers, nearly 16 percent deployed this third generation of technology to do accounting work.78

One area of computing that grew all through the 1960s, and has continued down to the present with direct bearing on taxation, was the emergence of mapping applications, usually called GIS (Geographical Information Systems). While this application is discussed in some detail in chapter 6, suffice it to point out here that one of the reasons for using computers to track land uses and to digitize maps of properties was in support of tax assessments. To be sure, such systems were also used to do planning and to build and maintain water systems and highways. But by the 1980s, these systems were also emerging as extensions of the tax assessment process. Geographical Information Systems software creates maps of the land and is used to track ownerships, splits, and changes in property borders and to calculate property values, often using satellite
photography and, in earlier times, CAD/CAM-like software tools. In effect, digitized maps made it possible for tax assessors to do their work with less manpower required to visit properties and talk to owners.79 Meanwhile, systems went online in the 1980s and early 1990s so that accounting personnel could respond quickly to taxpayer requests for information regarding assessments, tax bills, and such, by accessing their files through CRTs. On the eve of the arrival of the Internet, online access to tax files was fairly widespread in large and small communities since they had upgraded their batch systems over the years in an incremental fashion, moving data from the cards and tape of the 1960s and 1970s to direct access disk drives in the 1980s and 1990s.80

Migration to Internet-based tax applications, however, proved slower to accomplish. Local governments fretted over what to make available over the Internet and how to protect the privacy of such data. However, use of the Internet by local governments took place in many other areas, such as in providing information about various services of specific agencies and departments, data on how to apply for licenses, and information about community events.

Like the federal and state governments, local governments were being affected by the growing use of IT across the American economy. Historically, local taxes were based on physical property (such as land, buildings, and machinery), sales taxes on goods and services sold within their borders, and a few other miscellaneous sources of taxation. But with the increasing shift of the nation's production and consumption from products to services, many of which could take place anywhere in the nation and not necessarily where the transaction was initiated, debate about the fundamentals of tax sourcing began in the 1990s. As Internet sales increased in the late 1990s, the denial of sales taxes to local communities (mentioned earlier in this chapter) posed a potential threat to their tax base. A third conundrum just looming on the horizon was how to tax the growing creation of intangible assets as people made fewer things but did more knowledge-based work that had economic value and thus, in theory, could be subject to taxation. Added to these was the rivalry that grew all through the 1980s and 1990s among local jurisdictions for tax revenue, with towns, cities, counties, and states competing, for example, for sales and use taxes.

At the dawn of the new century, these issues were of greater interest to large communities and to state and federal governments than to small municipalities. But nonetheless, they were emerging as part of a broader discussion about the fundamentals of taxation in this nation, a debate that was only just beginning as this book went to press in 2007.81 In short, the digital hand was beginning to affect local government in ways officials could not have imagined even a decade earlier.
Tax Preparers and Payers

We now move away from the work of government agencies that process tax returns and collect revenues to the two communities they deal with the most: accountants who prepare tax returns on behalf of clients and the taxpayers
themselves. Both the preparers and payers increasingly adopted software tools to facilitate preparation of tax returns, beginning largely in the early 1980s and continuously expanding their use of IT over time. In fact, their deployment of such digital aids was part of the reason that the IRS was able to step up its plans to offer e-filing at the dawn of the new century. While in the 1950s no accounting firm used software to prepare returns, by the early years of the 2000s, over half of all returns were prepared using computers. The transformation from pencil and paper practices began with accountants and, with the arrival of the PC, spread to individual taxpayers.

We should recognize a couple of other realities. First, there were various types of accountants and tax preparers: the high-end, well-trained employees at Deloitte, PriceWaterhouse, and other elite accounting firms; small bookkeeping CPA firms and corporate entities such as H&R Block; and mom-and-pop firms or individuals with little formal accounting or even tax preparation training. We should also recognize that this conglomerate of preparers as a whole had little appetite for the IRS to digitize their work, and many resisted and fought the proposed transition. With those facts in mind, we can begin looking at the experience of the accountants.

There are essentially three applications of IT used by this community. The first is tax software to collect filing information, to fill out digital versions of federal and state tax return forms, and to do the necessary mathematics. That is not the same as traditional accounting software, which is used all year to perform normal accounting and bookkeeping. There are three attractions of such software packages: they are relatively easy and quick to use, once the user understands the package; they drive down the cost of offering a client the service, which is important in what has always been a competitive market; and they are accurate, avoiding many of the mathematical or judgmental errors that always plagued manual approaches. The second type of software used by such accountants consists of packages to perform tax research, such as to find out what the tax laws called for. Much as the legal profession relied on its research tools, tax preparers increasingly came to rely on such software in the 1980s. The third group comprised a myriad of digital tools, including Web sites through which firms could inform clients of their offerings and individuals could communicate with accountants, e-mail, and e-filing software tools and telecommunications links to state and federal tax agencies.

The first two appeared initially in the 1970s as emerging applications of computing. They became reasonably widespread in the 1980s among large accounting firms (such as H&R Block) and in the 1990s spread to smaller firms. Online communications with individual tax clients is a story of the 1990s and beyond, although communications with businesses began in the 1980s.

Tax preparation on behalf of clients had existed all through the twentieth century, and accountants were important customers for all manner of accounting equipment of the day, from such firms as Burroughs, NCR, Monroe, and Felt & Tarrant, and typewriters from an equally prestigious list of suppliers, not the least of which were Remington, Underwood, and IBM. But as with tax collecting agencies, these two classes of equipment—adding machines and calculators on the
one hand and typewriters on the other—were used to assist in what was essentially a manual process: typing the necessary data onto forms and augmenting hand calculation of totals with office appliances. Up until the years following World War II, accounting firms did all manner of accounting, not just taxes (many still do). Then some began specializing in one or a few aspects of accountancy. H&R Block, one of the nation's largest tax accounting firms, was typical. Founded in 1946, it did all manner of bookkeeping for small firms but in 1955 decided to concentrate solely on tax preparation. The focus proved to be a winning strategy. Within one decade the firm annually prepared over a million returns and, in 1975, generated in excess of $100 million in revenues for this kind of work. In short, tax preparation in the years when tax law became more complex and the number of taxpayers grew proved to be good business.82

While lawyers, individual accountants, small accounting firms, and, increasingly over time, large tax preparation firms did tax filing preparation, the largest seemed to embrace IT the earliest. H&R Block—the nation's largest tax preparation firm in the last quarter of the century—began extensively using software tools in the 1970s and, in 1980, even acquired CompuServe, an early online consumer service. In 1986, H&R Block filed 22,000 electronic returns on behalf of its clients, relying on software preparation tools to get the job done. Exactly ten years later (in 1996), it prepared one out of every nine tax returns filed in the United States, giving it access to many taxpayers who could possibly be convinced to start allowing the firm to file electronically on their behalf.83 Tools for tax preparation services were often products from small software firms in the 1970s and early 1980s. One of the most successful of these was Intuit, which was founded in 1983, built its early business on Quicken, a financial software product aimed at the home market, and over time added software products for use by professional accountants.84

Online tax preparation took off in the 1990s. To put that statement in perspective, consider that the total number of returns filed in that decade grew by 13.7 percent, while the use of tax accountants grew by 26.4 percent. And as noted earlier in this chapter, the percent of all tax returns filed electronically by individuals and accountants also grew substantially during the decade. Even short forms, such as the 1040EZ, were often prepared by accountants by the end of the 1990s. This work also was an example of new opportunities for generating income made possible by the digital hand. For example, some of the 1040EZ business for tax professionals was tied to Refund Anticipation Loan (RAL) products, which gave taxpayers a loan against a portion of their refund and became a whole new source of revenue for tax accountants. In short, tax preparation had become a mainstream digital application. In tax preparation year 2001, 21.1 percent of all such forms submitted to the IRS were signed by tax preparers.85 While hard data on the number of tax returns submitted electronically by preparers in earlier years are difficult to come by, we do know that for 2000 the share exceeded 57 percent.86 Complexity of the tax code and the time required to fill out forms had nurtured expansion of tax preparation services, augmented by an ever-increasing number of reliable software
tools, forming a virtuous circle of opportunity and adoption. By the end of the century, the IRS had initiated a variety of programs to encourage even further electronic filing and aimed some of its promotional programs at the professional accounting and tax preparation firms. Ironically, and perhaps not surprisingly, in the early years of the new century, as software tools that individual filers could easily use appeared on the market, the number of clients for these firms actually dipped slightly. The dip may be explained by the recession then under way, which slightly reduced the total number of tax returns filed (both paper and electronic). Nonetheless, the number of electronic filings these firms did as a percent of their total work remained high (71.9 percent of all filings done in 2001, 67.1 percent in 2003).87

In the 1980s, many software firms rushed into the market to provide professional tax preparers with software products, but, by the end of the 1990s, the number had dwindled to fewer than a couple of dozen enjoying broad market appeal. Many other niche tools also were on the market. A survey done by the CPA Journal of tax preparers in the state of New York in late 2002 pointed out that, on the whole, accountants were pleased with the performance of these tools. Their biggest complaint concerned the consolidation of software firms, which meant products they were familiar with went out of use and they had to learn how to use new ones. The same held true for tax research software, even though there were fewer options in what one could use. Two-thirds of all the firms surveyed were extensive users of all manner of IT, from preparation software to Web sites.88 The reasons why remained the same as in the 1980s and 1990s. One report in 2003 summed up the rationale quite clearly: "Tax practice has changed significantly over the last decade as computer technology has enabled local and regional practitioners to adopt sophisticated tax compliance and research software to better service their clients. The relatively low cost and high availability of Internet resources have also influenced the way accounting professionals communicate with existing and potential clients."89 But at the same time, users were complaining more about the constant changes in their software tools. Those noted above were surveyed in 2002; in the following year, they increased their use of software tools to over 80 percent, so the complaining did not stop their increased use of the digital. Their lingering concern now shifted to how secure the Internet was (or was not) as a tool for accepting and transmitting sensitive financial data back and forth to clients and to tax agencies.90

As this chapter was being written, the market for software-driven tax preparation continued growing but was increasingly being serviced by larger firms using fewer software tools. On the tool side, as of late 2004, there were only sixteen software firms that had products that could handle all the federal tax forms and calculate taxes from all the states for individual income tax returns. That number represented a decline of some 20 percent over the prior decade. That trend forced about 13 percent of preparers to switch packages, always a tension-filled activity as they learned new ways of doing their core tasks.91 Table 2.6 lists some of the most widely deployed products used by tax
Table 2.6 Tax Preparation Software Widely Used by Tax Preparation Firms, 2004

Vendor                                               Tax Program
Drake Software                                       Drake Software
Research Institute of America                        GoSystem Tax RS
Intuit, Inc.                                         Lacerte
ATX, Inc.                                            Max Plus
CCH Tax and Accounting, a Wolters Kluwer company     ProSystem fx Tax
Universal Tax Systems Inc.                           TaxWise
TaxWorks by Laser Systems                            TaxWorks
Thomson                                              UltraTax

Source: Stanley Zarowin, "Users Size Up Tax Software," American Institute of Certified Public Accountants, October 2004, http://www.aicpa.org/pubs/jofa/oct2004/zarowin.htm (last accessed 1/8/2005).
preparers for individuals and businesses. The percents of usage are based on the survey by the CPA Journal done in 2004. As happened in so many industries, as use of digital tools spread from suppliers of goods and services to their customers, new possibilities turned into new waves of innovative services only made possible by the growing infrastructure of IT capabilities. One clear example of this phenomenon was just beginning to appear with tax preparers. Beginning in late 2004 and expanding in 2005, tax preparers targeted their services at teenagers and very young adults, often offering free or inexpensive services online in hopes of enticing them to become clients in subsequent years. H&R Block, for example, provided free online federal tax preparation for those under the age of eighteen earning less than $10,000. Intuit, Inc., which sells TurboTax, a software package that individuals can use to prepare their own returns, went after the same demographic with a Web site called RockYourRefund.com, displaying images of shirtless fun-loving beachgoers enjoying their tax refunds, all for a $5.95 tax filing fee. The site also offered discounts on purchases of electronics or travel. In both instances, the companies recognized that the youngest taxpayers often did not file returns, even though they had refunds due them, or their parents filled out the forms. In either case, these people represented a new segment in an otherwise slow but growing market; they were already on the Internet and thus could easily add financial transactions to their activities.92 It was no surprise that competing firms would pursue them. While professional preparers were changing their practices to incorporate extensive use of the digital, individuals filing returns were also busy at work embracing software tools, although not to the same extent. This did not happen overnight either. Several preconditions had to be met before individuals and small businesses could use software to prepare and file electronically. The most
important of these was either owning or having access to a personal computer. While such devices first came on the market in the late 1970s, it was not until the 1980s that they spread widely across the American scene, and even more so in the 1990s, often with access to the Internet by the end of that decade.93 The second prerequisite turned out to be an appetite for spreadsheets, personal financial planning, and management software. These could be used to manage household finances, plan investments, and pay bills. By the early 1980s, there were dozens of software products on the market to handle these kinds of transactions. In December 1984, Intuit shipped the first version of Quicken, which eventually became one of the most popular home financial planning products on the market. Over time, new releases added functions, while other vendors passed into history or saw their market shares shrink. By the early 1990s, Intuit had several million users. Then in 1993, Intuit acquired TurboTax, the tool that would be used by so many individuals to prepare their first taxes with the aid of software.94 By the early years of the new century, over 20 percent of all households used some sort of financial planning software product.95 So, long before online or even digitally based tax returns were prepared, many Americans had already become familiar with PC-based software for financial management.

Initial users of tax preparation software tended to be individuals with higher-than-average incomes who often were well educated and had an assortment of mutual funds, retirement accounts, and stocks. The majority of those using tax preparation software had prior experience with other financial planning digital tools, which made it easier to prepare their own returns and to avoid the greater expense of having a tax preparer do the work.96 As the Internet became a viable means for filing electronically by the end of the century, e-filing began a slow but continuous deployment, as discussed earlier in this chapter. By 2005, the IRS had an aggressive campaign under way to promote such filing and even announced that it would be retiring use of telephone systems as a paperless filing method (TeleFile). When it announced that taxpayers could file free of charge, using software tools developed in the private sector, taxpayers found that they had fifteen different software packages they could access.97

Historians will someday recognize that this agreement to allow citizens to file this way was truly a landmark event in the history of American tax collecting. Professional tax preparation firms had fought mightily against the IRS's making it possible for individuals to file electronically. The preparers lobbied Congress, which frequently reacted behind the scenes, threatening the IRS with budget cuts if it continued to move forward on building its own Internet filing site. Nonetheless, the agreement went into effect. Tax preparers accepted the changing nature of things. For example, Intuit promoted its product, TurboTax, both on the Internet and through retail channels, emphasizing that it would "cut down on errors and save time," while providing verification from the IRS that it had received one's returns. Refunds also came in more quickly. For example, in 2004, e-filers who had their refunds deposited directly into their bank accounts (via EFT) received them, on average, in just over two weeks, while paper filers waited anywhere from two to six weeks, a point emphasized by tax software vendors.98
Both large and small businesses also either used these same tools or wrote their own and, like individual tax filers, had long experience with digital financial tools. Since the 1960s, the Fortune 1000 companies—the largest firms in the United States—had used software to assist in the preparation of their corporate returns. By the late 1980s, some 60 percent of those firms had computerized over half the work done by their tax departments; one survey suggested that a third had automated 70 percent or more of their work. Personal computers were the favorite digital tool in both large and small firms. Larger firms used these in conjunction with mainframes that housed the large financial accounting systems they needed to access in order to do their tax work. In short, like individual preparers, big and small firms had moved to software tools in the 1980s. Down to the present, the application has remained virtually ubiquitous, particularly in large corporations, and less so in small firms, which have tended to follow more closely the pattern of deployment we witnessed with individual filers.99

The IRS operated in tandem with these developments. The first truly modern e-filing application from the IRS was for the 1120 (used for corporate returns) and the 990 (used for returns of tax-exempt organizations) as part of its Modernized E-File Project. Later, the IRS mandated electronic filing by all large corporations and tax-exempt entities. During the early 2000s, the IRS began modernizing its digital 1040 software to sit within the same system.
Conclusions

The deployment of the digital hand in the world of tax filing, collections, and compliance followed a pattern evident across many industries and applications. The work was paper-intensive for all parties concerned. The subject was fraught with complexity and frequent errors in mathematical calculations and data entry. The volume of time, people, and transactions was always massive, to say the least. In short, from nearly the earliest days of digital computing, tax work proved an ideal use of computers. The benefits all sought were speed, ease of use, accuracy, and lower operating costs. Both tax agencies and tax filers in general were able to reduce paperwork, speed up processing, increase accuracy, and sometimes even ease the complexity of the work. For individuals filing, costs as measured either in time spent on the process or in paying tax preparers did not go down, although self-preparation using software packages proved so convenient that the cost of the software was more than offset by the time people saved in preparing returns.

The larger the tax agency, the more complex digital applications became in support of the collection and processing of returns. The IRS became the epitome of complexity and largeness, even when compared to private corporations. Conversely, small towns and counties were able to use computers in support of tax work relatively easily and as part of their general accounting and financial activities. The experience of the IRS teaches us that the use of computing involved many nondigital aspects, such as the role of institutional focus,
management attention, and having the right project management, programming, organization, and leadership skills. This discussion is not about incompetence at the IRS—which most GAO audits implied or stated—but rather about the realities of complex applications of the digital and of organizational operations. Smaller tax agencies also faced these kinds of issues, but again, the smaller the tax department, the easier it was to manage the adoption and use of software in support of tax work.

The experience of people in the United States with tax applications highlights the symbiotic and iterative evolution of adoption. First, we saw the IRS and large states begin using software to do tax work, followed by ever increasing waves of midsized states and large cities, then by even smaller states, cities, counties, and towns. As technology either became less expensive and modular or easier to use, adoption spread across all tax and financial departments in the public sector. Right on the heels of the large tax agencies were the financial and accounting departments of sizeable corporations, also embracing use of the computer to do their tax work. We saw as well a similar pattern of ever smaller companies using computers over time. Almost simultaneous with adoption of computing by smaller firms were the professional preparers, who often were the organizations that introduced computing to smaller companies as part of their accounting services. Finally, with the arrival of the PC, online access (even before the Internet), and financial planning and tax preparation software, individuals began using the digital in support of their tax work. As each constituency embraced the computer, it caused other participants in the process to alter their use of computing. Thus, as companies, some tax preparers, and individuals began pressuring the IRS and state agencies to allow e-filing, tax collectors had to acquiesce. Congress, for example, in the late 1990s passed legislation directing the IRS to step up its support of e-filing and provided tools to do so, such as authority to use paid advertising to promote e-filing.

Use of computing evolved over time as a function of changes in technology and as a growing base of experience and stock of installed applications became integral to daily work. In the early 1960s, the IRS accumulated massive card and tape files, and processing occurred in batch mode, that is to say, not real time or online. As technology evolved, online systems were added to the stock of digital applications, often building on these by providing windows into databases of information that could be examined or updated in real time by a person at a terminal. As telecommunications improved in various forms in the 1960s and 1970s and were later reinforced with secure transactions over the Internet, moving information from one point to another became possible and desirable (e.g., e-filing). Every tax agency I studied in preparation for writing this chapter added applications and changed earlier ones to provide new functions or simply to improve operations over earlier digitally supported processes. It was a constant evolution, iterative, incremental, and cumulative. Thus, a tax department of 1950, cluttered with adding machines, tabulators, and calculators, would seem an alien, unfamiliar landscape when compared with what such an organization looked like a half century later, when mainframes and PCs were the nerve centers. To be sure, much paper still remained,
because not all filers had gone online and taxes had become more complex, often requiring more forms or other documentation. The IRS still desired to replace older systems with newer ones, and its conversion projects were massive in vision and complexity. But when the IRS changed applications in an incremental fashion, it enjoyed successes for most of the half century; when it attempted large wholesale replacements, it either simply failed or took decades to accomplish the work. Technology changed faster than the IRS could, which simply complicated matters. The earliest users of large systems tend to be saddled with large problems of inertia and complexity in their attempts to move to more modern systems—a lesson from the IRS. Those who embraced computing later, even with large systems, tended to have an easier time of it because newer digital tools lent themselves to incremental upgrades and changes more than the earliest systems did. Changing a large system, such as those of the states of New York, California, and Pennsylvania, or of the IRS, was tantamount to trying to change a flat tire on a moving vehicle. One could not stop collecting taxes for several years to enhance older systems. So tax agencies did what the private sector also did; they changed systems incrementally, and that is why the history of tax applications is not the story of revolutions in computing but rather a long tale of evolutionary transformations from no computers to extensive reliance on the digital hand. The IRS commissioner who was most able to change the agency in the second half of the century concluded from his experience "that it is wrong to assume that a big, entrenched institution that gets into deep trouble cannot be changed for the better. The crisis can be turned into an opportunity. If it is important enough to do, it can be done."100

In the next chapter, we face two sets of issues. The first is the adoption of applications that were complex, indeed, some more so than those at the IRS; the second is a portfolio of uses far more varied than the focused work of a tax agency. The military services and the U.S. Department of Defense were arguably the largest users of computers in the world during the twentieth century. Uses ranged from the mundane, such as doing payroll, sometimes for several million "employees," to supporting logistics and complex supply chains that required almost every conceivable product available in the American economy to be ordered, transported, and consumed, sometimes under combat conditions. That is the story to which we turn next.
3

Digital Applications in Defense of the Nation

Information systems have become essential ingredients to the success of combat operations on today's battlefield.
—General Colin Powell, 1992
The U.S. Department of Defense (DoD) is one of the largest users of computers and telecommunications in the world and the largest within the federal government. Its uses of these two bodies of technologies have historically been some of the most advanced and complex as well. This department used all manner of technologies, but none proved so central to the way it conducted its affairs as these two during the second half of the twentieth century. Computing and telecommunications spread across all the uniformed services, and civilian employees used the digital hand to assist in such activities as accounting, financial reporting, procurement, and logistics. What President Dwight D. Eisenhower called the military-industrial complex included the wider community of companies that the DoD often called upon to develop new uses of computing or to apply the digital hand in the creation of new weapons systems. Throughout the second half of the twentieth century, the DoD often supported development of more advanced uses of computers, pushing forward the state of the art of digital and telecommunications technologies during the Cold War. The development of the network we later came to know as the Internet was one of many important examples. In short, the combination of stimulating R&D in technology, then using the results in practical ways linked to the core missions of the department, presents us with a very large case
study of how computers were broadly used across the American economy and, more narrowly, in the public sector. A great deal of the story of how the DoD promoted the development of new computing technologies from the 1940s through the 1990s has been studied by historians.1 While the broad lines of those studies will be summarized below, this chapter focuses on the use of computers and telecommunications, a story that has not been told in any comprehensive manner. By doing that, we can demonstrate the extent to which this department relied on computing to do its work and the degree to which its uses of the technology changed how DoD evolved over time. We care about that story for all the obvious reasons, but also for one other. DoD was more often than not a department equal in size to some important American industries.2 In times of war, for example, it employed as many people as the banking or construction industries. It was also a world of its own, acting very much like an industry. It had its own values, language, organizations, methods for doing things, allies, constituents, suppliers, and value to deliver. It always had its own publications, conferences, training programs, and so forth. In these ways, for example, DoD was no different from other established industries. It may seem an odd perspective of the department but nonetheless a useful one for understanding this world, because like so many industries, actions taken in one part of DoD affected the work of other agencies, uniformed services, and vendors. Each learned from the other and influenced each other’s thinking and actions. We will not review in detail the role of the broader community that existed outside of DoD that made up part of the military-industrial complex, such as manufacturers of military aircraft. I described their uses of computing in the first volume of The Digital Hand. What is added below are comments about how that community interacted with DoD in developing or deploying IT. To be sure, my review covers research and development, but also logistics, ordnance, weapons systems, training, combat operations, Information Age warfare, and various noncombat applications. A brief discussion about deployment closes this review of DoD’s activities to provide a more comprehensive picture than we have had before of how IT came into DoD and to what extent. I do not discuss intelligence activities—a major function at DoD—because of the lack of sufficient publicly available information about its use at this time.
Makeup of the Department of Defense

In order to appreciate the role of the digital hand at DoD, it is important to understand how that department organized itself and its work. The National Security Act of 1947 established the National Military Establishment, led by the secretary of defense, and included three military departments—Air Force, Army, and Navy—along with a variety of other agencies. Legislation in 1949 changed
its name to the Department of Defense and reaffirmed its status as an executive department, with the secretary reporting to the president. To a large extent, the organizational history of the DoD is about the growing power of the secretary and the Joint Chiefs of Staff and about rivalries among the uniformed services over influence, funding, and the scope of their missions. Others have addressed the history of those rivalries, a story largely outside the scope of our interest in understanding the role of computers.3 However, it is important to understand that this department combined the uniformed military services—Army, Navy, Air Force, Marine Corps, and, in wartime, the Coast Guard—and a variety of civilian agencies in support roles, employing a combination of civilian and military personnel. The size of DoD proved so influential on the role of computing both in the department and across the American economy that the demographics and size of budgets need to be understood. In 2005, for example, the defense budget consumed 21 percent of the entire federal government's $2.5 trillion budget. In other words, it was over $500 billion, a figure that does not include other supplementary allocations.4 During periods of war, that percentage of the total budget always climbed higher. Table 3.1 catalogs totals for various DoD expenditures for the period since 1950. The growth in expenditures derived largely from the expanded duties of the department throughout the Cold War and from waging the Korean Conflict, Vietnam War, Gulf War, and Iraq War, not to mention carrying out various smaller missions, such as the hundreds of occasions of helping people in natural disasters or rescuing people at sea, and funding smaller military engagements. Table 3.2 documents the number of people employed by DoD, including military personnel, civilian employees, and others outside of DoD doing work for the department, in short, President Eisenhower's "military-industrial complex." A quick glance at the employment figures reveals first that the number of civilian employees as a percentage of all DoD employment was quite high throughout the period, requiring that any survey of the role of computing in DoD take into account their use of technology. Second, the role of outside contractors, whether in the development and manufacture of weapons systems or in running cafeterias, also constituted an important contingent. Combined, the data in tables 3.1 and 3.2 demonstrate that for the entire period, DoD was a large component of the government and, by extension, of the American economy at large, both in times of peace and war.
Table 3.1 U.S. Department of Defense Spending, 1950–2005

Year     $ Million
1950        42.6
1960        92.2
1970       195.6
1980       591.0
1990     1,253.2
2000     1,788.8
2005     2,340.0
Source: Office of the Under Secretary of Defense, National Defense Budget Estimates for FY 1998 (Washington, D.C.: United States Government Printing Office, March 1997): 160–161; ibid., for 2005 (Washington, D.C.: United States Government Printing Office, 2005): 205.
Table 3.2 Total Defense-Related Manpower, 1950–2005 (thousands)

Year   Active Duty   Civilian   Total DoD   Defense-Related
1950      1,459         710        2,169         2,883
1960      2,475       1,195        3,670         6,131
1970      3,066       1,264        4,330         6,729
1980      2,063         990        3,053         5,043
1990      2,144       1,073        3,216         6,332
2000      1,449         698        2,147         4,572
2005      1,455         688        2,143         5,618
Source: Adapted from various tables in Roger R. Trask and Alfred Goldberg, The Department of Defense, 1947–1997: Organization and Leaders (Washington, D.C.: Historical Office, Office of the Secretary of Defense, 1997): 171–176; U.S. Census Bureau, Statistical Abstract of the United States: 2002 (Washington, D.C.: United States Government Printing Office, 2002): 329; ibid., for 2005 (Washington, D.C.: United States Government Printing Office, 2005): 213. Statistics varied from one source to another for any given year; however, when compared, the differences were slight.
Patterns of Research and Development

Historians agree that the investments made by DoD in the development of computers in the 1940s and 1950s made it possible for this class of technology to reach a level of effectiveness such that it could be used by the private sector. Along with economists and other researchers, they have documented how this enormous investment in R&D gave the entire United States economy a leap forward toward the Information Age ahead of all other nations.5 To a large extent, the motivation for this enormous investment was the need of the American government to respond to the military threats posed by the Cold War, dangers that ranged from the development of nuclear weapons to complex guidance systems for missiles, to advanced avionics for aircraft and command and control systems for large naval fleets.6 From early on, officials in both the uniformed services and on the civilian side of the War Department, and later in the Department of Defense, saw the possibility of using computing to perform complex calculations, to coordinate rapid air defense and combat command decision making, to support the development of increasingly lethal weapons (later "smart" weapons and ordnance), and to support vast logistical processes. It was a faith in the potential benefits of using computing that grew more intense over time, beginning with hints of possibilities worth investing in during the 1940s and then morphing into widespread endorsement of the technology by the late 1960s. Paul Edwards has argued in his study of the military's use of computing that the technology became a major source of the techno-world view the military embraced throughout most of the era of the Cold War.7 Accomplishing the task of developing new
technologies, and then new uses for them, required a variety of strategies that ranged from in-house development at facilities run by the uniformed services to national laboratories, even delegation to other departments, universities, and think tanks, and, of course, to the private sector. It was a complex ecosystem that spun off vast amounts of new tools and uses. Management of the R&D process also varied, with the uniformed services, multiple civilian agencies within DoD (most notably DARPA), and agencies elsewhere, such as the National Science Foundation (NSF) and the National Security Agency (NSA), funding and directing research agendas. One historian who has extensively studied the influence of computing on the thinking of the military, Paul Edwards, summarized the focus of the research activities over the course of the last six decades of the twentieth century: "First, air defenses, then strategic early warning and nuclear response, and later the sophisticated tactical systems of the electronic battlefield grew from the control and communication capacities of information machines."8 Almost every major weapon system from the late 1950s forward involved the use of digital technologies either for its development (such as the design of aircraft) or its operation (for example, ballistic missiles and smart bombs). Defensive systems aimed at providing early warning of enemy attack also involved use of computers, such as SAGE in the 1950s through the early 1980s, and development of the "Star Wars" Strategic Defense Initiative (SDI) from the 1980s to the present, an effort still under way during the early years of the new century. Command (military jargon for leadership) and control (military term for management) systems, used to coordinate complex air, naval, and later land battles, also were the subject of R&D. The DoD devoted considerable attention to the creation of complex and sophisticated logistics systems (often what in the private sector is referred to as inventory control or supply chains). These clusters of work called for the use of operations research (OR), artificial intelligence (called expert systems in the private sector), and, most recently, exploitation of RFID technology (early 2000s).9 In short, for a wide range of activities of interest to the Pentagon, development of new computer technologies and applications became ongoing activities that began in the early 1940s and have continued to the present.

The early research projects of the 1930s and 1940s have been well documented, so they need not detain us here. However, it is important to understand how they were managed by the military. An initial, popular approach was to seek out the help of American universities, institutions that already had collections of scientists and electrical engineers who could immediately be put to work on projects, such as the development of ENIAC at the University of Pennsylvania during World War II,10 other work at the Radio Research Laboratory at Harvard University, and at MIT's Radiation Laboratory. The latter emerged with the largest research projects on behalf of the military by the late 1940s, employing some four thousand people across sixty-nine colleges and universities.11 The results of these early projects were impressive: the ENIAC for compiling ballistics firing tables for antiaircraft weapons and army artillery, radar from Harvard and MIT, and the largest of all the early computer projects, SAGE, led by MIT. All these institutions included subcontractors from the private sector in these various
projects. The combination of university and private sector doing R&D emerged as the favorite research strategy used by DoD for decades.12 R&D increased at the Pentagon in the late 1940s and early 1950s, running into the hundreds of millions of dollars each year, often comprising 80 to 90 percent of all federal funding for all manner of military R&D.13 Various agencies emerged in the 1940s to direct the research. American scientist Vannevar Bush, builder of early analog devices at MIT in the 1920s and 1930s, created the Office of Scientific Research and Development (OSRD), which coordinated much R&D during World War II. The Ballistics Research Laboratory (BRL) was also an early investor in various ballistics projects, beginning with the ENIAC, which was moved to its facilities in Maryland in 1947. Additionally, the Navy established the Office of Naval Research (ONR) in 1946. As the Cold War heated up, along with the attendant nuclear arms race and later space race, both the Navy and Air Force took aggressive steps to ensure funding and management would be in place to sustain the development of new digital tools.14 The ONR became the leading military agency funding projects in the late 1940s and 1950s, with the Air Force playing a similarly important role.15 Professor Edwards reported that by 1948 the ONR alone was funding 40 percent of all basic research (not just about computers) being conducted in the United States and that two years later it had 1,200 projects scattered across 200 universities.16 Major computer projects of the day included Whirlwind (later part of SAGE) at MIT, Hurricane at Raytheon, and the Mark III at Harvard.17 As the 1950s progressed, the Navy and Air Force added specific development contracts to the list of R&D projects, which they awarded to major electronics and other private firms to complete. For example, all of IBM's early computers (circa 1950s) were wholly or partially funded by the military.18 Key participants also included Northrop Corporation (for example, with the Snark missile); Bell Labs (Nike missile); Burroughs and NCR, with various ballistics and avionics applications; Engineering Research Associates (ERA), which worked initially on cryptographic computing and later the ATLAS, usually dubbed the second electronic stored-program computer in the United States when it went "live" in 1950; and Univac for its systems.19 By the early 1950s, while computing R&D was beginning in the private sector, the federal government still funded about 75 percent of the costs of all major projects, with the lion's share focused on military requirements.20 Even though this massive injection of funding made it possible for the U.S. computer industry to come into existence and lead in world production in the 1950s, hence launching the capability in the private sector for companies to start doing their own R&D in the field, the Pentagon and various civilian government departments continued to support military projects in the 1960s. The military extensively supported defense research on miniaturization of electronics, and particularly development of semiconductors, right through the 1970s. So, even Texas Instruments' famous introduction of the integrated circuit (IC) at the dawn of the 1960s was done with Air Force funding. ICs were critical for all missile guidance systems from the 1960s to the present and also for avionics in military aircraft, beginning in the 1970s.
An important organizational change came in response to the launch of the Sputnik satellite by the Soviet Union in 1957 with the establishment of the Advanced Research Projects Agency (ARPA) in 1958, renamed DARPA (adding Defense to its title) in 1972. Over time, this agency funded many projects involving telecommunications networks and such advanced forms of computing as artificial intelligence, graphics, intelligent sensors, software, semiconductors, SDI, time sharing, and advanced computer architectures. Its greatest claim to fame was the funding and development of the early versions of what came to be known as the Internet, a digital packet-switching telecommunications network used initially by academics, government officials, and those companies working on military projects.21 DARPA remained one of the most extensive supporters of basic research on computing, and its military uses, in the world right into the early twenty-first century.22

Over the past half century, the government as a whole devoted considerable attention to the organization and management of clusters of research facilities. The combination of universities, private firms, national laboratories, and other facilities was collectively called Federally Funded Research and Development Centers (FFRDCs). These were distinct centers that were sometimes housed on university campuses, within government agencies, or in the private and nonprofit sectors. Each participant in this community might also be a subcontractor to an FFRDC. These emerged over time in line with how technology evolved on the one hand and, on the other, in response to the changing interests of the military community. They began with work on operations research during World War II and, by the early 1960s, had expanded to systems analysis and systems engineering in basic scientific work, along with others devoted specifically to IT. Between World War II and the mid-1990s, 150 such organizations became FFRDCs, and of this total, 70 were controlled by DoD. The earliest were devoted to military projects; not until the mid-1970s did a growing number of civilian research centers come into the program, when 40 existed, 10 of which DoD controlled.23 From the beginning, the military services were active. The Navy set up, for example, the Operations and Evaluation Group (OEG), while the Air Force created RAND to meet its needs. Specialized organizations sometimes spun off into private ones, such as the RAND organization in 1948, the Systems Development Corporation (SDC) in 1957, and MITRE, which spun off from MIT in 1958. Many of these centers were located at universities, such as Lincoln Laboratory at MIT and the Software Engineering Institute at Carnegie-Mellon University, funded by DoD and working on projects for the military. While these were established to handle all manner of R&D for the federal government, it is very telling how many were devoted to military research, the majority of which involved computing topics. Table 3.3 documents for select years the number of military-centric centers.24 Almost all of the early centers were established in direct response to threats posed by the Cold War, a focus that remained unchanged until the early 1990s. Although outside the scope of our discussion of the DoD's role, other government agencies also contributed funding, management, and staffing for R&D on military projects from the 1950s right into the next century. These included
Table 3.3 Number of Federal and DoD Research Centers, Select Years, 1956–1995

Fiscal Year   Total Federal   Total DoD   Shared DoD with Other Agencies
1956               46             27                  18
1961               66             43                  20
1966               47             23                  19
1971               68             13                  21
1976               37              8                  20
1981               35              6                  21
1986               36             10                  20
1991               41             11                  22
1995               39             10                  19
Note: Part of the reason for DoD’s decline in number in the 1970s reflected the surge in R&D that had been taken up by the private sector that resulted in commercially useful technologies directly applicable to the military, such as general purpose computers. Source: U.S. Congress, Office of Technology Assessment, A History of the Department of Defense Federally Funded Research and Development Centers, OTA-BP-ISS-157 (Washington, D.C.: U.S. Government Printing Office, June 1995): 51–52.
the Atomic Energy Commission (AEC), which played an important early role in the development of nuclear weapons and their supporting systems; the National Science Foundation (NSF), which funded projects much like DARPA but also R&D on computing for civilian projects; the National Institutes of Health (NIH); the National Security Agency (NSA); and the EPA.25 While it was not obvious in the late 1940s that computers could be useful to the military, by the early 1950s that doubt had been dispelled and so leveraging academic, private, and national laboratories became the way DoD added to its store of new information about the digital and development of many applications. Over time, it became increasingly obvious that computers could perform quickly complex calculations, operate reliably enough with the prospects of improving performance, were shrinking in size, and all the while adding capacity (certainly by the early 1950s). By the mid1960s, enormous advances had been made along each of these dimensions. Research and deployment projects tracked along the lines of what the technology proved capable of doing. For example, in the 1940s, what was seen as a scientific device was deployed to perform calculations that the equipment could do faster and more accurately than human calculators.26 Next, in the 1950s, R&D led to the development of ways to control weapons, such as missiles, and to guide aircraft.27 By the mid-1960s, the Pentagon had become very interested in a wide variety of projects that could simulate battle conditions and also engineering problems that might be faced by emerging weapons systems. A great deal
Table 3.4 DARPA-Sponsored Categories of Digital Military Research Projects, 1999–2005 (funding in millions of dollars)

Project Type                                     1999    2001    2003    2005
Defense research sciences                        57.4    90.4    94.4    96.1
Next-generation Internet                         42.0    15.0     0.0     0.0
Computer systems & communications technology    309.1   376.6   355.4   364.3
Extensible info systems                           0.0    69.3    90.0    95.0
Tactical technology                             159.0   121.1   151.1   174.3
Integrated command & control technology          38.3    31.8     0.0     0.0
Materials & electronics                         268.6   249.8   215.3   230.6
Advanced aerospace systems                        0.0    26.8    40.0    44.0
Source: U.S. Department of Defense, Unclassified Department of Defense FY 2001 Budget Estimates, February 2000, vol. 1, Defense Advanced Research Projects Agency (Washington, D.C.: U.S. Department of Defense, 2000): unpaginated, available at http://www.defenselink.mil/comptroller/defbudget/ fy2001/budget_justification (last accessed 6/1/05).
of interest developed in the 1960s about how to automate battle decisions using computers to rapidly acquire information and to take actions. This focus has continued to the present and often is labeled the “electronic battlefield” or “Information Age Warfare.”28 As with private sector applications, each new use was met with considerable skepticism and only applied incrementally in an evolutionary manner as each improvement in the technology demonstrated capabilities of the digital hand to do something better or newer than previous methods. However, the Pentagon remained continuously a major supplier of funding and management for projects scattered across the American economy. Table 3.4 lists important categories of defense related IT projects of the late 1990s and early 2000s, all funded by DARPA. The list would have looked quite similar to those of the 1970s and 1980s as well. The projects covered a wide range of research, many of which had been under way in various forms since the 1960s. These included work on the Internet, intelligent systems and software, information survivability, asymmetric military threats, networked centric warfare, software for autonomous operations of equipment, and embedded systems. These also included a host of projects directly related to specific weapons: naval warfare, land systems, and a series of technologies for targeting, tactical support, aeronautics, and logistics. Not included in the table is a new category of research begun in the 1990s that, while not digital, could become so in time, called defense against biological warfare. By the early 2000s, DoD funded research on this subject at far greater levels than R&D for defense research sciences or tactical technologies.
While many projects were supported by DoD, a couple provide us with a sense of what was being done. Both the Navy and Air Force were early and consistent supporters of research on computing for weapons systems, although officers at all ranks would not support any proposed deployment of any system that had not proven its worth. The Army came later to use computers for weapons, although as we shall see, it was just as eager to deploy computers for accounting, logistics, and inventory control. Historians have so extensively documented SAGE, the air defense network, that we can look to other R&D initiatives for insights.29 Turning to the development of a family of weapons systems is a fruitful exercise for identifying patterns of relationships between R&D and computing. Creation of ballistic missiles demonstrates that without the helping hand of the digital, this class of weapons could not have been created or deployed in the forms that it took. Its development illustrates several behaviors. First, the weapon could not work without light-weight avionics to guide missiles to their targets, a role played by a combination of onboard and ground-based computers. Second, as with most weapons systems, all types of R&D on weapons occurred over a long period of time, in this case from the end of World War II until the present, with many projects leading to the creation of a system and then many additional ones improving incrementally their efficiency and performance. Third, these were expensive and sophisticated, indeed very complex, projects that simultaneously strained the entire body of knowledge related to various fields, ranging from computing to electrical engineering, to ballistics, and so forth. While ballistic missiles were developed in the 1950s and 1960s, in the early years of the twenty-first century that body of research was still being extended as
Figure 3.1. Use of computers in development of missiles was a major application in the Navy, circa mid-1950s. (Courtesy IBM Archives)
the DoD continued work on the SDI project, which is intended to provide a SAGE-like shield, this time not just against enemy aircraft but, more important, against enemy missiles. Patterns of behavior regarding R&D in computing apply to the vast majority of military projects because of their complexity and the magnitude of their deployment. Much like what the IRS faced, nothing the military did proved to be small, inexpensive, or easy to complete. Paul Edwards was one of the first civilians to document the complexity of all these projects. In addition to size and scientific and engineering complexity, there were the normal day-to-day IT problems the private sector faced with systems that worked or did not, made worse because systems had been so extensively deployed by the military across all processes and most complex weapons systems. Always in the forefront for military leaders was their concern about whether or not complex systems and weapons would work in the heat of battle. Proliferation of software programming languages also plagued the DoD, forcing it in 1983 to standardize on one, called Ada. It was intended to extend standardization of IT across the entire department to simplify maintenance and facilitate connectivity of systems.30 Robustness of systems—both physically and technically—remained an intense concern from the earliest days, an issue that has not yet gone away.31 Edwards documented, for example, a chronic collection of problems with the avionics of the F-15 fighter jet, a workhorse for the Air Force.32

Growing out of the experience of early German missiles fired at London during World War II, the American military community recognized the future potential of missiles and set about creating their own in the late 1940s, extending their development to the present, with the U.S. Air Force (USAF) largely responsible for their evolution and deployment. For a variety of reasons ranging from interservice rivalries to normal start-up efforts, work on the first generation of USAF long-range, strategic ballistic missiles (Atlas, Titan, and Thor) really did not progress until the end of the Korean War. The original concept of a pilotless aircraft had not changed for decades. Budget constraints in the late 1940s and early 1950s often dictated the pace of development, gating the involvement of American aircraft manufacturers and other firms that provided components and subassemblies. Problems to overcome with missiles involved reentry, range, guidance systems, efficient and effective motors, and fuels. Always on everyone's mind was what platform could best be used to deliver nuclear warheads and, secondarily, other explosives. Nuclear warheads became lighter and more powerful, starting in the early 1950s, a development that influenced the configuration of rockets for long-range delivery. USAF coordination of development work became increasingly centralized by the mid-1950s, raising hopes of reducing costs and improving efficiency. At the same time, a strategy of parallel development of subsystems emerged that has been applied in one manner or another to the present. This approach allowed multiple projects to thrive simultaneously, encouraged development of interchangeable components for various rocket programs, and spread new knowledge of rocketry among companies supplying the DoD. For a typical missile system, including the three earliest ICBMs, subcontractors were recruited to develop the airframe, propulsion, guidance,
nose cones, and computer systems. Work involved multiple generations of missiles, so as one came online, the next generation was already in development. Development of guidance software and hardware, and other digital and analog subsystems, spread to several companies. For the Atlas and Titan, General Electric and Bell Telephone were recruited, respectively. The USAF named Burroughs the prime vendor for made-to-order computers for the Atlas and gave the same chore for the Titan to Remington Rand.33 During the early months of the Kennedy administration, these systems came online, with thirteen ICBM squadrons of Atlas missiles and six of Titans ultimately established. The feared "missile gap" with the Soviets had, for all intents and purposes, vanished, but newer systems continued under development. The success of the three early weapons systems gave confidence to both the military and to private sector contractors that ever more sophisticated missiles could be developed, particularly as digital technologies shrank in size (largely due to the introduction and use of the integrated circuit in the 1960s) and larger software programs became possible to write and run. Older missiles were constantly updated with new hardware and software in the 1960s and 1970s, establishing a pattern of continuous replacements of subassemblies (including hardware and software) all during the second half of the twentieth century.34 To help put the size of the R&D effort in some context, in the late 1950s and early 1960s, approximately 2,000 contractors worked on missiles. Every major computer manufacturer of the time also participated.35

A similar tale could be told about how development of avionics for Air Force and Navy aircraft, and later for ships, was managed. Each of the uniformed services had its own R&D operations for basic and applied research and for deployment. Highly specialized analog and digital control systems were developed for each, designed to increase coordination of aircraft and ships in more complex, faster-flowing combat operations. Computers, particularly their integrated circuits, made it possible for both services to develop highly maneuverable planes in the 1970s, most specifically fighter aircraft, and in the 1980s, stealth combat aircraft. Conventional fighter aircraft reached their pinnacle in the 1970s and early 1980s and were then followed by development of the stealth class of "black" aircraft, which made their detection by conventional radar nearly impossible. This class of aircraft heralded what many have called a new age of aircraft, comparable in importance to the development of jet craft in the 1940s.36 Developers used computers to assist in the design of this new class of aircraft and deployed avionics systems that enhanced flight stability while offloading from pilots a growing number of mundane operational activities.37 Because these complex weapons platforms, for example, ballistic missiles and fighter aircraft, often took two decades or more to develop, firms and government agencies built up specialized knowledge that permitted them to participate in the development of ever newer systems for the military. This practice applies to current work under way in developing SDI and stealth aircraft.38

How did the Pentagon manage basic research in computing in such areas as artificial intelligence and supercomputing, applications that have yet to emerge fully in weapons systems, particularly in the 1980s and 1990s? Research focused
not only on weapons but also on the wider issue of command and control systems, which involved development of larger, more complex computing projects ranging from work on artificial intelligence to more advanced computer chips. DARPA had lead responsibility for charting the R&D effort and distributing work among companies and universities. The major set of initiatives of the 1980s and 1990s was called the Strategic Computing Program and grew largely out of the Reagan administration's Strategic Defense Initiative. Research projects also involved intelligent systems, robotics, automation, visual programming, survivable networks, chip developments, integrated packet nets, distributed sensor nets, machine architectures, and a variety of operational applications. In short, a wide variety of work on basic and applied computer science once again became a subject of great interest at the Pentagon. As the 1980s turned into the 1990s, officials invested increasingly in machine intelligence projects, initiatives that had not yet resulted in significant new applications by the end of the decade. In the 1990s and early 2000s, projects also included work on nanotechnologies, security systems, and telecommunications.39 Officials envisioned various applications becoming possible from this research, an expectation dating back two decades.

All through the 1980s and 1990s, uniformed and civilian officials became enamored with the idea of computer-driven information battlefields. Their thinking included development of precision weapons, significantly advanced forms of intelligence gathering to warn of problems and to set targets, and a myriad of software tools to enhance, in an orchestrated manner, command and control of a wide variety of activities, people, weapons, and vehicles. DARPA officials envisioned a combination of commercially available IT, such as civilian networks, working with specialized software and hardware to create the information battlefield of the future. Increasingly, shifting decisions about what to do from people to equipment became a theme for discussion and research. As had happened for many decades, uniformed personnel were reluctant to turn over command and control to machines unless these proved effective, while civilian and engineering officials were eager to explore new possibilities. All through the 1990s and beyond, many military mid-career and senior officers and DoD civilian personnel debated the issue while work quietly went on in developing new systems. For example, at the start of the new century, researchers building neural networks for intelligent flight control for NASA began considering trials involving the F-15 and later the F-18 flight simulators.40 This recent effort demonstrates several patterns of application development at DoD. First, the time from when someone conceived of an idea to when it was deployed on ships, planes, or battlefields often ran over two decades. In the latest case of F-18 flight simulators, the ideas had been the subject of testing and development since the 1980s. Second, they often involved creating new computer science, and not simply new applications, which partially accounted for the enormous time and costs involved. SDI, conceived in the 1980s, was still not a reality in the early years of the new century, although an enormous amount of work had gone into creating new uses of computing in missiles and air defense systems, just
as had occurred in the 1950s with the creation of SAGE, the nation’s first computer-assisted air defense system. Third, these projects were clusters of R&D initiatives farmed out to national laboratories, universities, and corporations in waves over the years. This pattern of sourcing, outsourcing, and in-sourcing R&D led to the broad dissemination of leading-edge R&D in IT and telecommunications across wide swaths of the American economy over some six decades.
Inventory Control and Logistics

Generations of military personnel have heard, and believed, the old quip that "nothing happens until something moves." They meant the movement of people and materiel to support troops, sailors, and airmen. Some of the earliest deployments of computers by the military were not in combat or in air defense systems. Rather, uses mimicked many of the applications that first surfaced in the private sector: inventory control, cost accounting, purchasing, and logistics, often originally run on punched-card tabulating equipment. Because these uses involved the majority of civilian and military employees of DoD for over six decades, for many it was their initial introduction to data processing and to IT in general. For these two reasons, it is important to understand such early uses of computing before moving to what one might normally expect to discuss, such as combat operations, simulations in training, or information age warfare.

To understand the role of computers, keep in mind that there are three interrelated sets of processes that interacted over time with each other and also involved the exchange of data among them. These consisted of purchasing (acquisition of materiel or services), inventory control (receiving, storing, and tracking of goods), and logistics (movement of goods and services to where they were needed).41 The histories of these three processes within DoD have yet to be written, but we know several facts about them. First, they were massive in size, complexity, volume of dollars expended, number of digital systems involved, staffs dedicated to them, and number of people, industries, and companies they affected. Second, they often operated independently of each other but became increasingly integrated over time. Third, they mirrored patterns of behavior in the private sector.42 A Defense Logistics Agency mission statement from 1991 clearly summarized the purpose of these processes: "to provide effective and efficient logistics support to the Military Departments and other organizations. The Agency's vision is to continually improve the combat readiness of America's fighting forces by providing America's soldiers, sailors, airmen, and marines the best value in supplies and services, when and where needed."43 This statement could have been made in any decade since World War II.

Every uniformed service had inventory and logistics functions dating back to the Revolutionary War, a circumstance that has continued to the present. However, in an ongoing attempt to improve efficiencies and to reduce expenses, various organizational and operational reforms were implemented to centralize and better coordinate these functions. For example, in 1952, the Army, Navy,
and Air Force established a center to control the identification of supply items, a first for the military. As in manufacturing industries, by migrating to common inventory nomenclatures, such as item numbers, it became easier to track available inventory, what was on order, and to share that information across the services. Additional reforms in the mid to late 1950s increased coordination across the services, making it easier to track vast amounts of inventory by using computers, leading decision makers to find newer technology to be more attractive to use than older manual or partially automated processes, many of which originated during World War II. In 1961, Secretary of Defense Robert McNamara established the first DoD-wide organization to handle the entire spectrum of functions called the Defense Supply Agency (DSA). The agency spent the 1960s and 1970s consolidating processes and systems, changing its name in 1977 to its current form, the Defense Logistics Agency (DLA).44 Beginning in the early 1950s, each of the uniformed services launched studies on the feasibility of using computers to handle inventory control, purchasing, and logistics with the result that by the mid-1950s, DoD began installing computers in support of these processes. One interesting and early application was the use of an IBM 705 by the U.S. Air Force to manage personnel as if they were inventory. Going live in July 1956, the system was an upgrade from an old punched-card system that tracked the availability of officers and enlisted personnel, and their movement, producing various personnel reports and tracking transfers of some 30,000 to 35,000 people per month.45 At approximately the same time, the Navy installed a similar computer to manage parts for ships, monitoring inventory on 181,000 items worth $550 million. The purposes of the system were to lower costs of inventory, ensure adequate supplies of parts, and reduce the number of people required to handle the process—all the same reasons the private sector used to justify such systems.46 The Army installed its first IBM 705 in 1956 in support of inventory control for its Signal Supply Agency, which managed 162,000 items worth $1.4 billion in support of military installations around the world. For the first time with the help of the 705, the Army was able to marry inventory management and requisition processing, which reduced the number of man hours to do both, and to carry out better and more accurate inventory control.47 Other systems appeared at various bases across all the services during the second half of the 1950s and early 1960s. While the number of systems installed is not absolutely clear, extant evidence suggests that several hundred computer systems were implemented.48 For the day, volumes of items tracked and cost were massive and, as one contemporary study after another made clear, could not be handled with manual systems. One report on the Air Force’s Air Material Command argued that this agency managed more assets it purchased in 1960 ($36 billion) “than General Motors, United States Steel, American Telephone and Telegraph, Metropolitan Life, and Western Electric combined.”49 At the time, this command used a dozen IBM 705 computers, nine IBM 650 tape RAMAC systems, two 305 RAMACs, 26 IBM 650 computers, and over 3,000 pieces of unit record equipment, such as punched-card readers, printers, and tape drives to process 1.6 million items.50 By
the early 1960s, half of all computers in DoD were used by the Air Force (400 versus 800 for all of DoD), and of the 400 in the USAF, 125 were dedicated to supporting its inventory management around the world, processing supplies and parts worth $12 billion. Like the private sector, the Air Force wanted to minimize volumes and also the number of people deployed in inventory management. The USAF reported that using this first generation of computers reduced the number of personnel required to perform this work from 212,000 in 1956 to 146,000 in 1961.51 Yet by any measure, military inventory and logistics systems remained massive, even by today's standards, let alone by those of the late 1950s and early 1960s.

All of the services included in their inventory control and logistics processes a set of activities and uses of computing to handle procurement. Precomputer uses of information technology in this area reflected the same pattern adopted for inventory control: use of punched-card tabulating equipment and smaller office appliances. By the 1960s, purchasing and inventory control were integrated activities, beginning with planning for what had to be ordered, through to contract award, to inspection of what was received, and accounting reporting to pay vendors. Over a dozen reports typical of purchasing applications, such as bidding requests and contracts let, were routinely produced in the 1960s, a practice that continued to the present. The move to computers in the early 1960s made it possible to expand the variety of analysis and data tracking beyond old punched-card applications, mimicking what the private sector did in the same period.52 By the early 1960s, it had become impossible to manage these processes without using computers; all during the 1960s, they were upgraded, new applications added, and extended to all corners of the military establishment. In 1965, for example, the Pentagon launched what it intended to be a ten-year project to upgrade to a second generation of inventory management processes using computers. This initiative led to the development of the Logistics Information System, a process that took twelve years to accomplish at a cost of $61 million. Thus, like the IRS, DoD had implemented systems in the late 1950s and throughout the 1960s that were, by the standards of the day, modern, and made possible by migrating from old punched-card systems to computers. These required some redesigning of processes and writing software that took advantage of the technology.53 Like the IRS, the military remained dependent on these early systems for many years. During the 1970s, they did essentially the same as the IRS, applying computers to preexisting work practices. Assistant Secretary of Defense for Installations and Logistics Barry J. Shillito in 1973 told those interested in military logistics that "the challenge for the next four years, and the remainder of this decade, is to continue this progress."54 Beginning in the late 1960s and extending through the 1970s, each of the uniformed services continued to control and enhance its own individual logistics and inventory control systems. Complaints from auditors and blue ribbon panels about redundancies began to appear. One such report, dated 1970, argued that "many of the modules of these systems perform almost identical functions, such as warehousing, shipping and
receiving, inventory control, etc." and that "software programming for each of these is costly and each independent modernization step taken on the many separate programs involves unnecessary duplication and appears to lock in more tightly the incompatibilities of the various systems."55 Nonetheless, these systems remained, with hardware often upgraded as new machines became available, although many of the applications were essentially the same as developed in the late 1950s and 1960s. In fairness to the military, however, new technologies were bolted on to these processes much as occurred in the private sector. For example, after bar code systems became available in the late 1970s, the Pentagon did not hesitate to integrate these into existing inventory and logistics systems, much as it was doing with RFID technology in the early 2000s. In the case of bar codes, one of their first uses involved tracking maintenance data for airplane parts, which represented an upgrade from earlier punched-card tracking systems.56 Bar coding spread across all the uniformed services for all manner of inventory and logistical processes during the 1980s and 1990s. Annual reports of the Defense Logistics Agency from the late 1980s and early 1990s pointed out that the DLA focused less on upgrading logistics systems already in place than on lowering operating costs, "building an effective relationship with industry," and improving performance in general.57 When e-commerce practices came into their own at the turn of the century, DoD began using this new form of IT as well.58

Pentagon logistics managers began thinking of their subject area in much the same manner as the private sector, viewing logistics as supply chains, with the basic strategy in wartime of deploying preexisting stockpiles of weapons, various supplies, and food accumulated in peacetime for quick response while ordering backfills and additional quantities from suppliers as needed. The planning and execution of the war with Iraq at the start of the 1990s strained the logistical capabilities of the Pentagon. The logistics strategy involved situating massive quantities of materiel in the Middle East prior to the start of hostilities. Existing logistics systems were capable of ordering, moving, and tracking the massive quantities of supplies needed for that initiative. Following the war, strains on the logistics processes subsided, until the start of the second war with Iraq (Operation Iraqi Freedom). In 2001, the Pentagon began planning for this war and started deploying troops in the theater in late 2002. DoD launched major combat operations in March 2003. In support of this new military campaign, the logistics community in DoD had to move in excess of 2 million short tons of cargo to the Persian Gulf, ranging from equipment and spare parts to food and batteries. It would be difficult to exaggerate what a large effort that was; from October 2002 through September 2004, for instance, expenditures for supplies and operating support reached $51.7 billion, and officials spent an additional $10.7 billion just for transportation of people and supplies.59 By any measure, all existing systems were being strained. The GAO pointed out after the early stages of the second Iraq war that existing supply chain management practices were problematic, not because of the quality of the IT systems, but due to high levels of inventory and lack of the "right" inventory needed in Iraq.
One major IT flaw it identified was the poor quality of computerized models used to forecast item
demand, particularly by the Army. All forecasting was computer based, and in the case of the Army, its model was programmed to simulate peacetime demand and could not react sufficiently to wartime circumstances. In addition, the GAO criticized the military in general (civilian agencies and uniformed services) because their logistics IT systems were not able to "support the requisition and shipment of supplies into and through Iraq."60 For instance, using RFID tags on pallets had been mandated prior to the war, but, as of 2004, only a third of all pallets and containers coming into the theater had RFID tags, with reasons ranging from lack of personnel trained in the use of this technology to disagreements over who had jurisdictional control of inventory.61 Spot shortages in this war became highly visible to the American public and created a significant political problem for DoD. The GAO's auditors blamed the Pentagon for the problem: "the logistics systems used to order, track, and account for supplies were not well integrated; moreover, they could not provide the essential information necessary to effectively manage theater distribution."62 In short, criticisms mirrored those faced by senior management at the IRS. Furthermore, the GAO reported similar root causes: disconnected systems preventing visibility across the entire supply chain; communications problems, particularly with field forces over radio; new address codes for forces in the field not matching those in computer systems; and lack of sufficient training in the use of these IT systems.63 The reason for dwelling on these IT problems is to demonstrate how important these applications of the digital hand had become over time for the military. Failures in IT logistical systems in the 1950s and early 1960s would have been irritating but not crucial; today, the opposite is the case. GAO auditors said that risks to troops in future wars would be great if these problems were not solved: "Troops will continue to face reduced operational capabilities and unnecessary risks unless DOD's supply chain can distribute the right supplies to the right places when warfighters need them."64 Like the IRS, the DLA committed to another round of modernizing its software, largely by using commercially available off-the-shelf packages, integrating data from various files and systems, and maintaining an accurate, up-to-date status of inventory (on order, in storage, location, and so forth).65
Weapons Systems and Ordnance

The first image that probably comes to mind when thinking about the role of computers in ordnance (such as bombs and missiles), or the more nebulous notion of weapons systems, is the "smart bombs" that the United States used in the Iraq war of 1991. Digital guidance embedded in bombs communicated back and forth with pilots and others, leading to the precision bombing of targets. However, the availability of smart ordnance is a relatively new development, although the concept of using computing devices to direct the use of "dumb bombs" or artillery shells predates the arrival of digital computers. The idea of using computers to help coordinate use of weapons as part of a system of activities dates back to the 1940s. The notion of embedding intelligence in weapons systems lies
at the heart of how the military wanted to use computers to coordinate defensive and offensive actions in warfare, what in the parlance of the military is normally referred to as command, control, and communications. Use of computing in weapons spread over the decades in three waves. The first involved the use of analog devices in the 1930s and early digital computers in the 1940s to develop firing tables for Army and Marine Corps artillery, for Air Force bombers, and for the Navy's large guns. The second wave concerned use of computers to guide missiles to their targets, already discussed earlier in this chapter. The third wave involved deployment of smart bombs first developed in the 1980s, a process still unfolding as a new generation of technology.66 Underlying these three waves of ever increasing use of computers in weapons was the integration of the weapons themselves with the "platforms" used with them. Platforms can be ships and airplanes, which themselves operate within the larger framework of a battlefield command and control process, all supported extensively by computers. These systems provide commanders with intelligence on the action under way and, increasingly, with the ability to send orders down the entire chain of command right to the battlefield. The whole process of waves of digital introductions increased the use and influence of computing in battle over the decades, leading to a situation where today no battle can be fought without extensive use of computing in one form or another.67

As early as 1932, the Army had looked at the possibility of using Professor Vannevar Bush's differential analyzer at MIT—an analog calculator—to improve the accuracy of artillery and bombing tables, subsequently building up a decade of experience thinking about the application. In 1942, after the start of World War II, the Ordnance Department contracted with the University of Pennsylvania to build a large processor to prepare such tables, the system that came to be known as the Electronic Numerical Integrator and Computer, or simply, the ENIAC. While the story of the ENIAC and its successor machines has been told many times,68 for our purposes it is important to recall that this and other military-sponsored computers built in the 1940s and 1950s were used to create firing tables. By the early 1950s, this digital application had been well established and pervasive; thus the notion of using such technology to guide missiles did not require as great a leap of faith as one might otherwise think. The ENIAC proved to be such a successful instrument during the years it was employed (1946–1955) that almost all early firing tables were calculated using it. Many other government and academic researchers used it as well to perform various mathematical calculations regarding early missile development, weather prediction, atomic energy calculations, and other scientific research. The Army made this, and subsequent machines, available to academic researchers, providing an early tie to universities via computers. A successor to the ENIAC called the EDVAC (Electronic Discrete Variable Automatic Computer) went online in 1951 and was used by the Ballistics Research Laboratory all through the 1950s into the early 1960s to do work similar to that of the ENIAC, particularly for missiles. For the latter, it calculated projectile, propellant, and launcher behavior. It computed detonation waves for
ordnance and missiles, trajectories of courses, more firing tables, and for the first time, generated guidance control data for weapons, particularly for guided missiles. Other “firsts” that became standard fare included mathematical evaluations of antiaircraft and antimissile performance, carrying out calculations for early war game problems, and supporting a variety of studies on the probabilities of lethality of various mines.69 In the 1950s, the Army also put into operation yet another system that it shared with all the uniformed services, called the Ordnance Variable Automatic Computer (ORDVAC), which was assigned to do necessary calculations for the development and use of missiles.70

Field artillery in the 1950s received considerable help from computers. Fire control problems continuously arose as new shells and cannons were developed. The major development of the 1950s, which set the pattern for future artillery systems, was the Field Artillery Fire Control System M35. It used an electromechanical computer to prepare its directions, and while the system had a variety of weaknesses (accuracy declined with the distance a shell was fired), it proved especially useful for close-in shelling, such as that done with 105-mm and 155-mm howitzers. In the 1950s, the Army contracted for the construction of a new system called the Field Artillery Digital Automatic Computer (FADAC), a solid-state electronic digital computer, weighing about 200 pounds, that could operate in the field—a first for digital computers—and that required minimal training of artillery personnel. Because the way it was used became the standard form late in the Vietnam War and beyond, a contemporary description of its operation is worth quoting: “Input consists of a manual keyboard and various arrangements of paper tape or another FADAC. When all the data, such as target location, powder temperature, gun location, and meteorological data, are entered, depression of a button initiates computation. Gun orders, comprising deflection, quadrant elevation, fuze [sic] time, and charge are displayed in decimal form.”71 The same system was used in the operation of the Pershing, Sergeant, Lacrosse, and NIKE-Hercules weapons systems. In the 1960s and 1970s, artillery became more mobile—even mistaken by the uninformed for tanks—with onboard computing increasingly becoming available, and also for tanks, as computers became smaller, hence more portable. By the 1980s, the digital hand was evident at work with all these systems, in rear positions guiding shelling operations, and increasingly linked to headquarters where battlefield commanders were able to understand what was being fired and at what targets.

The Navy had a similar experience. It needed firing tables for its onboard ship artillery, and because it began experimenting with missiles in the 1950s like the Air Force, it, too, became extensively involved in using digital tools in support of the larger ordnance.72 By the late 1960s, the Navy was installing computer assisted targeting systems on its ships for use with missiles.73 As one student described the effects of these new systems, by the mid-1960s, they were “starting to become the ‘glue’ that bonded together an increasing number of shipboard weapon systems and sensors.”74 The key to Navy artillery was linking its work with other radar and computational systems used to detect enemy threats to ships, and to coordinate various moving components in battle (ships,
submarines, aircraft, missiles) in an integrated command and control battle management process.75 As systems became more advanced, less expensive, and smaller, they increased their role on ships throughout the 1970s and 1980s.76 Old analog devices and systems were replaced with more versatile digital computers. The Naval Tactical Data System (NTDS) was the name given to a collection of applications that first came into existence in the 1960s as the onboard command and control process. These applications spread in the 1960s (installed on three ships in 1961, twelve in 1965, and thirty in 1969) and beyond, with senior naval officers increasing their level of confidence in the effectiveness of computers in battle.77 These applications were installed in waves, on new ships and then retrofitted on existing vessels; then earlier versions were upgraded with new functions, a pattern that began in the 1960s and has continued to the present. Navy gunfire control systems became completely digital in the 1970s.78 In fact, by the late 1970s, minicomputers were ubiquitous on naval ships for all manner of computing, not just fire control.

As the uniformed services became more comfortable with, and reliant on, using computing to direct the firing of weapons, and nearly simultaneously in using command and control systems to direct battle operations, the notion of “systems” as a way of looking at integrating various activities spread through the services.79 Simultaneously, computing proved more useful as integrated circuits became more powerful, less expensive, and more reliable, and as software improved as well. New applications became possible. If one could build a computer-guided missile, why not also a computer-guided bomb? The case for having such weapons was obvious. If one could drop a smart bomb on a desired target, that mission would only require one bomb, one airplane, and one pilot to accomplish. Without that weapon, a commander needed to send many aircraft, drop many bombs, and have many maintenance crews, and even then not be guaranteed to have fulfilled the mission. A smart bomb lowered operating costs, held out the promise of improved performance and less collateral damage, and reduced the risk of crews being wounded or killed.

So what are “smart bombs”? These are munitions that are guided to specific targets by a variety of means (such as a pilot) and have onboard computing capability that can communicate back and forth with the fire control software/command to stay on or alter course and transmit images of their flight. Guidance systems are thus a two-part installation: computing on the ground (or aircraft) and in the bomb itself. The Army had attempted to develop radio-controlled bombs as far back as World War I, but without success; so it is a concept that has been explored for decades. By the early 1960s, some bombs were equipped with television cameras that aided the bomb director on an aircraft, who could send signals to steerable fins on the weapon; it was a popular application late in the Vietnam War and one that remained in use as late as the 1990s. Both the Army and Air Force experimented with laser-guided systems as early as the 1960s. Not until microchips became available in the 1970s did these prove practical; they were used most spectacularly first by the British during the Falklands War in 1982 and next by the Americans in 1991 in Operation Desert Storm.
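The two-part arrangement just described, with a designator supplying a target point and an onboard processor repeatedly correcting the weapon's course toward it, can be pictured as a simple feedback loop. The Python sketch below is purely illustrative: the point-mass dynamics, gains, and names are invented for the example and do not represent any fielded guidance law.

```python
# Toy model of the guidance cycle described above: compare the weapon's
# estimated crossrange position to the designated aim point, then apply a
# damped proportional steering correction. Illustrative only.

from dataclasses import dataclass

@dataclass
class State:
    x: float   # downrange position (m)
    y: float   # crossrange position (m)
    vx: float  # downrange velocity (m/s)
    vy: float  # crossrange velocity (m/s)

def guidance_step(s: State, aim_y: float, gain: float = 0.5,
                  damping: float = 1.0, dt: float = 0.1) -> State:
    """One correction cycle of the onboard computer (simplified point mass)."""
    error = aim_y - s.y                      # crossrange miss distance
    accel = gain * error - damping * s.vy    # damped proportional steering
    vy = s.vy + accel * dt
    return State(s.x + s.vx * dt, s.y + vy * dt, s.vx, vy)

# Weapon released 150 m off to one side of the aim line, flying at 200 m/s.
weapon = State(x=0.0, y=150.0, vx=200.0, vy=0.0)
for _ in range(100):                         # ten seconds of flight at dt = 0.1 s
    weapon = guidance_step(weapon, aim_y=0.0)
print(f"crossrange error after 10 s: {abs(weapon.y):.1f} m")
```

Run over a ten-second flight, the loop steadily drives the crossrange error from 150 meters down to roughly a meter; at this level of abstraction, that repeated measure-and-correct cycle is what the “smartness” of a guided weapon amounts to.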
Satellite-guided weapons relying on GPS networks to find specific targets became the latest iteration of this class of weapon.80 With each wave of smart bombs came increased use of computing, more advanced systems, and the need for constant improvement to ensure accuracy and reliability. Because this is the latest iteration of weapons, smart bombs are not perfect. In the Iraq war in 2003, there were instances in which smart bombs did not hit their intended targets; but they had advanced so far that, while still used with “dumb bombs,” they had clearly become the wave of the future for bombing. By the mid-1980s, all the major components of modern digital applications in weapons systems had been designed and implemented to one degree or another across the major organizations and activities that made up how modern warfare was to be conducted. Computer-enhanced weapons were systems, not merely bombs and artillery configured with digital features. The work of airplanes, ships, and tanks, for example, needed to be coordinated for the weapons systems to work, much like machine guns firing straight through propeller blades in World War I had to be synchronized so as not to shoot off the propellers. One Pentagon training manual, published in the 1980s, described the modern military aircraft and the role of computers, illustrating the interdependences involved: The Korean War vintage cockpit, packed full of instruments, has been replaced with a cockpit containing just a few instruments and controls and display screens. Any information required by the pilot is simply displayed on the screens at a push of a button, anything from attitude indications to the status of weapons. They are all under the control of computers. Furthermore, there are now new subsystems that would not be possible without digital systems: diagnostic systems that can display the health of all the major subsystems and “expert systems” that provide the pilot with information on the various options available during a particular mission.81
The same report went on to say “that computers and software have introduced a whole new dimension to our weapon systems,” thereby expanding the capability of warriors at all levels of command to have greater flexibility, speed, and effectiveness in the use of their weapons.82 Table 3.5 lists major software applications common to the operation of Air Force and Navy combat aircraft by the late 1980s. For aircraft to deliver ordnance, the services needed a variety of software tools, some of which were used on the ground, such as those used for maintenance, training, and mission preparation. They also had to operate in real time and be fault-free, with far fewer failures than a business or accounting application.

Clipping through a quick discussion of the injection of computing into weapons systems could lead one to conclude that these worked well and that problems were minimal. Nothing could be further from the truth. What these weapons all had in common was that they were new, unlike accounting or logistical applications. While they built on prior experiences, the requirements were
Table 3.5
U.S. Military Aircraft Weapon System Digital Applications, circa 1988

Flight management        Battle management
Data reduction           Crew training
Maintenance training     Automatic testing
Scenario/analysis        Mission preparation

Source: Adapted from Defense Systems Management College, Mission Critical Computer Resources Management Guide (Washington, D.C.: U.S. Government Printing Office, 1988): 2–6.
more stringent: they had to be more rugged than a computer system in an airconditioned data center; they had to be smaller and lighter; they had to be more durable and reliable because people’s lives depended on them; and they had to be effective. It always took years of R&D, experimentation, innovation, and then persuasion of the military to adopt these. The same patterns were evident in command and control systems, discussed more fully below. Firing tables developed in the 1940s and 1950s were subject to errors and had to be improved; software problems meant that not all missiles would fly flawlessly in the 1950s and 1960s; even the vaunted SAGE system had a series of false alarms that created near-war crises with the Soviet Union;83 and software “glitches” plagued every new innovation, a pattern that persists, just as is often the case with complex civilian systems. On a technical level, often new concepts had to be developed. In the case of firing tables in the 1940s, as one mathematician noted, since firing data had to be automatically calculated, “it became necessary to develop a quantitative theory of information in order to solve the associated engineering problems.”84 Pentagon officials worried about the growing complexity of systems. One wrote in 1975: As the role of digital computers has increased, so has their criticality in terms of performance and cost. Computers perform aircraft, ship, missile and spacecraft navigation, guidance and control, weapon control, target detection and tracking, combat direction, communication distribution and processing, automatic testing and other critical functions that affect the success or failure of strategic and tactical missions. Computer technology within Department of Defense (DOD) weapon systems is a relatively new field.85
He was blunt when it came to reliability: “Although hardware reliability has improved substantially, the corresponding gains in system reliability have not been realized.”86 In the 1960s and 1970s, Pentagon officials complained about the poor quality of systems development and discipline, and less about hardware and software. But complexity was a major factor. The USAF FB-111 aircraft of
1966 had an on-board computer with a 60,000 word memory while the 1988 version of the B-1B Bomber had a system that could hold 2.5 million words. As hardware capacity and reliability increased between the 1970s (when microprocessors became widely available) and the end of the century, the majority of problems and concerns moved from hardware to software for all manner of computing, not just for weapons systems, and to “uneven application of standards.”87 Such systems had to be fault-free and operate real-time, which added to their complexity. As decades passed, the relative importance of software over hardware mirrored patterns of dependence on computing evident in the private sector and in other government agencies. The biggest changes came in the 1970s when integrated circuits transformed radically the capacity, reliability, cost, and functionality of hardware, making it possible to store in a small machine a massively increased amount of data and complex software rich in functions. Military weapons designers realized very quickly that software was far more flexible than hardware or electromechanical systems, which is why the volume of software used in aircraft, for example, expanded all through the 1970s, 1980s, and 1990s. As hardware shrunk in size and software became more useful, the latter replaced functions performed in earlier years by machinery, cables, and levers. Of course, they had to overcome the problems posed by the lack of standards commonly applied across all weapons systems, such as the myriad of programming languages used in the 1960s and 1970s that led to the adoption of Ada as the one for all the services to use. Costs also suggest the change. In 1960, 80 percent of a weapons system’s cost went for hardware, and only 20 percent for software; by 1980, the ratio had flipped, with software now accounting for 80 percent of a digital system’s expense.88 The discussion later in this chapter on Information Age warfare describes how computers in weapons systems and their delivery platforms (such as airplanes and ships) were integrated into battle command systems to provide commanders with very computer-dependent ways of managing warfare late in the twentieth century, building on the incremental improvements made in software.
Training

As military technologies emerged throughout the twentieth century, all the uniformed services embraced every training method that came along. They used innovations that depended on technology, such as computer aided instruction (CAI), simulation tools, and video games. They embedded the concept of technology-enhanced training deeply into the culture of all the services, just as they had embedded military tactics in the nineteenth century. Beginning in the early stages of World War I, the uniformed services became extensive users of all manner of technologies, not just computers or telecommunications, with the result that by the end of the 1950s, the U.S. military establishment was the most technologically equipped in the world. It was also the
most technologically advanced. Both situations concerning technology—great deal of it in use and continuous incorporation of new forms—meant that there was an enormous need to train personnel, not only new recruits but also “lifers” all through their careers. Thus, training functions in DoD became massive, sophisticated, and evolved continuously. Training ranged from how to use or maintain a particular tool or weapon system to war gaming and leadership in which simulated combat or other situations were role played, increasingly using electromechanical simulators in the 1930s and 1940s (for example, to train pilots before putting them into expensive aircraft), analog simulators in the 1940s and 1950s, and increasingly afterward digital computers. By the end of the century, even video games, the most current form of simulation in training, had been widely deployed. In fact, DoD relied earlier and more extensively on every technologically based innovation in training that came along in the twentieth century than had elementary schools, high schools, or universities, often extending knowledge about pedagogical effects of the use of the digital hand. Why was DoD able to do this, to take a leadership role in the use of various forms of training in large quantities? Part of the answer lay in the need to train a continuous influx of new employees (civilian and uniformed), who were constantly changing their jobs and using ever evolving technologies. A second reason involved scale and scope. Unlike any school or university in the country, DoD was so big, and had so many people to train, that it could afford to develop expensive new teaching aids to displace older techniques. It could spread the costs and benefits across large masses of people. As one training program manager at DoD remarked in 1974, “the armed forces alone spent $6 billion annually on training of which more than one-third is devoted to technical specialty areas. This effort accounts for 120,000 man-years of trainee time each year and involves hundreds of thousands of trainees.”89 In 1981, the annual budget for training of all military personnel had risen to $8.8 billion, 74 percent of which went to the development of newly recruited personnel.90 By 2001, those responsible for training in the military were serving 2.4 million personnel in uniform and roughly another million civilians, helping both communities be productive in over 150 professions and at various levels from private to four-star general. That year, DoD budgeted roughly $18 billion for training.91 No school district or university had student populations even remotely comparable in size and scope with the exception of the very largest cities in the country. To control costs and be effective, DoD training organizations had no alternative but to explore every conceivable form of training possible; that included using computers to assist in the process. Since the cost for developing digital tools was often far beyond the financial capabilities of schools or higher education, but not of DoD, one can understand why this federal department could be the first to create and deploy many such tools. Also, it had a workforce comfortable with all manner of technologies. Whenever innovations emerged outside of DoD—such as PLATO—the department did not hesitate to incorporate them into their training program as well, just as it did not shrink from using off-the-shelf
software tools to manage inventory control, supply chains, and logistics, especially after they became widely available in the 1980s. In the 1950s, training consisted of a combination of classroom instruction with chalk and blackboards, practicing skills using reel-to-reel tape (for example, to learn a foreign language), on the job training, and field exercises in which mock simulations of combat, for instance, took place. One other form of training involved the use of electromechanical simulators to train airplane pilots before putting them into an expensive airplane to fly. That situation remained relatively unchanged until the late 1970s, when computer assisted instruction (CAI) methods were added to existing training programs. The uniformed services began limited experimentation with CAI systems as far back as the 1960s, but in the early 1970s, DARPA began funding CAI development to address administrative and training issues. An official at DARPA explained the thinking: Training is a highly labor-intense activity in terms of teacher as well as student time. It takes place at many training schools and at many diverse operational sites. The problem is to reduce the teaching and administrative burden and the trainee time, while maintaining the quality of the training received, in a way that permits instruction to be economically distributed to widely dispersed military installations. Computer technology, together with communications and student terminal capability, shows promise for solving the problem.92
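The approach DARPA had in mind reduces, at its simplest, to a small program loop: present an item, check the response, record a score, and let each trainee proceed at his or her own pace. The Python sketch below is purely illustrative; the question bank, scoring rule, and names are invented and are not drawn from PLATO or any actual DoD courseware.

```python
# Minimal drill-and-practice sketch: present items, grade responses, aggregate
# scores, and let the student retry missed items. Illustrative only.

ITEMS = [
    ("Unit of electrical resistance?", "ohm"),
    ("Instrument that measures current?", "ammeter"),
    ("Component that stores electrical charge?", "capacitor"),
]

def run_drill(answer_fn, max_attempts: int = 3) -> dict:
    """Run one self-paced session; answer_fn stands in for the student terminal."""
    scores = {}
    for prompt, correct in ITEMS:
        attempts, solved = 0, False
        while attempts < max_attempts and not solved:
            attempts += 1
            solved = answer_fn(prompt).strip().lower() == correct
        # Fewer attempts earn a higher item score; unsolved items earn zero.
        scores[prompt] = max_attempts - attempts + 1 if solved else 0
    return scores

# Simulated student who knows two of the three items.
canned = {"Unit of electrical resistance?": "ohm",
          "Instrument that measures current?": "voltmeter",
          "Component that stores electrical charge?": "capacitor"}
results = run_drill(lambda prompt: canned.get(prompt, ""))
print("item scores:", results)
print("session total:", sum(results.values()))
```

On a terminal-based system of the PLATO type described below, the same loop would run centrally, with responses arriving from remote student CRTs and the aggregated scores flowing to instructors and administrators.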
CAI held out the promise of supporting drill and practice teaching, administration and grading of tests, and aggregating student scores, all the while allowing students to progress at speeds consistent with their capabilities. The earliest CAI system used by the military appears to be PLATO, a system developed at the University of Illinois. It consisted of a mainframe with distributed CRTs that students could use interactively to access and work with training courses. Various experiments throughout the 1970s led to the development of military training modules, which were widely deployed during the 1980s.93 Table 3.6 lists the kinds of early CAI-based training provided by DoD in the late 1970s and early 1980s. What becomes obvious is the broad range of subjects. In each of these instances, courses saved between 31 and 89 percent of time to train over prior methods. The courses listed in table 3.6 translated into savings in the millions of dollars per course since often thousands of students took them.94 In short, CAI was quickly beginning to prove its worth. All through the 1980s and 1990s, CAI-based training spread across almost every function that required training and was used alongside other methods of instruction, such as use of simulation tools. IT in support of simulations came early and spread intensely across the department. Simulations were always of two kinds: those that required one to understand how something might work, such as a proposed new airplane or the application of a new way of doing things (in military jargon, a doctrine); or as a way of teaching before allowing someone to actually do something for real, such as instructing pilots on how to fly specific models of aircraft before putting them
Table 3.6
Early Computer-Assisted Training within DoD, circa Late 1970s–Early 1980s

Electronics              Tactical coordination
Electricity              Fire control technician
Machinists               Navy aviation familiarization
A/C panel operator       Navy aviation mechanics fundamentals
Medical assistant        Inventory management
Vehicle repair           Material facilities
Weather                  Precision measurement equipment
Weapons mechanic

Source: Jesse Orlansky and Joseph String, “Computer-Based Instruction for Military Training,” Defense Management Journal 17, no. 2 (Second Quarter 1981): 47–48.
in the cockpits of an actual airplane, or simulating battle conditions to teach tactics, strategy, and command and control practices (the heart of many video games today). The earliest applications of simulation using mechanical means date back to the late 1920s with the development of flight simulators to teach pilots how to fly. During World War II, military pilots were routinely taught flight proficiency using simulators that during the late 1940s increasingly incorporated advances in electronics to improve the breadth of the learning experience. The first major digitally based simulator was created by the University of Pennsylvania for the Navy and Air Force in the 1950s, called the Universal Digital Operational Flight Trainer Tool. All through the 1950s, 1960s, and 1970s, new trainers were developed, incorporating more complex maneuvers and, of course, operation of increasingly computerized avionics and weapons systems. The Navy did the same not only for pilots, but beginning in the 1960s, for submarine crews as well. By the start of the 1980s, simulators were used to train all submarine crews. In summary, from the late 1950s to the present, both the Navy and Air Force proved to be the most enthusiastic deployers of simulation systems to train people who operated all manner of equipment and did maintenance, and for a growing list of backroom office applications, such as inventory management.95 Normally, one reads about the use of digital simulation tools by the Navy and Air Force since they had the largest, most complex equipment, but the Army also became an active user of this training application. While the Navy and Air Force began that process in the 1950s, the Army began in the 1960s on a limited basis and over time expanded this application to all manner of skills, ranging from modeling new and more complex weapons (e.g., tanks), to training crews in their operation (tanks, helicopters), to simulating battlefield activities. Tank simulators for both development and training were some of the earliest uses of simulation. By the 1980s, tank training simulating battlefield conditions was used to train individual tank crews and also multiple crews required to operate
in a coordinated fashion on the battlefield. By the 1990s, training tank crews could be done simultaneously at multiple bases, involving virtual forces at war, rather than actual deployment of tanks. The heart of this system was called Simulation Network (SIMNET), which began as a Tank Team Gunnery Trainer in the early 1980s and, over time, evolved into a networked tool using local area networks (LANs) and super computers. By the 1990s, it had expanded into whole battlefield simulators.96 A second form of simulation involved soldiers in live training exercises with a new tool: in the 1990s, the Army added simulated bullets (as opposed to real bullets), which were lasers emitted from a weapon, while using blank ammunition to simulate the sound of firing.97 From the 1950s through most of the 1980s, simulation in training remained fragmented because tools were developed for specific use within individual uniformed services or for discrete activities, such as operation of tanks or airplanes, by the various services and agencies within the department, often independent of each other. But alongside these training applications were a constant curiosity in and healthy skepticism about the use of computers in what initially was called the “electronic battlefield” and which, by the 1990s, was also variously labeled “Digital War” and “Information Age Warfare.” But there is a difference between the two. The original use of the phrase “electronic battlefield” is unclear, although by the 1950s and 1960s the term was in wide use, often referring to reliance on radio and radar to communicate and track activities on the ground, air, and sea. All manner of warfare, armies, air forces, and navies around the world also became increasingly mobile in these years, diverse in the types of equipment they deployed and in the sheer volume of hardware and people moving around. As a result, more electronics were introduced into the process of warfare, with the result that, in the arcane language of the engineer, “the density of signals has grown to the point that the common descriptor of signal density—number of pulses per second—is all but meaningless.”98 During the Vietnam War, a variety of electronic sensors were deployed, for example, on the Ho Chi Minh Trail, to track the enemy’s movement by causing electronic signals to be broadcast to the U.S. military. The result of these and other proliferations of electronics on the battlefield in the second half of the twentieth century led to a situation where commanders had to process and understand quickly thousands of signals simultaneously. Increasingly over time, beginning with radar systems in the 1950s but extending to every complex weapons and communication system by the end of the 1970s, computers were used to digest, analyze, and present the messages generated on the electronic battlefield.99 How many signals they received, assessed, and digested often was a function of the capacity and capability of the technology. By the 1980s, commanders were beginning to realize that the pace at which effective battlefield command and control could be handled was becoming highly dependent on IT to receive, digest, and present the volume of incoming data. By the 1990s, they knew that the pace at which action could occur on the ground was being influenced profoundly by IT and telecommunications.100 One crosses over to the concept of digital warfare when computers are used to make battlefield decisions based on such data; to direct weapons systems (such
as smart bombs); or to react to electronic and digital countermeasures by the enemy, such as jamming or blocking communications, or “hacking” into combat operational software programs. Electronic warfare has been a reality for the military since World War I, the possibilities of computer-based warfare a training and simulation activity first as a glimmer of a possibility in the 1950s, but more realistically a viable option to use and train for by the 1960s.101 The earliest use of simulators in this area was in war-gaming scenarios run against possible Soviet land wars in Eastern Europe during the Cold War in which rivals of the United States also had made significant investments in electronic warfare and, more specifically, in military computing. Each of the American uniformed services had initiatives in this area to train its commanders, even before using computers. The Navy implemented war games at its Naval Postgraduate School in Monterey, California, in the late 1950s, building on a tradition of manual war gaming dating back to the late 1880s at the Naval War College; use of computers to enhance the gaming process began in 1960.102 That same year, the Marine Corps introduced war games at its Marine Corps School in Quantico, Virginia, and later in the decade used computing in support of war gaming.103 The Air Force first applied operations research (OR) techniques during World War II and in gaming training in the 1950s. Its acceptance was profoundly influenced by that service’s long familiarity with flight simulators, with the result that it became the first military branch to apply the digital hand to war gaming in the 1950s when it simulated air-ground combat.104 RAND Corporation also developed nuclear war games for the Air Force in the 1950s and 1960s, keeping the Air Force at the forefront of computer-based gaming.105 SAGE in the 1960s and 1970s provided additional gaming opportunities for training and study (for example, in advancing knowledge about optimum targeting strategies). As at the other services, even senior officials early on participated, for example, the Air Force Chief of Staff in 1960 took part to understand the possibilities of Soviet first-strike nuclear warfare.106 The Army embraced gaming in the 1950s and used a Navy computer game called CARMONETTE I for the first time in 1959. Subsequent releases of the software included infantry and armed helicopters, communications and night vision exercises. These systems allowed commanders to move military units around (increasing in size to battalions by the late 1960s) and to decide whether to fire or not.107 By the early 1960s, the Joint Chiefs of Staff and even NATO commanders were beginning to use digital simulation tools to learn, to train, and to formulate tactics and strategies. Upon the backs of these early games were built those that ran on more sophisticated hardware and software all through the next three decades. In those years (1970s–1990s), as Cold War and post–Cold War realities set in, games were modified to train for anticipated new battlefield and logistical issues. By the end of the 1980s, battalion-level commanders (e.g., lieutenant colonels) could train using computer-based tools, sitting in front of terminals connected real-time or in batch mode to mainframes.108 As computing power in the late 1980s and early 1990s increased, new generations of war games appeared. In addition, as one government report written in 1995 noted,
Figure 3.2 Marines in training class, Cherry Point, NC, 2006. (Courtesy U.S. Marine Corps)
as the costs of computer imaging declined, “the compromise on resolution is being reversed: newer simulations are using increasingly higher resolution graphics to serve other uses besides training,” in effect preparing the military for the widespread adoption of video games by DoD late in the decade.109 By the mid-1990s, a cultural transformation had occurred within the military that encouraged the further use of computers in training. Although a whole generation of commanders and enlisted “lifers” had been exposed at one time or another, and often frequently, to computer-based training and simulation in the 1970s and 1980s, it was the arrival of a new generation of soldiers and civilians who had grown up with television, GameBoy, and video games now working for DoD in the 1990s that stimulated further use of digitally based training. One observer of the phenomenon, writing at the dawn of the new century, noted that senior officials were now “very mindful that the people that they’re trying to bring into the military—the 18 year olds—are probably the first generation that grew up with computers, who get ‘bored real easy’ with traditional classroom instruction. They keep this in mind when designing all their recruiting strategies and training programs.”110 The same observer drew the connection with actual combat operations: In addition to cost and motivation, add relevance. Because modern warfare increasingly takes place on airplane, tank or submarine computer screens without the operator ever seeing the enemy except as a symbol or avatar, simulations can be surprisingly close to the real thing. In addition, since war is a highly competitive situation, with rules (or at least constraints), goals, winners and losers, competitive games are a great way to train.111
Table 3.7
Sample Computer-Based Military Training Tools, circa 2000

Service      Projects
Army         Saving Sergeant Pabletti (for team skills) used with 80,000 soldiers
             Taskforce 2010 PC (wargaming over the Web)
             Spearhead II (tank game)
             Flight Simulator (Microsoft’s game modified for Army aviation)
Navy         SubSkillsNet (laptop-based submarine training modules)
             Fire-control training games
             Electro Adventure (ship repair, problem solving)
Air Force    All initial flight training now done through simulators
             JVID and Finflash (target recognition)
             Falcon 4.0 (commercial flight game)
Marines      Marine Doom (fire team training, based on commercial version)
             Quake (squad fire training)
             Battle Site Zero (squad simulator)

Source: Derived from Marc Prensky, Digital Games-Based Learning (New York: McGraw-Hill, 2001): 303–312.
By the end of the century, battle simulation games in the military could be played at one or remote locations, involve thousands of participants, and include all sizes and many types of armies and combat conditions. DoD had created Joint War Fighting Centers to bring in various levels of commanders from all the services to learn how to collaborate and fight in modern wars. Even gaming with NATO armies in Europe across the Atlantic became part of the process. Table 3.7 lists some of the training projects under way at the turn of the century, suggesting the evolution in how the digital was used in comparison to the courses listed in table 3.6. What is also important to point out about the information in table 3.7 is that all these simulation games worked on a new IT platform that came into wide use in the 1980s and 1990s—video games—once again demonstrating that DoD was willing to integrate new IT developments into its training programs. As early as 1982, commentators noted that “the advent of home computer games has clearly demonstrated the teaching value and appeal of simulated engagements at all age levels,” an observation not lost on the military.112 While adoption of the new technology took time, by 1997, the Pentagon had a variety of development initiatives under way with both the Video Games Industry and with other entertainment industries for the creation of simulators to train individuals and groups, to evaluate new weapons systems and new doctrines and tactics, and all cost effectively.113 The Marine Corps may have been the first to adopt video games when, in 1996, the Commandant General Charles C. Krulak directed that the
Corps consider deploying such technology.114 That led to the adoption of the Doom game cited in table 3.7. The other forces added games, including Real War, intended to teach enlisted personnel how to think like a higher-ranking commander. Simultaneously, the military continued replacing Cold War–based strategy training games with video games suited to post–Cold War realities. As early as 1999, DoD set aside over $90 million for this purpose.115 One other form of training using computers came into its own just after the dawn of the twenty-first century, involving embedding training modules in modern weapons systems themselves by taking advantage of the availability of laptops and ever smaller electronics. These were small video training programs that were embedded in missile launchers and tanks. These could be simultaneously accessed with the equipment being trained on and when people had time or need to know. Training developers considered this class of training tools relatively new and the next evolution from traditional computer-based simulation training programs. They replicated what could be done with simulators but also provided story telling while teaching skills. As this book went to press, the services and other parts of DoD were merging old and new games together (for example, an old training video game on support strikes from F-16 jets with a new module on how to run truck convoys in Iraq).116 As with so many other digital systems, use of simulation software spread so widely that by the 1990s complaints and confusion surfaced similar to what occurred in other large federal departments, as we saw, for example, at the IRS. A RAND study from 2003 summarized the issues at DoD: Currently, the development, governance, financing, and use of simulation is a complex web, with multiple agencies responsible for defining and implementing modeling and simulation (M&S) policy. Furthermore, beyond the basic phase of training, training requirements for ships are only minimally articulated. The vagueness and inconsistency of training requirements and standards for assessing readiness further complicate the problem of determining how simulation might best be used.117
As the GAO had done at the IRS, RAND called for more coordinated deployment of standards and practices, a useful suggestion since by the 1990s all the branches of the uniformed services were required to operate in an integrated manner in combat, a requirement first widely reflected in the Iraq war of 1991. The issues remained in the new century, most recently with the development of a whole new generation of training modules based on video games.
Combat Applications

The heart of combat applications of any technology centers on support of command, control, and communications. Command involves planning, assessing the capabilities of one’s own forces and the enemy’s assets, allocating resources, and committing to action. Control is about such things as the management of weapons, such
as how computers are used to direct artillery. While much data can exist, only a small amount is needed to direct this work and to assess results. Surveillance is considered part of control. Finally, communications is, as one would expect, the task of communicating up and down the chain of command and horizontally on the battlefield about what is going on, transmitting orders, and collecting data on results. The earliest uses of computers were aimed at control activities. For instance, SAGE was designed to provide an umbrella of monitors to detect incoming Soviet aircraft and missiles in the 1950s and was used into the 1980s, when it was replaced with more sophisticated systems.118 Early uses were also applied to artillery and missile programs. As computers became more capable by the early 1960s, simulation tools could be used in support of planning functions—the command part of the 3 C’s of military leadership. Communications represented some of the most important applications of computing over the past half century, and also one where military leaders were as frustrated with the lack of progress at the end of the century as they were in the 1950s. Use of computers in support of combat evolved slowly over time. In the 1950s, the major initiatives related to combat were driven largely by the Air Force, first through the massive SAGE project but also in other, smaller ways. For example, the Air Force began using small computers (COMAR-1) to analyze bombing reconnaissance data in the mid-1950s, perhaps the first use of the digital hand in direct support of combat-related activities.119 But the major investment in these early years was in using computers in support of communications. The Strategic Air Command (SAC), which had responsibility for responding to any aerial attack on North America, installed a series of mainframes in the late 1950s and early 1960s in its underground headquarters in Omaha, Nebraska. There, three IBM 704s and two IBM 7090s were used to conduct planning, control, and evaluation applications; in short, they helped plan missions, dispatch them, and assess results.120 In the early 1960s, the Air Force installed its first system equivalent to e-mail, using teleprocessing terminals to communicate information regarding aircraft, missiles, personnel, and logistics.121 All the services and DoD had various communications networks in the 1950s and 1960s, all analog, most provided by or based on civilian telephone technologies, and AT&T. However, ensuring that these communications systems never went down was of constant concern, and the military did not hesitate to use computers made by AT&T and commercial IT vendors to track problems and warn of failures as early as 1962–1964.122 Earlier, we discussed the evolution of missiles; they, too, required complex communications to track trajectories and so forth, which were implemented at the same time. By the mid-1960s, this network included twenty computer systems scattered across missile firing ranges in the Pacific. They monitored range safety, predicted where missiles would land, provided orbital vehicle control, tracked flights, supported radar acquisition and handover functions, performed orbital computations, handled data, and conducted pre- and postlaunch checks.123 However, all of the applications just listed were ad hoc, little coordinated activities in support of combat operations. A couple of observers close to these
activities reported in 1962: “At the present time, there are few commands that depend extensively on computer assistance for command information processing. The majority of the commands to which automation may be applied are using interim data processing capabilities or are operating manually and planning to obtain automated capability in the near future.”124 Earlier in this chapter, we noted that operational support for navigation and functioning of aircraft and ships in the 1950s and 1960s was augmented by the installation of on-board computers, a process of incremental adoption and upgrades that occurred from that period to the present involving all aircraft and ships. By the 1980s, all major aircraft and ships used computers in support of their combat missions: command, control, and communications. Communications applications were some of the most complex and crucial for the military, and it would be difficult to overstate the importance for combat operations. Each of the armed services created a command (in civilian language, an agency or office) to ensure that they had effective communications systems and that these were in line with DoD’s overall requirements. These networks operated around the world and increasingly included computers to run them. While they began as analog networks, by the end of the century they were largely digital. DARPA spent vast sums of money to constantly improve these networks so as to ensure they were failsafe (a key design feature of the Internet developed by DoD). The various services developed independently of each other many of their own communications systems from simple field radio communications to networks. By the end of the 1960s, complaints surfaced about incompatibilities, and there were calls for standards and interoperability of systems; however, since in those days each of the services operated in combat more independently of each other and would not be explicitly required to integrate combat missions and operations until the 1980s, it is understandable why separate systems emerged. Some were quite massive. For example, at the end of 1968, the U.S. Army’s communications operations in Vietnam consisted of twenty-two signal battalions with 23,000 men, and that was only at the start of U.S. expansion of forces in the war. Enormous progress had been made in the 1960s in improving communications using computers. A deputy assistant secretary of defense, Paul H. Riley, summarized the progress. At the start of the decade: We had only Army, Navy and Air Force communications. If you will visualize with me hundreds of communications circuits in the form of water hoses, all interconnecting hundreds of houses in a suburban development, you will start to have the basic picture. This situation, with all of its criss-crossing of hoses, resembled the proverbial bucket of worms and unfortunately resembled our communications system.125
He complimented the Air Force for consolidating all its “hoses” while he announced that the Army was just beginning to do the same. He excused the Navy from his drill because it had a different situation, caused by ships in constant movement all over the world, although years later it, too, would have to bow to the call for integration and conformance to standards for the entire DoD.
But like the other services, even in the 1990s it faced problems communicating with, for example, aircraft hovering over ships manned by pilots from the Air Force or Army. But that lay more in the future because in 1970–1973 all of the services were figuring out how to use computers in support of communications, because hardware was beginning to shrink in size and cost, while satellites were now making it possible to transmit sixty times more data than a traditional telephone line. Computers were beginning to route information in large quantities across various commands, an application that spread widely in the 1970s, known as Automatic Digital Network (AUTODIN).126 However, the world of combat was changing, requiring the Pentagon to improve its communications infrastructure in the 1970s. As early as 1969, Riley commented on why the need for change: The first reason was the change in the nature of warfare itself. Time had become the most fundamental and critical factor. The need to respond in seconds or minutes began to depend on communications more than at any time in the history of warfare. A second factor was the need for compatibility of communications between Army, Navy, and Air Force. We had to be sure that we could send our Department of Defense, or national command authority [president] instructions . . . a third factor, compelling change, was the sharp increase in the price of hoses and the fact that all users were pressing for more and longer hoses.127
The reason for the needed longer “hoses” can be illustrated by comments made at the same time by Major General Joseph J. Cody, Jr., of the Air Force, who was responsible for providing communications to his service, due to a growing appetite for information: “The missile age, which called for increased emphasis on command and control, also caused a revolution in management and the end of traditional management methods.”128 What he set forth to implement were improved communications systems that could “collect, transmit, process and display information.”129 At the time, he used computers linked to radar systems (such as SAGE), a second system called the Ballistic Missile Early Warning System (BMEWS) that fed data into SAGE, and the Back-Up Interceptor Control (BUIC) system that did the same thing. He also launched development of an airborne command and control system that could hover over a battlefield called the Airborne Weather Reconnaissance Systems (AWARS).130 By the early 1970s, all the services were using AUTODIN (launched 1966) and were ordered to start consolidating their various communications networks. That work went forward slowly through the 1970s and 1980s.131 As late as 1990, when DoD filed its annual report of achievements, it still commented on the need for integration and continued improvements in communications.132 Various assessments made in the 1980s and 1990s complained that little progress was being made not only in integrating communications but also in other digitally based weapons systems and processes. By now there were many of the same problems evident at the IRS: expensive legacy systems; newer ones that were incompatible with older networks, applications, and equipment; budgetary
constraints; the need to keep the networks operating while upgrading; complex procurement practices; the changing nature of military operations—in short, a long list of problems. Following each war or mission, “after action” reviews invariably commented on the issues. The invasion of Grenada in 1983 required extensive collaboration among the uniformed services and was plagued with so many problems that it led to congressional action and changes in policies, procedures, and practices within DoD. However, progress proved elusive.133 By then, computing and communications had become central to all discussions about the effectiveness of the military. The issues were not resolved. Former Secretary of Defense Les Aspin and former Congressman William Dickinson reported after the Persian Gulf War that “Operation Desert Storm demonstrated that tactical communications are still plagued by incompatibilities and technical limitations.”134 The operation in Somalia in the early 1990s also illustrated the problem, as did events in Iraq in the late 1990s, when the United States monitored the airspace over the country.

What is most interesting about DoD’s problems with communications in support of combat operations is how little the technology itself affected the adoption of new digital tools. It seemed the proverbial “everyone” wanted the most advanced telecommunications in support of combat and other functions. Computer and telecommunications technologies improved enormously in the 1980s and 1990s. The constraints on adoption were driven largely by nontechnical influences. An Air Force lieutenant colonel studying the problem at the dawn of the new century reported that “combinations of factors contribute to the persistent shortcomings in interoperability, including the military acquisition culture, dwindling budgets, rapidly changing technology, the changing nature of operations, competing priorities, insufficient oversight, and unrealistic training and exercises.”135 Another study done at the time by the National Research Council (NRC) drew similar conclusions. However, it also commented on the effect of legacy IT systems, a point worth understanding because the same problem existed in other federal government departments that had been early adopters of computers: “The military services have tended to retain legacy information systems that were developed in response to ‘stand-alone’ requirements, were not regarded as subject to connection with other systems and, therefore, are not operationally friendly with their increasingly interdependent companion systems. The legacy systems issue is one of the greatest challenges faced by the DOD today.”136 The problem was compounded by both military and congressional mandates that the services operate as a united force and by the growing collaboration with military services of other countries in joint operations in the 1990s. The cultural problems were subtle but significant for weapons systems and communications of all kinds. An Air Force colonel in 2000 explained the situation: “The problem is that while the Department of Defense assigns warfighter responsibilities to unified commands, each individual service is responsible for developing its own command and control systems. . . . This creates some big, ugly seams for joint commanders.”137 The debate and actions in support of reforms continued into the new century.
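What these interoperability complaints look like at the software level can be made concrete with a small example: two stand-alone systems report the same kind of position data in different message formats, so a joint commander's system needs an adapter before it can treat the reports as one picture. The Python sketch below is purely illustrative; the formats, field names, and unit designators are invented and do not correspond to any real military message standard.

```python
# Illustrative only: two invented "legacy" position-report formats and the
# adapter a joint system needs before it can fuse them into a single picture.

def parse_format_a(msg: str) -> dict:
    """Invented format A: 'UNIT=1-64AR;LAT=33.1245;LON=44.5210;TIME=0930Z'."""
    fields = dict(part.split("=") for part in msg.split(";"))
    return {"unit": fields["UNIT"], "lat": float(fields["LAT"]),
            "lon": float(fields["LON"]), "reported": fields["TIME"]}

def parse_format_b(msg: str) -> dict:
    """Invented format B: fixed order, comma separated: 'CVN-68,33.1250,44.5300,0931Z'."""
    unit, lat, lon, timestamp = msg.split(",")
    return {"unit": unit, "lat": float(lat), "lon": float(lon), "reported": timestamp}

def to_common_track(report: dict, source: str) -> dict:
    """Adapter: whatever the origin, the joint picture stores one shape of record."""
    return {"source": source, **report}

feeds = [
    to_common_track(parse_format_a("UNIT=1-64AR;LAT=33.1245;LON=44.5210;TIME=0930Z"), "ARMY"),
    to_common_track(parse_format_b("CVN-68,33.1250,44.5300,0931Z"), "NAVY"),
]
for track in feeds:
    print(track["source"], track["unit"], track["lat"], track["lon"], track["reported"])
```

The NRC's point about legacy “stand-alone” systems is precisely that such translation was never designed in, so every new joint requirement meant building and maintaining another ad hoc bridge of this kind.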
The war with Iraq in 2003 once again demonstrated to the world on television the importance of communications applications in the deployment and performance of combat forces that combined operations from all the uniformed services and even those of a few other nations. As of this writing (2007), details regarding the role of IT were only beginning to come out. Drone aircraft equipped with digital avionics surveyed the terrain and delivered small propelled munitions to targets; vehicles were equipped with laptops that could download information and use GPS technology, while satellites provided commanders with additional information. The fragmentary evidence and early after-action reports indicated that, from the division level on up, communications systems working with computers provided significant amounts of information; but below that organizational level, out in the field, communications malfunctioned frequently, with many junior Army and Marine officers complaining about the lack of timely and accurate information about what was happening on the ground. Often troops had to rely on cell phones and old-fashioned e-mail to communicate up and down their chain of command.138 In Afghanistan, special operations units communicated horizontally with each other and even had their own web pages. That approach proved more effective, suggesting a different model for organizing battlefield communications. As an expert in unconventional warfare at the Naval Postgraduate School assessed the experience: “Some of the problems in Iraq grew out of an attempt to take this cascade of information provided by advanced information technology and try to jam it through the existing stovepipes of the hierarchical structure, whereas in Afghanistan we had a more fluid approach. This is war by minutes, and networking technology allows us to wage war by minutes with a great probability of success.”139 The other observation one can make is that digital systems
Figure 3.3 Soldier uses a computer, Iraq, 2005. (Courtesy U.S. Army)
had continued to be deployed late in the century down to the individual soldier, much as civilian uses of IT had migrated from large mainframes over the years down to individual productivity tools, such as digital watches, laptops, cell phones, and PDAs.140
Information Age Warfare

Ever since officers and military analysts could envision using computers in support of combat operations, they had speculated on a future of warfare in which the digital hand played important roles. By the end of the 1980s, a whole generation of officers and enlisted career soldiers, sailors, and airmen had been exposed to myriad computer systems in training, logistics, communications, and in support of their daily work. As Professor Edwards asserted in his study on computing in the Cold War, a whole generation of warriors had embraced computing as contributing to the next major revolution in warfare.141 All the uniformed services speculated about the future of warfare, of course, and by then the digital profoundly influenced their thinking because there had been sufficient experience and successes with the technology. Most accepted the idea that it was now a major component of all command, control, and communications activities.

Most military commentators on strategy and tactics agree that the Gulf War represented a turning point away from non-IT-centered warfare of the Cold War vintage and toward a new era in how combat would occur. The quote at the start of this chapter from General Powell, chairman of the Joint Chiefs of Staff, who was very familiar with the uses of IT described in this chapter, was made after the war. What changed in the 1990s was the ratcheting up of the debate on how to use the digital in warfare, building on prior thinking and experience. Many definitions of Information Age warfare emerged in the decade, along with a large body of publications inside and outside of DoD debating its elements and consequences. While the phrase information warfare dated back to 1976, it was not until the 1990s that it came into wide use.142 The key event in the Gulf War in increasing interest in the subject was the collection of technologies that underlay the successes shown on everyone’s evening news: “These outcomes were enabled by battlefield tracking and targeting systems that allowed American forces to identify and attack targets beyond the line of sight, by advanced aerial reconnaissance from airborne warning and control systems (AWACS) and from joint surveillance and target attack radar systems (JSTARS), and by space-based satellite sensors,” making it possible “to apply destructive force when and where we want to.”143 Ideas about what constituted Information Age warfare, however, remained in flux. Yet an Army major provided a useful explanation of the difference between conventional and IT-centric combat: “Comparing digital and nondigital forces is the proverbial apples and oranges discussion. The two simply cannot fight in the same manner. Digital units have command-and-control equipment that tells the
entire force, down to each vehicle, the friendly and enemy situation in relation to the surrounding terrain. A nondigital unit must advance physically to acquire the same information.”144 Understanding what those differences implied for doctrine, strategy, and tactics dominated discussions about the future of war. One student of the issue pointed out that “the new meanings of power and information . . . favor the argument that wars and other conflicts in the information age will revolve as much around organizational as technological factors.”145 The development of so many new weapons and ordnance in the 1980s, and their subsequent successful use in Iraq, led some observers to start thinking in terms of a new revolution in military practices. Proponents of the new way argued that wars in the future would be struggles over the control of data and knowledge and less over territory; that industrial age stovepipe organizations would be replaced with flatter horizontal units; and that weapons would increasingly herald a new age of precision warfare, delivered to the correct targets thanks to “information superiority,” with scores kept by myriad sensing devices, such as drones. More extreme visionaries spoke of successful nations dominating or destroying the information of their enemies, even to the point of using propaganda and media to reshape enemy public opinion, often without shedding blood.146 Senior officers at the Pentagon increased their formulation of doctrine and strategy in the course of their routine planning in the 1990s. Publication of the Joint Chiefs of Staff’s Joint Vision 2010 set the direction: “We must have information superiority: the capability to collect, process, and disseminate an uninterrupted flow of information while exploiting or denying an adversary’s ability to do the same.”147 Thinking derived from that statement led to enriched articulation of how IT would influence command, control, and communications activities and practices, including the then-favored expression of command, control, communications, computers, intelligence, surveillance, and reconnaissance, or C4ISR, to describe the now normal realm of warfare. DoD now gave information warfare (IW) a formal definition: “information operations (IO) conducted during time of crisis or conflict to achieve or promote specific objectives over a specific adversary or adversaries.”148 This could be carried out on the battlefield or as psychological operations (PSYOP) against a population or nation; “Force XXI” thinking envisioned IT as a major force multiplier. On the extreme edge beyond official thinking were those who argued that “cyberwar” would come, in which warfare would be strictly digital, with attacks on networks, databases, local area networks, and so forth. To those operating in a battlefield environment, however, digital tools potentially offered three advantages. An infantry colonel explained: First, the unit with situational awareness can better maintain its bearings in urban warrens, congested jungles, and convoluted mountain outcroppings, not to mention under the cover of darkness, all the preferred fighting arenas of U.S. light contingents. The second benefit relates to the first and again gains time, widely acknowledged to be the most valuable currency of combat.
Finally, a digitized light unit can more quickly recover from the confusion that usually accompanies its inaugural events: an air-borne, air assault, or amphibious forced-entry operation.149
However, he also cautioned that “these advantages accrue to a trained, disciplined force that adds them to well-honed close-combat skills. They are not magic wands that guarantee victory when wielded by any guy dragged in off the street.”150 In the late 1990s and early 2000s, the language of information-based warfare increasingly concentrated on the notion of network-centric warfare (NCW), creating a whole body of thinking that once again emphasized enhancing communications to get precise information to forces and facilitating collaboration on the battlefield among various units. In DoD’s latest formal pronouncement at the dawn of the new century, entitled Joint Vision 2020, senior officials spoke of more complete pictures of the battlefield for commanders at all levels.151 Increasingly, merging video game technology with simulation software and communications vehicles, such as the Internet and LANs, created new combat systems to track and attack targets, deploy people and materiel, and assess results. The experience in Afghanistan, discussed above, was viewed by a growing number of officials as proof positive of the value of what many in the military were now increasingly calling network-centric operations (NCO), although commanders complained about slow network response times and insufficient bandwidth to keep up with what was actually happening on the ground. The closer one moved toward the frontlines, toward “the sharp end of the spear,” the louder the complaints about insufficient bandwidth, so theory and practice had yet to marry up.152 However, the notion of NCO had gained enough adherents to start influencing another round of developments in communications and strategy. The digital-centric warfare so widely embraced in the 1990s and beyond had its critics almost from the beginning, as had all emerging IT applications in past decades. Missteps such as false alarms with SAGE and SAC provided early sources of criticism about what could happen should a technology fail to perform as intended. Programs such as SDI were often oversold and overpromised, while confidence in high-tech weapons came only after they had been deployed in combat.153 The poor performance of some rockets in the first Iraq war gave pause to some critics, many of whom were outside the military establishment. However, there were also critics who grew up in the military and who raised objections after leaving active service. Each published pronouncement by the Joint Chiefs of Staff prompted criticism. The case of Joint Vision 2020 (JV2020), for example, proved to be no exception. One recently retired Army colonel, John A. Gentry, described four fallacies in it, any one of which could prove deadly to the military. He complained that the thinking promised to address only a narrow band of activities performed by the military; that it relied on highly vulnerable infrastructures; that easy countermeasures could be launched against the military’s systems; and that institutional impediments to change, baked into the culture of the military, posed another threat.154 The debate over the increased use of IT in fighting future wars became subsumed in a larger discussion held within the Pentagon regarding the general
use of technology. The debate moved from theoretical to practical considerations as the George W. Bush administration prepared for war with Iraq, flush with the victory against relatively primitive Taliban warriors in Afghanistan. Secretary of Defense Donald H. Rumsfeld had over many years become a supporter of arguments that stressed the importance of air power, use of computer-centered communications, and the deployment of small, well-trained forces. He wanted to transform the department into a leaner, more technologically dependent fighting organization and had been running into enormous resistance from many senior officers and some members of Congress prior to the start of the Iraq war. Senior officers believed that future wars would continue to require combat on the ground with many troops and a great deal of firepower (a view derived from the Powell Doctrine of 1990–1991), one the experience in Iraq subsequently proved would have been the correct strategy. For the war, General Tommy Franks wanted 250,000 troops; the secretary insisted on a strategy that relied on fewer troops and massive use of air power, smart bombs, and other technologies. By the start of the second week of the war, critics began complaining that there were not enough troops on the ground, and the chronic problems in controlling security after the formal war ended simply added evidence in support of those who thought wars could not be so extensively dependent on technology. In short, technology had its limits.155 As this book was going to press in 2007, the outcome in Iraq had yet to be determined. So we do not yet know if what some have called the Rumsfeld Doctrine proved effective. But what is very clear is that war was now profoundly and openly influenced by computing. While IT had been important in combat operations as early as the late 1970s and 1980s, it was not until the 1990s that it was front and center, perhaps more so than for any other nation’s military. Meanwhile, the idea of Information Age warfare continued to unfold.
Non-Combat Applications
While the breadth of uses of computing in DoD rivals that of any other government organization or company, several further applications suggest the extent to which this department deployed the technology. The most important of these are the accounting and financial reports prepared by DoD. Each of the uniformed services had been users of precomputer information technologies dating back to World War I. Accounting and financial reports on budgets, expenditures, and payroll had long been assisted with the use of punched-card tabulating equipment from IBM and a myriad of office appliances from other American vendors, such as Burroughs, NCR, Remington Rand, and Felt & Tarrant. Their equipment was used at all bases and ports, not just in offices located in the Washington, D.C. area.156 When computers became available, the uniformed services and civilian agencies migrated their preexisting processes over to the new technologies at the same time and for the same reasons as did the private sector because
they were faster, could handle larger volumes of transactions, and reduced the amount of manpower required to get the work completed. Minimal alterations to existing work processes took place because the intent was to automate existing activities. Only in the late 1950s and early 1960s did officials alter work processes as they began understanding the possibilities of even further streamlined or new activities using computers. One additional early pattern that has continued to the present involved an organization-wide concern with the formalities of procurement, the creation and defense of budgets, and the control of expenses. Every annual report of the secretary of defense, for example, discussed the issue. Law required that budgets and financial reporting adhere to standards set by the U.S. Congress, and oversight organizations, such as the GAO and the Office of Management and Budget, monitored conformity. Examples illustrate how quickly the military embraced computing in support of accounting and financial matters. The Navy used IBM 705s to handle payroll functions as early as 1958 at its various naval centers around the country. The Army, which had the largest number of personnel, began installing similar IBM equipment at its bases all over the world in the late 1950s to handle personnel statistics, depot supply accounting, financial reporting, and also such smaller applications as theater supply and housing control and graves registration; but the lion’s share of the computing power went to accounting and financial systems. In the language of the uniformed services, accounting consisted of a collection of processes called comptroller activities. For the majority of the second half of the twentieth century, these activities consisted of tracking and reporting on general accounting, cost accounting, civilian payroll and leave accounting, civilian personnel, command management, budget and manpower reporting, station property reporting, and station supply accounting. “Station” referred to the physical site, such as an Army base. Reports and data collection processes relied heavily on computers by the end of the 1950s, and, in fact, many of these accounting applications were not updated to reflect the capabilities of newer hardware and software until the 1970s.157 Accounting was so important that in wartime the accountants were often among the first units to arrive in a combat theater. A Marine officer working in accounting and data processing at the time of the Vietnam War preserved a case study of the process. The first data processing platoon to arrive in Vietnam came on March 23, 1965, from the Marine Corps to serve the 3d Marine Division. Like the Army, Navy, and Air Force, each Marine base and large unit had its own data processing to support such applications as payroll, budget accounting, and personnel and inventory tracking. All these processes migrated to computers in the 1950s and early 1960s. In 1958, the Corps began leasing six computer systems to support accounting and personnel applications, three from NCR and another three from Univac. In 1962, these systems were replaced with IBM 1401s, and subsequently with even newer Univac IIIs by the start of the Vietnam War. The initial accountants to come to Vietnam consisted of an officer (data processing [DP] manager) assigned to the local commander, a second-in-command DP officer, and enlisted personnel who could do the work,
run the machines, and perform necessary maintenance. The majority of the work they did involved budgeting, inventory control, and personnel accounting.158 It is impossible to discuss accounting and finance in the military without reviewing personnel accounting systems because, unlike in the private sector, larger quantities of information about personnel were often kept in machine-readable form and as part of accounting applications. Major Henry W. Tubbs, Jr., who led some of the first Marine DP personnel into Vietnam, described what was involved: “Locally, the origin of this information is his unit and the individual’s service record book. Certain items used by the command and which comprise the input to personnel data processing programs are also kept on punched cards at the mobile dp installation [in this case in Vietnam]. New items will eventually end up in his master record on tape at the West Coast Computer Center at Camp Pendleton, Calif.”159 Work processed through his system included listings of personnel rotation tour data, attrition, officer assignments by unit, myriad casualty reporting, combat awards applied for and distributed, locator listings for personnel in Vietnam, and location of personnel by military occupational specialty (MOS). Additionally, a complex set of applications generating seven reports kept track of casualties for payroll, unit commanders, and the Marine Corps’ medical community. His data center consisted of two air-conditioned trucks that moved with the troops; often punched-card files were “located in tents,” which accumulated “dust, sand particles, and grime while being manually processed.”160 Similar personnel systems for use within the United States and around the world were implemented by all the services.161 It is easy to overlook how many people were tracked and accounted for this way, but there were many. One example will have to suffice to make the point. In the late 1950s, the Air Force employed 900,000 people in uniform and an additional 400,000 civilians for a total of 1.3 million people, more than the combined populations of Delaware, Nevada, Wyoming, and New Hampshire at that time. In 1956, the USAF began using an IBM 705 in support of the work to service 300 major bases and 3,000 locations around the world that spent $17 billion that year. Of the 300 bases, about 200 had some prior punched-card systems, and in the late 1950s all began migrating to computers. The applications, and types of data collected and managed, mirrored the experience faced by the Marine major quoted above. Subsystems were developed to monitor personnel distribution, maintain an officer inventory control process, perform personnel turnover analysis, and do payroll and budget for people, buildings, and inventory. All the services reported that using computers improved the quality and accuracy of their data and the timeliness with which it was collected, analyzed, and reported.162 By the early 1960s, basic accounting and personnel systems were widespread across DoD. Modifications over the years included collecting data on procurement practices as they changed over time and a growing collection of applications to analyze patterns of expenditures and to model options for purchases and how best to spend budgets. Accounting and financial systems were built all during the second half of the century that met the specific needs of branches and agencies but that also
addressed specific classes of transactions so that the DoD had, in effect, many automated systems that did not link up end-to-end with integrated flows of data. Attempts to address that latter problem did not begin in earnest until the 1990s. The disconnected applications that accumulated over the years led to inherent problems described clearly by one audit: “The relationships among feeder, accounting and financial systems are still ‘detached’ from the perspective of data standardization, transactional standardization, and system compatibility. This detachment causes much re-entry of data, ‘crosswalking’ or matching of data through elaborate edit processes and conversion tables, creating timing delays—all of which contribute to an inability to determine the status of financial information on a routine basis.”163 The description applied to many systems. At the end of the century, there were ninety-one critical applications feeding data to sixty-one accounting systems, few of which were even written in the same programming language, let alone technologically compatible with each other such that information could move from one system to another or changes made to one program could be implemented in another. Some of the affected classes of applications included payment transactions for civilian pay, debt management, military pay, contract and vendor payments, disbursing, and payments for transportation and travel, all crucial systems. Feeder applications that collected data on transactions covered acquisitions, personnel, cost management, property management, and inventory management. Other programs had been written to move data from one system to another, in effect translating data so that it could be understood by upstream applications. Yet even at the end of the century, DoD still did not have an integrated balance sheet for the entire department, which still generated much manual work for accountants. Thus, the department was saddled with redundant systems, incompatible data and systems, and the challenge of integrating $500+ billion of expenditures.164 It became a fact of life that audits of DoD accounting always turned up problems, just as in many other federal agencies. A lesson we can derive from DoD’s experience with accounting and financial applications is that just because one used computers in support of a stream of work did not mean that the activity would be done well. As generations of IT experts always pointed out, computers were tools; use them effectively and you will do well; use them poorly and you will do poorly. The variety of systems implemented over the years, from various suppliers and of varying ages, made consolidation of data increasingly difficult. As late as 2001, after a half century of accounting using computers, one audit team reported that the “Defense Department’s accounting systems do not provide the information needed to relate financial inputs to policy outputs” and failed to generate accurate information that told “managers the costs of forces or activities that they manage and the relationship of funding levels to output, capability or performance of those forces or activities.”165 Until 1990, DoD was primarily responsible for auditing its own applications, with the occasional intrusion of auditors from other parts of the government. However, that year, Congress passed the Chief Financial Officers Act, requiring all departments to pass an audit that demonstrated expenditures
matched appropriations. As of 2001, DoD had failed to pass any of its annual audits, a problem that plagued DoD accountants right into the new century.166
Figure 3.4
The USAF used mainframes to do routine accounting and processing at the Pentagon. Notice that even in 1978, when this photo was taken, punched cards were still in use. (Courtesy IBM Archives)
Periodically through the entire period, audits of inventory expenditures would call out problems with overpriced items, overpayments on contracts, and insufficient oversight. As recently as 2003 and 2004, similar problems again surfaced concerning payments to civilian subcontractors working in Iraq and regarding the hundreds of systems tied to inventory control. Pointing to these problems is perhaps less a statement about incompetence than a comment about how large organizations like DoD, IRS, and General Motors faced enormous challenges in managing enterprise-wide applications that involved a combination of legacy and new systems, and the constant requirement for changes in applications, such as for new forms of data and reports. Nonetheless, because of the large size of the Defense Department, GAO, OMB, and various congressional committees constantly monitored its accounting, inventory control, and personnel practices.167 A second class of application provides a more positive case study of a very contemporary use of computing—video games to help recruit new personnel—an application of IT that only came into its own in the late 1990s. Just as military trainers had recognized that young recruits had grown up with television, PCs, GameBoys, and video games, so, too, did recruiters appreciate the fact that this generation had to be reached in new ways. War games in video formats had been around since the late 1970s, a story described in volume 2 of The Digital Hand.168 However, on July 4, 2002, the Army released a free game to the public called America’s Army, intended to illustrate life in the Army and its values.
Aimed at older teenage boys and young men, it could operate on various platforms, such as PCs, Macs, and Xboxes, and over the Internet. It became very popular. Within months it had become one of the top ten video games in the country. Between initial release and the summer of 2005, it went through fifteen editions, attracting controversy, with some accusing the Army of having prepared a propaganda piece, while recruiters claimed that it had significantly helped them in their recruitment efforts. Various surveys of enlisted personnel, students at West Point, and officers indicated that tens of thousands had played the game. It was a perfect match of a technology with a target audience and perhaps the most public success story of the military’s use of IT after smart bombs and missiles.169 In addition, it reduced the cost of recruiting. Because it was free to download, the military avoided the distribution costs of a commercial CD-based game, a savings amounting to some $13 million. Furthermore, the game was effective. Because it reflected Army values, enlisted personnel were encouraged to play it as well, as a form of training. One internal study estimated that in its first year of release, 1.2 million people had played the game, and that players around the world from many countries had played over 174 million times; in other words, people spent 17 million hours with this Army video game.170 Three years after its initial release, long after most video games were forgotten and replaced by others, this one remained popular. The Pentagon invested some $16 million to keep it current, and over 4.5 million people had played the game by mid-2005, with 100,000 new players added every month.171 Its success was due to two factors: it was free off the Internet or could be picked up at any recruiter’s office, and the Army paid careful attention to keeping the game authentic in every detail, from the weapons and vehicles used to the circumstances in which players found themselves. Players went through boot camp in the game, were encouraged to work in teams, and went to virtual Fort Leavenworth military prison if they violated rules of engagement.172 Finally, one could ask, of all the non-combat-related uses of IT not covered in this chapter, were there others of significance? The most important one relates back to communications. DARPA helped fund the creation of the Internet for use by researchers working on behalf of the military. We saw earlier that primitive uses of telecommunications to transmit data over telephone lines began in the 1950s and extended all through the 1960s. By the 1970s, each of the uniformed services had begun implementing commercially available fax machines and, more important, e-mail. One early tool of this kind from the 1970s and 1980s was the Professional Office System (PROFS) from IBM. It consisted of terminals connected to mainframes that switched messages from one user to another. Multiple mainframes could be linked together so that additional users on other computers could communicate. It was useful for transmitting notes and letter-length messages, for creating online calendars, and for sending attached files, the sorts of functions that Lotus Notes provided in a more user-friendly format in the 1990s before the arrival of Internet e-mail. Early systems were designed to provide communications within a military base, initially often within an organization, such as a logistics operation on an Air Force base. E-mail quickly also became the
basis for communicating budget information as graphical packages became available and could be used, for example, by procurement and accounting.173 By the end of the century, e-mail had become ubiquitous in DoD and was even available on laptops mounted in armored vehicles in combat in Iraq in 2003–2007.
Patterns of IT Deployment
Even before the invention of digital computers, the military community committed to their development and, for decades afterward, to their further improvement. But as this chapter makes clear, actual deployment and use of computers by the DoD far transcended its R&D activities in terms of what happened on a day-to-day basis within the department. Every command and agency used computers extensively, and did so as far back as the late 1950s, some even earlier. From the earliest days, DoD had formal processes for how any command or agency could acquire computers that involved practices similar to those evident in the private sector and in other public agencies. These included identifying specifications and applications, doing a cost justification analysis, soliciting equipment and software from vendors, and going through formal steps for evaluating to whom to award contracts. Those processes varied over the years but more in details than in concept.174 Despite attempts by various secretaries of defense to create standards and uniform practices, variations existed. For example, as far back as the 1950s and early 1960s, one Army official wrote: “Why so many different kinds? The reason is simply that before the Army reorganization, each Technical Service proceeded individually to acquire computers as they felt best suited their needs.”175 The CIO for the U.S. Air Force in the late 1990s was just as candid about DoD’s experiences decades later: We’ve tended historically to bring on new information technology on a catch as catch can basis. We bought thousands of computers, without thinking that the time would come when those boxes might be used to form interactive networks of computers. Then we created computer networks without thinking about optimizing the interfaces between networks of computers. And, worst of all, perhaps, we did all that without thinking too much about the requirements of the individuals using the computers and networks.176
While one can quibble over whether he overstated the problem, he also noted that in Desert Storm (1991) military units in the field complained about insufficient bandwidth, the same complaint heard in 2003–2006 in Iraq.177 Many things thus stayed the same, not the least of which were procurement practices. In 2003, an Army lieutenant colonel wrote in his War College thesis that “DOD’s Planning, Programming, and Budgeting System . . . has changed little over the past decades.” He complained that this circumstance “inhibits innovation and fails to adequately react to environmental changes.” He noted what so many other commentators had about the acquisition process: “Defense analysts aim most of their modeling efforts and statistical analysis at program/budget
requirements for successive six-year windows,” a practice that “severely limits objectivity by perpetuating the status quo,” perhaps providing one explanation for why non-combat core uses of computing often existed hardly modified from one decade to another.178 Two patterns of behavior become visible from my analysis of DoD’s use of computers and telecommunications. First, and despite continuous skepticism about new applications, particularly in weapons systems, the entire DoD had a healthy appetite for IT, with critics often complaining that this interest was too great. Every agency and every branch of each service used computers. Second, despite initiatives to the contrary, acquisition and deployment of IT remained highly decentralized, with every command, branch, and agency busily going about its work of acquiring and deploying systems that it found supported its mission. As their assignments changed, they dutifully acquired new applications, although more often than not retaining old ones. This pattern of acquisition of IT mirrored what happened with aircraft and weapons systems as well. This practice of acquisition and coexistence of new and old applications played out in a relatively clear manner. New weapons systems typically sported newer, more advanced technologies than nonmilitary applications. Put the other way around, support applications, such as payroll, personnel systems, inventory control, and logistics in general, changed less frequently. However, give the Air Force a new airplane, and it would acquire the latest avionics. DoD attempted to implement standards, for telecommunications as far back as the mid-1960s, for example, to simplify acquisitions, make systems more compatible and therefore able to support cross-branch integration, and reduce acquisition and operating costs. However, much of the slow progress in upgrading old backroom applications and imposing consistent technologies mirrored the experiences at the IRS and at other agencies described in subsequent chapters. In the 1950s and 1960s, before the number of computers far exceeded the capacity of members of the Computer Industry to keep lists of installations, all publicly available inventories showed that the military bought every kind of IT that came along, often acquiring 25 percent or more of the entire inventory in the United States.179 By the mid-1960s, DoD was spending over $700 million per year and had installed nearly 2,000 computer systems. By the end of the decade, it had over half the systems installed in the U.S. government, down from a dominating 70 percent of the inventory in 1965.180 Yet even in the mid-1970s, of the some 8,600 computer systems installed across the federal government, 4,245 belonged to DoD.181 By the end of the 1970s, it was easier to track the percentage of the Pentagon’s budget allocated to IT, or the dollars spent, than to count every computer system. Government accountants began reporting such data in the early 1980s, continuing to the present. As table 3.8 illustrates, in the 1980s and mid-1990s, DoD’s expenditures on IT ran into the billions of dollars per year. Expenditures grew all through the 1960s, 1970s, and 1980s and stabilized in the 1990s as the result of three circumstances: sharply declining costs of computing equipment and software, extensive budget cutbacks for all of DoD, not just for IT, and normal operational improvements from streamlining and consolidating systems.
Table 3.8 Department of Defense Expenditures on IT, Select Years, 1982–1996 (billions of dollars)
Year      1982   1987   1992   1996
Amount     4.2    8.2    9.9    9.0
Source: U.S. Office of Management and Budget, Information Resources Management Plan of the Federal Government (Washington, D.C.: U.S. Government Printing Office, August 1996): 3.
However, those budget numbers mask a massive inventory of installed computers. In 1996, for example, DoD had 2.1 million computers (from supercomputers to PCs), operating on some 10,000 local area networks and 100 national networks.182 What is also clear from the historical evidence, and underpinning the data in table 3.8, is the fact that DoD single-handedly spent nearly 45 percent of all the funds the federal government expended on IT across the period from the 1950s through the 1980s; and in the 1990s, it still claimed over 25 percent of all federal expenditures on IT. In short, as every computer sales executive knew, DoD was the world’s largest consumer of IT in the second half of the twentieth century.183 For certain classes of IT, DoD’s presence was impressively large. For example, in 1989, the federal government had seventy-one supercomputer systems. Twenty were located in DoD, while another thirty-three were in the Department of Energy, where all the national laboratories were housed, and which did a considerable amount of work for the military, such as developing nuclear weapons. NASA had ten, but even some of their work was in support of the military, leaving, in effect, eight systems that we could conclude probably did little or no work for the military. These were located in the Department of Commerce, Environmental Protection Agency, and Department of Health and Human Services. Not included in this inventory, of course, were the numerous supercomputers used by various intelligence agencies, such as the National Security Agency, which housed some of its systems at military installations.184 The pattern of being a large consumer of all manner of IT extended into the new century, by which time DoD spent vast sums on IT outsourcing and other services, like so many corporations and other government agencies. In 2000, for example, DoD expended $53.1 billion for all manner of services, not just technical or IT, and of that amount, approximately $5 billion went to IT services.185
Conclusions
It would be difficult to overstate the effect that the massive size and complexity of DoD has had on its internal workings and the effects it has had on wars, world
peace, and the operation of the national government at large during the second half of the twentieth century. When the position of secretary of defense was established in 1947, Congress authorized that official to hire a handful of employees to work in his office; fifty years later, that secretary had over 2,000 employees and managed 15 percent of the federal budget, while providing direction to over 2 million employees.186 No discussion about the DoD can ignore its sheer bigness. The history of the digital hand at this department ultimately reflects the consequences of that size and the complex and varied missions it was required to carry out. IT unquestionably and consistently played a supportive role in helping all organizations within DoD carry out their missions over the years. In every niche and corner of the Pentagon, at hundreds of bases around the world, and across every command and agency, employees used IT in support of their activities. If the technology did not exist to get the job done, DoD contracted out its invention. DoD shone best and most effectively in its use of IT in two fundamental ways. Over many decades, DoD godfathered a vast amount of computer science and technology that benefited both the department and the nation at large. The value of the Internet is merely today’s most obvious example of benefit to the nation; but we must not forget others: various programming languages, computer memories, online systems, computer operating systems, artificial intelligence, avionics—the list is long. We live in the Information Age thanks to the R&D and technology transfer of the Department of Defense over so many decades. The second way DoD shone was in the application of digital technology in weapons and transportation. Ballistic missiles, atomic bombs, smart bombs, modern aircraft, naval ships, and even wheeled vehicles made the United States the most powerful military in the world and, to no insignificant degree, led to its winning the Cold War. If DoD had to “get right” one use of IT, it was better that it be in military applications than in accounting and financial systems. DoD’s experience with IT, however, was not uniformly and consistently outstanding. As we saw with its accounting and financial systems, IT reflected the limitations of how the organization operated and its values. As historians of the department pointed out, a major war over control of assets, fed by rivalries among the uniformed services, occurred within the DoD and has yet to end. For those outside the department, it is telling how extensive this problem has been: its historians, themselves employees of DoD whose studies were published with the department’s endorsement, considered it a central theme of their analysis of how the office of secretary of defense evolved.187 IT aligned on all sides, supporting battles among organizations functioning as relatively independent silos, which goes far to explain why every major audit of the department’s IT and accounting systems found that the biggest problem remained the lack of integration of information about the operations of this part of the American government. The situation extended across all manner of applications, from weapons and combat to training and logistics and, finally, to accounting and finance. Looking at other large agencies and major American corporations, one can spot similar patterns of IT usage. However, nowhere is the fragmentation
of systems as great as at DoD, which can largely attribute that phenomenon to the sheer size of the place. Anybody who has worked for a very large public institution or major corporation understands instinctively how difficult it is to work outside one’s own silo, to comprehend and to work with others within the larger enterprise without getting lost, compromising one’s management’s priorities, or risking loss of financial or career incentives. Given the substantial deployment of all manner of IT, and the vast resources devoted to it over the decades, it is remarkable how long it took for many systems to go from conception to being up and running. Development of weapons systems and new aircraft and ships routinely took decades to accomplish. More mundane systems, such as a new generation of avionics, also took ten, fifteen, or more years to develop before being ready for implementation. At the same time, transformation of relatively simple systems, such as accounting and personnel applications, barely took place. It was not uncommon, for example, to see logistics systems in the 1980s that had been designed in the 1960s still operating all over DoD. Ten-year-old (or older) hardware also existed all over the department. To be sure, the causes of this technological atrophy paralleled the experience of the IRS. Both had complex procurement issues; each complained about the lack of sufficient budget and resources; auditors pointed to poor leadership and alternative priorities; and each took over a decade to change any major application. Somehow, however, both survived. Admittedly, the IRS came very close to not carrying out its mission of collecting taxes in the late 1980s, and we have yet to see DoD lose a war because of poor IT systems, although, as noted earlier in our discussion of Information Age warfare, some military analysts wonder whether that may yet happen. One is tempted to argue that because of the complexity and size of this department, it experienced every type of application of the digital hand used in the American economy and enjoyed (and suffered) an equally comprehensive set of consequences. But that might not reflect reality. For example, DoD took the lead in innovative uses of IT in education that the rest of the nation, particularly higher education, has yet to catch up with. On the other hand, modern research techniques and research agendas of universities and companies have been influenced profoundly by the Pentagon; indeed, for the period from the 1940s through the 1960s, it dominated these. While students of the IT industry are quick to point out that commercial R&D took off by the 1970s and created a situation whereby the Pentagon had to play catch-up with the private sector, beginning in the 1980s with the emergence of the PC, the fact remains that DARPA still influences many research agendas. But to return to our point about the influence of DoD, it is useful to think of its experiences with IT as reflective of what happened across the economy. President Eisenhower’s notion of the “military-industrial complex” is a useful paradigm to apply here, because DoD, in combination with its key suppliers, did constitute an economic and technology ecology that made DoD an economic “force multiplier,” to use a good military term. The DoD affected how whole industries functioned and used IT, such as airplane manufacturing, transportation, and weapons. By using the military
notion of “force multiplier,” I mean that DoD’s expenditures and the movement of its alumni into industry enhanced the power and influence of the department, creating waves of effects, much like what happens when a pebble is thrown into a calm body of water. In that way, it affected its suppliers and consumers in a manner not so dissimilar to the influence General Motors had on its suppliers and business partners in the 1960s and 1970s and Wal-Mart had on its own from the 1980s to the present. In each instance, large size and control over the amount of business parceled out to suppliers of research, equipment, and other IT products and services created technological and economic ecologies that included unique practices, local cultures and values (even language), and economic bazaars. No other public institution in the American economy had this degree of influence. Ultimately, however, we must ask, despite size, complexity of culture, conflicting internal objectives, and the characteristics and use of the technology itself, did IT change the nature of the work of this department over the past half century? The answer is clearly yes. A soldier in 1950 rarely if ever saw a computer; in 2005, every civilian employee and member of the uniformed services used computing at one time or another to do their work, and had done so for nearly twenty years. In the case of the Air Force, it was more like thirty years. The advanced weapons of the United States that served it so well in the Cold War were profoundly influenced by computing. To be sure, in low-technology wars of insurgency like Vietnam and those waged in the Middle East, American forces consistently punished their enemies far more than the converse. The percentage of casualties suffered by American military forces, both wounded and dead, remained lower than for other nations that did not rely so extensively on IT and other related technologies. That balance of pain may change as widespread availability of IT extends to less developed economies and societies, as evidenced by the effective use of cell phones, garage door openers, and washing machine timers in attacks that wounded or killed thousands of U.S. soldiers in Iraq after 2003. But in aggregate over the past six decades, IT provided a better shield protecting American troops, sailors, and airmen than it did for enemies of the United States. Ultimately, the ability to fight and win wars, and to do so with minimal casualties and other costs to the nation, is the objective of the Department of Defense. There is no evidence to suggest that the DoD intends to pull back in its reliance on IT to carry out its reason for existing. Finally, we need to acknowledge an issue that is increasingly getting attention, namely, whether in the case of DoD there is too much reliance on technology. It is a question that does not come up with any other part of the public sector, with the possible exception of critics of the use of computing as a teaching tool in classrooms. Secretary McNamara realized in the late 1960s that his reliance on numerical targets, statistics, computerized modeling of events, and extensive aerial bombardment was not leading to victory in Vietnam. The authors of a recent history of the Iraq war have questioned whether Secretary Rumsfeld was suffering from an even worse case of overreliance on technology.188 Meanwhile, an historian of the role of technology in American
foreign policy across all of U.S. history concluded, in regard to the latest war, that “techno-hubris goes far to explain the miscalculations of the civilian planners in the Pentagon who were the main architects of the 2003 invasion of Iraq.”189 Paul Edwards in his study of the Pentagon during the Cold War brilliantly documented how both civilian and military leaders viewed the world almost through the ideas and intellectual typologies suggested by the technological architectures of computing and telecommunications.190 In short, did the Pentagon suffer from too much reliance on the digital hand and other forms of technology, that is to say, was it influenced in its world view by a culture and zeitgeist of information technology? The accumulating historical evidence would suggest that it is a risk senior officials can be exposed to while embracing what otherwise are practical uses of the digital hand.191 If we turn our attention to the flip side of national security, that is to say, away from threats aimed at the United States originating in other countries, and look internally to crime, law enforcement, and judicial uses of the digital hand, what can we learn? It should come as no surprise that police officers, detectives, district attorneys, jailers, judges, and, yes, criminals all used IT. How they did that is the subject of the next chapter.
4
Digital Applications in Law Enforcement
The need to develop, test, and field new law enforcement tools remains as compelling as ever, given the rapidly increasing technological capabilities of criminals.
—William Schwabe, 1999
Crime is big business in America. If law enforcement were measured like an industry, it would be listed as larger than many in the private sector. It has also expanded over time. Expenditures on law enforcement accounted for about 1.1 percent of GDP in 1982 and grew to 1.66 percent in 2001, or by roughly 50 percent in two decades. The growth is even more pronounced than those percentages suggest because the economy itself was also larger in 2001 than in 1982. At the start of the new century, nearly 2.3 million people worked in this corner of American society, making it one of the largest employers in the economy.1 In one of the first attempts to quantify the cost of crime to the national economy, the U.S. Department of Justice reported that in 1995, personal crimes cost $105 billion, and if one included pain and suffering expenses, $450 billion. Violent crime caused 3 percent of all medical expenses and 14 percent of all injury-related medical spending. The department further estimated that crime generated 10 to 20 percent of all mental health costs.2 So, it is a sector of the economy that cannot be ignored. We can learn much about the deployment of the digital hand in this part of society, because it is an example of how many small organizations around the country acquired IT and what they did with it. This case study also illustrates how initiatives by the federal government affected state and local
governments. In comparison to the IRS or the Department of Defense, most law enforcement agencies were always relatively small organizations. Since in many industries the speed with which computing permeated day-to-day work of companies and agencies was influenced by their size (scale) and available budgets (scope), how did computing come into law enforcement, where most agencies were small? What effects did the technology have on their operating culture? These are important questions to ask because by the end of the century, law enforcement agencies had become extensive users of information technology. Computers and all manner of telecommunications were in wide use in police departments, courts, and prisons; even criminals had become dependent on computing and telecommunications. In the process, criminals created a whole new type of crime, often called cybercrime or simply computer crime. This chapter documents the waves of applications that washed over the law enforcement community over a half century. It is a sector of society that had long relied on vast quantities of data with which to do its work, and so the arrival of the computer, with its ability to handle ever larger volumes of information, was a development very suitable to this “industry.” Specifically, we will look at the use of computing by policing agencies, courts, and corrections, with a brief introduction to the early history of computer crime, as it represents a new class of criminal activity made possible by the existence of the digital hand. There is insufficient space in this book to discuss in detail the role of computers in law firms or in ancillary professions, such as private investigators and IT security experts, but they are subjects worthy of study by others.
Structure of the Law Enforcement Community
This community consists of various police forces, systems of courts (judges and staffs), and myriad local, state, and federal jails and prisons (also called correctional facilities) filled with prisoners, as well as other individuals who either are charged with crimes and awaiting decisions by courts or are criminals simply loose in society. Sometimes the communities of lawyers and those who work for them (such as private investigators) in the law enforcement world are also described as part of the criminal justice ecosystem. We should also include in this milieu those who are victims of crime to complete the picture of a law enforcement ecosystem.3 The heart of law enforcement consists of a complex patchwork of local, county, state, and federal organizations that provide police protection, trials, corrections, and rehabilitation. Each has varying scopes of geographic and legal responsibilities and authorities; some even overlap. Towns, cities, counties, and states each have police agencies, courts, and prisons. The federal government has specialized law enforcement agencies, most notably the Federal Bureau of Investigation (FBI), military police in each of the uniformed services, and others that concentrate on firearms and tobacco or protect specific places (such as the White House). The Secret Service in the Department of the Treasury is responsible for enforcing laws against counterfeiting American money and for protecting
the personal security of the president and vice president.4 Criminals and victims of crime exist across society at all social levels and in each community, irrespective of conventional policing and jurisdictional boundaries. The total collection of various individuals and agencies presents a picture of highly fragmented sets of players, a feature of that community that had a profound influence on how they deployed computing. A few statistics illustrate the fragmented nature of this ecosystem. At the end of the century, there existed over 40,000 policing jurisdictions (police, sheriffs, state, and other specialized agencies), most of which were small and employed fewer than fifty officers; obviously, the more populous a community, the larger its force. All told, these agencies employed roughly 450,000 uniformed officers. Each incorporated town, city, county, and state also had courts, nearly 16,000 in all. If we add courts and corrections to policing, roughly 2 percent of the American labor force worked in some capacity in law enforcement and justice, accounting for some 2.3 million individuals and a payroll of $8.1 billion in 2001. Nearly 60 percent of all these individuals worked for local governments. In the case of police, eight out of ten worked for local police departments, a circumstance that remained relatively constant across the second half of the twentieth century. Local communities and states also had the largest numbers of courts and correctional agencies. In the case of justice system employees, again using 2001 data, the total reached 488,143, of which nearly 55 percent worked in local systems, another 33 percent at the state level, and the remaining 12 percent at the federal level. In corrections, which employed a total of 747,000, just over 32 percent were local employees, another 63.4 percent worked in state government, but only 4.4 percent were in the federal prison system. From a funding perspective—critical to our story of the computer—nearly half of budgets came from local taxes, another 35 percent from state sources, and the rest federal. It was not uncommon for an agency to spend over 90 percent of its budget on salaries and employee benefits, particularly at the local and state levels. Percentages were frequently lower at the federal level because large amounts of funding went through these agencies to local and state authorities to acquire various technologies (including computers) and for other purposes, such as training. The number of prisoners reached an estimated 1.9 million in 2001, of which 631,000 resided in local jails. About 4 million additional people were on probation and thus interacted with corrections, courts, and police agencies. Overall, over the half century, the number of police, judges, lawyers, corrections personnel, and prisoners grew. For the period 1982 through 2001, for which we have excellent statistics, we can see that expenditures for law enforcement grew by 355 percent, with annual increases of 8 percent fairly typical. As federal involvement in law enforcement increased over the period, its expenditures grew on average by 11 percent per year. Ironically, police departments experienced the smallest growth rates. For the entire period, however, and for all of the various agencies comprising the law enforcement ecosystem, expenditures grew faster than inflation (the consumer price index), which is why, as a percent of GDP, they expanded from 1.1 percent in 1982 to 1.66 percent in 2001.5
At the risk of droning on too much with statistics, it is important to look at what drove growth in law enforcement. Here again, a few statistics show the magnitude of the activity, although not the variety of its forms. If we combine crimes against property (such as burglary and auto theft) and violent crime (murder and robbery), we see that the total amount of crime grew over time in all decades except the 1990s, while the population of the nation also expanded. Using absolute numbers of offenses known to police, in 1960 crimes totaled 2 million, grew to 5.5 million in 1970, expanded to 13.4 million in 1980, and to 14.5 million in 1990. Then crime rates declined, dropping to 11.6 million in 2000. However, these data do not include unreported crimes. Some surveys suggested that the absolute number of crimes was actually much higher.6 But the trend is obvious: crime occurred in sufficient amounts to drive the large increases in resources devoted to fighting it. The number of victims proved high as well. Even in the 1990s, when the volume of reported crimes dropped, the number of residents touched by personal and property crimes still ran into the tens of millions, falling from nearly 40 million in 1995 to almost 26 million in 2000.7 Those numbers go far to explain why the public maintained constant pressure on their local, state, and federal policing agencies, courts, and legislatures to “get tough on crime” throughout the period, pressure that contributed in particular to the various federal funding initiatives that made possible expanded use of computing in law enforcement, beginning in the 1970s and continuing to the present.
Adoption of Computing by Local and State Law Enforcement Agencies

Police work is very data intensive, often more so than most other local or state governmental activities. Much work occurs in documenting such events as automobile accidents and crimes and in maintaining very large files on either wanted persons or others being processed through the legal system from arrest to trial through imprisonment and parole. Using large files on known criminals as research material for ongoing investigations constitutes another important activity. The most extensive users of data are police on the street inquiring about the backgrounds of individuals they are dealing with and about potentially stolen property, such as vehicles they have pulled over for a traffic violation. Collecting data, sorting it, and doing look-ups are major activities in the lives of police officers and their back office colleagues. Thus, on the one hand, there are police-specific uses of data, such as parking and traffic citations (and later lists of these) and assignments of officers to their beats; on the other, there are uses focused on the mundane data collection and reporting typical of any employer: hours spent on the job, payroll, personnel records, vehicle maintenance records, inventories (even of guns issued to officers). Pools of data used constantly in daily work include jail bookings; investigative reports for crimes and accidents; fingerprint files, often quite massive in large cities before the advent of computing; lists and reports regarding stolen or pawned property, such as jewelry; and automobile
theft and recovery reports and data. Kent W. Colton, a student of policing practices in the 1970s, observed that "police departments process large amounts of information. A great number of events transpire under the jurisdiction of the police, and detailed reports must be prepared on many of them."8 That reality remains as true today as in earlier decades. The most important circumstance driving the need for better data processing tools in policing was the continuous increase, over many decades, in both the number of crimes reported and the other activities police became involved in, a workload that expanded faster than the budgets and the number of employees available to deal with it. Hence, the constant hunt for productivity tools, and nowhere is this more evident than in information processing. Every major report on why police used computers linked back to the problem of growing workloads. The biggest gating factor in the adoption of computers was the availability of funding, or, put the other way, the lack of sufficient budgets. Many individuals in the law enforcement ecosystem understood why (or why not) to use computers, and many had realistic views of the capabilities of this technology, particularly by the end of the 1970s.9 Knowledge of computing's potential was not what determined what and when someone adopted an application. It often boiled down to funding and availability of people to implement a use. As occurred in so many industries, the largest organizations tended to be the first to adopt computers or some advanced wireless communications, because they had the volume of work and budgets both to justify and afford them. Over time, ever smaller organizations did too as the cost of computing dropped or various departments could share systems, such as those developed in the second half of the century by the U.S. Department of Justice, and more specifically, the Federal Bureau of Investigation (FBI), and many state governments.10

Communities first installed computers for police work in the 1960s, but extensive deployment did not occur until the 1970s. However, long before the arrival of computers, all over the nation police departments had organized vast files of local paper-based data on criminals and events; and as precomputer information processing tools came along, they used them, most notably the typewriter, telephone, radio, and punched-card equipment. In the 1950s and early 1960s, police departments continued to expand their use of punched-card equipment for accounting, personnel record keeping, and for tracking and collecting unpaid parking tickets. Punched cards were also used to monitor criminal complaint, arrest, and traffic accident statistics in such large cities as New York, St. Louis, and Chicago, cities that became early adopters of digital computers.11 This reliance on older equipment persisted even after first- and second-generation computers became available, because computers often were unaffordable in terms of both hardware and staffing. Also, IT skills within law enforcement agencies were sparse in these early years. Yet police departments slowly became aware of computers, beginning in the mid- to late 1960s, in part through their normal channels of information, such as law enforcement conventions and publications, but also due to an aggressive effort on the part of the U.S. Department of Justice to take an early lead in supporting use of computers.
Figure 4.1. New York City police examining punched-card criminal records, 1961. (Courtesy IBM Archives)
One observer of the American police scene between the 1930s and the early 1970s commented at the end of the 1960s on what was becoming evident to police departments all over the nation: "It is now readily apparent that electronic data processing will have far-reaching consequences in the American police field. The smaller departments in this country stand to be among the greatest beneficiaries of these new developments."12 Enhancing crime fighting proved essential to the use of computers. Our same observer wrote: "The fantastic ability of EDP and its brainchild, the computer, to store enormous amounts of data with split-second retrieval, has prompted police administrators to extend their vision concerning the use of this equipment in law enforcement operations."13 Police organizations had long recognized the need to collaborate and share information, a practice that could be improved by using computers. On September 18, 1963, New York City's police commissioner, Michael J. Murphy, commented at a police chiefs' conference that "criminals move fast and do not restrict their operations to any particular local geographical unit," and so, "we recognize that the decentralized character of local law enforcement imposes obvious hardships upon us in our struggle against organized crime. But we have the will to establish regular channels for exchange of information and intelligence and to devise procedural machinery for concerted action."14

Three features of computing proved particularly attractive to police departments. First, when computing was coupled to radio communications, a police officer on the street could call a dispatcher to look up information about who he or she was dealing with, such as the driver of a car pulled over for speeding: Was the car stolen? Were there outstanding arrest warrants on the driver? Did the driver have a prior
record of being violent? Second, records could be used in an investigation of a crime, such as fingerprint and mug shot records that could lead to the identification of a criminal, and hence possibly solving a crime. Third, there was the requirement of the legal system to provide both documentation and tracking of people being processed through the law enforcement system.15 First uses of computers in policing came in the early 1960s. These focused on automating simple, existing processes and sets of data. The most widely deployed application involved traffic and parking citations. Police and municipal authorities wanted to track and manage tickets to increase the rate at which fines were collected for unpaid citations by identifying and pursuing owed amounts. Collections increased anywhere from 10–15 percent to over 30 percent in many communities in the 1960s and 1970s, generating hundreds of thousands of dollars of incremental income in large cities.16 These dollars represented additional revenue that otherwise would probably never have been collected because an officer or other official might not have known that an individual owed multiple fines, or lists of amounts due were not appearing on reports to help officials collect them. With computers, police, courts, and municipalities could start dunning individuals using the same techniques as a normal billing process at a company. All during the 1960s, large city police departments, and regional pools of smaller police departments, led the way in making available computer-based files of locally kept criminal records and others on stolen property (most notably vehicles). These could either be queried by a policeman in a car calling over the radio to a dispatcher (who could quickly access a system) or be viewed at a terminal back at the police station by a police officer. Subsequently (late 1980s), police began using terminals installed in their patrol cars to access directly the data they wanted, further speeding up the process while at the same time now having access to various large collections of online data files. Speed became the critical advantage because a police officer could be told in minutes (or seconds) whether someone he had pulled over for a traffic violation had any outstanding warrants, as opposed to the process in earlier years, where as much as a half hour or so passed while someone searched paper records.17 Major cities like New York, Boston, Chicago, and St. Louis installed computers devoted to such applications.18 Query systems generated considerable use. For example, in Chicago, police routinely probed computer-based files 2,500 times a day by 1967, while police supervisors began creating computer-generated reports on trends based on data in such files to allocate resources around the city and to maintain their automotive fleet.19 The police department in Kansas City, Missouri, is often credited with having the largest number of digital applications in the early years of computing, becoming a role model for many departments. In fact, its police chief, Clarence E. Kelley, acquired a very visible national profile in part because of his extensive use of computing, a reputation that contributed to his appointment as director of the FBI in 1973. He credited the better sets of data available to his officers for helping reduce crime in Kansas City. As one report on his city at the time called out: “Each Kansas City police officer has access to a wide range of necessary information. He can instantly learn if the vehicle he is
following is wanted in connection with a crime, has been stolen, is linked to a known criminal. He can determine if the person he is questioning is wanted for traffic or other offenses, if he uses aliases, if he has been convicted of a serious crime, if he has been known to attack officers."20 These files were shared with forty-five other criminal justice agencies in Kansas. Quickly checking cars, licenses, people, and criminals became the earliest and most widely deployed uses of computing by police on the street in the early to late 1960s. Often, a number of police departments would collaborate in creating shared files, as happened in such metropolitan areas as Kansas City, San Francisco, St. Louis, Chicago, and New York, often with help and funding from state governments.21 Many of these systems were batch, that is to say, a dispatcher or clerk had to query tub cards and files or printouts of digital records. However, in 1964, St. Louis became the first city in the United States to deploy a real-time system by which a dispatcher could look up information via a terminal, launching a new era in query applications that spread slowly but assuredly across other police departments in the late 1960s and throughout the 1970s and 1980s. All during the 1960s, St. Louis expanded its online files, storing them on an IBM System/360 Model 40 in the late 1960s. This system processed over 11,000 inquiries per month for such items as stolen vehicles, wanted persons, alias files, and rosters of habitual criminals. The police department added new data to its system twenty-four hours per day from thirty-five different locations across the city.22

One of the first major surveys on deployment of computing in law enforcement, conducted in 1971 with hundreds of departments, revealed that just over 100 used computers, while nearly forty more used punched-card records (using precomputer era equipment). By then, the range of applications had become quite substantial, as documented in table 4.1, expanding from such early uses as queries on traffic accidents and violations, although as late as 1967, these still constituted nearly half the uses of computing. Kent W. Colton reported that "a shift in focus began in the middle 1960's. Police departments continued to install traffic and crime related files, but the development of real time computer systems to provide rapid feedback of information based on the inquiry of patrolmen in the field became popular," such as the system deployed by the department in St. Louis.23 Colton noted that by 1971, 20 percent of all police applications were online and focused on outstanding warrants, stolen property, or vehicle registration. Meanwhile, management began receiving various reports dealing with statistics about types and quantities of crimes and deployment of their resources.24

What stimulated increased deployment grew out of events that occurred earlier in Washington, D.C. The year 1967 proved to be a pivotal one in the history of police computing. That year, the President's Commission on Law Enforcement and Administration of Justice published a major report on all manner of practices across the entire justice ecosystem, making various recommendations about how to modernize policing.25 It included suggestions on how best to use computing. The report proved important because federal officials accepted many of its recommendations, most notably, expansion of funding for the development of IT systems that could be shared with local and state policing authorities.
Table 4.1 Early Police Computer Query-Based Applications, 1960s–1970s (digital files accessed)

Police patrol and inquiry (warrants, stolen property, vehicle registration)
Traffic (accidents, citations, parking violations)
Administration (personnel, budget analysis and forecasting, inventory control, vehicle fleet maintenance, payroll)
Crime statistics (criminal offenses, arrests, juvenile activity)
Resource allocation (police patrols and distribution, police service analysis, traffic patrol allocation and distribution)
Criminal investigations (automated field integration reports, modus operandi files, automated fingerprints)
Command and control/computer-aided dispatch (CAD assignments, geographic locations)
Miscellaneous (intelligence compilations, jail arrests)

Source: From two tables prepared by Kent W. Colton, "Computers and Police: Patterns of Success and Failure," Sloan Management Review 14, no. 2 (winter 1972–73): 78 (see also his narrative description of these, pp. 77–79), and "The Experience of Police Departments in Using Computer Technology," in Kent W. Colton, Police Computer Technology (Lexington, Mass.: Lexington Books, 1978): 28.
The commission also urged the national government to provide financial resources to pay for other types of non-IT equipment and training, such as more modern radio systems and weapons, a funding process actually begun as early as 1965. Implementation of these recommendations resulted in extensive deployment of IT in the 1970s and 1980s. Indeed, until the early 1970s police departments were barely using computing and remained prisoners of massive paper and punched-card files; they were able to change that circumstance only thanks to significant federal funding and leadership in supporting the greater use of computing in law enforcement.

A central feature of the new wave of initiatives was the establishment within the FBI of the National Crime Information Center (NCIC), which the FBI equipped with computers "to gather in and squirrel away whole mountains of facts winnowed by thousands of 'Sgt. Joe Fridays' and their partners everywhere."26 The key development involved the FBI's creating national databases in support of local and federal law enforcement, with police departments all over the country asked to contribute data voluntarily to the system on a continuous basis per standards established by the NCIC. The wisdom of the time across most industries in the IT world was that large centralized systems represented the most effective use of computing; the FBI's strategy was thus very much a reflection of the norms of the day. Although this conventional strategy evolved into decentralized approaches later in the century, as also occurred in many industries, companies, and agencies, by the end of the millennium the FBI's databases had become a
massive source of information crucial to the functioning of law enforcement across the nation and in collaboration with police forces in other nations. The commission's report, and subsequent availability of new sources of funding for policing and creation of the NCIC, had been driven less by the increased capabilities of computers that were emerging in fairly dramatic forms at that time (remember, this was the time when the S/360 and its competitors were being deployed across the economy) than by increased criminal activity. In 1965—the last year before the commission's report for which current data existed—nearly 2.8 million crimes had been committed, up 5 percent from the prior year, with no end in sight. Of these, nearly 1.2 million consisted of burglaries. Forcible rape had increased 8 percent over the prior year. What proved most disturbing was the fact that the crime rate had doubled since 1940 and, in the first five years of the 1960s, expanded five times faster than the growth in the nation's population. In short, many politicians and law enforcement agencies had concluded that the country had a major crisis on its hands that required extraordinary initiatives by the national government. The FBI decided that data housed at the NCIC had to be made instantly accessible to all law enforcement agencies, along with assistance of other types. To illustrate its sense of urgency, the FBI started planning creation of the NCIC even before the report was completed. Before the end of January 1967, the FBI had it up and running.27

Law enforcement agencies around the country began implementing the applications listed in table 4.1 all during the 1970s. Big cities automated their files and fed data into the FBI's. Regional and state-wide collaboration projects cropped up all over the nation.28 The NCIC had been a pilot project in the 1960s with sixteen law enforcement agencies accessing online files for wanted persons, stolen property, and criminal events. By the start of 1970, over 2,000 law enforcement agencies out of a total of 40,000 had access to these files, which by now contained 1.7 million active records on wanted persons, vehicles, boats, license plates, guns, and even stocks and bonds.29 State governments also built local databases with additional information on criminal activities in their region. A study done in the early 1970s reported that half the surveyed police departments would not have been able to deploy computing if not for the expenditures on such programs by the federal government.30 By the mid-1970s, law enforcement agencies were reporting increased effectiveness in fighting crime and in collecting fines. A major new development in the 1970s was computer-aided dispatching, by which calls for assistance came to operators equipped with online data on where police patrol cars, fire engines, and ambulances were located, so that units could be directed in a speedy and productive manner to the scene of a crime or incident.31 By the end of the decade, almost all of the 212 largest metropolitan police departments had various assortments of computer-based police information systems, with over half relying on online access to information, some data systems maintained by state governments and others by the FBI, such as those at the NCIC.32 Looked at from the perspective of what cities were investing in regarding computing, law enforcement now ranked second only to financial applications such as payroll, accounting, and tax collections.
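To make concrete what these early query applications did, the sketch below models the essential transaction in present-day Python: a dispatcher keys in a license plate or a name, and the system answers from centrally maintained "hot files" of stolen vehicles and wanted persons. The record layouts, field names, and sample entries are hypothetical illustrations, not the NCIC's actual formats.

```python
# A minimal, hypothetical model of a 1960s-style "hot file" query:
# a dispatcher looks up a license plate or a name and gets an instant answer.
# Sample data and field names are invented for illustration.

stolen_vehicles = {
    "MO-4821": {"make": "Chevrolet", "reported": "1968-05-02", "agency": "St. Louis PD"},
}
wanted_persons = {
    "DOE, JOHN": {"warrant": "armed robbery", "caution": "known to resist arrest"},
}

def check_plate(plate: str) -> str:
    """Answer a dispatcher's stolen-vehicle query for a license plate."""
    hit = stolen_vehicles.get(plate.upper())
    if hit:
        return f"HIT: vehicle reported stolen ({hit['agency']}, {hit['reported']})"
    return "NO RECORD"

def check_person(name: str) -> str:
    """Answer a dispatcher's wanted-person query for a name."""
    hit = wanted_persons.get(name.upper())
    if hit:
        return f"HIT: outstanding warrant, {hit['warrant']}; {hit['caution']}"
    return "NO RECORD"

# A traffic stop: the dispatcher runs the plate, then the driver's name.
print(check_plate("mo-4821"))
print(check_person("Doe, John"))
```

The design point that mattered to police was not the lookup logic, which is trivial, but the shift from half-hour manual file searches to answers returned in seconds over the radio.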
Query systems for files dominated the applications of the day, with computer-aided dispatching a close second in use for both local and state police. During the decade of the 1980s, local policing systems expanded to smaller municipal, county, and state law enforcement agencies, while large municipalities added new applications or upgraded existing ones to include more data or to take advantage of newer computing and communications equipment and software. In addition to more policing agencies using computers to access either their own files or those in state and national databases, agencies increasingly integrated systems. Integration included linking personnel and patrol car location files to assign work or to support computer-aided dispatching, and linking files related to thefts, criminals, missing persons, and other records in support of police investigations.33 Law enforcement wanted to link together the four basic sets of records used in daily work, comprising systems used for services and complaints, incident and offense reports, name indices, and, of course, arrests. Although integration became possible, affordable, and desirable, by the end of the 1980s, many departments still had not achieved this objective and were burdened with many paper files, particularly in sheriffs' departments and in smaller communities. One report in 1989 described the continuing set of circumstances:

Some of the most common problems are use of valuable office space to store large stacks of old records, high labor cost of manual file manipulation, the disappearance of documents due to misfiling, the inability to keep pace with records management operations due to limited manpower, unavailability of vital information because controls such as check-out cards simply are not used, and postponement of trials because key witness investigative reports are misplaced or lost.34
Yet at the same time, when used, departments achieved productivity gains. In a rigorous study of the role of computers in detectives’ work in forty departments, two professors well versed in the use of computers by local governments found that two-thirds of the detectives used extensively both batch and online systems in their work most of the time. A third of the detectives reported that they could not have successfully completed their work without using computer-based data, such as in making arrests and clearing cases. However, the researchers concluded that, “the computer revolution has not touched all the detectives . . . nor has it touched the detectives evenly.”35 In large cities, which continued to be the most extensive users of computers, all manner of IT and communications made their way into policing functions in the 1980s. In surveys done in the 1980s and early 1990s, these cities reported an increase in the number of IT support personnel on their payrolls, and, of course, they spent more on computing per capita than smaller departments. Large cities also acknowledged having fewer police per capita, in part due to productivity increases made possible by computing speeding up and improving their work. One study of 188 police departments conducted in 1993 reported that “urban police agencies had become highly computerized,” extensive users of mainframes
and personal computers largely devoted to management and administration, processing reports and data, and supporting collection and use of crime evidence.36 New applications also spread in the 1980s, most notably digital crime maps. Police had been using paper versions since around 1900: maps of a city or town with different colored pins noting where various types of crimes (or incidents, such as automobile accidents) occurred, which helped management determine when and where to deploy resources. As these were updated, information about prior incidents was lost, and thus these maps proved static; furthermore, they took up much wall space. In the 1970s and 1980s, mainframe computer mapping came into its own. By the early 1990s, one could access such maps on a PC. Data entry proved to be a labor-intensive operation, so beginning in the mid-1980s, departments began feeding data to map software from other reports and used color printers to eliminate hand-drawn maps. One could begin looking at data in real time, start tracking and assessing historical patterns, and later even model scenarios for deploying and responding to events. Often, these maps were by-products of community-wide use of digital mapping, called Geographic Information Systems (GIS), used to plan maintenance of roads, water pipes, sewers, and so forth, about which more is presented in chapter 7.37

But how did more traditional crime fighting data evolve? Fingerprinting constitutes a unique data file for police, courts, and corrections. A brief history of the evolution of this type of data suggests patterns of use and effects brought about by the digital hand. For just over a century, police departments all over the world had used a fingerprinting identification system called the Henry Classification, named after the Scotland Yard official who developed it in the 1890s.
Figure 4.2. Early computer-based maps in use, Atlanta Police Department, late 1970s. (Courtesy IBM Archives)
It recognized that no two humans had the same fingerprints; making a copy of a person's prints on a card took only minutes and proved simple to do; and people left fingerprints on everything they touched. "Lifting" prints at a crime scene and comparing them to a set of prints on a fingerprint card provided credible evidence about whether someone participated in a crime. Every American police department had sets of fingerprints of people they arrested; state and federal governments did too, always posted on a standard-sized card that one could find by name or type of fingerprint. Yet manual searches of fingerprints were laborious and slow, and for decades everyone had to maintain massive files. When police began understanding what computers could do, it was almost inevitable that someone would find a way to automate the process of fingerprint identification. As one criminologist of the 1950s and 1960s noted, the arrival of the computer "qualifies on a scale of importance for a position almost equal to the original introduction to police service of the fingerprint identification system."38 The reason is simple to explain: computers could hunt for the right card in seconds rather than in minutes or hours and could compare all the loops and arches in fingerprint images to those found at a crime scene, also in seconds, and often more accurately than humans. In the 1960s, the potential for improved productivity and accuracy was obvious and stunning. Prior to the computer, an examination of one card took an average of twelve minutes; by the late 1970s, computers had dropped the time by two-thirds, and by the end of the century much further. The Henry Classification system remained in use, but increasingly in an automated form.

One of the first digital fingerprint systems went into operation in March 1976 in Arizona, using a Sperry system with software developed for the purpose by the firm. Initially, 200,000 cards were converted into machine-readable form, using optical scanning. After several months of use, an analysis of performance showed that in 86 percent of the cases, when a search of the files took place, the system correctly matched the data requested, a rate far higher than manual approaches.39 The FBI had the largest collection of fingerprint cards in the country, serving as a clearinghouse of information for local law enforcement agencies; local departments contributed fingerprint files as they created them. Reliance on these rose as well, indeed, so much so that the FBI had to build a complex conveyor belt system in its headquarters to move cards from storage to analysts, who pored over them. During the 1980s, the FBI moved to a hybrid manual/automated system for cataloguing and searching files. All during that decade, and into the 1990s, it moved records to computers in an attempt to keep up with the growing supply of prints and requests for identification of prints collected at crime scenes. Meanwhile, states and large cities also began automating their files, displacing ink and paper fingerprinting in the 1990s by scanning fingerprints from people right into digital files. By then, the application had its own name: automated fingerprint identification systems (AFIS); and vendors were selling software and scanners for these. Over time, technical standards were crafted largely by the FBI. By the end of the century, there were over 32 million digitized fingerprint files. To put a fine point on the volume of data involved, each file
consisted of prints of all ten fingers, and even after compressing the digital files, each occupied 750 kilobytes of information.40 In the 1990s and early 2000s, fingerprinting led next to facial images being collected in digital form; these too could be compared and analyzed to identify people, for example, by measuring distances between facial features such as the eyes. Most recently (early 2000s), retina scans could also be digitally collected, stored, and analyzed, a growing class of applications (often called biometrics) used by border management organizations.41

The general application of digital fingerprinting proved enormously successful. In addition to playing a major role in the share of cases that led to arrests, these files allowed the law enforcement community to link fingerprint records to other criminal files, right through to trials, incarcerations, and postconfinement activities. Every large state, and many others as well, had various systems by the end of the 1980s; they all also had access to the FBI's. The federal agency made it a point to make sure police departments all over the nation understood how the technology worked and shared their files. The volumes involved proved staggering. In 1990, for example, by which time digital fingerprint files were in wide use, the FBI received 17,900 local files each day into its database, and by 2000, that number had grown to 24,000 per day.42 States that had long led in the deployment of computing, such as New York, California, and Missouri, also made creation of their own systems a priority, usually within their state police agencies. All reported faster and more accurate identification of criminals, many of whom might otherwise have gone undetected. On average, such systems got a "hit," that is to say, made an accurate match of person to prints, in about 70 percent of the cases the first time a request was entered into a terminal, versus one percent with a manual search.43 The U.S. Justice Department reported that by the late 1980s, thirty-nine states had their own digital fingerprinting applications. By the end of the century, almost every state had access to such applications.44

In the 1990s, police departments in large cities began merging fingerprint data with photographs of individuals and other files, such as criminal records, evidence of crimes, and 911 system audio and video recordings. However, most police departments in the 1990s were not as fully aware of the possibilities of merging images into records as the technology of the day permitted, nor did they have the necessary financial resources to invest in them. So as with so many other applications, either large cities and states or the federal government pioneered the next wave of applications. One of the first to do so was Cook County, Illinois, home to Chicago, piloting its first integrated system in the mid-1990s.45 Over the next decade, the application spread across many large American cities and states. By the end of the century, law enforcement agencies had noticed a significant improvement in their ability to do their work because of such digital systems. They all reported that these systems provided faster searching and filing, required less storage space for files, displayed higher-quality fingerprints (no smudged images were allowed to get into a system), and proved cost effective.46
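The storage implications of those figures are worth spelling out. The short calculation below, which uses only the rounded numbers cited above and is purely illustrative, suggests why digitized fingerprint files quickly became one of the largest data holdings in law enforcement.

```python
# Rough scale of the digitized fingerprint holdings described above.
# Uses only the rounded figures cited in the text; purely illustrative.

files = 32_000_000                 # digitized ten-print files by century's end
kb_per_file = 750                  # compressed size of one ten-finger record, in kilobytes

total_kb = files * kb_per_file
total_tb = total_kb / (1024 ** 3)  # kilobytes -> terabytes
print(f"Approximate total: {total_tb:,.1f} TB of fingerprint images")   # about 22 TB

# Daily intake of local files at the FBI, 1990 versus 2000
for year, per_day in [(1990, 17_900), (2000, 24_000)]:
    gb_per_day = per_day * kb_per_file / 1024 ** 2
    print(f"{year}: roughly {gb_per_day:,.1f} GB received per day")
```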
Similar tales could be told about other key applications of the 1980s and 1990s in which computer-aided dispatch expanded, often integrating data, records of where people were in patrol cars, online maps, and later to other emergency response agencies, such as fire and medical services, so that a dispatcher could get a call, look up where everyone was, and respond.47 Or, a supervisor could determine when and where to assign fire, police, and medical resources. These were important uses of computing because they automated a great deal of the work associated with receiving and responding to calls from the public and provided control and management over staffs moving about a community.48 A similar story of innovation and improved access could also be told about moving information-handling equipment closer to its users, most notably laptops to patrol cars, fire engines, and other vehicles in the 1990s to provide mobile computing. Enormous investments were made in such applications in the 1990s. Between 1994 and the end of the century, the federal government alone gave state and local governments over $330 million for mobile computing.49 As law enforcement agencies expanded their use of such systems and telecommunications, they built up complex IT infrastructures that mimicked what companies were installing across the private sector.50 The one type of technology police departments struggled with more than any other involved telecommunications. Radios in patrol cars had been in use since the 1930s and remained essentially the same right into the 1980s. As communities began integrating the work of police, fire, medical, county, and state agencies, incompatibility in radio systems had to be addressed—a problem that existed even as late as 2001 in the city of New York at the time of the 9/11 terrorist attack.51 In addition, as data became increasingly digital in the 1980s and especially 1990s, the desire of police departments to transmit images and files to a patrol car grew. New technologies, even the Internet, additionally made such applications possible and desirable. Systems for 911 emergency calls were an important part of the communications ecology. First introduced by AT&T in 1968 and initially implemented by the end of 1971, such systems for the public to use to report crimes, emergencies, and other crises spread across the nation over the next three decades. These were expensive, complex systems that in time merged together digital and telecommunications functions. All during the 1990s, every form of communications technology that appeared in the market was used with 911 and other policing applications: laptops, local area networks, digital PBX systems, and the Internet.52 Analog communications networks, which were not secure (i.e., a criminal could listen in on these too), were slowly converted to digital versions. One survey conducted in 1998 reported that 13 percent of law enforcement agencies had made the transition to digital voice communications, while another 55 percent said they were planning on doing so soon as well.53 The same survey reported that when wireless communications were used, 70 percent of all users did so to access NCIC files, roughly half to do data queries, send e-mail, and do their reporting, while much smaller percentages were transmitting pictures of wanted criminals (law enforcement professionals call these “mug shots”)
(19 percent), maps (12 percent), and using GPS (10 percent).54 All the percentages rose in the early years of the new century. Students of the problem blamed a lack of sufficient funding to migrate to more integrated systems, rapidly changing technologies, and the age and incompatibility of preexisting equipment.55

The portfolio of applications of computing in wide use by the early 1990s spanned most functions in law enforcement. Table 4.2 lists those that law enforcement agencies used, although to varying degrees across the country, with the largest agencies the most digitized. Yet all had access to various regional, state, or federal systems, primarily to search for information.56 A new initiative in the early 1990s began to integrate even further files on criminal activities across the nation. For years, agencies had used a reporting system called the Uniform Crime Report (UCR), which provided statistical report cards on the performance of law enforcement agencies across the country. Results reported in the UCR encouraged federal and local officials to apply technology to the war on crime by promoting a crime reporting mentality, providing management with data they could use to fight crime. These data included statistics on response times and arrest and crime rates, causing a shift by management from making decisions based on anecdotal evidence (or experience) to those based more on quantitative data.57 Working together, various agencies developed a concept that would better leverage computing's capabilities, called the National Incident-Based Crime Reporting System (NIBRS).
Table 4.2 Sample Police Uses of Computing, circa 1990

Animal licensing; Arrest/crime records; Automated vehicle location; Case disposition reports; Case management; Citation control; Computer-aided dispatch; Computerized sketching; Crime analysis; Crime lab operations; Criminal associates lists; Criminal history; DWI/DUI; Evidence management; Fingerprint processing; Firearms registration; Fleet management; Fraud offenses; Gang activity; Intelligence gathering; Inventory control; Juvenile records; Law violations; Licenses/registration; Mapping/geocoding; Missing persons; Name indices; Narcotics control; NCIC data entry; NIBRS; Organized crime; Parking tickets; Pawned articles; Report writing; Stolen property; Summons management; Traffic accident reports; Traffic case processing; Traffic tickets; Transport of prisoners; UCR; Vehicle inspection

Source: U.S. Department of Justice Programs, Bureau of Justice Statistics, Directory of Automated Criminal Justice Information Systems 1993, vol. 1, Law Enforcement (Washington, D.C.: U.S. Government Printing Office, 1993): 809–831.
Digital data on forty-six classes of crime would be collected, with a half dozen specific types of information for each, to be organized in a standard format that agencies could access all over the country. Many cities already had stand-alone crime data management systems that would have to be converted over to this one so that more organizations could share these data. During the 1990s, the FBI funded various pilot projects to start developing such a system.58 Meanwhile, crime-mapping applications continued to spread, and data from those systems also were distributed to patrol cars, not just to supervisors.59 Technological innovations and improved cost performance, when coupled to federal and state funding for systems, facilitated the waves of adoption that had taken place.

Yet one other innovation in policing also influenced adoption of computing. Beginning in the 1980s and extending right through into the new century, law enforcement agencies, and particularly urban police departments, changed tactics for how they provided protection to communities. They moved from simply responding to requests for aid to more community-based policing, where citizens, other agencies, and the police worked together to reduce and prevent crime. That shift in tactics required generally moving from localized policing applications of computing to different ones that supported this new approach to policing. One police official in Philadelphia left us an explanation of the problem so many senior police managers now faced: "This tendency toward sporadic, highly specified, and hurried technology acquisition has created a maze of stovepipe systems of varying technological architectures that can be efficiently completing their tasks, but cannot, as a system, provide decision makers with a decision support concept that encourages strategic thinking and decision making in the organization."60 Yet increasingly, that is exactly what computing had to help with as roles changed. Emergency 911 systems were merely one set of examples of this process at work. Arming the cop on the beat with information now became even more crucial. A report late in the century described the interrelationship between this kind of policing and technology: "Information technology has migrated from centralized mainframe/dumb-terminal architecture towards distributed, client-server designs linked in local- and wide-area networks. This evolution has dovetailed with law enforcement's widespread adoption of community-oriented policing strategies, which tend to decentralize authority and decision-making down to the precinct."61 In a culture in which policing had always been quasi-military in form, and in which supervisors back at the police station made all the key decisions, responsibility for decision making was moving outward, supported by the field having data that in prior times would not necessarily have been available to them. This change in the culture of how policing occurred paralleled in the 1980s and 1990s what happened in the private sector as increasing amounts of information and access to data shifted from management and headquarters locations to workers on the factory floor, in call centers, and in many other functions.62

It is in this context of policing and IT that we can begin to understand how the Internet became part of policing's repertoire of computers. Use of the Internet
by law enforcement agencies (primarily local and state police and, to a lesser extent, sheriff departments) in its early stages can best be understood within the context of community policing. As police departments reached out to communities for help in preventing crime or in assisting police in other ways, the Internet became a tool both could use, and its adoption began spreading slowly in the mid-1990s. Police departments around the country set up Web sites with information about their organization and initiatives, then posted requests for help with wanted posters (what historically U.S. Post Offices displayed on their bulletin boards) and other information to help domestic violence victims. As in the past, funding from the federal government provided much financial wherewithal for departments to invest initially in constructing Web sites. Meanwhile, the FBI integrated this technology into its revamped databases in the NCIC, which by then had over 43 million digitized fingerprints and some 30 million criminal histories.63 By late 1996, some 500 police Web sites existed in the United States and Canada.64 By the early years of the new century, most police departments had either their own or shared space on their local government’s Web sites. This tool proved to be an important channel of communications between citizens and police on such basic issues as requesting tips about crimes, informing communities of various policing initiatives, and providing contact information, including who to write e-mails to in a police department.65 Finally, police began using the Internet very quickly in their investigations of crimes. As the amount of data available on the Internet increased, particularly after 1998, it became an important research tool for investigators and, by the early 2000s, an essential component of a police department’s operations. In support of police departments, various police associations, federal law enforcement agencies, and other institutions, all began posting material useful to each other on the Web. One of the earliest inventories of such Web sites (circa 2001–2002) listed hundreds of these.66 Deployment of the Internet mimicked patterns evident in many other government agencies, most companies, and industries. The first Web sites provided information and contact data (addresses, telephone numbers). Every agency that installed a Web site went through this phase, and so there were always some organizations in any year after 1995 in this stage of maturity. Second-generation Web sites began appearing in the late 1990s, still with information (mission statements, messages from the local chief of police, and other data). By the end of the century, a third generation appeared distinguishable from prior sites in that they were interactive, now having the facility for police to post requests for help in solving crimes, frequently presenting statistics and other information about the evolving crime situation in a community, all the while giving citizens the ability to communicate back and forth with police over the Internet. As one would thus expect, the amount of data being presented over the Net increased, such that by about 2004, these sites had become fairly standard, even ubiquitous across the nation.67 In the final analysis, policing agencies had made quite a transformation over the previous thirty to forty years. From not using computers until the early 1960s
to the end of the century, much had changed, a great deal of it even in the 1990s. Among large cities (those with over 250,000 residents), for example, the share of police departments using computers in a wide variety of activities grew from 90 percent at the start of that decade to 100 percent by its end.68 Even the mundane use of in-field computers reached over 90 percent in the same decade. The proverbial "everyone" used automated fingerprint and computer-aided dispatch systems.

Movies and novels often characterized sheriffs as either gun-toting John Waynes or semi-illiterate power-hungry Southern power-brokers. Nothing could be farther from the truth. Sheriffs encountered computers much as urban police did, adopting them for the same reasons and similarly slowed in acquiring more largely by budgetary constraints. Almost every sheriff's office in the United States used computers for one purpose or another by the end of the century. Large counties relied extensively on computing to handle most administrative functions; nearly 90 percent used personal computers. Like police, they stored data on criminal activities and accessed the same FBI and state criminal records. Just over half used computer-aided dispatch systems, and nearly 40 percent of smaller, less funded counties did too. In short, like town and city police, sheriffs had become highly dependent on computers to do their daily work.69
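At the core of the computer-aided dispatch systems that, as described earlier in this section, the proverbial "everyone" had adopted lay a simple idea: keep the current location and status of every unit in a file, and when a call comes in, match it to the closest available unit. The sketch below is a highly simplified, hypothetical illustration of that matching step, not a reconstruction of any particular CAD product; the coordinates and unit names are invented.

```python
# A minimal, hypothetical sketch of the core computer-aided dispatch (CAD) step:
# pick the nearest available unit for an incoming call. Coordinates are arbitrary
# grid positions; real systems relied on street geocoding and unit status codes.

from math import hypot

units = [
    {"id": "Car 12", "x": 2.0, "y": 9.0, "available": True},
    {"id": "Car 7",  "x": 5.5, "y": 1.0, "available": False},  # already on a call
    {"id": "Car 31", "x": 8.0, "y": 4.0, "available": True},
]

def dispatch(call_x: float, call_y: float):
    """Return the closest available unit to the incident location, or None."""
    candidates = [u for u in units if u["available"]]
    if not candidates:
        return None
    return min(candidates, key=lambda u: hypot(u["x"] - call_x, u["y"] - call_y))

incident = (7.0, 3.0)  # location of a reported burglary
unit = dispatch(*incident)
print(f"Dispatch {unit['id']} to {incident}" if unit else "No units available")
```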
Role of the Federal Government in Law Enforcement Computing

While various federal agencies, such as the Secret Service and the Department of Justice, performed law enforcement and policing work, the center of national activities rested in the FBI, which itself was one of many bureaus within the DoJ. Both the department and the FBI were responsible for enforcing federal laws, so in that capacity they did many of the same things as local law enforcement: policing, arresting, prosecuting, and jailing. Thus, many of their uses of computers mirrored those of local police, courts, and prisons. Because of the scale of its operations, which involved law enforcement across the nation and with much activity in other countries (such as in embassies), the department became an early user of every form of information technology to appear in the twentieth century, not just computers. In addition, the department played a support role in helping local law enforcement regarding computers, as discussed earlier in this chapter. Beyond funding local initiatives and investments in equipment, the FBI assisted in investigations through its forensics capabilities and research into its massive fingerprint and other criminal files. The department also had responsibility for tracking crime across the nation, reporting results to other government agencies and to the public at large.

The NCIC within the FBI became the centerpiece of much law enforcement activity, of course, and an extensive data processing center for the nation's law enforcement community. Its earliest use of computers in the 1960s involved collecting and maintaining batch files, which in the 1970s it made accessible online with terminals physically placed in police departments and state police agencies.70 Over the next several decades, the FBI enhanced its files and upgraded
and updated its technologies at a far more effective pace than either the IRS or DoD. Important new files were created along the way. For example, in 1971, the NCIC launched its Computerized Criminal History (CCH) database. CCH collected information about individuals and fingerprints of people arrested for committing major crimes, and it acquired additional data on what crimes were committed and on their disposition. By the 1980s, the NCIC managed fourteen major databases (see table 4.3). These files were rich deposits of information, which the NCIC expanded in number and volume of detail per person or crime over time. This repository provided a massive body of information that the FBI and DoJ could use to help solve crimes and to perform data mining so as to better understand patterns of crime in the United States. While these files were originally housed in twin IBM System/360s in the 1960s, demand for computing kept rising, making the FBI an important user of the largest available computers. By the end of the 1980s, for example, the center used several of IBM's very large 3033 computer systems to support some 60,000 various local, state, and federal agencies that needed access to its data. By 1990, terminals and personal computers were intermixed in the network so that data could be queried, updated, and added to in various ways.71

The FBI spent the second half of the 1960s building its various databases, piloting systems, and so forth. The Computerized Criminal History (CCH) system went "live" on November 30, 1971, and over time increasing numbers of states began contributing data to the system. From the beginning, these databases were used by law enforcement, and use rose all through the last three decades of the century, although slowly in the beginning.
Table 4.3 Major Databases, National Crime Information Center, 1980s

Wanted persons
Missing and unidentified persons
Criminal history and fingerprint classification
Stolen and felony vehicles
Recovered vehicles
Stolen and recovered firearms
Stolen and recovered heavy equipment
Stolen and recovered boats and marine equipment
Stolen license plates
Stolen and recovered securities
Stolen and recovered identifiable articles
Canadian warrants
U.S. Secret Service protective file
Originating agency identifier file

Source: J. Van Duyn, Automated Crime Information Systems (Blue Ridge, Penn.: TAB Professional and Reference Books, 1991): 5–16.
The development of CCH, along with the activities of the NCIC in the period from the late 1960s to the late 1970s, proved to be one of the more active times in the FBI's deployment of IT (see table 4.4). Yet one analysis from the late 1970s reported the FBI's "lack of enthusiasm for continued participation in the CCH system" due to "lack of state participation, underestimation of costs and effort which would be required to establish, collect, and maintain data for the more elaborate CCH record format."72 Insufficient discipline in data entry and a lack of adequate technical capabilities at the local level hampered early deployment of the FBI's databases, problems that were solved slowly over time. As of 1978, eight states were contributing data to the system, but participation grew steadily over the years. At the time, however, twenty-six states were accessing the system for data.73 So local officials were more willing, or able, to access existing files than to contribute to them. Funding, skills, and resources served as gating factors, as they did at DoD and at the IRS.

However, the Department of Justice knew what had to be done. In the late 1960s, it had established the Law Enforcement Assistance Administration (LEAA) to help the states. In 1969, LEAA launched Project SEARCH, a group of state governments formed to build and demonstrate the feasibility of a computerized network that would allow states to exchange data on criminal histories. The FBI would manage the network. In 1972, LEAA
Table 4.4 Key Computing Activities, NCIC and CCH, 1967–1977

1967: Commission on Law Enforcement and the Administration of Criminal Justice recommended deployment of decentralized systems
1969: Project SEARCH was created to develop a state-level network to exchange criminal history information
1970: FBI given control over Project SEARCH criminal history index
1971: FBI announced it added nation-wide criminal history data bank to NCIC
1973: Major discussions, problems faced regarding standards, security, cost, and use of CCH with states
1974: FBI authorized to serve as telecommunications switch for NCIC-related messages
1975: Justice Department began publishing standards and regulations regarding dissemination of criminal records and histories
1976: LEAA issued regulations regarding dissemination and sharing of computer systems
1977: FBI requested that it terminate its participation in CCH

Source: Office of Technology Assessment, A Preliminary Assessment of the National Crime Information Center and the Computerized Criminal History System (Washington, D.C.: U.S. Government Printing Office, December 1978): 77–80.
began funding work to encourage states to develop criminal justice systems at the local level.74 In the 1980s, the FBI shut down CCH and replaced it with a decentralized national criminal history record system, an approach preferred by the states as easier and less expensive to support. States had enhanced their technical capabilities all through the 1970s, making it possible to have decentralized systems in the 1980s. By the late 1990s, over forty states had local criminal databases, contributed to the national files maintained by the FBI, and had networked to systems maintained by other states.75 These systems were popular and grew in size. At the end of the century, they held collectively more than 59 million records of individual offenders in criminal records files, up from 30.3 million records in 1984 and 42.4 million in 1989. Put another way, the states doubled the size of their digitized records between 1984 and 1999. In addition, at the federal level, there were some 43 million files on individuals by the end of the century. With financial support from the federal government, the number of states accessing such files increased. In 1999, forty states reported that more than 75 percent of their criminal record histories were now automated. A substantial increase in deployment of digitized records had taken place in the 1990s; in fact, twenty-six states had these kinds of records by 1992, then forty at the end of the century.76 If we add into the mix the growth in state and federal fingerprint files that had been digitized, one can conclude that the states had caught up with the federal government by the late 1990s. Equally important, by now states routinely contributed information to the FBI’s files, while they had also become dependent on their own systems for the daily work of law enforcement. Of course, things were never perfect. Many inside and outside law enforcement expressed concerns about the accuracy of information. They debated who should have access to it and worked on other issues related to privacy.77 The federal government worked through many of these issues all through the 1980s and 1990s, a process that continued into the new century. One brief example involved the development of a standard “rap sheet” that all law enforcement agencies could use to document an individual’s criminal record (much like a job resume), an initiative launched in the mid-1990s. In addition to dealing with issues of accuracy and privacy, the proposed standard initially settled on the use of Internet-based technologies.78 Like the states, the FBI and other federal law enforcement agencies began to view use of the Internet in a favorable light late in the 1990s, primarily because now effective data and transaction security and encryption systems were available. They reacted much the same way as financial and retail industries to the evolving technology at the same time.79 Meanwhile, the FBI continued modernizing its internal systems. An important initiative of the period, called NCIC 2000, improved the NCIC’s aging telecommunications system, hardware, and software, eliminating entirely the exchange of paper-based records, for example, and adding new capabilities made possible by technological innovations, such as graphical data, including mug shots, pictures of tattoos, and signatures. Improved data mining and search functions were added, initiating an early but important application of artificial intelligence methods that could work with seventeen databases, not just the
original fourteen. NCIC 2000 went live on July 11, 1999. Within a year, it was processing over 2.3 million transactions each day.80 The next major IT issue for the DoJ arrived in 1993 with the Brady Act, because this law called for criminal background searches on people wanting to buy handguns. It further required the U.S. Attorney General to build a computerized system for that purpose within five years. That digital tool, called the National Instant Criminal Background Check System (NICS), became operational on November 30, 1998, making data available within thirty seconds of an inquiry by accessing the FBI's preexisting criminal databases. By 2001, over 30,000 inquiries were being made every day.81 In short, in addition to its normal law enforcement duties, the FBI and its parent department continued serving as the nation's central hub for major new law enforcement IT initiatives. In each of its annual reports, the Department of Justice discussed its digital initiatives as being as important as its traditional law enforcement and prosecutorial responsibilities. In any year in the 1980s and 1990s, it had a half dozen or more major IT systems under development in support of its missions, those of the states, and even of other federal agencies responsible for law enforcement.82
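The logic behind an instant check such as NICS, in which an inquiry is matched against preexisting criminal-record indices and an answer returned within seconds, can be suggested with a brief sketch. The record sources, matching rules, and outcomes below are illustrative assumptions only, not a description of the FBI's actual implementation.

```python
# Illustrative sketch of an instant background-check service in the spirit of NICS.
# The database names, matching logic, and outcomes are simplified assumptions.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    date_of_birth: str  # "YYYY-MM-DD"

# Hypothetical stand-ins for preexisting criminal-record indices.
FELONY_INDEX = {("JOHN DOE", "1960-01-15")}
WANTED_PERSONS = {("JANE ROE", "1971-07-04")}

def check_applicant(applicant: Applicant) -> str:
    """Return 'proceed', 'denied', or 'delayed' for a purchase inquiry."""
    key = (applicant.name.upper(), applicant.date_of_birth)
    if key in FELONY_INDEX or key in WANTED_PERSONS:
        return "denied"
    # A partial match (same name, different birth date) would need human review.
    if any(name == key[0] for name, _ in FELONY_INDEX | WANTED_PERSONS):
        return "delayed"
    return "proceed"

if __name__ == "__main__":
    print(check_applicant(Applicant("John Smith", "1975-03-02")))  # proceed
    print(check_applicant(Applicant("John Doe", "1960-01-15")))    # denied
```

The essential point is that the inquiry itself creates no new record; it simply consults indices that already exist, which is why the thirty-second response described above was feasible.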
Computers and the Courts

Courts were among the last adopters of computers in the law enforcement ecosystem, embracing the technology only in tentative ways in the 1960s and 1970s, and not substantially until the 1980s and 1990s. When courts finally did use computers, they did so for the same reasons as others in the world of law enforcement and the legal profession: to gain control over mounting loads of cases and their attendant paper-based records. Like police systems, theirs began as stand-alone applications, which courts integrated over time across a state or federal legal system. In time, they integrated with police systems in criminal cases as well. As technology became less expensive and easier to use and application software arrived on the market, courts could afford the time and effort to start weaving computers into the fabric of their daily work. The most fundamental driving force creating the need to use information handling tools proved to be the rising volume of work faced by all court systems across the United States during the second half of the twentieth century. The volume of cases rose faster than the number of new courts and judges available to deal with them at the local, state, and federal levels. For instance, between 1950 and the end of the century, the number of federal cases alone expanded from nearly 100,000 to over 375,000.83 Both civil and criminal cases grew in volume. Civil cases tended to be more data intensive than criminal cases, and they increased in volume by 86 percent between 1990 and 1995, a period that saw extensive deployment of computers by courts that tried both civil and criminal cases. In that same decade, filings for bankruptcy doubled to 1.3 million in 1999, forcing courts to handle vast quantities of paper.84
We could tell a similar tale of increasing workloads about local and state judicial systems. Throughout the second half of the century, over 80 percent of all judges and courts were state and local. To provide data comparable to the federal level, again using 1999 to illustrate the order of magnitude of the work: 91.8 million cases were filed in state courts that year, ranging from traffic and civil cases to criminal, domestic, and juvenile matters. A massive 54.4 million of these involved cases in traffic courts, far surpassing all other categories combined.85 Some 14,000 trial courts, staffed with 18,000 judicial officers, were the lowest level courts in the nation, where most cases began their judicial odyssey. These courts represented 85 percent of all judicial bodies in this country. These lower courts handled 67 million matters annually by 1999 out of a grand total of 91.8 million cases filed in all state courts.86 Even writers of textbooks, who normally do not editorialize, characterized the work of these lower courts as "staggering."87 Like law enforcement agencies, the nation's courts comprised a patchwork of local, state, and federal courts, clerks' offices, and other ancillary support functions, most staffed with a few employees. Into this world came police, criminals, other litigants, witnesses, prosecutors, various court officials, jurors by the millions, the press, interested bystanders, and, of course, lawyers representing all sides. To a large extent, orchestrating the comings and goings of all direct participants was a major activity for all courts. Making sure that evidence—data—kept up with all the key players in this never-ending flow of activity, and always in an overworked, normally understaffed, and underfunded legal system, remained a constant challenge. The same complaints about large volumes occurred in every decade. In the broad landscape of life in America, one observer noted, "soaring crime rates and an increase in both the number and complexity of civil cases have turned America into a Litigant society." However, "at the very time when more Americans want or need a day in court, the machinery to give it to them on a fair and timely basis is breaking down."88 Courts had adopted precomputer tools on a limited basis, such as typewriters, telephones, and, in larger city and state courts, some punched-card tabulating equipment. The only major information processing hardware developed specifically for courts (and newspaper reporters) in the precomputer era was, of course, the device court stenographers used to document what was said in courtrooms, called the stenograph machine.89 At the high end of information processing, involving tabulating equipment, the most sophisticated hardware was used to schedule court cases and appearances of jurors for duty because of the large numbers involved.90 Then, as other law enforcement agencies began using computers to help with the management of workloads and information, judges slowly started using the same technologies, initially driven by local and state governments that had computers and made them available to courts. Later, courts acquired their own applications (with software) and even their own dedicated systems. The process of adoption proved slow. As of the mid-1960s, barely a half dozen computers were in use for court administration; by the end of 1971, some
255 courts used them. That still left thousands of courts operating without computers. The earliest applications involved selection of jurors and efforts to centralize or partially automate collection and reporting of statistics required by many state governments. Next came such uses as scheduling and tracking juvenile traffic cases, followed by others to schedule court cases in general. In the early 1970s, interest in integrating various systems existed among vendors and other experts on computing, but courts were slow to change. Speaking at a conference of judges in 1971, President Richard Nixon—himself a lawyer—suggested that they "take advantage of many technical advances, such as electronic information retrieval, to expedite the result in both new and traditional areas of the law."91 The most important early uses involved managing court dockets in large urban centers that had the greatest volume of cases. Leveraging computers to automate calendaring functions called for the redesign of long-standing internal procedures. For example, in Philadelphia in 1969, an IBM mainframe began scanning documents for all cases being presented before a local court on a weekly basis to determine their status and to schedule upcoming court actions, while informing all parties of the schedule.92 By the early 1970s, published reports began documenting how computers were helping speed up the movement of cases through courts.93 Beginning in the mid-1970s, independent applications started transforming tentatively into more integrated systems that tracked cases from beginning to end, from initial arrests (criminal) or filings (civil) to their dispositions, a morphing process that continued into the 1980s and 1990s. In the same period, online access to information and databases also spread widely across the nation in all types of courts, from local traffic to federal.94 Clerks could plan their activities and schedule judges and cases more efficiently and accurately, while now providing online information, often in the courtroom itself. Database tools for this market appeared. For example, IBM, which had initially developed software in the 1970s to manage the over one billion pages of documents associated with its antitrust suits with private sector rivals and later the federal government, made it available as a product for lawyers and courts.95 Meanwhile, tools developed for lawyers, such as LEXIS and WESTLAW, were also used to help judges prepare for cases.96 In the 1980s, courts were extensively integrating case tracking from beginning to conclusion in fairly comprehensive ways, making it possible for the offices of state attorneys general, police departments, and court officials to share data. That circumstance reduced the volume of paper handling and other manual tasks per case that often were tedious, not merely time consuming. Meanwhile, the volume of transactions a court could process increased right along with the workload of new cases.97 For example, in the case of traffic courts in Maryland, with the help of its Judicial Information System in the mid-1980s, judges processed 700,000 traffic citations and an additional 160,000 other cases per year. The traffic side of the system helped generate $30 million in fines, primarily by handling larger volumes and not missing cases that otherwise would have gone unattended. Prior to the use of a computer, no court would have known
Figure 4.3
A law clerk researches online a prior case, circa late 1970s. (Courtesy IBM Archives)
how many traffic citations there were, let alone whether some went unpaid. In addition, the system coordinated the court appearances of state police with the cases scheduled before a given court and judge. The case file was the heart of such a system: the record everyone involved in a proceeding could look at, update if authorized to do so, and use to spin off reports.98 By the early 1990s, many local and state courts had similar systems in operation. Many were state-wide, that is to say, developed by the state for common use by all its courts.99 Yet, for all this activity in the 1980s and early 1990s, courts still deployed computing less than law firms or police departments did. When compared to other government institutions, odd omissions existed. For example, while many government agencies in the early 1990s accepted credit cards as a way to pay for licenses, courts normally still did not accept them for fees and fines.
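At the center of systems like Maryland's was the case file described above: a single record that authorized participants could view, update, and use to generate reports. A minimal sketch of that idea follows; the field names, roles, and permission rules are hypothetical, standing in for whatever a given court actually used.

```python
# Minimal sketch of a court case file with role-based updates.
# Field names, roles, and permission rules are hypothetical.
from dataclasses import dataclass, field

AUTHORIZED_TO_UPDATE = {"clerk", "judge"}  # e.g., police and attorneys may only read

@dataclass
class CaseFile:
    case_number: str
    parties: list
    hearings: list = field(default_factory=list)  # (date, courtroom, judge)

    def schedule_hearing(self, role: str, date: str, courtroom: str, judge: str):
        if role not in AUTHORIZED_TO_UPDATE:
            raise PermissionError(f"{role} may view but not update this case file")
        self.hearings.append((date, courtroom, judge))

    def docket_report(self) -> str:
        lines = [f"Case {self.case_number}: {' v. '.join(self.parties)}"]
        lines += [f"  {d} in {c} before Judge {j}" for d, c, j in self.hearings]
        return "\n".join(lines)

case = CaseFile("TR-86-700123", ["State of Maryland", "J. Driver"])
case.schedule_hearing("clerk", "1986-05-12", "Courtroom 3", "Smith")
print(case.docket_report())
```

The design choice, then as now, was that a single shared record replaces many separate paper files, so scheduling, notification, and reporting all derive from the same data.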
Many courts did not use computers to track individuals who failed to pay fees, even though small towns had long had well-established accounts receivable systems to pursue overdue property taxes. In 1992, one student of court applications commented on the reason for the lag: "Courts haven't gotten a lot of attention in the technology arena until now because they didn't have the big payrolls and the big budgets as did the executive and legislative branches."100 Judicial culture, however, that is to say, the power of inertia and tradition, probably played an equally strong role. Mounting paperwork thus remained a constant problem for courts in the 1990s, although the use of OCR scanners spread, and police departments began creating initial case records in systems that could be extended to provide courts with data originally collected, for example, when someone was arrested.101 By the end of the century, however, all federal courts did have automated systems that interested parties could search to retrieve information on specific cases, using personal computers at courthouses and over a dial-up system called PACER (Public Access to Court Electronic Records). Many courts also hosted home pages on the Web. A similar extent of deployment of such applications also occurred at the state court level and in many large cities. E-mail had also spread rapidly through court systems all over the United States.102 Availability of the Internet marked a new period of innovation in court applications, although adoption generally came slowly. By the mid-1990s, legal professional organizations, such as the National Institute of Justice (NIJ), began exploring the potential value of using the Internet. In March 1996, it reported to the justice community that the Net promised speed of access to information and, of even greater value, the ability to allow various participants in the criminal justice community to engage in dialogues and share data. The NIJ began aggressively using the Internet in 1995 to disseminate information to its community, providing access to its files and a forum for discussion, while offering training programs on how to use the Net.103 By the end of the century, many judges were familiar with computing, though many others still were not, while most lawyers pleading cases before a court had some working knowledge of computer-based tools, such as those used to do legal research online. Assessing the situation in 2000, Judge Edwin L. Nelson, a U.S. district court judge in Alabama, commented that "in a very few years, the portable computer will be as ubiquitous as long yellow legal pads, number two pencils, dictating equipment, and law books were 10 years ago."104 The same judge reflected back on the 1990s: The last 10 years have seen an enormous positive shift in the acceptance of IT at all levels of the Judiciary. For example, when I became a district judge, the only e-mail system available connected my secretary, two law clerks, and me over a homemade network that I, more or less, devised and installed with the help of our clerk. There was no DCN (Data Communications Network). We had no nationwide e-mail system, and many judges were skeptical of the value of such a system. Today, as we move to the full nationwide implementation of and migration to Lotus Domino/Notes, many judges and courts believe that a
reliable, robust, and secure e-mail system is essential to the performance of our mission-critical functions.105
Clerks and judges could use Internet-based versions of WESTLAW and LEXIS to do research, along with another system called the Virtual Law Library. The hard data was beginning to confirm this judge's comments. All fifty states provided some online access to court decisions, including opinions. Thirty-nine states recognized digital signatures as legally binding for some transactions; later, federal law made that universal across the land. Over a dozen states either had or were implementing digital systems that accepted pleadings, motions, and filings of briefs. Most important, forty-four states had already made some or most of their integrated criminal justice/law enforcement information systems available to judges and other court officials. In 2000–2001, the jurisdictions with the most extensive automation of court/law enforcement systems included Colorado, Delaware, Illinois, New Jersey, Ohio, Georgia, Maryland, and Pennsylvania, and, of course, the federal courts.106 The tale of computing in courts in the years that followed became one of further deployment of integrated case management systems, rapid expansion of research over the Internet, and use of e-mail.107 Before moving to a discussion about computing and jails, we need to survey briefly how lawyers used computers because, while the majority of their work (such as preparing deeds and wills) did not necessarily result in court trials, as the statistics cited in this chapter demonstrate, many legal matters did end up in court. So it is part of the story of computing in courts. There are three major classes of IT-related issues involving lawyers and uses of computing: applications related to the operation of a legal practice (such as tracking billable time, calendars, payroll), substantive law related to IT and its industries (such as the role of digital signatures, privacy), and research for cases using this technology. It is this third area of use that in particular fills in details regarding the role of computers and telecommunications with courts. Large law firms had long struggled with mounting loads of paper, just like courts and judges, along with the accumulating volume of legal literature, judicial decisions, and cases that they had to study in preparing their own cases for trial. In the early 1960s, one could begin reading articles about the case for computerized legal research, with some attempts to convert files into machine-readable form.108 The legal community first focused its attention on how to use technology to do research, making this application the central adoption of computing in the 1970s. The major early event came when the Ohio Bar Association built the first widely available computer-assisted legal research (CALR) system in the mid-1970s. It proved successful and spun off as a private enterprise called LEXIS, owned by Mead Data Corporation. That software provided full text of all its files, making it an attractive tool. A competitor called WESTLAW evolved from just an index-and-abstracts-based system to one that provided full text as well. Since the 1970s, these two systems have served as the primary (although not only) software tools used by lawyers, legal researchers, and clerks to find related cases, statutes, and other information, searching through key words specifically sought
in a document or as concepts.109 Additional tools also existed, such as IBM's STAIRS (Storage and Information Retrieval System), and still others created by federal and state agencies over the years.110 Use of such research tools through dial-up telecommunications became widespread among large and medium-sized law firms by the mid-1980s. The quality of the searches that could be done also improved. Lawyers enjoyed two immediate benefits: first, it cost less to do online searches because these could be done more quickly, and second, searches became increasingly complete and thorough.111 Meanwhile, in the same decade, as law firms became comfortable using these applications, they found other uses for computing, such as in support of legislative bill tracking through state and federal legislatures, commercially available databases to collect personal and financial data (particularly useful in bankruptcy cases), and business and medical databases.112 In addition, within a firm, computers began doing work seen in other industries, such as billing, payroll, and accounting, with some large firms in major cities acquiring their own data centers by the 1980s. Smaller ones outsourced their accounting work. The legal literature of the day began routinely publishing on the role of IT; journals doing so included Legal Administrator, Legal Economics, and the National Law Journal. The American Bar Association established a standing committee to advise its members on matters related to computing and telecommunications.113 Like other industries, companies, and public agencies, law firms began looking into the value of integrating their various stand-alone systems in the late 1980s, a process undertaken during that period and continuing throughout the 1990s.114 Arrival of the Internet enhanced the research capabilities of all law firms, driving down the costs of research on the one hand while, on the other, making vast quantities of legal and nonlegal information readily accessible. During the 1980s and 1990s, vast quantities of case material and statutes were digitized in what can only be regarded as a remarkably fast transformation from paper-based libraries to digital ones. By the 1990s, it was difficult to find even a junior law clerk not familiar with online search tools and word processing. By the end of the decade, even judges were becoming familiar with these applications, which they learned about as part of their prior jobs as lawyers or in pursuit of private interests and hobbies at home.
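What made CALR systems such as LEXIS and WESTLAW powerful was full-text retrieval: every significant word in a body of opinions is indexed, so a query of key words returns the matching documents. The toy sketch below illustrates the general technique with an inverted index; the documents and query logic are invented for illustration and do not reflect either vendor's actual software.

```python
# Toy full-text keyword retrieval in the spirit of early CALR systems.
# The documents and query logic are illustrative only.
from collections import defaultdict

opinions = {
    "Case A": "defendant convicted of wire fraud involving interstate transmissions",
    "Case B": "civil suit over breach of contract and damages",
    "Case C": "mail fraud conviction reversed for improper jury instructions",
}

# Build an inverted index: word -> set of documents containing it.
index = defaultdict(set)
for name, text in opinions.items():
    for word in text.lower().split():
        index[word].add(name)

def search(*keywords: str) -> set:
    """Return documents containing every keyword (a simple AND query)."""
    results = [index.get(k.lower(), set()) for k in keywords]
    return set.intersection(*results) if results else set()

print(sorted(search("fraud")))          # ['Case A', 'Case C']
print(sorted(search("fraud", "jury")))  # ['Case C']
```

Because the index is built once and consulted many times, adding more keywords narrows rather than lengthens a search, which is why online research proved both faster and more thorough than manual digests.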
Computers and Corrections

By the end of 2001, over 5.6 million residents in the United States had been incarcerated at some point in either a state or federal prison. Like the rest of the law enforcement ecosystem, that population had grown over time. For example, from 1974 to 2001, the percentage of all adults who had been in jail expanded from 1.9 to 6.6 percent.115 Much like the rest of the law enforcement ecosystem, the nation had a patchwork of federal, state, county, city, and town prisons and jails of varying sizes. In the 1950s and 1960s, use of computers to manage back-office applications in prisons essentially did not exist. Accounting, tracking of prisoners, and
performance of administrative functions were an intensely paper-driven collection of processes, although use of precomputer information processing equipment, such as typewriters and adding machines, was widespread. Telecommunications included analog radio and internal telephone networks. In the late 1960s and early 1970s, some efforts were made to teach prisoners about computing, primarily programming, as part of rehabilitation initiatives, but these were few and far between.116 In large communities, deployment of computing to support operation of prisons began in the late 1960s or very early 1970s. For example, Los Angeles County, California, operated the largest county jail system in the United States at the start of the 1970s. It had to track 11,000 inmates, and on any given day, 1,250 were being moved about within the system or to and from court hearings. In 1971, the county implemented an online system that included automated booking, a booking information file, and an inmate database, all operating on an IBM S/360 Model 50 (a very large computer for its day), with terminals linked to this application scattered across the entire Los Angeles law enforcement community, not just to jails.117 During the 1970s, computing came into its own in large federal, state, and urban prison systems. New Mexico implemented applications to track accounting activities and inmates and to produce statistics, and linked these to other applications that tracked criminal activity across the state.118 Corrections officials in Baltimore, Maryland, focused on tracking inmates and the availability of jail space to house them. Because prisoners could be in jail for short periods of time, paper-based record keeping often did not keep up with their comings and goings. A digital system made it possible to do so, while maintaining a current jail census and a list of who visited each prisoner (information often requested by courts). Because many jails adopted similar applications in the 1970s and 1980s, table 4.5 lists the kinds of information maintained by this early online prison system.

Table 4.5 Online Jail Information Used by Baltimore, Maryland, circa 1979
Inmate (name, address, aliases, identification number)
Location (cell assignment, medical appointments, visitors)
Court (case number, arrest number, next pending court action and date, charges)
Classification (medical conditions, drug addiction, psychiatric problems)
Appointments (lists of inmates scheduled to appear in courts, hospitals, elsewhere)
Occupancy (data used to assign housing and processing returning inmates, expected arrivals, expiration of sentences)
Transportation (data from which appointment reports are produced)
Visitors (lists of visitors to all inmates, dates of visits)
Cell assignment history (data on all transfers)
Source: IBM Corporation, Jail Online Inmate Control System Baltimore, Maryland (White Plains, N.Y.: IBM Corporation, 1979), Box 246, Folder 7, IBM Archives, Somers, N.Y.

The kinds of information collected in each category illustrate the variety of data
required to run a prison.119 Similar systems were installed by other states in the late 1970s and the 1980s. These included routine budgetary, accounting, and inmate census applications as well. Normally, they were online systems that could be accessed both by the staff at a prison and by other law enforcement officials who had permission to access the data, such as police, sheriffs', and state law enforcement agencies. The largest states became the earliest users of such systems, led the way in enhancing them over the years, and constantly upgraded the equipment and software involved. The state of Texas, which always had a large prison population, became a model for innovative systems during the last two decades of the twentieth century. But it was not alone; other states implemented integrated applications as well.120 Reasons for implementing such systems mimicked what happened elsewhere in the law enforcement community. These systems improved scheduling of personnel, reduced the cost and time required to maintain accurate records on prisoners, made it possible to analyze patterns of costs and populations, improved the quality of decision making (for example, in deciding about paroles), and minimized the growth in staff. Courts and corrections changed their strategies over the years to find new ways of dealing with overcrowding of jails, moving toward rehabilitative strategies other than just incarceration. So, computing needs changed, particularly in the 1980s, in support of reforms making their way through the law enforcement world. One important alternative to imprisonment was the idea of house arrest, in which a prisoner, parolee, or probationer lived at home but wore an electronic monitor around the ankle, an idea that law enforcement first began considering as far back as the early 1960s but that did not become popular until overcrowding of jails in the 1980s made it an attractive alternative. Not until the introduction of computer chips into these monitors in the 1990s did such systems begin acquiring digital features. However, already in the early 1980s, those wearing electronic monitors received computerized phone calls requiring them to respond via automated voice verification as a way of ensuring they were adhering to their confinement. If not, software would notify correctional personnel of a potential problem.121 During the 1980s, corrections facilities also had as much of a need for telecommunications as did police departments. Inmates needed to make telephone calls, and corrections officials had to communicate with each other within a facility through wired and wireless (radio) systems. Prior to the 1980s, internal communications systems were often "home grown," that is to say, put together locally or based on old AT&T analog systems. Beginning in the early 1980s, as telecommunication switches (primarily digital) began to appear in the market, prisons had new options. All during the 1980s, new and old prisons modernized their telecommunications infrastructures, mimicking what occurred in other industries and across public safety agencies. Since 80 percent of a prison's budget could go to payroll and other staff expenses, any tool that came along that could hold down such costs proved attractive. Modern PBX systems often required fewer telephone operators than before, so, just as companies and telephone service providers moved to the new technology to lower personnel costs, so, too, did large prison systems.122
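The automated curfew checks described above for those under house arrest amounted to a simple monitoring loop: place a computerized call, verify the spoken response, and alert correctional staff on a failure. The sketch below captures that logic; the function names and data fields are hypothetical, standing in for the telephone and voice-verification equipment of the period.

```python
# Illustrative sketch of an automated house-arrest curfew check.
# place_call and verify_voice stand in for 1980s telephone and voice-verification gear.

def place_call(phone_number: str, offender_is_home: bool) -> str:
    """Simulate dialing the residence and recording what is heard."""
    return "enrolled-voice-sample" if offender_is_home else "no-answer"

def verify_voice(response: str, enrolled_sample: str) -> bool:
    """Simulate matching the response against the offender's enrolled voice print."""
    return response == enrolled_sample

def curfew_check(offender: dict, offender_is_home: bool, alerts: list) -> None:
    response = place_call(offender["phone"], offender_is_home)
    if not verify_voice(response, offender["voice_print"]):
        # Failure to answer or to match triggers a notice to correctional personnel.
        alerts.append(f"ALERT: {offender['name']} failed curfew check")

alerts: list = []
offender = {"name": "R. Parolee", "phone": "555-0100",
            "voice_print": "enrolled-voice-sample"}
curfew_check(offender, offender_is_home=True, alerts=alerts)   # no alert
curfew_check(offender, offender_is_home=False, alerts=alerts)  # generates an alert
print(alerts)  # ['ALERT: R. Parolee failed curfew check']
```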
By the late 1990s, federal, state, and large urban prisons had largely digitized their inmate record systems and routinely used computers to track and control budgets and to assign personnel to shifts and duties. One national study conducted in 1998 reported that computerization of some 207 types of data had been accomplished by just over half of the largest prison systems in the country, while over 85 percent had implemented systems that collected and used a large core of that set of data, the kind listed, for example, in table 4.5.123 Use of the Internet by prisoners and their jailors has hardly been studied. However, by the early years of the new century, files on prisoners and their locations began moving to state, county, and municipal intranet and Internet sites. In the case of Internet sites (those that can be accessed by citizens), people could inquire about the location of inmates, for example, without having to call law enforcement agencies or prison officials. In some states, such as Minnesota and Washington, people who saw a neighbor arrested could query the Internet to find the charges filed against that individual.124
Origins and Early History of Computer Crime

The early history of crime involving computers is shrouded in hyperbole and anecdotal stories, and what extant evidence survives suggests there were few reported cases. Then, as now, even the definition of what constituted computer crime remained unclear. However, what is very certain is that from the earliest days of companies using computers, accountants and their auditors expressed concern about the possibility of accounting and financial fraud occurring through the use of computers.125 Their concern grew out of the general lack of knowledge in accounting circles in the 1950s and 1960s about how computers operated, the uncertainty caused by having to rely on programmers whose performance could not necessarily be understood by line management, and the lack of best practices and auditing tools for computer systems. To one extent or another, firms and public agencies worried about these issues throughout the second half of the twentieth century. However, a look at computer crime from the perspective of law enforcement and courts, two communities that had no experience dealing with this new form of criminal activity in the 1950s and 1960s, presents a less passionate or idiosyncratic picture of computer crime waves in America. As late as 1978, one author of a book on computer crime confessed that "there is no widely accepted definition of computer crime. Some authorities define it as making use of the computer to steal large sums of money. Others include theft of services within this definition, as well as invasions of privacy. Some . . . [view] it as the use of a computer to perpetrate any scheme to defraud others of funds, services, and property."126 The leading expert on computer-based crime of the 1960s–1980s, Donn B. Parker, made the same point in 1976, adding that "knowledge about the incidence of crime is small, and the data available are inaccurate," and that comment came from him after he had compiled the most complete census of
computer-related crimes in the world!127 In 1983, he again made a similar point: "there is no general agreement on the meaning of computing abuse or computer crime."128 So his categories of abuse became for many the de facto early definition of computer crime. His categories included vandalism of hardware and software; theft of information, hardware, or software; financial fraud or theft committed by altering software and files; and unauthorized use or sale of services involving computers.129 His inventory of cases dated from the first one he could document (1958) through October 1975 and came from around the world. The industries most frequently subjected to this kind of problem were those that were some of the most extensive users of computers in those years, and which often did a relatively poor job in managing their digital assets: banking, education, all levels of government from local to federal, manufacturing, insurance, and computer services. Of the 372 cases he documented for that period, banking experienced 70, education 66, governments 61, manufacturing 46, insurance 28, and computer services 24. Another dozen industries typically had fewer than six incidents each.130 A task force organized by the American Bar Association in the early 1970s characterized all cases as less important than violent crimes, while the American public viewed these as white-collar crimes, which in those years were routinely punished less harshly than others.131 One can conclude that the number of documented incidents prior to the late 1970s was quite small when compared to crimes law enforcement had faced long before the arrival of the computer. In fact, Parker said so in 1983: "computer crime is a relatively rare occurrence when compared with noncomputer crime."132 There was much speculation about the magnitude of these crimes. Every study done of the issue in the United States in the years prior to 1990 speculated that losses ranged from an average of $400,000 to tens of millions of dollars. The ABA thought they were enormous, reporting that a fourth of its survey respondents thought they had been victims of the problem and that their total losses ranged from $145 million to $730 million, which led the ABA to the simple math of dividing these figures by the number of its survey respondents to arrive at a range of $2 to $10 million per enterprise.133 At about the same time, another study suggested that the total cost to the American economy was closer to $15–27 million.134 The disparity in the data suggests that nobody really knew the true numbers. Part of the data problem that all commentators had until the 1980s was that many enterprises and public institutions simply did not have the capability to identify computer-assisted crimes. If they did, they proved reluctant to report these for fear that such information might have an adverse effect on their business. Observers also noted how management and the public at large had great faith in data coming out of computers, assuming that printouts of information were actual documents of record rather than a by-product of data stored in a computer, organized and then presented in the form of a report. Against this backdrop, prosecutors faced very few cases. Between the 1950s and the end of the 1970s, district attorneys reported handling 244 computer-related cases, what they called computer crime, applying existing laws that did not specifically address the use of this new technology. Of the 244 cases,
they prosecuted 199 and were able to get 157 convictions by plea and an additional 10 by trial. They handled a further 311 cases of fraud that involved use of computer-generated information. Of these, they prosecuted 215, obtained plea-bargained convictions in 158, and won convictions by trial in an additional 20 cases. The largest number of cases existed in large metropolitan areas, such as New York, Chicago, and Baltimore.135 The FBI ran its first courses on how to detect computer crimes in the mid-1970s, a period that also saw publication of what became for many years the bible used by police to investigate these kinds of crimes.136 The entire law enforcement community used existing statutes, such as federal mail and wire fraud legislation, because much fraudulent data moved across state lines, and the courts had been liberal in their interpretation of all manner of fraud under those laws.137 In the 1980s, as the number of computers in the American economy continued to increase, the kinds of crimes documented by Parker in the 1960s and 1970s continued to occur. With software products now widely available, these were either pirated or used to commit more traditional crimes, such as fraud and embezzlement. The earliest worms and viruses also began appearing in the 1980s, and it was in this period that the term "hacker" first made its appearance.138 In that same decade, state and federal legislatures began passing laws that specifically targeted computer-based crimes, such as arson (a major early problem with computing), hacking, and more traditional crimes related to fraud, embezzlement, theft, improper use of software, copyrights, and patents.139 Hacking became a highly publicized subject of increasing concern to law enforcement after the arrival of the PC and widespread use of telecommunications in the 1980s and 1990s.140 By the 1970s, however, the law enforcement community was already beginning to form opinions about these activities, which it began viewing as legal, illegal, or simply mischievous. In general, police and prosecutors viewed hackers as digital trespassers and thieves, while the press tended to treat them more like "a modern-day joy rider, roaming the electronic highways."141 One lawyer familiar with computer crimes commented in the late 1980s that "nobody knows how hacking got started," although he knew of instances dating back to the 1960s involving college students; but these kinds of users had not become a major issue until the 1980s, with the arrival of PCs and widespread computer literacy, particularly among young people.142 However, precomputer hacking had occurred in the 1970s with "phone phreaks," people who developed "blue boxes" to access long-distance telephone lines, often for fun or to avoid paying for long-distance calls. John Draper (aka Cap'n Crunch) became famous as an early phone phreak. He was arrested, convicted, and sent to prison in Pennsylvania in 1976. The other famous case from this early period involved the Massachusetts Institute of Technology's (MIT) Cookie Monster, a program that would destroy files if a user who saw the term "cookie" on the screen did not "feed" the monster by typing in "cookie."143 As with fraud and more conventional crimes committed using computers, police, prosecutors, and judges had a difficult time understanding what was legal and illegal in these early years when it came to hacking.144
By the 1980s, a more serious and disturbing problem began spreading, one that remained a chronic issue to the present: the role of organized crime. Like good business managers elsewhere in the economy, this class of criminal used computers to track their operations, to improve skimming and communications with clients and colleagues, and to track profits and income. Key areas of applications involved gambling (online bookmaking); providing such services and goods as prostitution, pornography, and drugs; fencing (buying and selling of illegally gotten goods); pilfering; money laundering; loan sharking; and use of fraudulent credit and, later, debit cards. All of these applications were in use by criminals by the early 1980s. In addition to these applications of the digital hand, they also stole and sold computer chips, computers, peripherals, and software.145 In the 1980s, the number of computer-based crimes began increasing to the point that large metropolitan police departments in particular, along with state-level law enforcement, state and federal prosecutors, and the FBI and Secret Service (the latter because of its mission to protect U.S. currency), had to increase their expertise in investigating and prosecuting these classes of crimes. By the early 1990s, such knowledge, coupled with a growing body of new laws, had begun making its way through the law enforcement ecosystem.146 The first step was the passage of laws. In 1978, the Florida legislature passed the first high-technology crime bill in the nation in response to a recent case in which computer files at a race track were manipulated to show that horses had won when they had not, resulting in the payout of millions of dollars on losing bets. Subsequently, over the next quarter century, states either passed new laws or modified old ones to account for emerging applications of computers in criminal activity. By the end of the century, all states had done so. At the federal level, Congress passed over a dozen laws specifically designed to deal with computing and telecommunications. By the 1990s, the FBI had acquired a considerable body of expertise in fighting digital crime. As noted earlier in this chapter, the FBI had also become an extensive user of the technology, which in turn served as a training vehicle for its investigators. It also collaborated with other nations in fighting international digital crime and ran training programs for law enforcement organizations from around the world. In May 1999, the FBI opened the Internet Fraud Council as a center to which consumers could bring complaints about digitally based crimes. The FBI had already established the Computer Investigations and Infrastructure Threat Assessment Center (CITAC) and the National Infrastructure Protection Center (NIPC).147 Meanwhile, state, city, and federal collaboration and funding of initiatives had finally entered the mainstream of law enforcement in the 1990s.148 The wide deployment of the Internet and other dial-up networks that began in the 1980s opened up a whole new era in digital crime, a subject worthy of its own chapter. However, a brief view of the situation at the start of the new century suggests how far this new form of crime had come in barely two decades. In a small survey conducted in 2001, the Department of Justice learned that 74 percent of businesses surveyed reported being the victim of some sort of Internet-based crime, now fashionably called cybercrime. Two-thirds had
experienced one or more viruses, and one-quarter a "denial-of-service" attack (for example, slowed response time due to vast quantities of data pouring into a firm's computer). A similar percentage had been subjected to vandalism or sabotage of their computer systems. While the sampling was small (198 private companies) and hardly statistically representative, the study hints at the ubiquity of the problem and led the Department of Justice to announce publicly that it would begin collecting this kind of data much as it did on other types of crimes. Perhaps also interesting was why several hundred companies declined to participate in this study, for reasons that resembled those given to Parker and others in earlier decades regarding computer crime: 17 percent cited concerns about the confidentiality of their survey input, 14 percent said they simply did not know how often they had been victimized, and 82 percent stated that they did not participate in voluntary surveys. Buried in the details of this early cybercrime survey was information about the kinds of problems faced. Ranked by number of incidents, at the top were computer viruses, followed in descending order of frequency by denial of service, vandalism or sabotage, and fraud or embezzlement. Given that by the 1990s much security software had been installed to protect accounting and financial systems, and much had been spent on physical security for data centers and other locations with high concentrations of personal computers, the order of frequency would not be a surprise. The survey reported that in 2001 the number of incidents was higher than that of 2000, and that 10 percent of the respondents had insurance policies to cover losses from cybercrime.149 In 2005, the Department of Justice reported that Internet-specific crimes were now drawing considerable attention from the law enforcement community. These classes of crime included "cyberstalking," "cyberbullying," child pornography, Internet fraud, and identity theft. All were perverse digital innovations of the 1990s.150
Conclusions

Law enforcement is a community made up of a patchwork of agencies varying both in size and mission, from small-town police departments with ten employees to large court systems in Los Angeles, New York, and Chicago. Yet it is a community that shares many common values and ways of conducting its work, an observation made over the years by many who have studied this sector. The integration of the digital into its daily work streams reflects those shared values and practices. Much as in so many industries and across other public sector agencies, digital applications came in waves. Large agencies could afford computers first, and so they naturally led the way. As their experiences demonstrated successes and failures, other agencies learned from these and either adopted, deferred, or ignored implementations until they made sense. The availability of funding proved a most critical factor in the decision to embrace and deploy a particular use of computing. Enthusiasts of computing always complained that agencies adopted these tools too slowly. Kent W. Colton typified all students of the subject, lamenting
that adoptions were slower than desired, but noting that early uses embraced the digital to handle repetitive operations.151 Another expert, Scott Hebert, also complained that there was much resistance to new ways, largely due to cultural factors but often also to the inability to demonstrate improvements in policing operations.152 Those applications that gave police on the street information in a rapid fashion were consistently the most widely adopted and supported uses across the entire half century, largely because they so directly fortified the work of police. By the 1980s, integrated systems became fashionable as police, courts, and corrections began leveraging technology to collect and move information along at a speed that kept pace with the daily activities of the law enforcement community, while nearly spectacular improvements in digital fingerprinting and records retrieval helped enormously. Yet overall, the effects of computing on the operational cultures of these various institutions were less pronounced than in other government agencies. While the ability to get one's traditional work done faster, better, and less expensively using computers and telecommunications improved, consequences of a cultural nature emerged only slowly. The cultural change that cross-functional, integrated justice systems should have most dramatically affected was the willingness of agencies to share information with each other. To be sure, there were dramatic examples of this, such as the pooling of data related to wanted persons, stolen vehicles, and so forth by multiple communities in California, St. Louis, and elsewhere.153 Clearly, the work of the FBI in establishing national databases evolved into a crucial contributor to this process. But as late as 2004, complaints of hoarding data and of turf battles continued to appear. One article in Governing, a major journal in the field of public administration, reported in 2004 that "there are still a lot of obstacles to getting good information-sharing systems in place. For many jurisdictions, the idea of pooling data resources . . . remains a little frightening. Individual agencies are nervous about allowing others free access to their information, even when there's a valid public purpose involved."154 There were many reasons cited in this important review of the problem—lack of technical and operational standards, legal impediments, issues of security and privacy—but nonetheless, just as the 9/11 Commission pointed out a couple of years earlier, sharing did not take place sufficiently, with the result that terrorists, or in the case of Governing's report, criminals, could slip through the law enforcement system. Nonetheless, sharing and pooling data had made remarkable progress because agencies all over the country could use the computer's capacity to house large volumes of data, provide rapid and accurate search and retrieval of this information, and do so virtually anywhere at any time. By the dawn of the new century, expert criticisms of law enforcement often had an IT tone. For example, GAO's examination of the FBI's critical functions focused on technical issues, such as IT standards and modernization strategies.155 Laws and many court cases had evolved in response to the use of the new technology. Indeed, a sea change was occurring. In the 1960s and 1970s, most judges and prosecutors saw computer crime as traditional forms of criminal
behavior just done with computers rather than with some other tool, such as a gun. That attitude began changing in the 1970s and was reflected in legislation all over the land in the next quarter century. As security expert Donn Parker recollected in 2003, "having specific computer crime statutes was a way to establish a social agreement that these were real, serious crimes. The other purpose of getting specific statutes was to help law enforcement agencies get the budgets they needed to develop the capability to ultimately be able to deal with investigation and prosecution of computer crime."156 On balance, one would have to conclude that adoption of computing and upgrading of communications had changed much of how the core institutions of the law enforcement world went about their work. Efficiency and effectiveness improved incrementally all through the period, making today's use of computing seem normal, indeed ubiquitous. The vast majority of today's workforce in this industry has only lived in a time when computing was widely available in their organizations and across society, so few members of this ecosystem can recall a time when computing was not part of how they did their work. Even judges—the last within the law enforcement community to embrace the technology—followed much the same pattern as CEOs in companies. As they came to understand the value of the technology, they brought it into their operations; it was not a pattern explained away simply by arguing that judges were older than most police, criminals, or lawyers who came before them. Furthermore, as they learned how the technology worked and affected law, it influenced their conclusions; and as they had to try computer-related crimes, they learned what had changed from prior criminal behavior and yet what remained consistent from one era to the next. The picture that the last four chapters begin to paint is that of whole communities within American society adopting computing and communications in waves, at various speeds but in common ways. By looking at additional uses of computing and communications in the federal government, we can see that process more clearly at work. For that reason, the next chapter is devoted to describing digital activities in other corners of the national government.
5

Digital Applications in the Federal Government: The Social Security Administration, the Bureau of the Census, and the U.S. Postal Service

The Postal Service may be nearing the end of an era.
—Bernard L. Ungar, General Accounting Office, 1999
The Social Security Administration (SSA), the Bureau of the Census, and the U.S. Postal Service (USPS) are storied agencies in the American government, touching directly the lives of all residents of the United States. The SSA is the largest insurance and old-age pension organization in the country, and the vast majority of residents are registered with it by way of their social security number. Over 50 million people receive pension, disability, or some other form of financial support from the SSA. For the poorest in the land, it was (and is) often their only financial safety net in old age or in the event of disability. The Bureau of the Census is responsible for tracking the population of the United States and for conducting other census counts of the economy, and it is the federal government's largest statistical agency. Its census count determines how many representatives any state can have in Congress and serves as the basis for the proportional distribution of federal funds to states, localities, even schools. Both the Census Bureau and the SSA have been extensive users of information technologies. In the case of the SSA, its early use of IBM punched-card equipment in
the 1930s has taken on a near-mythic status in the histories of early computing and of IBM. At the Census Bureau, it was one of its employees, Herman Hollerith, who developed the modern punched-card and tabulating equipment in the 1880s; the company he established to rent the equipment ultimately became IBM. Use of information processing at the bureau, beginning with the census of 1890, began a century-long dependence on technology that is both storied and important. The Post Office, established in the late 1700s, was America's first nationwide public information highway. It remained the backbone of much communication right into the twenty-first century. Six days a week, this agency visits nearly every residence, business, and other organization in America, delivering paper-based communications, publications, and packages. All three agencies are highly visible public institutions, remain constantly in the news, and are discussed in every civics class taught at American high schools. All have long been extensive users of IT. The importance of these agencies and of their use of IT and telecommunications is so great that no history of computing in the public sector would be complete without discussing how they used these technologies and the effects on their operations. All three agencies operated more as self-contained organizations than many other federal offices. They derived their specific roles and missions from federal laws. They often were more influenced by the actions of other federal agencies, the White House, and the Congress than by other potential participants in their ecosystem. I have suggested in this book that many public sector organizations were members of some larger ecosystem (such as law enforcement), and that their activities within such extended communities were increasingly facilitated by the digital hand. However, while this is also the case with these three organizations, it is sufficiently less so than with the IRS, for example, to allow us to examine them as discrete entities. Also important to keep in mind is the massive quantity of information, files, and papers these organizations handled in the twentieth century. The statistics cited below of their volumes of transactions and files are stunning and huge by any standard and thus serve as extreme examples of the value of computers to them. By looking at each, we can see how technology affected the work of three very important components of the federal government, while containing the discussion to well-defined organizational boundaries. We can see how the three became familiar with technologies of all kinds within the context of well-defined institutional cultures and how they confronted and deployed computers. The importance of such a discussion is reinforced when we keep in mind that none could do its work without extensive reliance on information technology. That was so for the Bureau of the Census, using precomputer IT, by the end of the census of 1890, and for the SSA within a year or two of its founding in the late 1930s. The Post Office experienced severe competition and important operational changes made possible by the new technology. In all three instances, their ability to use IT effectively had a direct bearing on their ability to
carry on their missions. The SSA and the Census Bureau were regarded as sophisticated users of computers and other forms of information technology from the 1950s through the 1970s, but subsequently, like the IRS and the DoD to varying extents, they were increasingly regarded as lagging and ineffectual in their use of computing, or they faced new operational challenges. The SSA, too, faced a serious and real risk of not being able to get its work done on time in the 1980s, while the Census Bureau became enmeshed in other operational issues, such as the debate over the value of scientifically estimating how many people there were as opposed to counting all "noses" in the 1990s and in the 2000 census. In the case of the USPS, its use of computing was longer in coming, and its effects only became evident late in the century. How IT influenced the work of all three organizations is the subject of this chapter.
Social Security Administration

Established in 1935 during the Great Depression, the SSA is responsible for administering a collection of national social insurance programs in which employees and employers contribute to a pool of funds used to provide old-age pensions and other monthly payments, such as those to disabled workers, widows, and orphaned children. It became the largest social insurance program in the nation supporting the aged. Over many decades and in various ways, Congress extended its mission to assist financially those less able to take care of themselves. Each time Congress added or changed a program, the agency had to modify existing applications or add new ones to its digital infrastructure in support of these new responsibilities. The majority of discussions conducted by historians about the use of IT by the SSA have focused on the initial deployment of IBM equipment designed for the SSA in the 1930s, a deployment that represented, for the then-small IBM, its largest sale to date. For the SSA, the hardware provided the ability to carry out its mission, since its leaders quickly realized that paper-based record keeping and payments could not be scaled up to the volumes needed.1 By the time computers first became available in the federal government during the 1940s, the SSA had over a decade of experience using various kinds of sophisticated data processing equipment, employed a highly motivated and expert workforce, and implemented a noble mission that valued quality service to the American public, characterized by accurate record keeping and timely payments of pensions.2 So it should be no surprise that early on the agency recognized the potential of computers and took the time to understand, and then to embrace, the new data processing technology.3 Before discussing computing, it is helpful to appreciate the magnitude of the mission of this agency, keeping in mind that every transaction, piece of information, and individual it served required some form of data handling. Over time, these were activities and pieces of information that were either partially or completely collected, preserved, and managed by the digital hand. From its earliest days, the numbers were large at the SSA. In 1937, during its first full year of
In 1937, during its first full year of carrying out the requirements of its charter, the SSA issued 26 million social security numbers to American workers and assigned 3.5 million employer identification numbers as well. It simultaneously tracked incomes and payments received from both groups, using a centralized approach. These activities led to the creation of the Visible Index, listing every worker covered by the insurance system. That year, the records alone occupied 24,000 square feet of space, and that was only the beginning, because the nation's population kept growing and an increasing number of people applied for social security numbers.4 By the time the SSA first began using computers in the mid-1950s, it maintained some 150 million accounts. By 1960, nearly 90 percent of all employed workers were either covered by social security or eligible for some form of coverage, a total of approximately 75 million workers. The rest of the population it served were survivors and others eligible for support. Nearly 75 percent of all individuals over the age of sixty-five were either receiving or eligible for benefits managed by the SSA.5 Those numbers, again for 1960, translated into 3.5 million newly established accounts, 2.3 million changes and updates to existing accounts, and quarterly calculations of income for the 75 million workers. Some 25,000 SSA employees scattered across the nation carried out this work, with central offices (and files and data processing equipment) located in and around Baltimore, Maryland.6
Figure 5.1
How SSA stored records on individuals prior to 1962. This is only a tiny corner of the vast collection, circa 1960. (Courtesy Social Security Administration)
Figure 5.2
This is SSA’s Visible Index to all the files illustrated in the previous photo; note the vast quantity, circa 1960. (Courtesy Social Security Administration)
Jump ahead twenty years, and the volumes simply grew. For example, in 1980 over 140 million workers were covered, and at the dawn of the new century, in excess of 187 million; these numbers did not include underage children, widows, and others also helped by the SSA.7 The SSA concurrently managed funds contributed toward benefits. In the period 1980 to 2001, these numbers also proved daunting, suggestive of the complexity and volumes involved: old-age funds amounted to some $105 billion in 1980 and, when combined with various other assets, over $1 trillion in 2001.8 Meanwhile, the number of employees, though it grew (and later shrank) over the second half of the century, did not increase as rapidly as the number of people they served. As of 2005, for instance, the SSA employed 65,000 people deployed in ten regional offices, six processing centers, and over 1,300 field offices.9 In short, by any measure, all the volumes involved were large. The SSA was a perfect candidate for using computers from the inception of the technology.

Early Deployment of Computers, 1950s–1960s

Officials had to define what files they would keep and update and had to implement formal processes for handling them. The SSA's core data processing consisted of creating a file for every worker and eligible individual registering for a social security number (SSN), creating a similar file for every employer, and then performing the necessary reporting and recording of wages.
By the time the first computer went "live" at the SSA in 1956, the agency had established approximately 120 million accounts, of which 80 million received some update during the course of a year. Employers provided updated tax information to the Treasury Department each quarter, and Treasury passed that data (which recorded what each employee was paid) over to the SSA.10 That was essentially the process used for decades. Other updates involved requests for benefits filed by individuals, followed by an assessment by SSA employees validating what payments should be made and then authorizing them. Thus, the essential requirements of IT for many decades consisted of the following (a simplified sketch of the underlying processing pattern appears after the list):
• Collecting large quantities of similar data in bulk form
• Storing massive amounts of like information
• Accepting similar updated information on a regular basis
• Producing trend reports
• Providing ad hoc printed reports of individual accounts, and in the 1990s, printing and mailing account information to each owner of a social security number.11
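The pattern implied by this list is the classic sequential master-file update of the tape era: transactions sorted by account number are merged against a sorted master file to produce a new master plus an exception list for clerical review. The following sketch is illustrative only; the record layout, the field names, and the in-memory lists standing in for tape reels are hypothetical, not the SSA's actual formats.

```python
# Illustrative sketch only: a toy sequential master-file update of the kind
# tape-era record keeping typically relied on. Record layouts, field names,
# and the Python lists standing in for tape reels are hypothetical.
from dataclasses import dataclass

@dataclass
class MasterRecord:
    ssn: str            # account key, analogous to a social security number
    total_wages: int    # cumulative posted earnings, in cents

@dataclass
class WageItem:
    ssn: str            # account key reported by the employer
    wages: int          # wages reported for the quarter, in cents

def update_master(old_master, wage_items):
    """Merge sorted wage items into a sorted master file, returning the new
    master plus any items that matched no account (set aside for review)."""
    old_master = sorted(old_master, key=lambda r: r.ssn)
    wage_items = sorted(wage_items, key=lambda t: t.ssn)
    new_master, exceptions = [], []
    i = 0
    for record in old_master:
        # Items keyed below the current account have no matching master record.
        while i < len(wage_items) and wage_items[i].ssn < record.ssn:
            exceptions.append(wage_items[i])
            i += 1
        updated = MasterRecord(record.ssn, record.total_wages)
        while i < len(wage_items) and wage_items[i].ssn == record.ssn:
            updated.total_wages += wage_items[i].wages
            i += 1
        new_master.append(updated)
    exceptions.extend(wage_items[i:])   # items keyed beyond the last account
    return new_master, exceptions

master = [MasterRecord("001-01-0001", 50_000), MasterRecord("001-01-0002", 0)]
items = [WageItem("001-01-0002", 25_000), WageItem("001-01-0099", 10_000)]
new_master, review = update_master(master, items)   # one update, one exception
```

The point of the sketch is the constraint the text emphasizes: plenty of cheap sequential storage, comparatively little calculation, and human review of anything the machine could not match.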
While the preceding list is a highly simplified description of the data processing features involved, these functions mimicked many of those in the Insurance Industry.12 More important, early computers could read punched cards and long tape files, features that were well matched to the needs of the SSA: a great deal of cheap storage capacity and little requirement for complex or extensive calculations. Prior to 1950, computations were done by people in the district offices; the next year, two IBM 604 Electronic Calculating Machines were able to perform some 100 computations per minute. The small size of these machines compared to today's computers demonstrates that data collection and storage were larger applications than the calculations themselves.13 Over the decades, storage continued to improve as the agency went from cards to tape in the 1950s and 1960s, then from tape to disk in the 1960s and 1970s, and from batch systems to a mixture of batch and online applications in the 1960s and 1970s. Beginning in volume in the late 1970s, and increasingly through the 1980s and to the end of the century, the agency went online, distributing processing to its large number of offices.
The SSA's initial foray into computing involved a series of careful analyses of computers to determine how best they might work. As early as 1946, with the availability of the ENIAC, SSA officials began talking to computer designers and engineers. They tracked major developments in computing in the 1940s, both with vendors and with other potential early users in government, such as the Census Bureau and the military, a dialogue that continued into the 1950s. Technology improved to the point that, in September 1954, a task force within the SSA recommended that the agency place an order for a system for the purpose of performing statistical work. It also concluded, however, that the technology had not yet progressed sufficiently, or cost effectively enough, to warrant using digital computers to perform the massive record-keeping functions of the agency.
Most specifically, sorting data with a computer remained too expensive when compared to performing the same functions on tabulating equipment.14 But for statistical work, in-house experts recommended using an IBM 702 system, concluding it could do the data crunching more typical of scientific computing, to which the 702 was suited. The SSA could also use this system to experiment with other applications.15 Then Congress passed legislation calling for changes in the process by which the SSA used earnings to calculate benefits, straining the capabilities of existing tabulating equipment while increasing the need for more file cabinets, cards, and employees. Switching to tape, and away from punched cards, would save space and time. But it was the change in law that became the primary reason for the agency's finally deciding to acquire its first computer, requesting bids from eleven vendors in June 1955. The SSA concluded that "the possibility of recording summary data in magnetic tape appears to be a solution which will give immediate relief to the problem of punch card storage. It also will permit the introduction of electronic processing in the statistical operation and will afford opportunities for extending and improving the recordkeeping system in general."16 These same considerations of space, of handling large volumes of data, and of processing and calculating remained core applications and concerns affecting IT deployments over the next half century.17 Space and speed were not trivial considerations. As one observer noted years after the move to tape: "The information on 60,000 summary cards could be stored on a single reel of magnetic tape 10 inches in diameter. Also during the period 1945–55, the internal speeds of the computer had increased 100,000 times, and storage capacity and reliability had improved 100-fold. The technology the agency needed was in place."18
The SSA ordered the system; it arrived in 1955 and went live in the fall of 1956. Between order and installation, the agency began converting data from cards to tape, while training personnel to operate the equipment and to write software. With workloads increasing in the late 1950s, workers displaced by the computer system were assigned to other departments within the SSA. The amount of floor space needed to house cards shrank. Accuracy in handling and manipulating data began to improve, largely because the 702, and subsequent computers, could check the validity of every number and item as it was handled, spitting out potential errors for humans to assess and correct. In 1958, employees began posting updates directly to magnetic tape, a process performed much faster than with older data processing equipment.19 In 1957, the agency replaced its 702 with larger twin IBM 705s to perform quarterly updating of earnings records, to search accounts for information, to prepare coverage and earnings statements, and to conduct repetitive error-checking operations. In short, once installed, computers were quickly assigned data handling operations central to the work of the SSA. By the end of the decade, additional incremental applications had been developed, such as the preparation and addressing of correspondence to employers. Meanwhile, old accounting machinery began leaving the agency, displaced by computers.
Figure 5.3
The data center where the SSA began storing its records on magnetic tape, circa late 1950s. By the late 1970s, it had more than 500,000 reels of tape. (Courtesy Social Security Administration)
Faster systems came into the SSA in the early 1960s and 1970s, reducing the amount of time it took to perform specific transactions, searches, and sorts, while increasing the capacity to handle larger volumes.20 The agency was benefiting from the rapidly improving capabilities of a technology suited to its needs. As workloads expanded in the 1960s, new systems came and went, but the agency proved able to handle its duties and workloads. GAO, the Congress, and the SSA's senior leadership were pleased with the results.21 Table 5.1 shows the incremental acquisition of applications that directly supported the SSA's mission and made it possible for the data processing staff to do their own internal work. By the mid-1960s, the SSA had implemented various applications in support of eight basic operations: establishment of accounts, maintenance of earnings records, processing of earnings incorrectly or incompletely reported to the SSA, certification of earnings records, maintenance of master beneficiary rolls, certification of benefit payments, preparation of benefit payee records, and maintenance of beneficiary payee rolls.22 An internal assessment of the role of computers painted a positive picture: "The over-all effect of automation on the Social Security Administration has been excellent. The impact of new legislation and growing workloads on Administration personnel requirements has been offset . . . by the use of EDP and . . . the Administration is giving better service."23
The SSA also had to invigorate its telecommunications infrastructure. It had long used telephone services that included voice and, later, teletype transmission of data. But in 1960, the SSA began upgrading its network to handle more electronic transfers of information, leading to its reliance on a network built by AT&T that remained in use until 1966.
Table 5.1 Introduction of Key SSA Applications, 1957–1969
Years of introduction: 1957, 1958, 1960, 1961, 1962, 1963, 1965, 1966, 1967, 1969
Applications:
• Preparation of earnings statements and quarters of coverage statements
• Quarterly up-dating of summary records
• Summary file searching for claims and earnings statements
• Statistical operations
• Preparing and addressing employer correspondence
• File searching for miscellaneous requests
• Earnings report processing
• Establishing and correcting employee records
• Claims control
• Telecommunications processing
• Suspended accounts processing
• Correspondence control
• SSA payroll and leave system
• Bureau of Disability insurance claims control
• Bureau of District Office Operations management control
• Personnel records system
• Selected Claims in Process (SCIP) System
• Preparation of tapes for quarterly up-dating
• Claims and award processing
• Bureau of Disability insurance folder control
• Magnetic tape library control
Source: "History of Data Processing in BDPA," internal report, undated, circa 1969, p. 30, Social Security Administration Archives, Baltimore, Md.
At that time, the SSA's next-generation network, called the Advanced Records System, went live; it made it possible to transmit ten words per minute over telephone lines to a central message center, where employees moved the data to magnetic tape and in turn shipped the tape to the main computer center at the SSA's headquarters in Baltimore. In the 1980s, the SSA replaced this system with an online network (called SSADARS) with which employees could directly access files at the main computer center.24

Crises of the 1970s and 1980s

But as had occurred at the IRS, not all remained well over time. One government report about the SSA's computing problems of the 1970s characterized the period as "a decade of deepening problems," which "became intractable."25
The most likely root of the problem was that between 1972 and 1981 Congress passed fifteen new laws related to social security, which called for the agency to perform new and complex services that in turn required, of course, either changing existing data processing applications or adding new ones, all with no additional employees.26 Compounding matters, Congress gave the SSA progressively less time between the passage of a law and the date it had to provide the new or different service. The inevitable backlogs climbed all through the period as a severe imbalance developed among technical, operational, and human resources in the face of new duties. As at the IRS, rapid turnover in senior leadership, inadequate planning, and insufficient dialogue with Congress and various administrations about what it would take to do new things compounded the problems.
Passage of the Supplemental Security Income (SSI) program in 1972 became the proverbial straw that broke the camel's back, a crisis still discussed by veteran employees at the agency in the 1980s and 1990s. The programs in this law required extensive dialogue between employees and beneficiaries, while the volume of traffic on computer systems choked networks, brought down systems, and created service delays, all damaging to the SSA's reputation as a well-run agency. The complexity of SSI was compounded because the SSA was asked to accept 3 million beneficiaries onto its rolls from some 1,300 state and local agencies, in effect federalizing a great deal of welfare that had been the direct responsibility of the states, with the inevitable variety of programs, data formats, and so forth one could expect from such a mix of agencies. The SSA simply was not prepared to do this well, "not up to the task," as one student of the crisis put it.27 The Office of Technology Assessment (OTA) carefully examined the situation and did not simply blame "new" or "old" computers, concluding instead "that the workload had become too large, too complex, and too dependent on automated processing to be handled by SSA's existing workforce with existing technology. In this situation, every addition to the workload became a potential crisis."28 The GAO also looked at the situation and reached similar conclusions, adding that the agency had sufficient computing power during the early stages of the crisis: "Before January 1977, Social Security was processing most of the workload for its major programs on 17 large-scale computer systems," and the "systems were capable of supporting more than twice the largest identifiable workload processed."29 Added to that one program were other laws that called for such actions as providing citizens with automatic cost-of-living adjustments, the Reagan debt collection initiative of 1981, and the Omnibus Reconciliation Act of the same year. One can begin to understand the crisis enveloping the SSA. Every law required changes to existing practices, policies, and software.30 An underlying managerial practice also contributed to the problem: the rapid conversion of hundreds of programs on the fly, with little or no documentation of changes, meant that as data processing personnel turned over within the SSA, what the digital systems looked like became less clear to the remaining technical staff.
In turn, that made it more difficult to fix software problems or to know what to change, and how, as new laws went into effect. OTA criticized SSA's management for not planning changes and for not having sound development plans for IT.31 The net result was a collective near-meltdown of the agency in the late 1970s and early 1980s, even though the SSA had seventy-six major computer applications; all seemed to be degrading with age and with changing technologies and requirements. Here is how OTA described the consequences: "During the 1960s and 1970s, SSA was progressively less able to respond to congressional mandates without Herculean efforts, resulting in large backlogs, high error rates, deteriorating cost-effectiveness, and worsening workplace conditions."32
The agency responded in 1982, however, with a strategy for fixing its problems, called the Systems Modernization Plan (SMP). The SSA proposed to create new software tools in support of its mission and staff; databases would be integrated, and both telecommunications and computers would be upgraded to handle new functions and larger volumes of transactions. The SSA wanted to do this over five years at a cost of some $479 million. Meanwhile, OMB proposed that, in the process, the SSA reduce its workforce by 17,000 employees, reflecting the increase in worker productivity expected from the new systems.33 For all intents and purposes, the SSA was able to implement its plan, despite much criticism and skepticism, while moving many systems from batch to online and converting additional paper files to digital form. It also installed toll-free 800 numbers for the public and better telecommunications for staff.34
By the early 1990s, one could see results. For example, an individual could get a social security card in ten days, unlike the six weeks it took in 1982. The issuance of social security numbers, while it may seem a small matter, was actually complex and important, so speeding it up proved crucial to improving services. One could not accrue benefits or receive payments without such a number, and over the years increasing percentages of all age groups acquired one. By the mid-1980s, the SSA had issued nearly 292 million numbers and had a potential 700 million left in its "inventory" that it could issue. From the 1960s through the 1990s, it was not uncommon for the agency to issue between roughly four and seven million numbers per year. By the mid-1980s, the process had become essentially fully automated.35 Emergency payments could be made in five days rather than in fifteen, as occurred in 1982. In 1992, it took only twenty-four hours to process an annual cost-of-living increase, while ten years earlier it had required six weeks.36 SMP cost over $4 billion to implement, while all systems together cost the SSA $400 million per year to operate.37
Within the agency, much was done to improve digital support of the organization. A careful reading of the agency's own internal newsletter documents many IT activities in the late 1980s and early 1990s designed to rescue the agency. These included modernizing and rewriting applications, installing terminals and networks, and replacing old hardware. Files moved from tape to disk, minicomputers moved into field offices, and massive numbers of terminals spread across the agency.
The effort mimicked in scope what the SSA and even the IRS had done to implement their original uses of the digital hand in the 1950s and 1960s.38 Telecommunications expanded to put most employees on a modern network, costing an additional $122 million to operate in 1992. The network was handling some 11 million transactions each day. So one can conclude that by the early 1990s, after more than a decade of extensive investments of effort, resources, and time, the crisis of the 1970s had been overcome.
The mountain the SSA's management had to climb was made steeper by the fact that they had to continue providing services to the public in conformance with the law. A few numbers suggest what they faced in 1982. That year the SSA maintained 240 million individual records of active accounts, which included every holder of a social security number working or receiving benefits. It paid out some $170 billion in benefits annually to over 50 million individuals. It issued some 10 million new social security numbers and posted 380 million wage items reported by employers. It received and had to process 7.5 million new claims applications and an additional 19 million postadjudicative transactions, which included recalculating 2.5 million existing cases. Finally, it had to process some 120 million bills and inquiries from a myriad of health insurance intermediaries, providers, and so forth, all the while managing a workforce of some 86,000 scattered across nearly 1,400 offices.39 Just reading this paragraph is exhausting.
The price for relative success was high as well. Employee morale sank as staff saw that new systems were cost-justified by the elimination of their jobs, which they resisted. All the while, the number of residents of the United States needing the SSA's services increased, and Congress expanded offerings to the public. The professional leadership and long experience of senior staff prior to the 1970s gave way to the kind of turnover in leadership evident at DoD and at the IRS, with a continuing decline in expertise concerning computing at the most senior levels of the agency. It seemed each new head of the SSA wanted their own reorganization, although one positive trend was the SSA's tendency to evolve into functional structures rather than remain organized along programmatic lines. That trend made it possible to align the public's needs with the evolving capabilities of computers (such as integrated databases making possible case management approaches).40 Even outsourcing work to vendors proved problematic. Space does not permit a review of the story of how Paradyne Corporation and the SSA became involved in one of the largest civilian government upgrades in IT up to the early 1980s, but the saga resulted in failed systems, lawsuits, and what even the normally sympathetic employees at OTA characterized as "a management disaster."41
The rescue of the SSA and its systems should not be discounted; it was a massive effort. Just looking at the technical side, one senses the magnitude involved. OTA documented what had to be changed on the software and application side of the agency:
There were in 1982 some 12 million lines of poorly written and undocumented program code. There were about 6,000 COBOL programs, 1,500 assembly language code programs, and another 1,000 miscellaneous programs. Over the years SSA had translated old manual procedures into software using now outdated programming languages, and then, converted them line by line to COBOL, preserving the inefficiencies of the older technology. The old code is being cleaned up and rewritten as it is needed, according to SSA.42
In the mid-1980s, OTA was not blind, however, to the work yet to be done, such as designing a comprehensive, modern database management architecture and then implementing it.43 It did not help that SSA technicians had decided years earlier, when moving files from tape to disk in the 1960s and 1970s, to write their own software access method (the Master Data Access Method, or MADAM) instead of using an off-the-shelf product that a vendor, rather than the SSA, would maintain. When not well cared for, MADAM put the agency at risk of running systems incompatible with newer software and hardware.44

IT in a Networked Age, Late 1980s–2007

In the 1980s, the number of citizens continued to increase, while the size of the agency's staff actually shrank, owing to various federal cost-cutting measures. Increasingly, the SSA's management looked to technology to bridge the gap, as it had in earlier times. But this continued to be a Herculean struggle that extended right into the last decade of the century. In 1994, one assessment of the agency described the situation as nearly precarious: "An ever-increasing workload, combined with staff reductions, threatens SSA's ability to meet congressional and public expectations for service delivery. The agency's toll-free 800 telephone numbers are severely overloaded during peak periods, for example, and its Disability Insurance benefits program is in serious distress with a large backlog and long processing delays."45 As to the importance of the digital hand at this time, the same report opined that "information technology is essential to SSA in carrying out its mission. Indeed, SSA would literally collapse without the use of computers and telecommunications."46 In no other GAO or OTA report of the past half century had any auditor or expert made such an emphatic statement about the importance of IT to a federal agency. All the other agencies and departments assumed that somehow they could do "workarounds," throwing additional manual labor at a crisis, as happened on a few occasions at the IRS and repeatedly in the military with logistics, usually during wartime.
During the 1980s, the SSA deployed the SMP strategy, which led to dramatic reductions in the time required to serve the public and to handle growing volumes of work per employee. But for all its success, the problems of volumes and staffing remained, as did congressional changes in benefits programs. Several data points suggest the immediate problem: in 1984, the SSA had nearly 80,000 employees, a decade later only 63,000. The Disability Insurance program, the most labor-intensive for the SSA to manage, grew from 1.7 million claims in 1990 to 2.5 million in 1993.47
In 1991, management had begun developing a new strategy for using IT, called the IWS/LAN Technology Program, which stood for "intelligent work station (IWS) and local area network (LAN)."48 It called for decentralizing work by beefing up the SSA's telecommunications infrastructure and deploying applications out to employees in field and regional offices, using over 40,000 intelligent terminals. To a large extent, this strategy reflected what was occurring in the late 1980s and early 1990s all across the American economy as companies and public institutions shifted digital applications to where people worked and away from large centralized data centers. The SSA redesigned mainframe systems and rewrote older application software. New applications also included further conversion of thousands of pages of regulations from paper manuals to online sources, to reduce the "sea of paper" employees worked with across the nation.49 Enhancing client files online would also make it easier for employees to work with individuals requesting benefits, quickly bringing an SSA official the information needed to adjudicate a request. Word processing would improve communications while integrating files that had remained stand-alone, despite work done to address that issue in the 1970s and 1980s. All were elements of the new plan. As the SSA implemented it in the 1990s and early 2000s, paper in hundreds of offices shrank in volume, more ergonomic workspaces were created, and dumb terminals were replaced first by intelligent CRTs, then by personal computers, increasingly linked into telecommunications networks both locally and nationally. While the SSA narrowly saw this plan as an infrastructure project, it did result in changes in the way work was done as well. For example, the SSA continued to expand its use of toll-free 800 telephone numbers in the 1980s and 1990s. It implemented electronic data interchange for businesses to file earnings reports, and it introduced direct electronic deposit of benefit payments. Throughout the 1990s, the volume of transactions going through all three applications grew steadily.50
The three key benefits programs managed by the SSA underwent additional automation during the 1980s and 1990s. These included the oldest program, Old Age and Survivors Insurance, followed by the highly complex set of benefits known as Disability Insurance51 and by Supplemental Security Income. Of the three, the pension program was the most automated, while the processes underpinning disability benefits remained the least automated as late as the mid-1990s. At the time, however, core applications for all processes remained highly centralized on older systems and applications, although that continued to change all during the 1990s and early 2000s. Basic claims-taking processes and individual files had been largely automated (particularly for the pension programs, but not yet for disability) or partially integrated within benefits programs but not across programs. The SSA relied on largely improved and modernized systems implemented in the 1980s, now accessible through intelligent terminals and an increasing number of PCs.
The SSA became one of the first federal agencies to launch a Web site to serve the general public. On May 17, 1994, it launched SSA Online on the World Wide Web at http://www.ssa.gov.
In 1996, the SSA awarded Unisys Corporation the agency's first contract to implement the IWS/LAN plan. Meanwhile, the SSA continued to add information to its Web site and experimented with various applications, such as providing online personal earnings and benefit estimate statements, an application that went up and down on the Web site over the next several years as the agency dealt with concerns regarding privacy and data security over the Net. In 1998, the SSA launched a pilot service that made it possible for people to apply for retirement or survivors benefits over its 800 numbers, offering an alternative to the earlier processes of filing forms or visiting an SSA office. Direct deposit payments continued to rise, reaching roughly 75 percent of all payments made by early 1999; the number was not higher because not all beneficiaries had bank accounts.52 By the middle of that year, the SSA had installed 75,000 new workstations and had created 1,742 LANs in either its offices or those of state agencies with which it worked. Meanwhile, the SSA's Web site began receiving positive reviews from IT industry publications, no doubt encouraging management to rely increasingly on this channel of communications with its clients, a pleasant trend that continued right into the new century.53 Table 5.2 lists key applications added to its Web site in the 1990s.
The agency's intranet made available to employees a substantial amount of information to help them with their daily work. It also provided e-mail and a growing body of electronic versions of the SSA's forms, which employees, and later the public (via the Internet), could use to file and change claims.
Table 5.2 Key SSA Applications on the Internet and Intranet, 1994–2003
Years of introduction: 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2002, 2003
Applications:
• First Internet site launched (http://www.ssa.gov) to provide public information
• SSA adds information about programs to its Web site
• SSA launches Online Personal Earnings and Benefit Estimate Statement (PEBES)
• SSA temporarily suspends online PEBES due to privacy and security concerns
• SSA launches Digital Library for staff use; SSA first government agency to be Y2K compliant
• SSA completes first phase of IWS/LAN project with over 75,000 workstations installed and 1,742 LANs used by SSA and state DDS facilities
• SSA launches Electronic Newsletter (eNews) for public to get SSA news
• SSA launches electronic Retirement Planner
• SSA makes it possible for Medicare beneficiaries to apply for replacement cards
• SSA places Forms Repository on its Intranet site
• SSA introduces eVital to provide immediate online verification of birth and death data to speed up processing of claims and other services
• One millionth response to Internet-received inquiries achieved since 1996
Source: Social Security Administration, "History of SSA 1993–2000," undated, http://www.ssa.gov/history/ssa/ssa2000exhibit1-1.html (last accessed 9/10/2005); Larry DeWitt, Historian, U.S. Social Security Administration.
Like other agencies during the Clinton Administration, the SSA was required to reduce paperwork, and in response it moved many of its existing paper files to online versions, altering work processes to integrate the online variants more fully only in subsequent years. Applications internal to the SSA were also automated and enhanced, such as an integrated human resources system that used off-the-shelf software to process various personnel actions. More mundane activities were supported by specialized automated systems: ordering supplies, processing purchase orders, and supporting various electronic commerce activities (for example, printing forms, creating labels, and maintaining bidders' mailing lists).54
The SSA's budgetary commitments to IT had always been high, but the innovations of the 1990s and early 2000s continued to drive up expenditures, making this agency a major consumer of IT. In 2001, for example, the SSA spent over $740 million on IT, supporting 548 operational systems and projects (70 percent of expenditures) and an additional 265 systems it acquired or developed that year (30 percent of expenditures).55 Since many of the projects under way in 2001 would become major systems in widespread use long after this chapter was written, they are worth cataloguing (see table 5.3).56 In addition, the SSA employed over 5,000 workers directly in its IT operations.57
The SSA entered the new century with Congress, OMB, and the GAO more pleased with its performance than had been the case a decade earlier. It continued to worry about the management of its disability programs, since the American population was aging and thus would file more claims in future years, and it wrestled with a variety of planning and research initiatives, all quite benign when compared to the crises of the 1970s and 1980s. In the early years of the new century, the SSA continued to enhance its disability software with an electronic claims intake process for use by its field offices.
Table 5.3 Major IT Projects at the Social Security Administration, circa 2001
• Financial Accounting System (FACTS)
• Managerial Cost Accounting System (MCAS)
• New national 800 number call center applications
• Talking and listening to customers
• Title II system redesign
• Electronic Service Delivery (ESD)
• Internet customer services
• Paperless program service centers
• Electronic wage reporting system
• Security infrastructure and operations support
Source: General Accounting Office, Information Technology Management: Social Security Administration Practices Can Be Improved, GAO-01-961 (Washington, D.C.: U.S. Government Printing Office, August 2001): 11.
It also began implementing a related Internet-based process for people and health providers to send medical information to the SSA, while making it possible for retirees to apply for benefits over the Internet.58 The GAO, famously critical of the operations of federal government agencies, nevertheless expressed favorable opinions of the SSA in the early 2000s. In one assessment of its operations, reported in January 2003, GAO auditors stated: "Our evaluation of SSA's information technology policies, procedures, and practices in five key areas—investment management, enterprise architecture, software development and acquisition, human capital, and information security—found that the agency has many important information technology management policies and procedures in place."59 To be sure, it worried about identity theft over the Internet, since social security numbers "are widely found in public records" and increasingly on the Web.60 So, as with all other federal, state, and local agencies using computers, data confidentiality remained an important issue for the SSA in the late 1990s and early 2000s.
As of 2007, when this chapter was completed, the SSA managed a variety of applications and processes that were relatively stable and effective when compared to those of prior decades. Its technical infrastructure was reasonably modern and better managed than it had been since the 1960s. In late 2005, the SSA operated over 1,500 offices, while in excess of 158 million individuals were paying social security taxes and 48 million individuals received payments exceeding $490 billion. It employed 65,000 people and worked with an additional 16,000 state employees. The SSA was finally creating an online Electronic Disability (eDib) process that promised to liberate the agency from vast quantities of paper folders. Its human resources systems continued to expand, while management began using new reports on productivity and the allocation of resources. In 2003, the SSA launched a newly designed accounting system, the Social Security Online Accounting and Reporting Systems (SSOARS), now the agency's accounting system of record; with additional modules added in 2004 and 2005, it had become a major management tool by 2006.61 The SSA also had one of the most visited Web sites in the federal government in each of the early years of the new century, and 2005 proved no exception, well on its way to exceeding the 35 million visits of 2004. As the agency reported to its constituents and employees in 2005:
Today, people can apply for retirement, disability and spouse's benefits over the Internet, and use Social Security's benefit planners to help determine what benefits they and their families would be eligible for. Services for current beneficiaries include: change of address, direct deposit, replacement Medicare card, replacement 1099, and proof of income letter request. Cumulatively, these and all other online services handled over 1.2 million transactions in FY 2004, a 225 percent increase over FY 2002.62
A similar tale could be told about internal administrative reporting and operations, many now handled through the SSA's intranet, including such services as the retirement and disability claims processes.
The SSA's newest database (eDib) worked alongside such other systems as the new Case Processing and Management System (CPMS), which could access eDib's database but also produced a variety of operational and tracking reports.63 New functions made possible by technological innovations led to new applications just appearing in federal agencies in the early years of the new century. These included online (electronic) signatures, enabled by legislation signed by President Clinton; in 2004, the SSA implemented such a signature proxy for claimants filing in person, online, or by telephone. In short, many of the agency's IT enhancements focused on expanding use of the Internet by the public (citizens and employers, each of whom used different applications) and of its intranet site by employees and other public officials.64
One IT application became a hugely popular service, one based largely on old-fashioned batch processing, printing, and mailing in paper form. Called the Social Security Statement, it is a printed document mailed to over 140 million workers each year that explains the SSA's offerings, tells individuals how much they have paid into the system, and estimates what they might expect to receive in benefits upon retirement. The SSA uses the occasion to provide other information to these workers as well. As with all processes at the SSA, this one, too, involves high volumes. In the several years the service had been operating (through mid-2005), the SSA mailed out over 780 million automated statements, roughly 500,000 each day, or approximately 125 million statements a year. The process is essentially completely automated.
Bureau of the Census

The federal government currently has some 100 agencies whose mission is to collect data on a vast array of issues, ranging from economic activity and population to agricultural productivity and health. Many of these agencies are large and critical to the functioning of various other governmental bodies. These statistical agencies include the Bureau of Justice Statistics, the Bureau of Labor Statistics, and the Centers for Disease Control and Prevention.65 All federal departments also collect statistics. But the most senior and best known of these agencies is the Bureau of the Census, whose mission the founding fathers of the American government embedded in the Constitution in the eighteenth century. In the twentieth century, statistical agencies as a group advanced the field of statistics, were major supporters of the use of information technologies of all kinds, even participated in the early development of punched-card tabulating equipment and later computers (most notably at the Census Bureau), and shared innovations among themselves. In short, they represent a subculture within the government that, while highly fragmented, interacted extensively and that, as a group, has hardly been studied by historians and others interested in public administration.66
Throughout the twentieth century, the Bureau of the Census was responsible for conducting various censuses and disseminating the results of that work.
These collections of data included population counts every decade and intermittently within decades, others related to economic and social issues, and the publication of findings and data in a variety of ways: printed reports and, later, large digital files accessible to researchers, public officials, and, ultimately, citizens. So there was a constant need for tools to collect, analyze, and publish data, and always to do so both cost effectively and in a timely fashion. It was thus no accident that when computers were first being secretly developed during World War II, officials from the Bureau of the Census became involved in exploring their possibilities. As historian Margo J. Anderson observed, "statisticians saw that the savings from such technological innovations would enable the bureau to process more data, at a lower cost, and with more accuracy." But, and this is key to understanding how work would change over time, she also pointed out that "the savings could also be plowed back into further improvements in the statistics—to improve survey methods, to analyze errors, to try experimental new techniques."67
Between the end of the census of 1940 and that of 1950, the bureau first encountered computers. As early as October 1944, John Mauchly (of the John Presper Eckert and John Mauchly team that later built the first UNIVAC computer) met with William Madow of the bureau to discuss the potential uses of computing to speed up the sorting and tabulating of census data, discussions that continued over the next couple of years. These conversations influenced Mauchly and others in his firm in the design of their new system, focusing particularly on how to handle massive quantities of data that had to be processed in a repeatable fashion. In the late 1940s, the bureau partially funded R&D on the new system, and in 1948 the government placed an order for one UNIVAC on behalf of the bureau. On March 30, 1951, the bureau took delivery of the first UNIVAC I, Serial Number 1. The immediate importance of this event lay more in its effect on computing in general than on the bureau, since the machine was used for only a few calculations involving the census of 1950. More important, the construction and successful operation of the system made it possible for Eckert and Mauchly to demonstrate the viability of computers and helped launch the commercial computer market. The UNIVAC I became the poster child of the new technology during the early 1950s.68 Because of the ongoing conversations with its developers dating back over half a decade, officials at the bureau had become quite familiar with computing technology, often serving as experts to other agencies in the late 1940s and early 1950s.
As for the census of 1950, the system's construction was completed too late for it to play a central role. That census took place in essentially the same manner as prior ones, with data collected, edited, coded, punched, and tabulated using some 6,000 older machines, ranging from card punches to tabulators. For that census, the bureau created nearly 200 million card records; just punching holes in the cards required about 114,000 man-days.69 The bureau's own internal history of that census recorded that while the UNIVAC "was the most far-reaching innovation in automatic tabulating equipment" of the day, it "did not become available until late in the tabulating program and was used for only a small part of the population and housing tabulations," by moving data from cards to tape that could then be sorted using software.
Initial use proved effective in reducing the number of times data had to be read in order to be sorted into various categories of information. Officials concluded that "with machines like the Univac, future censuses should be processed with considerably greater speed."70 The key technological innovation that caught the attention of statisticians was the machine's ability to work with large quantities of data on magnetic tape. While the UNIVAC has received much attention in the historical literature, in the run-up to the 1950 census the bureau and its suppliers of IT (most notably IBM) also continued to improve existing precomputer technologies, as they had done in preparation for prior censuses.71
The biggest data processing problem the bureau faced concerned data collection. It was too slow, too expensive, and required too many people; management had to automate the process as much as possible. Conversations in the 1940s and 1950s with the National Bureau of Standards, Mauchly, IBM, and others dominated much effort at the bureau. By the early 1950s, however, the possibilities of optical sensing and microfilm offered a way out and led to the development of Film Optical Sensing Devices for Input to Computers, better known as FOSDIC. A beam of light directed at microfilm created an image of a spot on the film, which could be read by a photoelectric cell connected to a device that "wrote" the pulses (dots) onto magnetic tape. That tape then served as input into the computer. Early versions of FOSDIC I could handle 2,000 spots per second, and later models up to ten times as many by the 1970s, all very accurately. This new way of collecting data first went into use at the bureau in 1954, and over the next two decades improved models and approaches were developed, such as FOSDIC II, which went into service in 1957, FOSDIC III in the 1960s, and later FOSDIC IV.
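In essence, FOSDIC converted marks detected on microfilmed questionnaires into machine-readable records. The sketch below is a toy illustration of that mark-to-data step, not the bureau's actual logic; the questionnaire layout, the answer codes, and the flagging of ambiguous rows for clerical review are assumptions made for the example.

```python
# Illustrative sketch only: a toy version of the mark-sensing step that FOSDIC
# mechanized. The layout (one row of answer positions per question) and the
# review flag are invented for illustration.
def decode_questionnaire(mark_grid):
    """Turn a grid of detected marks (True = filled position) into answer
    codes, flagging rows with no mark or multiple marks for clerical review."""
    answers = []
    for question, row in enumerate(mark_grid):
        filled = [pos for pos, marked in enumerate(row) if marked]
        if len(filled) == 1:
            answers.append((question, filled[0]))   # clean, usable response
        else:
            answers.append((question, None))        # ambiguous: send to review
    return answers

# Example: three questions, five answer positions each.
sheet = [
    [False, True, False, False, False],    # question 0: position 1 marked
    [False, False, False, True, False],    # question 1: position 3 marked
    [True, False, True, False, False],     # question 2: double mark -> review
]
print(decode_questionnaire(sheet))         # [(0, 1), (1, 3), (2, None)]
```

What the real hardware added was scale: reading those spots photoelectrically from microfilm at thousands of positions per second and writing the results straight to tape.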
Figure 5.4
First FOSDIC system at the Census Bureau at Jeffersonville, Indiana, circa 1960. (Courtesy Census Bureau)
These classes of machines were used for the 1960, 1970, 1980, and 1990 censuses. For decades, therefore, the combination of automatic microfilming and digital computing was used to pull ever-increasing amounts of data into computers at faster speeds.72
By the time of the census of 1960, the bureau had made strides in data entry (FOSDIC) and, of course, now had in-house experience with digital computers.73 The number of people living in the United States had increased to 180 million, in 60 million homes. Bureau employees, called enumerators, visited these homes to collect the necessary data. Improved processes for handling the data, and now greater reliance on computing to tabulate information, meant that initial reports to the nation on the size of its population were published six months sooner than in 1950, and final reports of the census more than a year earlier than in the past. Enumerators used machine-readable forms and the bureau's four FOSDIC systems. The data were then tabulated using computers. Much of the work was centralized for the nation at a facility in Jeffersonville, Indiana, where clerks checked the work of the enumerators for accuracy and the like. Two UNIVAC 1105 systems had replaced earlier equipment, with magnetic tape serving both as input and as output that could feed printers producing final reports; other 1105s were located at the University of North Carolina at Chapel Hill and at the Armour Research Foundation at the Illinois Institute of Technology in Chicago.74
The 1960s brought a huge surge of advances in computing, symbolized by the IBM S/360. Disk memory became widely available as an alternative to cards, microfilm, and magnetic tape. Machines became faster and capable of handling larger volumes of transactions and data, but the bureau did not fully exploit these new capabilities, other than for handling larger volumes of data and performing some new data mining functions. Many of the systems needed for the next census had been put in place as early as 1962–1965, just before the arrival of the major technical innovations of the decade and too late for those innovations to be fully integrated into the next great count. As with earlier censuses, the population had grown, and the volume of information to be collected on each person had as well, for a total of over 4 billion pieces of information. The bureau wanted to make a great deal of this raw data available in digital form to officials and scholars. Innovations in software and programming languages also made possible new capabilities that the bureau was able to take advantage of with existing hardware. One reporter wrote in 1970 that "the improved utility of the census is due mainly to development of new software which can reformat and re-aggregate census statistics in ways specifically desired by the end user."75 Just as important, users could relate census data to random lists of people, which allowed marketing managers, for example, to estimate the incomes of people in specific locations. Four UNIVAC 1107s and two 1108s stood ready to process that year's data, relying on six newer models of the FOSDIC input systems and reducing the nation's entire census to some 8,000 tapes. In 1970, for the first time, most of the questionnaires were mailed to households rather than relying so greatly on human enumerators to collect all the data.
The forms were machine-readable, and because more data were collected, there were more machine-readable statistics. In 1960, only a few digital statistical files had been made available to the public, but in 1970, data on all questions asked were in machine-readable form. This represented a massive increase in statistical data on the nation. In response to this capability, some sixty firms came into existence to analyze the information for companies, such as marketing departments and insurance companies. Census officials also expanded their software protections for data confidentiality (required by law), which had become a matter of much concern, beginning in the 1960s, with the growing dependence on computers.76 For 1970, the bureau added questions and did sample surveys on employment, inventories, and residential finances, all part of the growing trend among statistical agencies of collecting increasing amounts of data about the nation. The bureau also planned to produce a much larger array of demographic maps, a process initiated with computing in the 1960 count, although on a far more tentative basis at that time. Final reports totaled some 200,000 printed pages in paper and microfilm and some 117 printed reports. The data processing applications built on those deployed in the early 1960s and accounted for the experience gained with the mail-out/mail-back census returns, for the requirement to present more information, and for the expanded publication of maps. Calculations and publications were computerized. UNIVAC 1100s worked alongside two now-aging IBM 1401s. The bureau had yet to take full advantage of disk storage, and it continued to rely on FOSDIC-based processing of input. Yet once again, the bureau had deployed the largest single use of computing to collect data in the history of the nation.77
While special interest groups and politicians debated whether or not the Census Bureau had undercounted the population, particularly minorities, in the 1960s and 1970s, deployment of computing for the 1980 count progressed. As the internal record of that census reported, "the 1980 program resembled the one for 1970 in scope, but with far greater emphasis on disseminating data on computer tapes and microfiche."78 The volume of machine-readable reports published increased fivefold over 1970's.79 Because the bureau began planning the next census while tabulating the last one (a pattern of dual operations dating back to at least the early 1960s), the 1990 count became a major project during the 1980s. Continuing questions about undercounting had led the bureau, beginning in the 1960s, to consider various techniques for statistically estimating populations, an issue that became urgent by the early 1970s. Being able to use continuously emerging statistical techniques, and obviously more powerful computers, to perform the necessary calculations on increasing amounts of digitized census data influenced the technical discussions among experts and the bureau.80 Driving the work of the bureau from one census to another was the growing appetite of public officials and the private sector for more and better (also more accurate) data, fed by the succession of additional capabilities the bureau acquired over the years by using computers.81 The workload the bureau experienced in the 1980 count proved higher than originally planned; indeed, it counted over 5 million more people than anticipated and actually ran out of budget, which Congress fixed with a supplemental appropriation.
In the postcensus dialogue within the bureau, employees spent considerable time discussing the still labor-intensive data capture processes, particularly the manual field data collection tasks. One internal report, written in 1984 about these activities during the census taking of 1980, noted that "55,000 clerks checked-in, checked-over, and hand-sorted into batches all of the 88 million decennial census documents." In addition, "this was a time-consuming, expensive, non-uniform, and uncontrollable operation," made worse by the fact that the census "employed over a quarter of a million people."82 To avoid that and other problems from the prior census, for the 1990 census the bureau chose to implement a strategy similar to the SSA's, namely, to increase the automation of its field operations. In January 1986, the bureau decided to acquire roughly 555 minicomputers to beef up its processing power. These were to be used to automate the check-in process by which questionnaires were received and checked for completeness, obvious errors, and so forth, much as the IRS began doing with electronically submitted returns later in the century. The machines would also be used to prepare maps for enumerators and to automate further steps in the preparation of various censuses, including economic ones. Conflicts over bid proposals, however, delayed procurement of these machines, putting the bureau at risk of not being able to test software and processes in its dress rehearsals for the 1990 census. But a smaller number were installed and used. In addition, more digitized data were published, made available on CD-ROMs.83 Preparation and publication of statistical tables were significantly more automated than in prior years, while earlier software for tabulating results was updated to reflect the current questions asked of the population at large and to support various statistical studies under way. Finally, the bureau expanded a service, begun in the early 1980s, of making some of its published reports available online through time-sharing services. This service proved highly successful, growing from a few thousand users in the early 1980s to some 50,000 downloading 396,000 files (largely onto PCs) in 1992.84
Census maps had long been popular, and in the 1980s and 1990s the bureau expended much effort automating their creation and publication. For 1990, the bureau wanted to create "a single, nationwide, digital geographic and cartographic data base from which to produce all the required geographic products and with which to perform the geographic services of geocoding," an impressive project.85 Working with the U.S. Geological Survey (USGS), the two agencies created the Topologically Integrated Geographic Encoding and Referencing (TIGER) System, which was used as part of the censuses of 1990 and 2000. All maps could be generated by computer, using data provided by the USGS. TIGER covered the entire nation, making this one of the largest GIS applications implemented in the United States. The system could produce over 146,000 different maps, containing data on roads, railroads, hydrography, and various transportation features, all integrated into specific local, regional, and national maps.86
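Geocoding against a street-segment file of this kind is conceptually straightforward: each segment carries a street name, an address range, and endpoint coordinates, and a house number is placed along the segment by interpolation. The sketch below is illustrative only; the segment records and coordinates are invented, and real TIGER/Line records carry far more attributes than this.

```python
# Illustrative sketch only: address-range geocoding of the kind a TIGER-style
# street-segment file supports. The segment data and coordinates are invented.
def geocode(street, number, segments):
    """Place a house number along a matching street segment by linear
    interpolation between the segment's endpoints, using its address range."""
    for seg in segments:
        if seg["street"] == street and seg["lo"] <= number <= seg["hi"]:
            span = seg["hi"] - seg["lo"]
            t = 0.0 if span == 0 else (number - seg["lo"]) / span
            (x1, y1), (x2, y2) = seg["start"], seg["end"]
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None   # no segment covers this address

segments = [
    {"street": "MAIN ST", "lo": 100, "hi": 198,
     "start": (-76.62, 39.29), "end": (-76.61, 39.29)},
]
print(geocode("MAIN ST", 150, segments))   # roughly halfway along the block
```

The same segment records, assembled nationwide and kept topologically consistent, are what allowed a single database to drive both geocoding and automated map production.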
support tracking and controlling data collected during the 1990 and future censuses. It included bar coding of census questionnaires, which tied the data in those documents to the new system.87 For conducting the census of 2000, the bureau used a combination of preexisting and new digital applications conjoined for that year’s data collection. The heart of the IT infrastructure consisted of ten major systems, listed in table 5.4. These did not include the internal operational systems required by the federal government for budget, payroll, and human resources. The key
Figure 5.5
By the end of the century, all data entry was online, circa 2000, Jeffersonville, Indiana. (Courtesy Census Bureau)

Table 5.4 Key Digital Census Applications, 2001

Headquarters Processing
Data Capture System
Geographical Support System
Operations Control System
Pre-Appointment Management System/Automated Decennial Administrative Management System
Telephone Questionnaire Assistance and Coverage Edit Follow-Up
Internet Data Collection/Internet Questionnaire Assistance
Accuracy and Coverage Evaluation System
Management Information System
Data Access and Dissemination System

Source: U.S. General Accounting Office, 2000 Census: Headquarters Processing System Status and Risks (Washington, D.C.: U.S. General Accounting Office, October 2000): 16–17.
observation to draw from table 5.4 is that all these systems had to operate interactively with each other to allow the bureau to count the nation. Each system also had subapplications; for example, the headquarters processing system had forty-eight applications that did such things as update address files, create files of census responses, and prepare data for tabulation and dissemination.88 The GAO observed that, as in prior censuses, there were issues concerning confidentiality of data, accuracy of information, and insufficient staff to handle complex systems, concerns it had with other government agencies as well. In 2005, when the bureau was already gearing up for the census of 2010, 10 percent of its permanent workforce worked in IT (1,100 employees out of 12,000). In addition, it already had 500 consultants working on IT projects.89 In preparation for the 2010 census and other studies, it upgraded existing applications and invested in new systems. New applications of the digital included the American Community Survey, a new topologically integrated master address file, an automated export trade statistics system, data access and dissemination systems, demographic statistics IT support systems, economic census support tools, e-filing of data, field support, and additional software for its geographic support system.90 In short, IT remained a major focus for this bureau as it entered the new century.
United States Postal Service

Today this organization is called the U.S. Postal Service (USPS), a government agency structured as a quasi-independent organization since 1971, governed by a board of governors appointed by the president, with its mission articulated by various federal laws. Between the 1790s and the 1820s, it was called the General Post Office; afterward, until 1970, it was the Post Office Department (POD), which became a cabinet-level department in 1872. While those who have written about the postal system differentiate the subject as pre- and postreorganization, when viewed from the perspective of how IT affected the work of the POD or USPS, the differences are less obvious, because the adoption of technology was a long, slow process that caused the work of both organizations to evolve over time in incremental ways. A more useful chronological distinction, one that accurately reflects the changing nature of postal services, would be either of two other dates. The first could be some time in the 1980s, when competition from the private sector became legally possible and pronounced (such as the services of UPS, Federal Express, and others) and cut deeply into its rapid delivery (e.g., overnight mail) services and delivery of packages. The second date could be one after the wide adoption of the Internet in the second half of the 1990s, because this new use of IT challenged the need for First Class Mail service, which provided the largest single source of revenue for the USPS. It was also the one technology that finally called into question whether the USPS either needed to be completely privatized or would simply disappear at some time during the full flowering of the Information Age deep in the twenty-first century, a theme hinted at by the
epigraph introducing this chapter. The discussion below does not focus extensively on the increased role of competition because that had less to do with technology and its effect on the nature of work at the USPS than it did with regulatory changes making alternative service providers possible. The Internet, however, has to be faced squarely. From its earliest days, when Benjamin Franklin was its first postmaster general, the Post Office had as its core mission to provide universal access to information through postal delivery of mail and newspapers, and later other publications, financial documents, and packages as well. It was the nation’s first “information highway.” Throughout the twentieth century, Congress added the requirement that the department (and later the USPS) be financially self-sustaining, that is to say, be able to finance its operating costs with the fees collected for its services. Congress also established ground rules for how those charges could be set and changed. The POD and the USPS have more often than not been unable to be financially self-sustaining, for many reasons we cannot get into here, but the point is important to note because all through the second half of the twentieth century the organization sought to use mechanization of mail collection, sortation, and delivery as an essential way of lowering operating expenses, and computers to reduce operating costs and to sustain high levels of service. For many postmasters general of the late twentieth century, mechanization and automation were some of the very few options available to their organization for controlling expenses. Reducing salaries and benefits—the lion’s share of their expenses—or the number of employees, an option often chosen by executives in private sector service companies, proved very difficult to exercise, due largely to the effective prowess of the postal unions, which assiduously protected jobs and pushed through significant increases in compensation, benefits, and work rules, even launching a national strike when needed, an activist agenda that was most pronounced after the 1960s. The option of raising the cost of postal services—such as the price of stamps—met with frequent public, White House, and congressional resistance and, when exercised, never provided sufficient revenue to meet operating expenses. Hence the importance of computing and other technologies to the modern postal service, which will probably continue to affect the course of events at the USPS since the other two options remain problematic.91 The story is also a big one because the USPS is one of the largest organizations in the American economy, often the second or third largest employer within the U.S. government (depending on whether we measure the total number of workers in peacetime or war), routinely ranked as one of the ten largest employers in the nation, and a handler of massive quantities of mail. Table 5.5 displays a variety of data to suggest the size of the enterprise. Size is important because it affects the ability of an organization to leverage economies of scale, as happened at the Department of Defense. Size is also a gating factor for how quickly (or slowly) an institution can change, and how visible it is to its various constituencies and rivals. In the case of the USPS, for example, the vast majority of its 900,000 employees are unionized and enjoy higher levels of salary and benefits than rivals in the package delivery business, making it difficult for the USPS
Table 5.5 U.S. Postal Service at a Glance, 1950–2005

Year   Pieces of Mail Processed (Billions)   Number of Post Offices*   Revenue** ($ Billion)   Expenses ($ Billion)   Number of Employees*** (Thousands)
1950   45     41,464   1.7    2.2    501
1960   64     35,238   3.3    3.9    563
1970   85     32,022   6.5    8.0    741
1980   106    30,326   18.8   19.4   667
1990   166    28,969   39.7   40.5   843
1995   181    28,392   54.3   50.7   875
2000   208    27,876   64.5   63.0   901
2005   212    27,385   69.9   68.3   803

*The numbers are lower than actual because these do not include retail outlets called “contract stations and branches” or “community post offices,” which add another 3,000 or more to the totals.
**These are operating revenues and operating expenses.
***The numbers are lower than actual; the USPS also has what it calls “non-career employees,” which in 2005 alone added an additional 98,284 people to the count and in most years in the early 2000s exceeded 100,000, bringing the total employment base back to where it was in the 1990s.
Source: U.S. Postal Service, The United States Postal Service: An American History, 1775–2002 (Washington, D.C.: U.S. Postal Service, September 2003): 53; U.S. Bureau of the Census, Historical Statistics of the United States: Colonial Times to 1970 (Washington, D.C.: U.S. Government Printing Office, 1975): Part 1, p. 806; U.S. Census Bureau, Statistical Abstract of the United States: 2002 (Washington, D.C.: U.S. Government Printing Office, 2001): 692, and annual reports.
either to transform the nature of its work or to get its costs in line with those of its competitors in an age when, in addition, the Internet posed an enormous threat to its income. Table 5.5 includes data for 1995, the last full year before the Internet’s rapidly increasing use by American society was felt across the economy, and the latest information available at the time this chapter was being written (2007). What is very obvious is that demand for this organization’s services did not decline with the public’s adoption of the Internet as a crucial source of information and means of conducting business, or with the success of private sector rivals, as one might otherwise assume should have been the case. The reasons are manifold, but they include the fact that the population of the nation continued to grow while the economy expanded in that same period, both factors thereby creating more business for all. The data also reflect the nation’s historic practice of using concurrently all manner of information platforms, tools, and providers. Americans simultaneously used the Internet in increasing amounts and paper-based mail and package delivery services, just as they used various forms of music (CDs, tapes, and Internet downloads) and publications (newspapers and Internet news sites).
The postal system is unique in one other related way: its employees routinely visit nearly every household and establishment (business, public, and private) in the United States, normally once a day, six days a week. No other government agency does that. In 2005, for example, that meant the USPS routinely delivered mail to over 142 million addresses, while over 7 million customers conducted transactions at its post offices. It had to collect mail from over 280 million places (buildings and blue mail boxes) and operate nearly 200,000 vehicles and dozens of large postal sorting facilities. It delivered over 98 billion pieces of First Class Mail that year and had an operating revenue of $69.9 billion. Its tag line, “we are everywhere you are,” was thus not such an exaggeration, particularly since (by then) the USPS was also accessible over the Internet for information and to conduct transactions, such as the purchase of postage stamps.92 All of these highly visible activities involved extensive use of information technology.

Early Deployment of Mechanization and IT, 1950–1983

In the postal system, there were two activities that took place simultaneously: mechanization of the acceptance, sortation, and delivery of mail, and use of computing and telecommunications for the operation of the institution. For the majority of the period we look at, these two tracks of activities occurred almost independently of each other. Then, in the 1970s, the two began to intertwine, most noticeably because of ZIP codes, which made it possible for digitally enhanced equipment to sort mail for delivery. Each track presented fascinating examples of how large organizations react to the emergence of new technologies and deploy them. But it is important to understand that frequently the two were separate, and that circumstance was not always clear. A reading of the annual reports of the postmaster general written over the decades can be confusing, for example, because postal officials in the early years of the computer frequently thought of this technology as yet another tool to mechanize (their term) or automate (the word used more frequently by other government agencies) both core activities. The Post Office constantly expressed interest in leveraging machinery. While no comprehensive history of this extensive activity at the Post Office has been written, we can discern many of the major activities by reading the annual reports, which discussed mechanization every year right into the new century. Take 1954’s statement as an example of management’s intent, one repeated in many ways over the next half century: “The mechanization of postal operations to the maximum extent is an integral part of our program to improve service and reduce costs. We are exploring all possible methods of utilizing machinery to simplify or eliminate the major manual operations.”93 In the 1950s, work began on how to use electronic devices to read addresses as a way of speeding up sorting, an initiative that resulted in the creation of ZIP codes in the 1960s. All through the 1950s and 1960s, postal engineers worked with various companies to develop automatic culling, facing, canceling, scanning, and sorting machines. Some of these machines were massive in size and worked as systems
of integrated devices used at regional offices to sort mail for delivery to individual post offices. Then, in the late 1950s, the Post Office began experimenting with optical scanning technology to cancel stamps in the upper right-hand corner of envelopes.94 A key provider of technologies in support of mechanization initiatives was the Burroughs Corporation, also a major vendor of computing equipment in the 1950s and 1960s.95 On November 28, 1962, the Post Office introduced the Zone Improvement Plan for mail sorting, better known as ZIP Code numbers, with deployment to begin in 1963. In its own words, the Post Office announced that “ZIP Code was designed to help the Department efficiently handle the rapidly mounting mail volume without a correspondingly large increase in manpower—to contain costs and forestall the day higher postage rates might become necessary.”96 Officials were clear on why they wanted the ZIP Code: “when fully implemented, [it] will reduce the number of times a letter must be handled, thus speeding its dispatch and
Figure 5.6
Data processing center in Richmond, Virginia, Regional Office, mid-1960s. (Courtesy USPS)
reducing the number of mis-deliveries that might otherwise be encountered through misread addresses.”97 The Post Office made development of optical scanners to read ZIP codes its highest R&D priority. In the early 1960s, officials envisioned sorters and scanners with memories that could compare data on a letter to where it needed to be sent and that linked this action to scheduling of transportation, which in turn meant the deployment of data processing at local post offices as well.98 In 1965, the Post Office began creating digital ZIP code files.99 By the end of the 1960s, use of ZIP codes had spread widely across various classes of mail and packages; by the early 1970s, they had become relatively ubiquitous on American mail. In the 1970s, wide deployment of bar coding and other devices to physically sort and move mail dominated much of the service’s mechanization initiatives. Some of the capabilities of the equipment proved very impressive. The postmaster general in his annual report of 1972–1973 noted that “envelopes with bar symbols can be sorted at speeds ranging up to 42,000 pieces per hour,” while providing companies that used these the ability to add additional “bits of information,” making possible more refined sorting of mail for delivery to specific departments or offices.100 Despite work done in the 1960s and 1970s to mechanize the handling of mail, Postmaster General Benjamin F. Bailar noted in his annual report of 1974–1975 that prior to the reorganization of POD into the USPS, “operational innovation was forced to lag behind those of other industries, and mechanization was minimal.”101 Yet development and deployment of mechanization mimicked the rate of innovations evident in other federal agencies, such as the Department of Defense, where it often took over a decade to develop a new tool and yet another decade or more to fully deploy it. Table 5.6, based on data collected by the USPS, provides evidence that mechanical sorting of mail was progressing quite rapidly in the early 1970s. By the end of 1980, it was sorting mechanically over 70 percent of all mail. To accomplish that task, the USPS spent nearly a billion dollars on equipment in the 1970s.102 Meanwhile, the ZIP code was expanded by an additional four digits to extend automated sortation down to the postal carrier’s individual route, a process of deployment that began during the 1980s. The earlier five-digit ZIP code was in use on over 97 percent of all mail by the early 1980s.103 Because the USPS chronically could not overcome fiscal deficits, the pressure to increase productivity through mechanization remained intense right into the new century.
Table 5.6 Letters Sorted Mechanically by the U.S. Postal Service, 1971–1977

Year      1971   1972   1973   1974   1975   1976   1977
Percent     25     35     44     52     60     63     64

Source: U.S. Postal Service, Annual Report of the Postmaster General: Fiscal 1976 and Transition Quarter (Washington, D.C.: U.S. Government Printing Office, 1977): 6, Annual Report of the Postmaster General: Fiscal 1977 (Washington, D.C.: U.S. Government Printing Office, 1978): 7.
Yet progress had been made. For example, if we use the USPS’s own simple metric of gross productivity—total work years divided into total volume of mail handled—one sees demonstrable improvements, which it attributed largely to the use of new technologies. In 1973, the metric was 128,000 pieces, and by the end of 1983, that number had gone up to nearly 175,000.104 To be sure, costs had increased substantially too, so the simple measure has to be taken with the proverbial “grain of salt.” But what it suggests is that mechanization was having a direct and positive effect on how much mail its employees could handle and on controlling the rate of growth in operating costs. With regard to the large collection of back office functions, such as accounting, budgeting, cash management, and payroll, the Post Office had a variety of punched-card systems dating largely from the 1920s and 1930s, although its first use of this class of information technology began in 1913. Following World War II, as the department grew in size, it sought to consolidate and optimize operations, leveraging punched-card technology of the late 1940s and early 1950s. For example, in 1956, the department completed a major reorganization of disbursements and mechanization, moving to a totally punched-card form of check writing to pay its bills and payrolls and transferring these functions from individual postmasters to regional offices. In the case of payroll checks, that meant some 520,000 issued every two weeks could now be partially automated and the cost to process them reduced, both for POD and the Treasury Department. As one department report noted about the new process, it “has simplified and drastically curtailed paperwork at post offices” and created “valuable by-product data . . . replacing mass accumulation of reports heretofore manually prepared by the operating bureau.”105 This application of predigital technology is important for several reasons. First, the combination of using technology and reorganizing to leverage economies of scale proved to be a positive experience for the department, thereby encouraging it to make other changes involving the combination of technologies and reorganizations. Second, it increased the department’s use of IT for accounting, payroll, and so forth in large enough quantities that when computers came into the organization in the late 1950s, these now older applications of technology could take advantage of the large computing capabilities of the digital hand. Third, it made it easier to collect data leading to insights on the operations of the agency. In its annual report for 1956, the department noted that “the regionalization program has simplified post office accounting and expedited and improved accuracy of reports. The use of punched card equipment makes available important statistical by-product data for management analysis, planning, control, and decision-making.”106 Fourth, it stimulated use of such precomputer technologies in other areas, as in converting all money orders from paper documents to punched cards, a massive application involving 360 million money orders produced at a centralized facility in Kansas City, Missouri.107 Building on these applications of punched-card equipment at various regional centers, the Post Office Department began transferring this work to
computers in the late 1950s. Its earliest computers came from Univac (Model 60 and 120 mainframes), initially installed in New York and Chicago.108 Other mainframes went into regional headquarters in Washington, D.C., Dallas, Atlanta, Cincinnati, and Philadelphia. By the end of 1959, eleven of the fifteen regional offices had digital computers, primarily for handling payroll and by-product reports.109 This commitment of digital equipment reinforced the mission of the relatively new regional headquarters in their support function as accounting centers for the post offices around the country and as a source of information for national headquarters in Washington, D.C. This development set in place the pattern of organization and operations for many years to come. Older accounting methods were redesigned and ported over to computers. These included, for example, cost record keeping for the motor vehicle fleet (1961); reconciliation, control, and audit of money orders (1962); personnel time and record applications (1962–1963); second-generation payroll systems (1962); early operations management systems and installation of second-generation computers in regional offices (1963–1964); production of summary information on revenue, pieces of mail, weight, pounds per mile, class of mail handled, and revenue and transactions (1964); and preparation of budgets (1965). In 1964, the department began consolidating regional offices, with data processing also consolidating from fourteen to six centers.110 These now reported into a new organization called the Bureau of Finance and Administration, which had previously been two separate departments: the Office of Management Services and the Bureau of Finance. In this reorganization, we see the second-order effects of the digital hand, an organizational transformation.111 The first order had been the reduction in cost of operations that resulted from consolidating and optimizing functions on computers. Recall that the transformation to the ZIP code process for sorting mail was also under way at the same time, relying on computers as well. The department continued to add new accounting applications all through the 1960s and into the 1970s, while simultaneously upgrading computing and telecommunications as new machines became available (such as IBM’s System 360 Model 65, installed in 1969) for use both at its national headquarters and by regional and very large post offices. One of the more important applications, which remained a major source of data for the department in various forms for decades, was the Origin-Destination Information System (ODIS). The department began collecting data to determine volumes of mail by point of origin and destination, using sampling methods and computers, in 1969. This data was used to schedule workflows and to determine manpower needs, deployment of trucks, and so forth, making ODIS a core managerial tool in the 1970s.112 The telecommunications backbone for this and other systems was the postal source data system (PSDS), deployed across its post offices and regional headquarters in the 1970s.113 It is customary for observers of the American postal system to characterize operations in this department as transforming slowly prior to its major reorganization into the USPS in 1971. With regard to computing, however, the Post Office Department moved as quickly as other government agencies and industries and, in the case of the ZIP code, did not hesitate to embrace a
Table 5.7 Major IT Applications Implemented at USPS, 1968–1984

1968  Postal Source Data System (PSDS)
1974  Code Directory System
1975  PSDS replaced with Management Operating Data System (MODS)
1977  Transportation Management System (TMS)
1977  Bulk Mail MIS
1977  Start of National Time Sharing, completed in 1979
1978  Master File Data Base
1982  E-Com
1982  Direct Data Entry/Direct Reporting (DDE/DR)
1982  Permit System
1983  City Time and Attendance System (CTAPS)
1984  Corporate Information System (CIS)

Source: James L. Golden, Information Technology: 40 Years of Innovation and Success (Washington, D.C.: U.S. Postal Service, 2005), unpaginated.
radically new application of IT. Many of the major IT applications and projects of the 1970s were extensions of initiatives already under way in the years prior to reorganization. Emphasis on cost cutting using computers, sustaining quality and speed of service delivery with ZIP codes, and other increasingly digitized applications were points of focus for management before and after reorganization. Table 5.7 catalogs many of these events. Critics, however, noted that the speed of deployment could always have been quicker. Postmaster General Benjamin F. Bailar admitted as much in his annual report in 1975: “Prior to postal reorganization . . . operational innovation was forced to lag behind those of other industries, and mechanization was minimal,”114 a statement, however, not fully supported by the historical record. There is no doubt that the USPS increased its interest in IT in the post-1971 period. It installed more modern equipment earlier than before, for example. Thus, while IBM’s mainframe of the 1960s, introduced in 1964 and first shipped in 1965 (S/360), did not make it to the Post Office until 1969, the company’s follow-on product, the S/370, went in earlier in its life cycle, presenting the USPS with photo opportunities for its annual reports by mid-decade.115 By the late 1970s, the USPS was also using minicomputers at the same time as these new machines were being used in the private sector because they made it possible to continue automating manual operations, such as the process for forwarding “undeliverable as addressed mail.”116

Deployment of IT before the Internet, 1984–1994

During the 1980s, the twin strategies of using ZIP codes with OCR equipment and bar codes commingled into a pervasive way of handling mail. While manual
collection and sorting never disappeared, and deployment of OCR equipment came slowly in the 1980s, ultimately the deployment of these two technologies was extensive and substantially altered how many tasks were performed, enough that by the late 1990s one could begin to see a new style of physically handling and tracking mail emerge. In 1986, the USPS was already beginning to deploy a second generation of OCR equipment, designed for its own uses. During the second half of the 1980s, the USPS increased its use of bar coding, reflecting the same level of interest and applications evident across many manufacturing, retailing, and distribution firms and industries in the American economy. The USPS did this with an eye to bar coding virtually all mail by the mid-1990s, because the cost of handling a piece of bar-coded mail was far less than half that of a manual operation.117 Between 1987 and 1991, the USPS installed over 2,000 OCRs, bar code sorters, and other devices, the start of an extensive deployment of new equipment that occurred over the next several years. Postal officials attributed their ability to reduce the number of employees directly to their use of this equipment, in effect articulating the same rationale for embracing technologies of various types evident at such other federal agencies as the Bureau of the Census and, most notably, the Social Security Administration.118 By the early 1990s, the USPS was also encouraging business customers to pre-bar code their mail in exchange for discounted prices for postage. The USPS also deployed a strategy of replacing older equipment with more modern units that did the same work. It relied increasingly on vendors to evolve the technologies needed rather than conducting its own internal R&D operations as it had from the 1950s through the 1970s. In this way, the USPS seized the opportunity to drive down operating costs while avoiding the risk of installing obsolete equipment. Auditors from the GAO looked at the results in the early 1990s and concluded that since “more than half of the work of the Service is not directly affected by automation, this reduction did not have a perceptible effect on overall postal costs”; in short, despite extensive deployment, more was needed.119 Deployment, while continuous, remained a slow process by the USPS’s own admission. For example, in its annual report for 1994, the USPS acknowledged it had installed only 40 percent of all the automation equipment it wanted, at the not inconsiderable cost of $2.6 billion since 1987.120 The desire to reduce the number of employees also proved elusive. In the period 1992 through 1995, for example, despite public declarations that it intended to eliminate 30,000 employees, the number actually went up, from roughly 782,000 to 855,000. To be sure, the volume of mail had increased in the same period of time. The GAO noted that the increase in employment came in jobs that were labor intensive and that had not yet fully felt the effects of automation: clerks, nurses, city carriers, and mail handlers.121 A second broad area that the USPS approached with IT involved retail sales operations. IT began influencing retail functions in a tentative, yet visible, way in the 1980s. For example, the USPS experimented with a service to provide rapid transmittal of faxed letters and documents to over two dozen countries in the mid-1980s over a dial-up network. The offering, called INTELPOST, became
available in some 145 post offices in twelve cities, a small deployment when one keeps in mind that there were tens of thousands of post offices, but nonetheless an early experiment with telecommunications. A second early project was called Electronic Computer-Originated Mail (E-COM), which failed to catch on in the mid-1980s and, due to low volumes, cost more to offer than the USPS collected in revenues, so the USPS shut down the offering on September 3, 1985.122 Meanwhile, in the same decade, post offices began installing point-of-sale terminals, with major deployment taking place during the second half of the decade. By using POS terminals, the USPS hoped to shorten the time it took a customer to conduct a transaction at a post office, since the terminals housed rate
Figure 5.7
Automated postage stamp dispensing machine, 2002. (Courtesy USPS)
and other mailing information. It wanted to achieve the twin objectives of shortening the time a customer had to wait for and then conduct a transaction, while also increasing the number of people a postal employee could serve. By the mid-1990s, deployment of the USPS’s Integrated Retail Terminal was relatively ubiquitous. The reasons for its success were not hard to find. In addition to speeding up service and improving the quality of that service (such as by using electronic scales to determine postage very accurately), it collected accounting data for the USPS. During the 1990s, the USPS added functions to the system, such as printing receipts and producing self-sticking postage labels with bar codes.123 As the body of information that the USPS collected via bar and ZIP codes increased, along with the deployment of scanners throughout its post offices, it became increasingly possible to start tracking packages and mail, a function already performed by its competitors in the private sector. In 1991, the USPS began work on what ultimately became its digital tool for tracking Express Mail, expanding the service nationwide in the early 1990s. During the last decade of the century, the USPS expanded its use of tracking mechanisms for internal purposes, while making that kind of data increasingly available to customers via telephone and, later, the Internet. The third area in which the USPS used IT concerned internal operations. As in all other agencies, the USPS had a variety of financial and accounting systems running on its computers to track budgets, expenditures, and deployment of manpower; to manage the construction and maintenance of post offices; and to manage the maintenance and deployment of vehicles. These were systems that had been incrementally deployed all through the half century, often in response to requirements set either by the Treasury Department or the Office of Management and Budget (OMB) and, after creation of the USPS, by congressional edicts. In its use of IT for such mundane back office operations, the USPS was essentially functioning much like many other federal agencies. But like all agencies and private sector enterprises, its management also sought to use IT in ways specific to its mission. Perhaps the most important modern innovation began in the 1980s with the installation of online tools that postal management could use to control internal operations. In the 1980s, integrated systems provided management with quantified data on deployment of people, volumes of transactions conducted, and so forth, information necessary to run the business of the USPS that supplied facts and insights to senior management and ultimately appeared in its annual reports. One major software tool, called the Supervisor’s Workstation, combined operational and planning functions, initially at mail processing facilities and later in other parts of the USPS, such as a system to help local post offices optimize delivery routes, one of the earliest expert systems at the USPS.124 The GAO noted that in the 1980s and 1990s, during various efforts to restructure the USPS, shrink the number of employees, provide new services, and improve productivity, and with continuous growth in the volume of mail handled, service to the American public remained at levels very satisfactory to customers, Congress, and employees.125
The Postal Service in the Internet Age, 1995–2007

No public institution seemed so directly affected by the introduction of a new information technology as was the USPS, with the possible exception of the military. Most specifically, the wide use of the Internet to transmit e-mail has cut directly into the USPS’s First Class Mail volumes and revenues, to such an extent as to stimulate an ongoing debate about whether or not the USPS would survive the twenty-first century and, a less draconian option, whether it could be completely privatized to speed up its further transformation into a more competitive mail delivery service.126 As one very pessimistic observer noted in 2000—after the Internet had become widespread and interactive—the economic and functional balance of power was shifting away from the USPS: “Since about 1970, the nominal price of a first-class stamp has quadrupled, growing by about 10 percent in real terms, while the inflation-adjusted price of a long-distance phone call has declined by 88 percent and the price of a unit of computing power has declined by a factor of 10 million. The price of a cellular phone has fallen by 98 percent since 1984.”127 In addition to that harsh reality, the same commentator, Thomas J. Duesterberg, pointed out some internal challenges faced by the USPS, many of which had been well documented by the postal service in its annual reports:
Despite more than $5 billion invested in automation equipment in recent years, the number of full-time postal employees has grown by about 5 percent since 1994. At the same time, the volume of mail delivered has been stagnant, growing at about 1 percent per year in recent years for first-class mail. Each full-time employee . . . moves, on average, about 223,000 pieces of mail per year. By contrast, each American Online (AOL) employee moves more than 13.8 million e-mail and instant messages per year and facilitates about 43 million “hits” on Web pages per year.128
The Pew Foundation documented the continuous growth in e-mail as well, some of which came at the expense of First Class Mail. While one could quibble over how much the USPS’s mail volumes declined as a direct result of e-mail over the Internet, the numbers are stark enough not to ignore.129 The substitution of one technology for another (electronics for paper) was not lost on the USPS or other interested government agencies. It had experimented with e-mail as far back as the early 1980s with its E-COM offering, at a time when the USPS was already experiencing threats to its First Class Mail service from fax and electronic data interchange (EDI) services, which spread widely across many industries in the 1970s and 1980s. Thus, from the period when telecommunications began affecting the USPS, the problem grew in severity, because all through the second half of the century roughly half the volume of mail and revenues came from First Class Mail. These alternative services addressed, first, its business-to-business volumes (1970s and 1980s) and, next, its person-to-person business (late 1990s). The GAO reported as early as 1994 that “the risk to the Postal Service posed by competition and changing
technology is very real,” leaving the USPS with more expensive, less profitable mail to handle.130 In that year, the GAO reported that e-mail was growing annually at 25 to 30 percent, facsimile (fax) by 20 to 30 percent, and videotext (such as that provided by CompuServe and Prodigy) by 30 to 40 percent. EDI, already a well-established communications medium, particularly in manufacturing and process industries, was also still growing at 30 to 40 percent each year.131 In short, the threat to First Class Mail’s revenue stream faced by the USPS pre-dated the Internet and was far more extensive than this one form of electronic communications. The USPS responded to these rapid technological developments by entering the EDI market, providing e-routing, setting up interactive kiosks at post offices, and deploying an electronic commerce system for federal agencies. It also participated in a European ePost network and, of course, deployed retail systems. It was clear by the mid-1990s that the USPS had a real problem on its hands: how to thrive economically in the face of these new challenges. It had to thrive because Congress required it to provide universal mail service and remain fiscally solvent. The postmaster general in this period, Marvin Runyon, had spent the bulk of his highly successful career in the automotive industry and in other senior government positions and thus knew how to conduct reorganizations and offer new products. He expanded the use of retail terminals to customers and introduced electronic notification services and acceptance of credit cards for payments.132 He began touting the idea of serving “America’s communication needs” as opposed to the notion of simply delivering mail. He asked Congress to give him more freedom of action to set terms and conditions, to offer new services, and to set prices—all requests that were not granted to his satisfaction.133 Yet during the late 1990s, service levels and opinion surveys of the public demonstrated that the USPS was able to do its work, albeit not as economically productive as it wanted to be or as effective as its niche competitors in the package delivery business. Nonetheless, the USPS introduced additional services based on IT, such as PC postage (1998), a digital stamp that could be purchased and printed using personal computers, along with tracking and guaranteed delivery services that relied on digital monitoring of shipments. In 1994, the USPS had launched its first public Internet site, and much like other government agencies, it first provided the public with information and forms and, later, the ability to conduct an ever-growing collection of transactions. As the USPS approached the end of the century, it still employed about a third of the entire civilian federal employee population, had higher costs for labor than its rivals, faced the challenges of the Internet, and worked through the risks posed by Y2K.134 GAO auditors, and a growing number of observers, remained pessimistic. One GAO report from late 1999 began with the simple, yet dramatic, statement that “the Postal Service may be nearing the end of an era”; despite heroic efforts, the USPS itself shared with the GAO the view that its core business would continue to decline, largely due to “the growth of the Internet, electronic communications, and electronic commerce,” minimizing the other problem of competition from private sector package delivery services.135
The USPS attempted to replace lost revenue with new sources all through the early 2000s, largely based on electronic delivery of messages and e-commerce, such as Stamps Online, eBillPay, Electronic Postmarks (EPM), Internet change of address services, NetPost.Certified, and NetPost Mailing Online.136 In the early 2000s, the USPS emphasized how its Internet-based products enhanced connectivity among people and ease of use. If its annual reports are a true reflection of its main concerns, then, like those from so many other government agencies, they devoted considerable attention to the deployment of IT-based services and continued improvements in internal operations, thanks to the helping hand of digital technology. Yet volumes of its large revenue-generating products continued to decline and fiscal deficits remained a chronic problem. The problem faced by the USPS was not just competition from the Internet or private competitors. One government analyst described the root issues in 2002:
USPS’s basic business model, which assumes that rising mail volume will cover rising costs and mitigate rate increases, is increasingly problematic since mail volume could stagnate or decline further. USPS has also had difficulty in making and sustaining productivity increases. Moreover, USPS’s framework of legal requirements, which form the foundation of USPS’s business model, as well as practical constraints impede USPS’s ability to ensure its own financial viability.137
To be sure, the fiscal situation had eroded, with the USPS experiencing shrinking net income from 1995 through the early 2000s, due to its inability to lower operating and infrastructure costs at a rate commensurate with the growing challenges posed by the Internet and competitors. First Class Mail, for example, dropped in volume from nearly 103.7 billion pieces in 2001 to some 97.6 billion in 2006. That decline resulted in the gradual loss of nearly $1 billion in revenue each year. Yet, to put those numbers in perspective, when all sources of income were accounted for, total operating revenues grew from $65.8 billion in 2001 to some $72.8 billion in 2006, albeit remaining relatively flat, as did the number of pieces handled.138 The postmaster general in 2005, John E. Potter, acknowledged the challenges faced by the USPS: declining mail volumes, difficulty in controlling costs, and the overly optimistic assumption that increased volumes of First Class Mail might have helped to balance the financial books. He reaffirmed the value of continuing to use the digital hand, as had the prior half dozen postmasters general. Despite his attempts to present the USPS’s initiatives in a positive light, he presented his dilemma clearly: “as electronic diversion continues to erode First-Class Mail volume, this product will become more price-sensitive than ever. Higher rates will likely increase the pace of change, accelerating the volume decline, resulting in falling revenue and the need, again, to increase rates. It is an economic model that is not sustainable in the long term and could lead to the proverbial death spiral that many have predicted.”139 Several months later, the USPS was authorized to raise the cost of First Class Mail from 37 cents to 39 cents, going into effect in January 2006. In 2007, another authorized increase raised the rate to 41 cents.
As this crisis developed a head of steam in the 1990s and continued in the new century, the USPS sought out new ways to apply the digital hand in its internal operations. The USPS is one of the few cases we have of an agency adhering to a technology implementation plan regardless of what political party was in office. In the case of the USPS, this involved letter sorting and movement automation, which had proven so productive and effective within the period of any single postmaster general that incrementally deploying technologies of various sorts, and then incrementally upgrading them as new versions became available, made good sense. In the instance of letter mail automation, for example, the current round of implementation began in 1982, involving the movement of materials that had been bar coded either by business customers or the USPS. Extending bar coding to do delivery point sequencing (DPS)—the sorting of letters down to the carrier’s route—began in 1993 as an extension of the earlier initiative and was a program that the USPS kept deploying all through the 1990s and into the new century. By the end of 1997, 81 percent of all letters were bar coded, and a similar target was achieved on bar coding down to the carrier level. Overtime costs for carriers dropped as the amount of time they needed to spend hand sorting at their post offices declined throughout the 1990s, providing tangible returns to the USPS, although not at the aggressive levels desired.140 When compared to the results of many other federal technology projects, this one proved very effective. Over the previous twenty years, the USPS extended automation to the processing of flat and parcel mail as well. The cost savings possible were enormous. One assessment made in 1997 demonstrated that 1,000 pieces of letter mail sorted manually cost the USPS $44.94; when sorted with the aid of mechanical devices, it cost $27.69; and when sorted through complete automation, $5.39. The latter was made possible by computer-managed bar coding.141 While confirming delivery of mail to customers was seen by the public and competitors as a new and desirable service in the late 1990s, it also came as a response by the USPS to growing competition. Postmasters general saw this service as contributing to revenue. When the USPS began offering a delivery confirmation service for Priority Mail and parcel shipments in 1999, it built on its prior investment in bar coding and automation. Doing that required deploying over 300,000 scanners across the nation to read bar codes, while linking the data gathered to databases feeding an 800 toll-free telephone line and the USPS’s Web site. By the end of that first year, over one million shipments per week included use of this new service.142 In each of its strategic plans developed during the last two decades of the twentieth century, the USPS discussed the role the digital hand had to play in driving down operating costs for personnel, improving and sustaining quality of service, keeping employees motivated, and offering new services that would be competitive, all the while generating new sources of revenue. Use of IT as a strategic tool increased from one plan to another. No major function at the USPS was immune from computerization by the mid-1990s. Web-based tools for employees, integrated databases and supply chains, and e-services came into
service.143 Initiatives under way in the early 2000s, as well as their purposes, remained consistent from one year to the next. With over 60 percent of all households accessing the Internet, the USPS anticipated continued decline of First Class Mail regardless of what actions it took. In its best-case scenario, First Class Mail would decline just slightly between 2005 and 2010; the baseline scenario, relying on historical and economic trends, had the volume dropping by 10 percent, and the worst case, by over 20 percent.144 Businesses were expected to use various emerging and already widely available technologies to reach customers as well, for example, the Internet, cell phones, and even iPods. The USPS intended to increase the amount of information customers could get over the Internet regarding the status of their mail delivery and for help. Bar coding would be extended to make mail delivery processes more intelligent and accessible by systems, employees, and customers. As new technologies became practical to use, they would be deployed to track mail, such as radio frequency identification devices (RFID), already used by the Pentagon to collect information on the whereabouts of its supplies, and scanning and bar coding to improve business reply mail functions. Giving customers the ability to print postage and labels online—a service called Click-N-Ship—would be further enhanced. Finally, a variety of internal operations would be upgraded with shared services in accounting for all parts of the postal system, a more modern communications infrastructure, and increased use of online training tools. In short, plans as of mid-decade were replete with IT projects, far more than had been the case in the 1970s, 1980s, or early 1990s.
Conclusions

The three organizations reviewed in this chapter all became extensive users of information technology, in fact, to such an extent that it would be difficult to imagine how they could function in the future without it. How each came to such a point reflected experiences unique to each agency. Rate of adoption and extent of deployment reflected internal operational and managerial issues more than just the merits of a particular technology, although, as we noticed in other federal agencies and departments, digital tools had to be configured in ways specific to their needs. In the case of the Social Security Administration, we have an instance where the commercially available functions of computers matched very well the needs of the agency: computers could handle large quantities of repetitive tasks and information formatted in the same way. Moving to the digital from precomputer technologies thus did not initially disrupt how tasks were done or disturb the mission and practices of the organization. This was so much the case that only when Congress began altering the historic role of the SSA did officials in the agency begin finding the use of computers to be both challenging and even more necessary; in short, using computing after the 1970s became far more complex. In the instance of the Census Bureau, computers also evolved in ways that
naturally fit into the daily operations of this organization since they could handle large, similarly structured files in massive quantities. At the Census Bureau, the technology also encouraged employees to expand general knowledge about statistics and, later, digital mapping (GIS systems). The Post Office, however, provided a different model of using IT in that, other than for normal back office accounting applications and, many years later, retail terminals, it had to invent new equipment to make the digital useful. It pushed vendors in the 1960s and 1970s to improve OCR technologies and invented many specialized devices to sort and move mail, which, over time, had digital components embedded in them, such as bar coding. Computing, therefore, was adopted earlier and more easily in those agencies where the technology’s functions happened to align nicely with the mission of the organization. But other factors also affected adoption of computing, most notably availability of funding (since these were large capital expenditures), focus of management, and changing roles mandated by Congress. What is a remarkable finding from these three cases, however, is how consistent an agency could be in wanting to deploy and use a technology regardless of which political party was in power. In that sense, these agencies mirrored the practices and cadence of deployment evident at the IRS and at DoD. While the GAO constantly criticized government agencies for poor or inconsistent leadership when it came to the use of digital tools, these agencies nonetheless deployed computers and telecommunications that improved their operations. Like the IRS and DoD, however, they, too, were overly optimistic about the results they would achieve using IT, while implementation always took longer than expected. These three agencies mirrored practices in other departments, such as the fact that it always took over a decade to implement a major deployment. The one possible exception was the Census Bureau, where incremental improvements were spurred on by the circumstance that it had to conduct a census every ten years, with no delays allowed in carrying out this constitutionally mandated mission. IT did not threaten the abilities of the SSA and the Census Bureau to carry out their respective missions. In the case of the Post Office, we have a different situation, where much of the debate about the future of the USPS has centered on the effects of the Internet on this organization’s effectiveness. The historical record, however, suggests that too much attention is being paid to the effects of the Internet on revenues from First Class Mail. It appears that management at the USPS performed the due diligence required to understand the potential uses of IT evident in other public agencies and in the private sector, and carried out implementations quickly enough or too slowly (depending on one’s views), but often with their cadence influenced more by the availability of budgets than by some reluctance on the part of management. The real problems at the USPS had less to do with technology and more with issues that had existed (in some cases) for a half century: a pool of employees too expensive when compared to competitors or even other government workers; legal restraints on altering the size of the enterprise to expand or contract to meet the realities of the market (for instance, unprofitable post offices could not be closed down as might a bank branch
office); and the existence of a culture of entitlement that reflected more of a public sector style of operation than what was evident in the private sector—a natural consequence of its heritage and legal structure as a public institution. How productive did computers make these agencies? In the case of the SSA, one cannot imagine it being able to carry out its fundamental mission without computing; in short, it was always a “high tech” Information Age enterprise. In the case of the Census Bureau, because it relied on a combination of automated and manual operations, one could observe productivity increasing as an evolutionary process over the decades as manual operations were incrementally automated, with wave upon wave of new uses of the digital hand. The USPS, which seemed to be criticized more than the other two agencies for its ineffectiveness, may have been criticized more harshly than warranted. As the data in table 5.5 suggest, IT, when combined with other forms of mechanization and with managerial and operational practices, did make it possible for the USPS to handle four times as much mail at the end of the century with only twice as many employees (its biggest single expense). Put another way, the USPS was able to handle larger quantities of mail with fewer resources per piece of mail over time, while sustaining high levels of satisfaction with its services. To be sure, the cost of everything changed over time too, from salaries to the price of gasoline for trucks. The USPS always likes to point out that its cost for First Class Mail service is less than in most countries, but the historical record demonstrates that the public and Congress did (and do) not care about what stamps cost in other countries. Rather, the USPS has failed to understand fully and/or to communicate effectively the role technology played in keeping costs in check, such as the effects of OCR technologies that made it possible to read the vast majority of terrible handwriting on envelopes and packages. Imagine the cost to the nation if 200 billion pieces of mail still had to be hand sorted by a highly unionized, well-paid workforce.145 A lesson these three agencies teach us is that deployment of IT is clearly more a function of nontechnical issues: budgets, labor practices, and congressional and constitutional mandates. In each instance, however, these are agencies that are highly visible to the public and to the Congress, with the result that their major digital initiatives are not obscured from public scrutiny, with the possible exception of the USPS’s sorting equipment. In the census of 2000, arcane questions about the mathematics associated with sampling were debated in committee rooms of Congress, while the SSA’s ability to provide pensions became a major topic of national debate in the late 1970s and early 1980s, and again in the early 2000s. The point to draw from these realities is that IT, while absolutely crucial to the operations of these organizations, remains subservient to the factors that drive how and to what extent it is used. That reality was more important than the fact that by the late years of the twentieth century, all three were very much information-based organizations, massively dependent on IT to do their work. The American government is one of the most intensive users of IT in the world, perhaps more than any other organization in either the public or private sector. It
certainly embraced digital computing and telecommunications earlier than other national governments, and often quite massively. It was no accident that the Clinton administration made IT policies a major focus, leading to programs to get children and the nation at large onto the “Information Highway,” while simultaneously “Re-Inventing Government,” reducing paperwork, and fostering deployment of IT and telecommunications across the economy through its regulatory practices. All of this grew out of decades of growing dependence on and understanding of the technology. For that reason, the next chapter is devoted to providing an overall view of the role of the digital hand in the federal government, allowing us to put into a larger context the case studies of specific agencies discussed in the first several chapters of this book.
6
Role, Presence, and Trends in the Use of Information Technology by the Federal Government

At present, there is no function of Federal Government—administrative, scientific, or military—that is not dependent on the smooth functioning of computer hardware and software.
—Grace Commission, 1983
The federal government was one of the earliest and largest users of computers in the twentieth century. In the early decades of computing—the 1950s through the 1970s—it was the most extensive user of computers in the American economy, and at the dawn of the twenty-first century it relied more on IT and communications than it had in those earlier years. The primary reason it is no longer the most massive user of computing in the world is that the American private sector—which is roughly four times larger than the federal government—caught up with and deployed more computers in the last twenty-five years of the century, as did industries in many other countries. Various government agencies and departments embraced computing in its nascent form, funded its rapid evolution in the 1950s and 1960s, and used every new form of the technology that emerged in subsequent decades in nearly all corners of government. The process of using computers and becoming reliant on them across the entire government followed the pattern described in the past several chapters. Each agency made its own decisions about how and when to deploy the
technology, and responded at various speeds to massive increases in dependence on IT, budgetary issues, complexity, turnover in management, and the availability of new forms of technology. Attempts by various administrations and the Congress to rationalize the management of adoption and use of the technology remained unfulfilled, although by the end of the Clinton administration important progress had been made. To put the story into a simple numeric context, it was not uncommon for the entire federal government to spend between 1 and 3 percent of its total budget on computers, software, and staff to run them, and those numbers do not include the additional cost of telecommunications. In short, billions of dollars were spent each year on IT and telecommunications; their use proved as crucial to the operations of the entire government as it did to the agencies and departments discussed in earlier chapters. Yet, large agencies and departments had more difficulty in wringing out effective use of computers (DoD and IRS, for instance) than did smaller agencies (FBI and Census Bureau), despite the fact that the opportunities for greater budgetary savings, increased productivity, or improved service often lay with the largest agencies, bureaus, and departments. Issues related to size and complexity mixed with political and institutional crosscurrents to affect the role of the digital hand. By looking at adoption and use of computing across the entire federal government and generalizing about common patterns, we are able to develop a clearer picture of how one large segment of the American economy used this technology and changed the nature of work within it. However, there is a caveat: because adoption of computing occurred in a highly decentralized fashion, with each agency often making its own decisions about IT in considerable isolation from what was occurring elsewhere in the government, it is still impossible to paint a complete picture of the use of computing. In some cases, it was a matter of national security that led officials to purposefully mask data on this subject, particularly during the Cold War involving such extensive users of IT as the National Security Agency (NSA), DoD, and the Central Intelligence Agency (CIA). In other instances, issues turned more on poor accounting practices (as the GAO frequently reported was the case at DoD and at other departments) and, as so many presidential and congressional study groups noted, the lack of some effective centralized authority to track expenditures and establish technical standards. As a consequence, all the data presented in this chapter understate the extent of computing’s use, both in terms of expenditures and in terms of applications. Nonetheless, many commissions and agencies collected substantial quantities of information about the uses of IT in the government, sufficient to make it possible to create an initial catalog of federal uses of computing over the past half century. Because so many industries followed many of the IT practices of various agencies during the early decades of computing, it is also important to understand those uses and practices. So, as with earlier chapters and prior volumes of The Digital Hand, treating the federal government as if it were one de facto industry allows us to tease out of the historical record many broad patterns of adoption.
Use and Deployment of Information Technology, 1950–1980

Early on, management all over the government began to appreciate the potential usefulness of the new technology. As early as 1958, Joseph Campbell, Comptroller General of the United States, sent to the Congress one of the first surveys done on federal use of computing, in which he described characteristics of the technology and its use that were to be repeated by other studies for decades to come. These uses proved attractive in two general situations: (1) when “processing large volumes of data, where the emphasis is on reduction of per-unit cost of transactions processed, and where there are relatively few management implications involved”; and (2) when “processing large volumes of data, where important management-control implications are involved.”1 The first involved large but usually similar types of data handling activities, as took place in processing payrolls by the Treasury Department, wage data by the SSA, and population counts by the Census Bureau. These tasks did not call for generating data that would allow one agency to control another. In the second instance, use of IT involved analyzing data in order to decide how to expend and control funds and other resources, and to track results. These included large supply and logistical systems in DoD necessary to manage the process for procuring and deploying materiel. The comptroller general noted that use of computers had grown rapidly in the 1950s, “due to advances in technology, the population increase,” and to other demands of work in the government. But in the beginning, it was largely about reducing the cost and effort of collecting and using some 25 billion pieces of paper (recall the excitement at the SSA when it could use computers to reduce paper volumes).2 Every major agency of the government explored possible uses of computers by the early 1950s and began installing these systems by the end of the 1950s. All the surveys and inventories of the 1950s documented the surge in deployment that began by the mid-1950s, grew in number in the late 1950s, and extended into the early 1960s.3 One inventory alone suggests the scale of this surge. Between 1951 and 1958, approximately 237 systems (comprising computers, peripheral equipment, and software) went into the federal government. That number more than doubled by mid-1960, to nearly 540 computers.4 The expert on computing who compiled this information, James D. Gallagher, also explained what government agencies were using all these computers for:
Inventory control; aircraft engine management; electronic-equipment failure analysis; availability-and-demand history computation; inventory review-and-availability editing; cataloguing; payroll; stock-requirements; supply management; aircraft-configuration accounting; property accounting; earnings and claims data processing; cost accounting; workload forecasting; price-support analysis; road and bridge design; actuarial work; population statistics; war games; mobilization planning; economic census; air-traffic management; fiscal and budgetary control.5
The only major initial uses he failed to mention were early weather forecasting, scientific research, and air traffic control applications. Otherwise, the list was quite representative of the first generation of uses.
The variety of applications is important to call out for a number of reasons. It is customary for historians of computing to speak of the first uses of computers as concentrating on accounting, scientific, or military applications. To be sure, this and other listings of early uses document the especially wide deployment of computers to perform cost accounting, payroll, budgeting, and other accounting and fiduciary work, for the simple reason that these involved labor-intensive, repetitive, and tedious tasks. But Gallagher’s list also reminds us that many nonaccounting applications were deployed in the earliest years of the computer. As Paul Armer at the RAND Corporation observed in 1966, “the first phase of utilization by government has involved the mechanization of existing manual systems or of punched-card procedures,” a phase that was quickly evolving into a second wave of applications in the early 1960s, implemented simultaneously with the first. This second wave involved integrating various data processing functions into more comprehensive systems composed of what, in later years, would have been called “stand-alone” applications, picking up momentum “at such a pace that some organizations will undoubtedly bypass the first,” which is largely what happened, for example, at the FBI.6 As of the mid-1960s, about 10 percent of all computers installed in the United States were nestled in federal government agencies, with the majority housed at DoD, NASA, and the Atomic Energy Commission (AEC).7 Thus we can conclude that, while computers were seen in the late 1940s and early 1950s as useful predominantly for scientific and engineering applications, and next for accounting, government officials, like users in the private sector, saw computers early and broadly as useful for more applications by the mid-1950s than stereotypical descriptions would suggest. To be sure, scientific and engineering applications actually grew in importance in new fields. For example, by the mid-1950s, the U.S. Weather Bureau had installed IBM computers to help it do its central task—actual calculations with which to predict the weather—and not simply to process mundane accounting transactions.8 A second important area of interest involved aircraft safety. The Federal Aviation Agency (FAA) began using computers in the 1950s and became a massive user in the 1960s with national networks and other safety applications. With tens of thousands of airplanes flying in U.S. airspace by the late 1950s, air traffic control became especially important and increasingly complex to manage. IBM’s 650 system became an early workhorse for this application, helping the agency coordinate the work of radar systems all over the country. In fact, many of the applications installed in the early 1960s remained essentially the same until late in the century.9 Automatic data processing (ADP), as many frequently called IT in the late 1950s and early 1960s, quickly became a major line item in the government’s budget and the subject of much attention by the White House through its Bureau of the Budget. The interest that this office maintained, through its various later permutations, continued right into the new century because of the cost of the technology and its potential for offsetting other government expenditures through increased labor productivity. In 1959, of 40 nonmilitary agencies tracked by the Bureau of the
Budget, 24 had already installed one or more systems, while another 12 still used punched-card systems from earlier times. In each of the subsequent several years (1960–1963), the number of agencies installing computers increased at the rate of roughly a handful each year, such that by the end of fiscal 1963, 32 out of the 40 agencies had computers. By the end of 1963, these systems had translated into annual expenditures of roughly $688 million, outlays that grew over the years. By 1967, all but 3 agencies used computers. By the end of the decade, the proverbial “everyone” was using computing in support of their agencies’ work.10 None of these data included military or other “classified” applications of the digital hand. Table 6.1 shows the number of computers by year during the first three decades of the government’s adoption of the digital hand. Because of the limited processing capabilities of these early systems, it should come as no surprise that the majority were dedicated to one use only, rather than to multiprocessing applications, which became common by the end of the 1960s. For example, of the 1,006 systems reported in use in 1962, those dedicated to a single purpose included scientific work (274), administrative or accounting work (265), and program work, such as military inventory control or the Treasury’s disbursing operations (224), while only 220 were used for two or more applications. The remaining 23 systems were largely deployed in classified uses.11 Whichever survey data one consults, all share the finding that by the mid-1960s, approximately a third of all computers in use in the United States were federally owned, leased, or being used under contract to a government agency.12 One by-product of all this activity was growing interest on the part of Congress, the Executive Branch, and various data processing experts concerning how agencies were going about acquiring this technology. This interest came at a time when it appeared to public officials that data processing was consuming almost 2 percent of the national budget, an amount that could not be verified at the time but, in hindsight, seems reasonable when compared to patterns of expenditures occurring in the private sector.

Table 6.1 Number of Computers in the Federal Government, Select Years, 1950–1979

Year   Computers      Year   Computers      Year   Computers
1950          2       1962      1,030       1973      7,149
1955         45       1963      1,326       1975      8,649
1956         90       1965      2,412       1979     12,190
1958        250       1967      3,692
1960        531       1971      5,934

Source: Chart 6, Bureau of the Budget, Inventory of Automatic Data Processing (ADP) Equipment in the Federal Government, Including Costs, Categories of Use, and Personnel Utilization (Washington, D.C.: U.S. Bureau of the Budget, August 1962): 13; ibid., 1965 edition, 11; ibid., 1966 edition, 7; National Bureau of Standards, Computers in the Federal Government: A Compilation of Statistics, NBS Special Publication 500–7 (Washington, D.C.: U.S. Government Printing Office, June 1977): viii, 3.
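To give a rough sense of the pace of this surge, the following sketch (an illustrative aside of ours, not part of the original survey data) computes the growth multiples implied by a few of the Table 6.1 figures; the variable names and choice of intervals are hypothetical conveniences.

```python
# Illustrative arithmetic based on the Table 6.1 figures quoted above; the
# dictionary keys and intervals chosen are our own, for demonstration only.
inventory = {1950: 2, 1960: 531, 1971: 5_934, 1979: 12_190}

def multiple(start_year, end_year):
    """How many times larger the installed base was at end_year than at start_year."""
    return inventory[end_year] / inventory[start_year]

print(round(multiple(1950, 1960)))     # ~266-fold growth across the first decade
print(round(multiple(1960, 1971)))     # ~11-fold growth across the second
print(round(multiple(1971, 1979), 1))  # ~2.1-fold growth across the third
```

Read this way, the table shows explosive early growth that decelerated as the installed base matured, which is consistent with the later observation that the government’s share of the nation’s computers fell even as its absolute inventory kept rising.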
There was also growing concern that open bidding for procurement of computers was not happening, thereby denying the government the economic benefits of a competitive Computer Industry. The story of this issue is complicated and deserves further study by historians; however, what is important to realize is that in 1963–1965, senior public officials had begun to take actions to improve the efficiency and effectiveness with which agencies justified, acquired, and managed computing. They initiated a string of managerial activities that increased in volume and importance over the next four decades, even becoming a major component of the domestic policies and politics of the Clinton administration in the 1990s.13 The result of the early concern was that Congress passed the Brooks Act, which went into effect in 1966. It defined more clearly than prior legislation and regulations how computer equipment would be acquired. The Office of Management and Budget (OMB) was required to set overall policies for all federal agencies while Congress charged the General Services Administration (GSA) with responsibility for overseeing implementation of these guidelines. The National Bureau of Standards (NBS), nestled within the Department of Commerce, assumed the task of setting technical standards for information processing. For the next three decades, these agencies retained the same responsibilities. They did not always operate as envisioned by either Congress or the White House for myriad reasons, resulting in a string of presidential and congressionally mandated commissions, audits, and task forces to study the acquisition and deployment of IT over the years. The epigraph at the start of this chapter came from one of the more important of these commissions. Acquisition of new computer systems continued all through the 1960s and 1970s, as illustrated by the case studies in prior chapters. Acquisition remained a highly decentralized function, with each agency and department making its own decisions on what technology to deploy, although increasingly bowing to guidelines set by OMB, carried out by GSA, and watched over by the GAO and Congress. The legislative branch criticized the effectiveness of the watchdog functions established by the Brooks Act. Then in 1980, Congress passed the Paperwork Reduction Act, the first of several laws enacted over the next quarter century to reduce bureaucracy, streamline work in the government, and shrink the time and effort required of the private sector to fulfill legal obligations, such as reporting salaries paid and filing tax returns. This law called for the appointment of an information resource manager (IRM) within every executive agency to improve the management of IT activities. The law also contained provisions for better collection of information about the use of digital tools.14 Because government computer-counters were delivering messages in the 1970s that the federal government’s use of computing was aging and slowing, their own data provide useful insights both about the government’s adoption of computing and about the context of these acquisitions within the American economy at large. In that crucial decade from the mid-1960s to the mid-1970s, when computing spread rapidly across the American economy, it did so within government, too. Beginning with roughly 10 percent of all computers in the United States, a decade later the federal government’s share of installed systems had dropped to 4.5 percent.15 This shift was less a statement of the government
slowing down adoption, which table 6.1 clearly demonstrates was not the case, and rather more a reflection of the rapid expansion in the deployment of computers across the entire American economy, as suggested earlier in this chapter. In the same decade within the federal government, DoD, NASA, and the Energy Research and Development Administration (ERDA) (which replaced the Atomic Energy Commission in 1975) remained the three largest consumers of computers, accounting for some 90 percent of all nonclassified systems in 1966 and 84 percent a decade later.16 As with the general economy, this decline of six percentage points demonstrates that other agencies were installing their first or additional systems in the same period. What effect did these laws and commissions have on the use of IT by the federal government in the 1970s and 1980s? The inventory of aging technologies began to worry public officials. The example of the IRS being quick on its feet and creative in the 1960s, and later less so, proved emblematic of what happened at many agencies installing their first- or second-generation IT. However, despite a large inventory of installed systems, the 1970s became a mixed story of slowed innovation and faltering deployment, causing maintenance costs to exceed those evident in the private sector, which weakened the ability of agencies to respond to new congressional mandates. Smaller agencies, such as the FBI, normally proved more effective in deploying modern technologies in support of their work, while the very largest departments and agencies were increasingly viewed as backward when compared to IT practices in the private sector. The Grace Commission (1983) did not criticize the laws enacted in the 1960s or most recently in 1980 but rather blamed “the inability of the Office of Management and Budget (OMB) and the agency administrators managing ADP and their leadership to effectively introduce, justify and maintain effective ADP systems.”17 The GAO—an audit arm of the Congress—also found fault with many agencies over the same problem but shied away from criticizing the growing number of new programs mandated by the Congress (as happened with the SSA), or from acknowledging that lack of sufficient budget often was a severe impediment either in upgrading hardware and software or in rewriting old applications in the 1970s. Yet, the various assessments of DP activities written in the 1970s concluded that some gains in productivity were achieved, despite rapid increases in expenditures on IT, following the pattern of acquisition set as far back as the late 1950s. Existing applications from the 1960s were incrementally upgraded and changed, while staffs to maintain these systems also expanded.18 With the growing availability of faster and bigger computers in the 1960s and 1970s, the advent of online processing, and the existence of bigger and less expensive digital storage, one could see the same patterns of usage documented in earlier chapters spreading across other government agencies. Because of the size of many agencies, it was reasonable to expect that these would develop large, complex, even comprehensive applications in support of their daily work. In fact, that is exactly what happened, particularly in the 1960s and often for the first time. During the 1970s, agencies slowed their development of new systems as they began to use the ones launched in the 1960s and early 1970s. The
pattern of evolution was one of incrementally adding functions to existing applications. Maintenance of installed hardware and software became an increasingly important and expensive set of activities in the 1970s, and because of the difficulty of using and maintaining a growing patchwork quilt of older systems by the dawn of the 1980s, upgrading and changing applications, software, and hardware proved more complex. These trends resulted in many of the systems of the earlier period remaining in use throughout the 1980s, as evidenced by the cases presented in earlier chapters for such agencies as the uniformed military services and the IRS. One report on federal computing of the early 1980s noted that, despite complexity, bureaucratic inertia, and increasingly cumbersome procurement practices, the agencies whose systems were responsive, modern, and useful (usually smaller organizations) enjoyed “management continuity and attentiveness to ADP concerns,” citing the FBI as one example of the process at work.19 Finally, to round out the picture of IT usage in the federal government during this early period, a brief comment is in order about the role of telecommunications working with computers in this era. Dial-up AT&T services, deployment of private networks, and quasi-public/private networks (such as what eventually would be known as the Internet) commenced widely in the 1960s and expanded even more rapidly in the 1970s, with the volume of characters transmitted growing at annual rates ranging from just under 10 percent in some years to 19–25 percent in others. In the period 1977–1981, just before the breakup of AT&T, and the subsequent emergence of a rapidly changing Telecommunications Industry, utilization of private and leased data circuits by the government increased at 16 percent each year. This growth represented an annual compound growth rate in expenses for the government’s telecommunications of 25 percent over the two decades.20 Reliance on telecommunications expanded further throughout the 1980s.
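As a back-of-the-envelope illustration of what such rates imply (our own aside, using only the 25 percent figure quoted above), compounding a fixed annual rate over two decades multiplies the starting expense many times over:

```python
# Illustrative compounding only; the 25 percent annual rate comes from the text above,
# and the twenty-year horizon matches the "two decades" to which it refers.
annual_rate = 0.25
years = 20

growth_multiple = (1 + annual_rate) ** years
print(round(growth_multiple))  # ~87, i.e., expenses end up roughly 87 times their starting level
```

In other words, a steady 25 percent compound rate turns each starting dollar of telecommunications expense into roughly 87 dollars by the end of the period, which helps explain why these outlays attracted so much budgetary attention.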
Use and Deployment of Information Technology, 1981–2007

Use of the digital hand in the 1980s shifted to new issues as well. While the 1960s and 1970s saw the creation of IT systems in support of existing work streams, federal officials in the 1980s and early 1990s incrementally enhanced those systems, largely in response to new duties mandated by the Congress or to augment preexisting data processing.21 While comprehensive hard data on expenditures are difficult to come by, the federal government spent at minimum between $9 billion and $15 billion annually on IT in these years. Since those figures often did not include all the expenses of maintaining systems, or systems used in intelligence-gathering agencies and for secret military projects, the actual figure was, no doubt, higher, possibly by as much as 50 percent.22 All these expenditures, however, made possible the adoption of new applications. Innovations in technology and declining costs of IT and telecommunications helped too. Major initiatives were also launched to control IT and information and to leverage them better.
Applications of the digital hand that increasingly became part and parcel of an agency’s IT infrastructure included computer-assisted modeling in analytical forecasting and research activities. Decision support tasks also were augmented by use of computers. A third class of applications involved the management of information and paper-based records. All three had been the subject of some data processing in earlier years, but more so in the 1980s. Federal agencies acquired large numbers of microcomputers (PCs) beginning in the mid-1970s, reaching over 100,000 by mid-decade and well over 500,000 just five years later. Employees used these systems to do desktop analysis (using spreadsheet software) and to access larger files on agency mainframes. Spreadsheet software did perhaps more than any other category of IT tools in the early years of the microcomputer to cause the decentralization of computing from large data centers to desktops that began to occur in the federal government in the 1980s and 1990s. In addition to spreadsheet software, federal employees acquired or wrote modeling and decision analytic software tools. These made it possible for individuals to model options for many small or simple decisions.23 At the other extreme of technological developments came significant improvements in supercomputers, making it possible to build very complex models, which were quickly developed in such broad areas as aerodynamics, high-energy physics, and weather forecasting. These were deployed at NASA, at DoD, and at the national laboratories managed by the Department of Energy. Modeling at this level of sophistication had begun in the federal government in the 1950s for numerically intensive projects, such as weather forecasting; but during the 1960s this class of applications expanded, along with the capabilities of computers, to model issues relevant to DoD, to NASA with its space programs, and to the Department of Energy. As the cost of computing dropped and the capacity to handle more complex and higher volumes of data increased in the 1970s, new applications became part of the government’s tool kit in such areas as air pollution, solid waste management, water resources, urban development, and transportation. In the 1980s, 60 percent of 141 agencies surveyed used computer modeling, either with supercomputers or smaller systems, suggesting that computer-based modeling had become an important class of applications.24 Table 6.2 lists major uses as of the mid-1980s, demonstrating that the applications had also become quite diverse. To put this information in some usable context, the data in this table reflect a small sampling of the over 3,600 applications of modeling using computers. More pointedly, on the question of how computers were used in decision support and analysis on a daily basis, deployment of this class of applications spread across various agencies from the 1960s to the end of the century. In the 1960s and 1970s, however, such uses of computers began largely within military agencies, elsewhere for some business-oriented work, and, of course, for a large variety of research and development projects. Government employees had long used paper and pencil, slide rules, and calculators, so when computers and, most specifically, microcomputers became available, there was no lack of projects one could envision. What made the computer attractive as a tool was its ability
Table 6.2 U.S. Government Computer Modeling Applications, circa 1985

Economic research (analysis of farm produce, world food supplies, trade policy, forecasting supply and demand)
Natural resource management (timber resource allocation, fire management, road designs, oil and gas lease management)
Military uses (impact of defense spending on U.S. economy, strategic defense, force mobility modeling, war planning)
Health and human services (modeling of social security and welfare programs and options)
Emergency management (mobilization in event of nuclear war, earthquake damage, strategic stockpiling policy development, economic impact of disasters)

Source: Office of Technology Assessment, Federal Government Information Technology: Management, Security, and Congressional Oversight, OTA-CIT-297 (Washington, D.C.: U.S. Government Printing Office, February 1986): 110.
to perform large volumes of calculations accurately, quickly, and as often as needed. The growing availability of commercial software packages in the 1980s simply facilitated further reliance on the digital hand. The first survey conducted by the federal government on this issue (mid-1980s) suggested that over 80 percent of all agencies used spreadsheet software, such as Lotus 1-2-3 or VisiCalc. Nearly 50 percent reported using such quantitative decision analytic techniques as linear programming, queuing analysis, and critical path analysis. Roughly a fourth used forecasting techniques and software packages, such as those for regression analysis. A similar number relied on quantitative decision analytic techniques that included judgmental input, such as the use of decision trees and subjective probability methods. (Expert systems were added in the 1990s.) The same survey pointed out that about 10 percent of all agencies intended to adopt these various digital tools in the near future. The surveyors also concluded that their data undoubtedly understated the use of computers to do this kind of work.25 The point of going through this litany of deployment is to suggest that the use of analytical tools based on computing became part of how government agencies did their work by the end of the 1980s. An even larger set of applications of the digital involved the collection, use, and dissemination of information, paper reports, and forms across the federal government. The federal government was reputed to be the largest publisher in the world; certainly its printer, the Government Printing Office (GPO), was the biggest publisher throughout the second half of the twentieth century. Additionally, as discussed in chapter 2, the IRS alone produced massive quantities of documents and forms, costing the American public billions of dollars to read, fill out, and file. The government had collectively recognized the need to constrain the flow of paperwork while making information more readily available both to the public and to its own employees. All through this period, public officials saw the potential of computers to assist on all fronts. While various attempts were made to
rectify the situation in the 1950s, 1960s, and 1970s, and the issue became a major point of emphasis of the Clinton administration in the 1990s, it was in the 1980s that a combination of cost-effective computing and the accumulation of prerequisite debate, study, and skills led to tangible efforts addressing the broad issue of managing information and paperwork. By the 1970s, moreover, an increasing amount of the government’s information was beginning to appear in electronic form, not just on paper, raising questions about how employees and the public could gain practical access to it. As the amount of information and paper had increased in the 1960s and 1970s, officials became aware of the growing problems they presented. As use of computing spread rapidly (1960s and 1970s), officials and Congress sought to stem escalating costs associated with this class of work. Their efforts resulted, for example, in passage of the Brooks Act in 1966. But on the application side—information and paperwork—one of the most seminal steps taken was passage of the Paperwork Reduction Act in 1980. While not the first time Congress had legislated on the use of information—it had done so over the course of nearly a century and a half—legislators designed this law to reduce the burden on both the public and private sectors of processing information required by law and by the daily work of agencies. It expanded and invigorated various federal activities focusing on the management of government information. It aimed to reduce the burden of paperwork on businesses, individuals, and state and local governments.26 The details concerning the specific role of agencies, such as the GSA and OMB, while part of the law, need not detain us. What is important to recognize is that it stimulated a surge of actions across the federal government to conform to the law’s terms that have continued right into the new century. As part of the effort to reduce paperwork and make information more readily available, agencies turned increasingly to computers for help and paid more attention to the management of IT, as we saw with the examples of DoD, the IRS, and the smaller agencies of the FBI, SSA, and Bureau of the Census. One report on the actions of the Reagan administration summarized a whole new class of use for computing that took place in the 1980s: “the Executive Branch is tightening its procedures and policies, letting OMB direct the management of information resources, and expanded the collecting and reporting of information in electronic or magnetic tape form while curtailing publication in paper copy.”27 The Congress saw these actions as a critical collection of initiatives in support of reducing the deficit of the federal government and lowering the operating costs of agencies on a relatively permanent basis. In short, as with the justification of most computer applications during prior decades, the hunt for less expensive ways of doing the work of government played a prominent role in public administration during the 1980s. By mid-decade, some 40 percent of all federal agencies had established ways to deliver or make available information to employees and the public in some electronic form, albeit in most cases in modest fashion, far more limited than what became prevalent after the wide integration of the Internet into the daily work of nearly all government agencies in the late 1990s. The first major application in response to the law expanded use of what quickly became known as
e-mail. In one survey of 118 agencies, 47 reported using this application by mid-decade to circulate “press releases, bulletins, notices, and short reports, and the use of computer tapes for distribution of statistical databases and reports.”28 Departments making available information housed in their electronic databases included Agriculture, Commerce, Energy, Health and Human Services, Interior, Justice, Labor, and Transportation.29 By the late 1980s, policies, programs, audits, and studies of progress documented a substantial increase in the use of computing to digitize information and disseminate it.30 Many functions and all agencies relied on computing in one fashion or another by the mid-1980s. In addition to the kinds of uses described above, agencies had to a large extent developed software tools tailored to the needs of their organizations; in other words, they had written or purchased software specific to their work and not just acquired generic tools (such as spreadsheets and word processors) that could be used across agencies. Figure 6.1, put together in the mid-1980s by IBM, catalogs the variety of digital applications already available in the federal government. Some of the uses listed, such as training, are generic in that they appeared in multiple agencies, but in almost all cases they were highly tailored to the needs of specific organizations. Many were developed by federal employees in IT organizations, while a growing number also came from software vendors, or contractors who wrote software for hire. Budget data from the period provide additional evidence of the extent to which the digital hand played a role in governmental activities. As in earlier years, the data demonstrate that expenditures on IT continued rising both in dollar amounts and as a proportion of the federal budget from the early 1980s into the early 1990s, a period when the federal government focused on cutting expenses on domestic and “backroom” operations while investing in military systems to outspend the Soviets, a buildup that culminated in the end of the Cold War at the close of the 1980s. To put a fine point on the matter, in 1982, IT expenditures on nonclassified systems, software, and data processing personnel took up 1.23 percent of the federal budget, much in line with what companies spent in many private sector industries. As a proportion of the budget, this percentage increased to 1.70 by 1993. Total budgetary obligations for IT, measured in dollars, nearly doubled, from $9.1 billion in 1982 to over $16.1 billion in 1991, with projections of $17.2 billion just two years later. (The total budget referred to here includes discretionary spending, mandatory programs such as Social Security, and net payments on the federal debt.) In real dollar terms, between 1982 and 1991, expenditures grew at an annual rate of 6.5 percent, a bit less than in the private sector but nonetheless an impressive rate. If we look at IT as a percent of the operating budget (that is, the total budget of the government excluding, for example, mandatory spending and debt servicing), IT consumed 3.4 percent in 1982 and 5.4 percent a decade later (1992). IT budgets grew faster in this period than the overall budget of the federal government.31 Expenditures in the early to mid-1990s showed a nominal increase to roughly $26.5 billion by mid-decade, largely driven by civilian agencies since the largest user of IT, DoD, actually leveled off its expenditures in this period.32
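As an aside of ours (not a calculation in the original), the roughly 6.5 percent annual growth rate cited above can be recovered from the two budget endpoints with the standard compound-annual-growth-rate formula; note that the book states its figure in real dollars, and the deflators behind that adjustment are not reproduced here, so this sketch simply applies the formula to the nominal endpoints as quoted.

```python
# Compound annual growth rate (CAGR) from two endpoints; the dollar figures and years
# are those quoted in the text, while the variable names are illustrative.
start_value, end_value = 9.1, 16.1   # IT obligations, in billions of dollars
start_year, end_year = 1982, 1991

cagr = (end_value / start_value) ** (1 / (end_year - start_year)) - 1
print(f"{cagr:.1%}")  # ~6.5% per year, in line with the rate cited above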
Figure 6.1 Federal uses of computers, mid-1980s. (Courtesy IBM Archives)
By the mid-1990s, however, IT expenditures as a percent of the total federal budget amounted to about 1.8 percent, while as a share of the operating budget they had expanded to 6 percent.33 Concurrently, utilization of telecommunications across the federal government continued to expand. GSA had leased equipment to agencies and departments for years, but the costs and complexity of various networks had increased, so in 1988 GSA awarded AT&T and U.S. Sprint Communications contracts for supplying telecommunications to the government. The contracts were estimated to be worth some $25 billion over the next decade, making this expenditure one of the largest for communications in the American economy. This more integrated system was named the Federal Telecommunications System (FTS
2000). It provided a combination of voice, data services, and video at higher speeds and capacity than available before, which made possible a national e-mail network using personal computers and two-way video conferencing. This initiative set the federal government on a path toward a near state-of-the-art telecommunications network by the end of the decade, one that was in place by the time the Internet became a major focal point for public officials. It replaced a network that had been in place since 1963 and that provided the government with long-distance telephone service, with voice as the primary application. With the new one, data transmission was recognized as an equally voluminous and important application, and the system, of course, transmitted data faster than before. Finally, the new system provided digital transmission and began the process of retiring an all-analog network. Before discussing the role of the Internet—a major topic of interest to the Clinton administration in the 1990s—we need to acknowledge that the digital hand played other roles in the government. In short, the history of the 1990s is not just about the Internet in government. For one thing, at the moment when use of the Internet was growing rapidly across the American landscape, the federal government was spending over $27 billion a year on IT; only a tiny portion of that sum went for Internet activities. Earlier concerns about continuing to improve governmental operations also resulted in passage of the Information Technology Management Reform Act in 1996 and repeal of the Brooks Act as part of a multiyear, ongoing process of reforms and transformations. To put this legislation in broad context, these were only two of the over 300 legislative actions dealing with information and technology passed by the Congress between 1977 and 1990 alone.35 Applications of the digital implemented in prior decades continued to be used in the 1990s. The GSA reported that in 1995, the government had over 30,000 computers, not including microcomputers, microprocessors, or high-performance workstations. Every agency used computers of various sizes and capabilities. Stand-alone systems dedicated to specific agency needs or to engineering work, using Digital Equipment, Wang, and IBM products, for example, alone accounted for nearly 8,000 systems. Large mainframes of the 1970s and 1980s operated in every cabinet-level department and in many agencies, accounting for another 700 systems. These large systems were most widely used at DoD, NASA, the Department of the Treasury, and at very large agencies, or in those that needed extensive computing, such as the Department of Energy. The first three organizations just mentioned alone accounted for over 560 systems, indicative of where some of the largest and oldest uses of computing applications resided in the government.36 Agencies and departments continued to enhance existing systems. For example, the White House upgraded its e-mail software and added an online resume system, while various agencies enhanced theirs to combat fraud in their welfare programs.37 All during the 1990s and into the early years of the new century, data mining applications became extremely popular new uses of IT, with over half of all government organizations exploiting software tools in support of this expanding use of computing. Widely used data mining included efforts to
improve services or performance; to detect various forms of fraud, waste, abuse, and criminal activities; in support of scientific and other research activities; increasingly for the management of human resources; after 9/11 for analyzing intelligence data for detecting terrorist activities; and, in general, for understanding patterns of behavior, whether of criminals, land use, or taxpayers.38 The GSA continued to develop methods for agencies to acquire, install, and operate IT all through the 1990s, and its reports for these years reflect much of what one expected IT departments in private industry to work on all through the half century.39 All federal agencies and departments focused on becoming Y2K compliant in the late 1990s, a massive effort that historians will someday need to document because it consumed a large amount of resources of all kinds (technological, manpower, managerial, and budgetary).40 The OMB continued to pressure agencies to lower operating costs by leveraging use of IT—a major theme all through the half century—while the GAO critiqued the effectiveness of existing IT managerial practices. During the Clinton years, agencies were also encouraged, indeed ordered, to improve the quality of their services to citizens and to peers within the government, part of the Reinventing Government initiative of this administration. This latter effort had simultaneously been adopted by many governments in Europe, Asia, and in parts of Latin America, representing a global trend extending far beyond the deployment of e-government (Internet-based services).41 In January 2001, the Comptroller General of the United States, David M. Walker, who headed GAO, sent to the Congress a broad assessment of federal managerial challenges and opportunities. While many of the report’s concerns were swept away in the aftermath of the 9/11 attacks on the nation, his assessment nonetheless provided a snapshot of practices and thinking as of the late 1990s. Perhaps the most significant was the shift that had taken place in the 1990s toward implementing the concept of performance-based management, that is to say, the notion that departments should do their work in compliance with targets that measured the effects of their activities on citizens, not merely to conform to budgetary targets or internal priorities. This shift in thinking—which also began to appear in many European governments and soon in the European Union’s own managerial practices—held out the promise that new IT applications would be created in response to it. At a minimum, the shift called for new reports on performance, relying on data mining and on new accounting systems, many of which had yet to be developed, a process started during the Y2K initiative that led to installation of some new systems. Walker urged the Congress to leverage that work: “It is critical that the momentum generated by the government’s Year 2000 efforts not be lost and that the lessons learned be considered in addressing other pressing IT challenges.”42 Specific problems facing the government’s use of IT mirrored those of earlier decades, with the exception that he added “electronic government” to the list:
• Improving the collection, use, and dissemination of government information
• Pursuing opportunities for electronic government
• Constructing sound enterprise architectures
• Fostering mature systems acquisition, development, and operational practices
• Ensuring effective agency IT investment practices
• Developing IT human capital strategies43
During the Bush administration, and particularly in the aftermath of 9/11, existing uses of computing continued as before. The one major new element was an increased emphasis on using computers in support of intelligence gathering, as we saw in the case of the FBI. The creation of the large Department of Homeland Security by consolidating twenty-two agencies did not lead to the kinds of radical changes in digital applications recommended by the 9/11 Commission during the first George W. Bush administration. However, collaboration increased among airlines, the Transportation Security Administration (TSA), and various intelligence and law enforcement agencies. One major process started during the late 1980s continued all through the years of the Clinton administration, extending right into the new century. In 1990, the CFO Act passed, stipulating a variety of accounting practices for all of government, essentially calling for the kind of accounting and financial practices deployed in the private sector: establishment of a CFO function in each department, better accounting systems (many of the existing ones dated to the 1950s and 1960s), and the requirement that all departments pass financial audits, all done to improve financial management. During the last two decades of the century, agencies quietly went about either improving their internal accounting and financial IT applications or running into bluntly written GAO criticisms of failure, as experienced by DoD. The administration of the first President Bush recognized and embraced the importance of this law, resulting in initiatives all over the government. In a rare statement of positive results, GAO commented that “financial management systems and internal control [sic] have been strengthened” by late 2005.44 It reported that eighteen of twenty-four CFOs had recently passed their audits, a threefold increase from 1996, when only six passed. Passing agencies also generated data, called “accountability reports,” that extended beyond just financial issues.45 The GAO proposed, however, that many agencies and departments still needed to introduce a new generation of software applications in support of accounting (particularly for cost accounting), finance, and general accountability. Expenditures for IT in the early 2000s went more for maintenance of existing applications—by then a nearly $60 billion annual expense—than for new uses of computing. The large new supply of Internet-based uses of computing is left for discussion below, because the larger concern of public officials in the early 2000s lay more with the management of existing tools than with adding new ones, with the exception of the intelligence gathering required for the “war on terror.” Passage of the Information Technology Management Reform Act in 1996 created the role of chief information officer (CIO) for each agency, yet another effort to manage more effectively what had long been a massive expenditure on IT.46 A half dozen agencies dominated these
expenditures. DoD accounted for $28.5 billion in 2004 and increased its expenditures the following year. Upon its creation, the Department of Homeland Security instantly became one of the largest users of IT in the American government and, for that matter, in the nation, with expenditures in 2004 reaching nearly $4.8 billion, a figure that grew by over 15 percent the following year. Other agencies, in descending order of consumption in fiscal 2004, included the Department of Health and Human Services ($4.6 billion), the Department of the Treasury ($2.8 billion), the Department of Energy ($2.6 billion), and the Department of Transportation ($2.5 billion). Combined, these half dozen departments spent over two-thirds of the government’s entire IT budget; yet it was not uncommon for many other cabinet-level departments to spend annually between $30 and $400 million. The Social Security Administration alone spent $868 million in 2004, while the Department of State spent nearly as much ($857 million).47 The conclusion to draw is that, if anything, reliance on IT had actually increased right into the new century.
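As a rough consistency check of our own (not a calculation in the original), summing the fiscal 2004 figures quoted above shows what “over two-thirds” implies about the size of the total federal IT budget:

```python
# Fiscal 2004 IT expenditures (billions of dollars) as quoted in the text;
# the check itself is our illustrative addition.
spending = {
    "DoD": 28.5,
    "Homeland Security": 4.8,
    "Health and Human Services": 4.6,
    "Treasury": 2.8,
    "Energy": 2.6,
    "Transportation": 2.5,
}

six_department_total = sum(spending.values())
print(six_department_total)                      # 45.8 billion dollars
print(round(six_department_total / (2 / 3), 1))  # ~68.7: the largest total consistent with "over two-thirds"
```

A combined $45.8 billion being “over two-thirds” of the whole implies a total federal IT budget of no more than roughly $69 billion, which is consistent in order of magnitude with the annual figures cited earlier in the chapter.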
The Internet and Federal Government since 1993

The period of the Clinton administration (1993–2000) paralleled the explosive growth in the adoption of the Internet by the nation as a whole. Twin developments in the 1990s resulted in the deployment of new applications of IT by the federal government that mimicked, in volume and transformative effect, the use of computing in the 1960s: the use of IT again as a strategic tool to improve the efficiency and effectiveness of government operations, and the rapid and extensive deployment of the Internet by government agencies and departments. The Clinton administration came into office in 1993 with a near missionary zeal to transform government, making it more accountable for its performance, increasingly transparent in reporting its results, and responsive to citizens. Vice President Al Gore was put in charge of the new initiative at a time when Quality Management had already swept through the private sector as a new mantra of modern management, soon to be accompanied by the “process reengineering” initiatives of the early 1990s. An early area of focus was the acceleration of initiatives to reduce the amount of “paperwork” in government, including simplifying regulations and forms. Paperwork reduction acts were passed in 1995 and 1998, while the OMB and the vice president pressed for substantial reforms in how agencies did their work.48 To monitor progress, within months of taking office the new administration announced the National Performance Review (NPR) process, an initiative led by Vice President Gore to review the performance of agencies as the government shifted its culture “from complacency and entitlement” to one of “initiative and empowerment.” The NPR would be used to “reinvent government.”49 President Clinton directed his vice president “to redesign, to reinvent, to reinvigorate the entire National Government.”50 Set into that broad policy background was the administration’s interest in stimulating national economic development through the use of IT and deployment of technology to improve operations of government. Its first, and broadest,
IT policy initiative came with a series of statements, the first made within two months of the new administration taking office, that culminated in the introduction of the U.S. National Information Infrastructure (NII) initiative on September 15, 1993.51 It recognized the convergence of IT, telecommunications, and new ways of using computing, and it became the basis for much policy regarding information and process reengineering, inspiration for R&D and infrastructure investments, and guidance on what applications to deploy.52 Yet it also followed the federal government’s long-standing practice, dating back to World War II, of stimulating deployment of new technologies across the economy and within its own walls. Early efforts focused on reforming the Telecommunications Industry, including passage of the Telecommunications Act of 1996 and a stream of regulations.53 The NII envisioned the use of national networks, building on the tradition of the early development and deployment of the Internet in the 1970s and 1980s. Part of the administration’s initiative involved using its political, economic, and regulatory power to put the Internet into every classroom in America and to facilitate further use of this telecommunications network in the private sector, all at a time when the nation’s appetite for using this technology was growing. One student of the initiative, Brian Kahin, concluded that “as a whole (NII) has succeeded in focusing public attention on the transformative potential of information technology and networks and the need to develop a deeper understanding of their social, economic, and policy implications.”54 Writing in 1996, he also observed that NII “provided a useful framework for communications among federal agencies with diverse charters and perspectives.”55 The effect on the national economy and its citizens has been discussed elsewhere; less understood is what occurred within the federal government.56 At the heart of what happened in the 1990s was the emergence of a new way of looking at the work of government, called electronic government. Initially, the notion was a fuzzy acceptance of the possible use of telecommunications and computing, but as the decade proceeded, it became the symbol of an opportunity, a path, so to speak, toward a transformed government that could serve its citizens better and more cost-effectively. The idea quickly took the form of finding better ways to deliver services to the American public. As early as 1993, one government report put the case forward for using IT to accomplish the objective:
Information technology—computers, advanced telecommunications, optical disks, and the like—can be used by the Federal Government to deliver services to citizens. Most Americans, if they think about it, can identify at least a few Federal services that affect their lives. These include the:
• 46 million recipients of social security benefits,
• 27 million recipients of food stamps,
• 31 million Medicaid recipients,
• 14 million recipients of aid to families with dependent children,
• 15 million scientists who receive National Science Foundation research grants each year,
• 20,000 small businesses that receive business loans,
• 600,000 persons participating in job-training programs,
• people and organizations that annually place about 1.6 million orders for a total of 110 million publications from the U.S. Government Printing Office,
• citizens who annually receive a total of 10 million pamphlets from the Consumer Information Center,
• 30,000 or so academic and business researchers who receive research results and technical information each week from the National Technical Information Service, and
• 170 citizens who use Federal depository libraries each week.57
The same report, however, stated calmly that “interest in the electronic delivery of Federal Government services (and related State/local services) has mushroomed,” with “electronic service delivery closely linked to the ‘reinventing government’ and ‘service to the citizen’ movements that started at the State and local levels and have spread to the Federal Government.”58 The Clinton administration launched a plethora of initiatives to implement its national IT policies in the 1990s.59 By late 1995, within the federal government many IT-centric initiatives focused on the use of the Internet, for the same reasons as occurred at the time in the private sector and already in state and local government. Public officials saw the Internet as a manageable way of meeting many objectives of NPR: most immediately to lower the cost of paper (required by various paper reduction acts), while making it easier and quicker to deliver information to the public at a time when every month the number of Americans using the Internet for the first time increased, and to improve the efficiency of internal operations.60 Looking back from 2001 at these early efforts, two observers concluded that “the federal government’s efforts over the past decade to put government information on the web, prompted by the Paperwork Reduction Act of 1995 and other legislative mandates, have been energetic, with volumes of information now available to the public through government agency websites,” only complaining that conducting transactions over the Net still was in a primitive stage of development.61 The term “electronic government” became “official” public language when, in 1997, the Clinton administration published a report called Access America: Reengineering through Information Technology.62 At the same time, the government was using the phrase “Information Highway” to describe its vision of linking businesses, schools, people, and governments more closely by way of the Internet.63 But how was the federal government using the information highway? To a large extent, it mimicked patterns evident in many industries. Early in the decade, agencies and departments began “putting up” their first-generation Web sites, populating them with information on how to contact them, and that included statements about their roles and missions.64 All through the 1990s, they added information that citizens constantly asked of agencies, as we saw with the IRS’s publications in chapter 2. By mid-1997, over forty federal agencies had established some 4,300 World Wide Web (WWW) sites.65 Obviously, it was not
uncommon for an agency to have multiple sites and to be constantly replacing them with new ones. Many were specialized with information intended for narrow audiences. For example—and a very typical illustration of this pattern across the entire government—in 1997, the Department of Agriculture maintained sites devoted to specific regions of the United States, others were hosted by the Agricultural Research Service (over 100) devoted to specific lines of inquiry (poultry, livestock, various types of plants), as with the National Agricultural Library, Animal and Plant Health Inspection Service (over a dozen sites), Cooperative State Research Education and Extension Service, and the Forest Service (over three dozen) to mention just a few.66 But these early uses of the Internet by government agencies did not fundamentally alter their daily work. In fact, as late as 1998, the GAO was still trying to explain to agencies what to use the Internet for and the benefits to them.67 In fairness to government fin-de-siècle officials, however, making it easier for citizens to get information was actually a major accomplishment since an historically important role of government had always been to create and disseminate information. In less than a decade, the Internet had become the single most important channel of access to federal information for people coming to government for data and an increasingly important source of information for federal employees with which to perform their duties. While agencies began to put information out on thousands of Web sites in the second half of the 1990s, standards for implementation and maintenance improved. Turnover in Web sites continued, however, as agencies moved rapidly from one generation Web site to another in the late 1990s, either in response to changing technology (such as the arrival of security software to protect data security and citizen privacy) or as federal IT organizations learned how better to create, populate with data, and manage Web sites, learning essentially at the same time as the private sector.68 By the mid-1990s, forty-two federal agencies were tracking expenditures for Internet- and dial-up-based activities, providing us with specific evidence about the already extensive use of the WWW. Between 1994 and 1996, these agencies spent cumulatively about $349 million, with $59 million of this sum expended in fiscal year 1994, approximately $100 million in fiscal year 1995, and roughly $190 million in the following year, a testimonial to the fact that even in 1994 the government was already an extensive user of the Internet and, like so many industries, increased massively its investment in this form of telecommunications after the arrival of tools to make access and use of the World Wide Web convenient. Of the total $349 million spent, roughly $325 million went to Internetbased activities related to the 4,300 Web sites mentioned earlier, and for 215 dial-up accounts. The lion’s share of the Internet expenses went for establishing Web sites, providing access to these for employees, and for maintenance of these Web sites.69 One of the earliest applications was e-mail for employees, in fact, for 1.7 million of them by 1997, representing approximately 50 percent of all civilian and military employees. Some 31 percent gained direct access to the WWW. The GAO reported that by 1997, “the Internet has [sic] become a valuable and widely used means of communicating and sharing information.”70 E-mail with
colleagues, and increasingly with the public, spread in this period. Employees sought professional, scientific, and technical information on the Web, while making increasing amounts of their data available to the public as well. As the case studies in prior chapters demonstrated, uses of the Web varied widely and early across the government.71 The percent of employees within agencies and departments with access to e-mail and to the WWW varied enormously in mid-decade, so the 50 percent cited above for e-mail and 31 percent for access to the Web are a bit misleading. Table 6.3 provides a sampling of data for fifteen organizations to demonstrate the breadth of deployment (hence use) by late 1997. Included are the largest employers in government and, when available, data on organizations discussed in earlier chapters, such as the SSA and the Justice Department. It should be of no surprise that since adoption of computing had always been a highly decentralized activity, it would be so with deployment and use of the Internet. However, the one conclusion that leaps out from the data, and the cumulative percentages of 50 and 31, is that as a whole the federal government proved to be an aggressive early adopter of the Internet when compared to most industries in the American economy. Between 1997 and the early years of the new century, government agencies added more data to their Web sites, exchanged more information among themselves through both Internet and intranet sites, and began adding transactions,
Table 6.3 Percentage of Federal Employees with E-mail and WWW Access, 1997

Federal Organization                        % with E-mail   % with WWW Access
Department of Defense                            49.5             34.3
Department of the Treasury                       54.6              7.7
Department of Veterans Affairs                   26.0              4.0
Department of Transportation                     62.6             16.5
Department of the Interior                       83.4             36.3
Department of Agriculture                        50.8             23.7
Department of Health and Human Services          80.6             67.5
Social Security Administration                   40.0              7.7
Environmental Protection Agency                 100.0             40.8
General Services Administration                  80.7             71.0
Department of Justice                             7.9              8.0
Executive Office of the President               100.0            100.0
Federal Communications Commission               100.0            100.0
Federal Deposit Insurance Corporation           100.0            100.0
Federal Emergency Management Agency             100.0            100.0
Source: General Accounting Office, Internet and Electronic Dial-Up Bulletin Boards: Information Reported by Federal Organizations, GAO/GGD-97–86 (Washington, D.C.: U.S. Government Printing Office, June 1997): 35–36.
such as the ability for a citizen to order a publication or, as we saw with the IRS, to file tax returns and other forms and make payments. It was in this later period that public officials at local, state, and federal agencies began to speak about “e-government” and “e-business,” which loosely defined was the ability to conduct electronically official business with minimal or no reliance on paper-based documents. Being able to order online supplies at DoD, for example, represented a profoundly different way of managing and deploying the acquisition, record keeping, and consumption of budgets, goods, and services. By the end of the century, federal agencies had already started to serve businesses, citizens, employees, and other governments over the Internet. A survey conducted in 2001 documented over 1,300 uses of the Internet in one of these four ways or a combination of the four. While extant data is limited, it appears that a third of these new applications of IT served citizens, 20 percent other agencies, just about 23 percent employees, and 20 percent businesses. However, the majority (just over 50 percent) had dissemination of information as their largest application. As one student of the process put it at the time, “the move to a full e-government is in the early stages, estimating that only 4 percent of these were transformative in nature.”72 Yet 34 percent had implemented transaction-based applications on their sites. Ironically, online forms actually represented a smaller use of the Internet than transactions at this time, a phenomenon described in chapter 2 about the IRS. The same survey, however, noted that all agencies and departments were now users of the Internet and that some 40 percent focused on serving citizens. Of the 1,200 sites surveyed, some 800 provided information and forms, often with the enthusiastic support of citizens. At that time, well over 50 percent of all residents in the United States had access to the Web.73 Agencies and departments were also creating intranets for their internal use, not to be confused with Internet sites, which were designed for use by individuals from outside an agency, department, or government. Functional portals were developed to deliver single managerial functions, such as human resource and financial tools. Web applications for a single purpose continued to be very widely used, such as for processing a new hire or for making travel arrangements. “Fat portals” were just coming online, used to house complex, multifunctional, and enterprise-wide functions. These were just beginning to make it possible for employees to have personalized work environments. “Thin portals” also existed for the purpose of making available an agency’s information and links to other intranet sites.74 The surge in use of the Internet and intranets continued unabated until the attacks of 9/11 in 2001 when, for security reasons, departments and agencies began to question the wisdom of keeping certain types of information on their Web sites, such as organization charts and some names and addresses. These were concerns, however, that dated back to at least 2000 but now became more urgent. One of the key recommendations from the 9/11 Commission was to integrate various sources of data in support of what became the Department of Homeland Security. While much of its IT activities remained shrouded in secrecy as of this writing (2007), some evidence from the GAO, for example, suggested
that agencies were collaborating more than in the past, including departments such as Agriculture, Treasury, and Health and Human Services.75 Hackers had also been a source of problems since the late 1990s as well, while considerations about the privacy of information concerning individual citizens had accompanied development and use of the Web across the entire economy and were thus not limited to issues regarding federal uses of the Net.76 As of late 2002, these issues commingled with a large body of existing uses of the Internet now accessible by federal employees and the public. All sites provided basic information about an agency or department; over 95 percent posted documents; already half made available downloadable and interactive forms. Roughly a fourth had started to provide multimedia applications as well. Despite all the trade press discussions about e-commerce, just over 10 percent of government Web sites had applications of this type. By that time, the majority (85 percent) provided information on how to apply for employment and a nearly similar number showcased publications (81 percent). Half posted statistics and information about obtaining contracts. A recent development evident on twothirds of federal Web sites was the ability to contact an agency through e-mail directly from the agency’s site, while almost all of them now provided search functions or site maps.77 By then, the premier portal to get to federal Web sites was FirstGov.gov. It became operational in September 2000 to help citizens access federal information by service rather than by agency. One student of federal Web sites concluded that “FirstGov serves as an efficient, effective gateway into the full range of federal information and services,” with some 51 million pages of information in 2,000 government Web sites by late 2002.78 A new generation of FirstGov was implemented in 2007 and renamed USA.gov, six years after the first governmentwide portal had gone into use.
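Before turning to conclusions, the gap between the government-wide access figures cited earlier (roughly 50 percent of employees with e-mail and 31 percent with Web access) and the agency-by-agency numbers in Table 6.3 is worth making concrete. The short sketch below is illustrative only: the access percentages are taken from Table 6.3, but the employee headcounts are hypothetical round numbers, not actual 1997 federal employment data, chosen simply to show how one very large employer such as DoD can dominate an employment-weighted aggregate while individual agencies sit far above or below it.

```python
# Illustrative only: access percentages from Table 6.3; the headcounts are
# hypothetical round figures, not actual 1997 federal employment data.
agencies = {
    # name: (employees, % with e-mail)
    "Defense":                  (1_600_000, 49.5),
    "Treasury":                   (150_000, 54.6),
    "Veterans Affairs":           (240_000, 26.0),
    "Health & Human Services":     (60_000, 80.6),
    "Social Security":             (65_000, 40.0),
}

total_employees = sum(emp for emp, _ in agencies.values())

# Unweighted mean treats every agency the same, regardless of size.
unweighted = sum(pct for _, pct in agencies.values()) / len(agencies)

# Weighted mean counts each employee once, so the biggest agencies dominate.
weighted = sum(emp * pct for emp, pct in agencies.values()) / total_employees

print(f"Unweighted mean e-mail access: {unweighted:.1f}%")
print(f"Employment-weighted e-mail access: {weighted:.1f}%")
```

With these assumed headcounts the weighted figure lands within a couple of points of DoD's own percentage, even though agencies in the table range from under 10 percent to 100 percent, which is the sense in which a single government-wide number obscures the variation the table records.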
Conclusions By looking at the government as a whole, several patterns of use, deployment, and effects become evident. As earlier chapters illustrated with specific departments and agencies, the federal government demonstrated a continuous appetite for information technologies for over a half century. The motivations for relying on the digital hand came largely out of desires to lower operating costs and the amount of labor required to perform work. Agencies and departments, however, also proved quite reluctant to alter fundamental aspects of operating as a consequence of using IT, such as their missions, work processes, and measures of accountability for results. Over time, their increased use of IT ultimately did cause incremental changes in how work was done. As we saw with the SSA and IRS, these changes encouraged Congress to change missions and work, because of the availability of digital tools that made it possible either to do things more cheaply, faster, or better, or to do simply something new. Yet, the fundamental structures of government proved quite resistant to change in contrast to what happened in the private sector, despite a clear historical
record of extensively using computing and telecommunications. As recently as 2004, one highly regarded expert on the effects of IT on organizations, Don Tapscott, bluntly stated that “unfortunately, many private sector innovations have yet to be embraced by the public sector.”79 He noted, however, that there was growing recognition within governments in developed economies of the need for fundamental transformation, including the American government. The reasons for that recognition have less to do with the functional value of IT than with other influences, such as private sector organizations claiming roles historically resident in the public sector (such as private package delivery services), undermining of government credibility, legitimacy, and even relevance caused by a myriad of issues ranging from badly managed activities (such as the ineffective support of hurricane Katrina’s victims or ballooning budget deficits during the George W. Bush administration), to an emerging sense of citizen empowerment (such as the civilian groups patrolling the U.S.-Mexican border), and the growing necessity of cooperation among governments in sustaining security.80 Nonetheless, federal agencies found many practical ways to use information technology, particularly in support of the collection, analysis, and dissemination of data in all decades and during the period since the late 1990s, for conducting transactions with other agencies, suppliers, and interacting with citizens using the Internet. For those applications, government’s pattern of adoption and use was the same in the era of the Internet as in the late 1950s and early 1960s when it first installed large mainframe-based systems. The cadence of change in applications and base technologies proved more a function of the availability of budgets and congressionally mandated changes than a result of the attractiveness of some new technology over an earlier one. The one general exception to this pattern was the development of weapons systems across departments and agencies, which, while slowly implemented, nonetheless consistently leveraged innovations in technology, even supporting the R&D to create new tools and weapons (such as through DARPA and the NSF). In that instance, military strategy, tactics, and to a certain extent organizations transformed as a direct consequence of new technological capabilities. The record is quite clear that federal agencies became as aggressive in adopting the Internet as any industry in the private sector. Part of the reason, of course, was the convenience provided by the new technology. But a series of mandates from Congress and an enthusiastic supporter of the Internet in the form of the Clinton administration, which remained in power for eight years, did more to motivate agencies and departments to take advantage of the Internet as their primary vehicle for reducing paperwork than the convenience of some new technology. The fact that the Internet could be implemented in small incremental phases, unlike large national, normally centralized (or just in few large geographic regions) applications of the digital hand, also proved quite attractive to senior public administrators all over the government. The number of disastrous or faulty implementations of Web sites and new e-based services proved fewer than for some of the major systems of earlier times. However, as we saw with the IRS, reduction of complexity in government was not a significant by-product of
the Internet’s deployment. The government still faced problems with large systems at the dawn of the new century, such as the FBI’s virtual case project and the FAA’s perennial attempt to manage better flight traffic in the United States.81 In contrast, there were fewer negative headlines about failed Internet projects. Selection of uses, deployment, and later changes to reflect new technologies, missions, and budget realities proved to be highly decentralized activities. To be sure, attempts were made to have some central authority responsible for setting standards, publishing guidelines, and so forth throughout the period by Congress, the White House, and specific agencies. In this regard, the GAO, OMB, and the National Bureau of Standards (NBS) played various roles over the decades. It is difficult to conclude that their efforts proved highly effective, but they clearly played support roles because, while agencies and departments remained fundamentally in charge of selecting how to use IT within their organizations, they increasingly bent to government-wide standards and practices regarding the way IT was acquired, used, and accounted for. Decentralization contributed mightily to the concurrent and extensive deployment of IT all over the government, even creating best practices that one agency could learn about from another, as occurred in the early years of the mainframe when the Bureau of the Census served as an advisor to other agencies and decades later organizations with awardwinning Internet sites became advisors to other parts of the government. For over a decade now, there has existed an intense discussion about e-government. Overwhelmingly, participants in the debate have acknowledged that their visions of government agencies relying extensively on the Internet and other forms of IT were collectively states of being yet to be implemented. Furthermore, a reading of annual reports of various agencies and departments would almost lead one to believe that this future state had arrived.82 To be sure, a great deal had been done, particularly through use of the Internet. A group of experts on the role of IT in the federal government declared as much: “Within a brief period of time—largely, the final decade of the twentieth century—the application of new IT to the performance of federal responsibilities, with a view to improving the efficiency and economy of government operations, produced e-government.”83 To be sure, any examination of all the laws and executive mandates from the White House that appeared in the 1990s and early years of the new decade clearly demonstrated that senior officials were perhaps as interested in creating new structures and roles made possible by IT as evident in any earlier decade.84 During the first term of the Bush administration, that is to say, after 2001, signs emerged suggesting that new ways of justifying the acquisition of IT were being implemented, building on initiatives of earlier administrations that, if successful, might provide government agencies with the necessary incentives to transform to the degree already evident in the private sector. Specifically, the Executive Branch encouraged agencies to present formal business cases for adoption of new applications. Thus, for instance, the Department of Homeland Security’s request for funding to consolidate multiple databases and applications into more cohesive ones in support of national security led the White House to
endorse DHS’s request for $884 million for this project. Implementation would result in organizational changes as well. Other business cases began to appear that talked about reorganizations, outcomes, and cost avoidance. The overall IT budget for the federal government was projected to increase by some 7 percent in 2006, distributed in a familiar way: 46 percent to DoD, 19 percent to DHS, and the rest to all other departments.85 Whether the government was entering a period of important transformation such as experienced by the private sector in the 1990s and early 2000s remained to be seen. But ultimately, we have to answer the question, how much of an effect has the Internet had so far on operations of government? The rhetoric about its successes and failures is loud and voluminous, and attention to non-Internet-based uses of IT quiet and in the background. One of the original objectives for embracing the Internet was to make available a large body of information to the public and, secondly, but later, to reduce the amount of labor required to conduct business with the public by having citizens seek out their own information and file applications and other forms with minimal involvement of government employees. Regarding the first objective, success has been well achieved, with many government Web sites and portals some of the most extensively visited around the world, not just in the United States. Visits to federal sites early and continuously remained high. More than just designing good Internet sites, these were implemented in a nation comfortable in its use of all manner of information technology, most recently the PC, and at a time when the percent of the population using the Internet was high and growing faster than in any other nation in the world. The fact that information, literature, and forms were normally free probably also helped to encourage traffic to Web sites. If there is a surprise in this finding, it is the speed with which citizens began using federal Web sites. As they had done with retailers, citizens pushed for access to Internet sites by voting with their dollars and their log-ons. With regard to the second objective of offloading to the Internet work with citizens that otherwise would have been done by federal employees, here, too, we see substantial growth in use of the Internet, although the evidence of change is less definitive because this is a phenomenon more of the period after 2000 than before. Darrell M. West, who has studied extensively the federal government’s use of the Internet, also observed that the Internet provided only “limited” transformation, reaching the same conclusions I have about the incremental adoption of IT across all of federal, state, and local governments across the past half century, not just regarding use of the Internet. West is absolutely correct in arguing that changes came slowly because that is the way of political thinking, and due to the fact that most uses of IT tend to reinforce existing circumstances, at least until the technology has been in place for some time.86 His—and for that matter, my—observations, however, are as applicable to pre-Internet use of technology as to developments that came after use of the Web. That is why, for example, we can argue that all the signs point to another technological success slowly in the making. So what we have is the citizenry at least transformed, now in the habit of increasingly using federal Web sites. A similar conclusion can be
reached about the internal use of intranets by federal employees. Extant evidence leads us to the same conclusion that usage is rising but varies from agency to agency, but not much beyond that fact. American soldiers in Iraq were comfortable sending e-mails home. The SSA only saw traffic in the form of transactions rise after the boomer generation began visiting its Web site, although an increasing number of senior citizens are now comfortable using the Internet. With regard to structural and cultural changes in the government, the Internet remains an alternative channel of communication and way of doing work, but not to the extent that we can conclude that the federal government works in a digital style, as we concluded was now the case in so many industries. The experience of the first “computer revolution” in government took place with the first adoption of computing in the late 1950s through the early 1970s. It was a slow and painful process that did not cause fundamental changes in the structure of organizations, institutional cultures, and only partially their missions and roles. So, we are left with the question, since so much that is now occurring with the Internet in government parallels the prior experience, how much “reengineering of government” can we expect? The question calls for a prediction, and that lies outside the scope of this book. However, evidence of what is happening to governments in other countries that have used the Internet more extensively to deliver services to its citizens suggests that there will be changes directly attributable to use of the Internet, but as much likely to be more of a consequence of a whole generation of employees retiring by 2015 as of the technology.87 As exciting as are the prospects of transformed federal agencies and departments, the experience of local, county, and state agencies proved more dramatic and far reaching, particularly during the last decade of the twentieth century. For that reason, we now shift attention away from the federal government and to local public administration over the next several chapters. The story shares with the federal experience such common traits as the hunt for increased productivity through use of IT and decentralized decision making about the acquisition of computing and telecommunications. The story is one of smaller orders of magnitude, a circumstance that itself had interesting consequences.
7
Digital Applications in State, County, and Local Governments

We're using information technology to support and enhance the core functions of Michigan government and to position our state as a global economic powerhouse in the 21st century.
—Governor Jennifer M. Granholm, 2004
Governments at the state, county, and municipal level consist of tens of thousands of organizations. There are fifty states, and many have nearly 100 counties each, and almost every state has thousands of towns and cities, from little hamlets with hundreds of residents to large cities with populations of over 10 million people. But remarkably, they all share a broad collection of roles, ranging from public safety and law enforcement to providing education, water, and sewer services. Each one is also responsible for economic development, protecting the environment, managing government within the democratic framework spelled out in the U.S. Constitution, and preserving "quality of life" at levels expected by citizens. Collectively, these three sets of governments comprise a major segment within the nation's economy and society. Perhaps it should be of no surprise that as a collection of governments, they sought to use the same digital tools all other industries did across the American scene. This chapter tells the story of what they did with the digital hand, how, and why.
This chapter addresses the same questions put to other agencies earlier in this book. There are issues of timing to address, such as why large cities deployed IT sooner than small towns, and why the cadence of adoption varied from one type of government to another due to the cost performance and technological
evolution of IT hardware and software. It is a big, complicated, and diverse story, but one that reflects many of the patterns of behavior evident in some other public agencies and in so many private industries. The links become obvious and specific as the story unfolds.
State Governments and the Digital Hand The period immediately following World War II began with forty-eight states and by the end of the 1950s, had grown to fifty, with the addition of Hawaii and Alaska. In addition, there was the Commonwealth of Puerto Rico, which functioned much like a state as well since many of its government’s activities were the same as those carried out by states. States operated relatively independently of each other, making their own decisions about what uses of computing and telecommunications to implement. The one fundamental exception to this pattern of behavior occurred whenever the federal government injected itself into local activities either through funding of specific projects, such as the use of computing in law enforcement, or when it mandated, supported, or funded specific initiatives. Two important examples of the latter included the myriad welfare programs begun in the 1960s, briefly discussed earlier in regard to the role of the Social Security Administration, and the military services, because of the role of the National Guard. States also looked to each other for examples on how to use IT, and their technical staffs often talked, just as whole functions shared information and communicated, most notably officials responsible for state prison systems, sheriffs, state police, and taxing authorities. States varied in the size of their geographic footprint and in the number of citizens they had to serve. Some were highly urbanized, others overwhelmingly rural, and most comprised a mixture of the two situations. So, generalizing about what states did with computing and telecommunications remains a tenuous proposition; nonetheless, it is possible to navigate their broad variety and circumstances, and a necessary exercise because states mimicked each other and learned lessons from peer governments. The first issue to discuss concerns what in general states used computers for in the performance of their work. Various students of the issue have developed typologies and lists. Two reflect conveniently the clusters of applications. To understand them, we should recognize that over time states implemented a wide variety of uses of technology in support of preexisting and, later, changing operational responsibilities, just as had agencies of the federal government. In short, the variety of applications mimicked the rich diversity of those evident in the federal government and in cities. Figure 7.1, put together in the mid-1980s to educate IBM sales personnel calling on state and local governments, suggests the breadth of functional areas in which deployment of computing had already begun. Ignore the dots around the circle indicating for which functions IBM had software products, because states obtained their software from many vendors, not just IBM, and also developed many of their own software tools, just as did
Figure 7.1 State and local uses of computers, mid-1980s. (Courtesy IBM Archives)
the federal government. Within each major category of activity—such as welfare—one can see that state and local governments had already started to equip nearly a dozen clusters of activities with computers and telecommunications. Many of the tasks performed by cities were similar to those of states. For example, each maintained roads, courts, and law enforcement, did accounting, budgeting, and finance, and had welfare, public safety, and education roles. An IT industry research firm, Gartner, created an equally useful list of applications late in the century that, while less detailed than the first one, was
Table 7.1 IT Market Segments within State Governments, circa Late 1990s

Administration and finance
Transportation
Public safety
Human services
Health
Criminal justice
Natural resources/environment
Public works
Others
Source: Gartner, Inc., Trends in U.S. State and Local Governments: Market Trends (Stamford, Conn.: Gartner, Inc., March 19, 2002): 4.
quite similar. Table 7.1 is a simple list of the clusters of applications that Gartner tracked by the end of the century, which by then were market segments for software and hardware firms competing to sell specific products into those specific areas of state government.1 To a large extent, the story of computing in state governments is a tale of adoption of computing into these various parts of state government over time, coupled with a subsequent and often concurrent move toward state-wide integrated systems and centralized management of IT. As occurred in the federal government, state agencies normally acquired systems independently of each other. As costs for independent IT operations rose, governors and legislatures sought to harvest the benefits of economies of scale by creating state-wide data centers used by multiple agencies and by distributing, or centralizing, computing over time as the ebb and flow of technological innovations suggested new opportunities for cost containment or improved services. However, there were differences between federal and state applications, particularly in the era before wide use of the Internet. Most notably, states, like county and municipal governments, interacted directly with citizens more frequently than federal agencies and departments, literally face to face, much as occurred in the banking and retail industries. The federal government interacted with citizens more from afar, indirectly or by mail, with less mano-a-mano contact, reflecting patterns evident in the brokerage and insurance industries. Of course, there were many exceptions to this generalization; however, true enough that many of the uses of computing at the state and local level were sufficiently different than in federal applications. One brief example illustrates the point. Millions of citizens interacted with state employees each year to obtain an automobile driver or fishing license. On the other hand, it was always possible for an adult resident in the United States to live for many years without having to deal with a federal official face to face. It was impossible, however, to avoid that kind of contact at the state level, let alone at the municipal level, although the Internet was beginning to alter that circumstance in the new century. When the Internet became a viable tool for governments to use, states had decades’ of experience deploying computer-based aids in support of their dealings with the public and thus, more than the federal government, relied on this technology often in more interactive ways to deal with its citizens. In short, we can consider how states used digital tools as a variant in the digital style of public administration.
Early Uses of Digital Tools, 1950s–Mid-1990s The motivations for initially and continuously using computing were, however, no different from what occurred at the federal level. As the nation’s population grew along with the role of governments, the desire to avoid adding employees to state payrolls and to lower operating costs in general constantly served as managerial priorities throughout the entire period. The same applied to local governments as well. So, improving the productivity of employees and controlling expenditures remained critical objectives of all legislatures and governors. Larger governments embraced computing sooner than smaller ones for the same reasons as in industry: they could afford the initial high costs of implementation and had more productivity to gain from economies of scale.2 The one major difference from the private sector was that state (and also municipal) governments deployed computing later than either the federal government or many commercial industries. In other words, it was more of a story that began in the 1960s and 1970s than with the federal agencies and companies, which began using computers at the start of the 1950s. Most state governments bypassed the first generation of computers (1940s–1950s) and began using the technology when it was in its second or third generation, when the equipment and software had reached levels of price performance states could more readily afford and when functions began to match the needs of various agencies. One proof point illustrates this trend. When IBM introduced its PC in 1981, it attracted a great deal of attention at both the state and local levels, and by the mid-1980s, these machines had been widely deployed across hundreds of state and local agencies. Yet, it took well over a decade for second-generation mainframes to become widespread in state government two decades earlier.3 State governments had used precomputer information processing tools for decades. It was not uncommon for state governments to have adopted IBM’s tabulating equipment in the 1920s and 1930s to do accounting, payroll, and other data collection and analysis. They had also become extensive users of smaller “office appliances,” such as desktop adding machines and calculators from scores of vendors as early as the two decades prior to World War I.4 Large governments, however, were the first to install computers in the 1950s and 1960s. Some of the earliest uses of computing in the 1950s included California’s installing an IBM 702 in its Department of Employment to perform employment insurance accounting. The Illinois Division of Highways acquired a Bendix computer in September 1956 to conduct calculations for highway design, as did the same departments in the states of Georgia and Ohio a few years later. Massachusetts installed a Burroughs system in 1960 to do statistical tax calculations. Work unique to a state also went to computers, such as the management of docks at the port of Mobile by Alabama.5 The case of California’s employment application of computing illustrated the changes envisioned by public officials at that time. The Department of Employment did everything from helping people find jobs to paying them unemployment insurance, and to tracking trends and collecting unemployment taxes from employers. During the 1950s, the state legislature introduced new terms and
conditions for people to qualify for unemployment insurance that required state employees to perform new calculations of the type “if this situation exists, then benefits are xyz,” which increased the complexity of determining benefits on a person-by-person basis. Officials had been using IBM tabulating equipment since 1937 and recognized earlier than many other peers across the nation the potential value of computers to handle ever growing volumes of employees, employers, beneficiaries, and calculations. They committed to using computers in 1955, most specifically an IBM 702 in early 1956. By the end of 1957, nearly half of the Department of Employment’s old tabulating equipment was gone, and the number of shifts of employees doing back office work had essentially shrunk to one per day. File management evolved from massive quantities of card files to magnetic tape records, much along the lines occurring at both the U.S. Social Security Administration and at the U.S. Bureau of the Census. In the case of California, the number of employees needed to do accounting work shrank as management offloaded work to the computer, while stabilizing work streams into predictable collections of activities planned for, and staffed, in an organized manner. Officials realized “substantial savings” in operating costs for the department. They reported that when they had regular, repetitive work running on the IBM 702, these operations outperformed prior systems and practices, and they were quicker. Work, however, now had to be more coordinated than in the past to leverage the speed and functionality of the system. The 702 proved far more accurate than earlier labor-based or even small machine-based approaches. In addition, with more data available now in machine-readable form, officials could perform additional analysis of the data more easily and quickly.6 As a result of this early success, other state agencies in California began to adopt computing, such as for vehicle registration, using an IBM 650 system beginning in 1958, and for doing engineering calculations in road design, one of the most popular early state applications across the nation.7 Illinois, another large state, became an early user of computers at the same time, also extending its use of an IBM 650 to various applications in the Department of Revenue. In fact, Illinois, like California, Michigan, New York, and later Washington, became an extensive user of computing, often deploying new applications of data processing before other states.8 Yet, computers were not limited to the upper Midwest or California. Florida became the first user of an IBM 1401 system in 1961 when it installed two of these systems to do accounting, finance, highway design, tax audits, and payroll processing. Earlier, each of these applications had been done using precomputer systems, including the most advanced electronic calculators of the day, such as an IBM 604 to produce checks.9 But these represented mostly isolated cases of implementation in the mid1950s. By around 1961, however, almost every state had at least one computer and often used them earliest for highway engineering, because in these years the technology was most suited to perform calculations far more rapidly and accurately than preexisting tools. Revenue administration proved slow to computerize, perhaps because even as late as 1958 only seventeen of the thirty-three states had income taxes or even used punched-card equipment for this purpose. In
addition, less than a handful had even started to use computers to assist in welfare management. One study from 1961 characterized the deployment of computers at the state level as “slow,” because they were operating in the mode of “experimentation, trial and error and adoption on a piece-meal basis wherever departmental receptivity is greatest.”10 The nature of the technology also influenced the types of applications and rate of adoption in these very early years. One contemporary observer described the situation: “At the time that many states began to consider installing a computer, there was a considerable gap between the most popular small or medium-sized computer and the very large and expensive computers. Therefore, states found themselves facing a situation in which the departmental applications they were considering were a little too large for the smaller ones but much too small for one of the large ones.”11 Gaps were filled—by the early 1960s with new computers from a handful of vendors, stimulating new demand for computing. This new wave of adoption grew largely out of the shift from large vacuum-tube-based hardware to smaller transistorized systems, all of which also lowered costs of preparing transitions by the states from earlier modes of operation, even saving on expenses for air-conditioning, climate-controlled data centers, but not for rewriting of application software. Use of computers now began to increase, which is why the story of computing at the state level realistically became important in the 1960s, rather than in the previous decade. The first known survey of computing in state governments pegged the number of systems in state agencies at 101 as of 1960.12 A second survey, conducted in 1963, reported that over forty states used 243 systems. By the end of 1964, some 275 were installed. To place the latter statistic in some meaningful context, at the time there were about 22,000 computers installed across all industries in the United States.13 One can conclude that the take-off at the state level had finally started, albeit slowly when compared, for example, to most industries in the private sector or to the federal government. Based on data from the same period, table 7.2 lists some of the key applications and the number out of forty-three states that responded to a survey using computers in a specific functional area of public administration. One can immediately see that the priorities were only partially similar to those of the federal government, while many uses were unique to state governments, once again reflecting that systems were deployed in support of existing work flows and long-standing institutional missions. These applications either lent themselves to quick automation, such as payroll, or were funded by the federal government, for instance, employment security and highway design. (Recall that federal support for law enforcement applications did not start until the second half of the 1960s.) All of the applications cited in table 7.2 focused largely on routine record-keeping activities and thus had little effect on the organization of government, and only modestly on how public employees did their work. 
Yet, as occurred in the federal government, state officials would visit peers in other states who had already implemented a new use to find out how to do the same with the result that change did not vary as much from state to state as in the private sector, where companies within industries varied in objectives, structures, and workflows.14 In short, seeds of future uses were being planted.
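To make concrete the kind of work being shifted onto these early machines, consider the person-by-person benefit determination that California's Department of Employment moved onto its IBM 702: in modern terms, conditional rule evaluation applied at high volume. The sketch below is purely illustrative; the eligibility conditions and dollar figures are invented for the example and do not reproduce any actual California benefit schedule of the 1950s.

```python
from dataclasses import dataclass

@dataclass
class Claimant:
    base_period_wages: float   # wages earned in the qualifying period
    weeks_worked: int
    still_employed: bool

def weekly_benefit(c: Claimant) -> float:
    """Hypothetical rules of the 'if this situation exists, then benefits are xyz' form."""
    if c.still_employed or c.weeks_worked < 20:
        return 0.0                      # not eligible
    if c.base_period_wages < 1_000:
        return 10.0                     # minimum weekly benefit
    # otherwise, a fraction of average weekly wages, capped at a maximum
    average_weekly = c.base_period_wages / c.weeks_worked
    return min(0.5 * average_weekly, 40.0)

claims = [
    Claimant(base_period_wages=2_600, weeks_worked=40, still_employed=False),
    Claimant(base_period_wages=800,   weeks_worked=26, still_employed=False),
    Claimant(base_period_wages=3_000, weeks_worked=10, still_employed=False),
]
for c in claims:
    print(weekly_benefit(c))
```

Each new condition the legislature added meant another branch of this kind, which is why clerical determination grew more laborious in the 1950s and why the work was such a natural early candidate for the computer.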
Table 7.2 IT Applications in State Governments, circa 1964

Application Area          Applications                                    Number Using
Public works              Highway computation & accounting                      38
Revenue                   Corporate, income, & sales taxes                      26
Finance                   Expenditure & encumbrance accounting, payroll         26
Employment                Benefits, employer contributions                      20
Motor vehicle             Registration, licensing                               18
Welfare                   Grant computation, check writing                      16
Employee retirement       Contributions, pensions                               15
Health & mental hygiene   Vital statistics, patient billing                     13
Insurance                 Workmen's compensation                                13
Education                 Scholarships, state aid                               12
Civil service             Exams, eligible lists                                 12
Purchasing                Inventory, purchase-order writing                     11
Law enforcement           Arrest record keeping                                 10
Conservation              Hunting, fishing, & motorboat licenses                10
Agriculture               Milk & disease controls                                7
Equalization              Equalization computation, per capita aid               5
Liquor control            Inventory licensing                                    5
Source: Data in Dennis G. Price and Dennis E. Mulvihill, “The Present and Future Use of Computers in State Government,” Public Administration Review 25, no. 2 (June 1965): 144–145.
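Because Table 7.2 reports raw counts out of the forty-three states that answered the survey, converting the counts to shares of respondents makes the unevenness of early adoption easier to see; a minimal sketch, using a few of the table's rows:

```python
# Counts from Table 7.2: states (of 43 responding) using computers in each area.
usage = {
    "Public works": 38, "Revenue": 26, "Finance": 26, "Employment": 20,
    "Motor vehicle": 18, "Welfare": 16, "Law enforcement": 10, "Liquor control": 5,
}
RESPONDENTS = 43
for area, n in sorted(usage.items(), key=lambda kv: -kv[1]):
    print(f"{area:15s} {n:3d}  ({100 * n / RESPONDENTS:.0f}% of responding states)")
```

Read this way, highway-related work stood out, with nearly nine in ten responding states using computers for public works, while areas such as liquor control barely registered.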
One final useful statistic is worth pondering, the percent of state budgets devoted to computing. As of mid-decade, it was approximately 0.4 percent, further evidence of the modest nature of the use of computing at the state level even as many American industries were spending at nearly twice that rate.15 As in the private sector, these early systems were normally housed in accounting, budgeting, or financial centers within state government, which were willing to share excess capacity with other departments, a practice not as evident in the federal government, yet clearly a rapidly growing practice in companies. Use of computers in the second half of the decade (and continuing into the next), mimicked the pattern initiated in the 1960s regarding what to automate, where to host computing, and rate of deployment. In this period, the federal government increased its demand for information from local and state agencies, which provided an additional incentive to mechanize and automate data collection, analysis, and reporting.16 By the late 1960s, the pattern of who used computers became clearer. One group comprised operational departments that relied on computing to support work concerning motor vehicles, agriculture, water resources, public health, social welfare, and justice. All these agencies came into constant
contact with citizens. A second group focused on using computers in support of administrative tasks, such as those of the state comptroller, general administrative services, finance, and tax boards. A third class of agencies involved policy making and were the latest wave of adopters of computing in the late 1960s. Their applications included support of elected officials (governors and legislators) and policy boards. The latter required rapid summarization of large bodies of information officials could use in making decisions and in formulating policies and programs. Operational support for management normally required help in planning, allocation of resources (people, assets, and budgets), monitoring, and correction of errors in data. At the level of what computers actually did, specific practices had emerged. In operational applications, officials often used computers to do off-line repetitive types of jobs that could be scheduled in advance, such as accounting, updating payroll records and printing checks, tax bills, and periodic statistic records. The technology of the day was best suited for these kinds of “batch” applications. A second increasingly used class of applications involved performing engineering calculations, again because computers could do a great deal of this work quickly, in large volumes, and at scheduled times.17 Applications that officials found attractive in the 1960s spread all through the 1970s and 1980s, first into large states, then increasingly to smaller ones, particularly as the technology dropped in price, came in more granular sizes, and with increasing amounts of software products and tools.18 By the late 1970s, older systems, first installed in the 1960s, had either been expanded or replaced with newer versions, including use of online query capabilities after data processing staffs began moving files to disk drives and off magnetic tape and cards.19 As occurred in private industry and across the federal government, budgets allocated to data processing increased slowly in the 1970s, then sped up in the 1980s as the number of PCs in state government spread. Most states established large central data centers in the 1960s and 1970s to optimize the economies of scale that technologies made possible and even centralized the acquisition and deployment of myriad telecommunications, not just voice communications. More effectively than the federal government, legislatures began passing laws to improve the professional management of IT assets all through the 1970s and 1980s, an aspiration the Executive Branch of the federal government ambivalently supported for its own departments. Specifically at the state level, governors and small groups of legislators drove forward the innovations in management, demonstrating a quickness in leadership not as consistently evident in the federal government at the time.20 In short, computing proved more relevant to state public administrations than to many senior federal officials who, one could surmise, had more, or different, issues to contend with, ranging from foreign policy and the Cold War to enormously large organizations, some with hundreds of thousands of employees, encouraging them to delegate reforms to subcabinet officials and directors of agencies. In the 1980s, new applications evident across all industries and the federal government became relatively common in state governments. Already mentioned were the desk top uses of PCs, spreadsheets, and word processors. 
These little systems, along with other applications housed in mainframes, led to the wide
deployment of various decision support systems that had become so popular by the early 1980s.21 These were nested into what had evolved into a complex technological ecosystem. A professor observing this development, Sharon L. Caudle, described this world of the 1980s at the end of the decade:

    Information technologies in the past decade increasingly served as powerful tools for government in providing services, regulating, and formulating and evaluating programs and policies. Information was the fuel in government's business. Telecommunications and office systems technologies joined computing as central technologies. Information management and information technology organizations grew in visibility and power. Not surprisingly, elected and career government officials found themselves more and more concerned with the significant dollar investments in information technologies and their applications, compounded by accelerating waves of new computer and communications technologies.22
She reported that nineteen out of fifty states reorganized their IT organizations in the 1980s in order to improve operations, provide new services, and control costs, with a preponderance of states situating their technical staffs largely in administrative and financial departments.23 An immediate consequence, first felt in the late 1980s and extending right through the 1990s, was the emergence in these newly centralized organizations of an appetite to build statewide IT and telecommunications infrastructures. These in turn would have profound effects on the structure and work of state governments by the end of the century. Caudle was one of the first observers of state IT operations to identify the characteristics of this trend. In 1990, she noted that “the infrastructures link individual workers with a multitude of databases, ranging from stand-alone computing to powerful central data centers to contracted information services. Voice, data, and video telecommunications are rapidly developing as central informational technology tools for mission support. The technical means to instantaneously move information to and from local and state headquarters, across programs, and in real time will soon be a practical reality.”24 While many would criticize how slow states were in transforming their IT and telecommunications, states did not run into the same magnitude of budgetary and organizational paralysis that plagued so many federal agencies in the 1970s and 1980s. To be sure, governors and their legislatures struggled with budgetary constraints; but, perhaps because they ran less complex, smaller bureaucracies, they were able to do a better job in exploiting technological innovations and to make data processing a more integral part of their operations by the end of the 1980s than could evidently officials in the federal government.25 State legislatures played a particularly active role in the 1970s and 1980s in the use and deployment of IT in excess to that of the U.S. Congress. One can posit that there were perhaps two reasons for this activity, largely drawn from the personal familiarity of many legislators with IT not necessarily evident in the profiles of members of Congress. For one thing, many state legislators were not full-time lawmakers but rather had other careers in law and business, which exposed them
to many uses of computers and telecommunications, a feature in their background noted by a number of observers of legislators’ role with IT.26 All congressmen in both houses were full-time legislators, and since they turned over infrequently (less than 10 percent lost their offices in any given election), many were career congressmen. Thus, they had less exposure to the evolving forms of computing than their state counterparts. In short, they faced the kinds of technological insulation experienced by judges. So, unless they served on some congressional committee charged with examining the role of IT or telecommunications, they might not be personally as aware of the possibilities of technology or the managerial considerations involved as would state legislators and city and town council members. Recall the chronic problem the Social Security Administration and the IRS faced in the 1970s and 1980s as Congress passed laws without giving sufficient consideration to the IT implications affecting the agencies. Unless one wants to attribute to congressmen a mean-spirited lack of consideration of such factors, one is left with no alternative but to conclude it was the paucity of adequate personal knowledge about technology that led federal legislators to act as they often did in these years. I also have been singularly unimpressed by the ability of agencies to articulate their IT needs to the staffs of congressmen. A second source of knowledge for state lawmakers came directly from a variety of digital applications used in the legislative and voting processes of their states. While use of computing by legislators and their staffs dated as far back as the mid-1960s,27 it did not become widespread until the late 1960s. Early uses included digitizing state statutes for ready look-up by staff, managing various drafts of legislation, word processing, and quick publication (usually photocomposition). By the early 1970s, half the states used their main computer data centers to support online retrieval of statutes,28 and about fifteen updated drafts of legislation using the digital hand, making the process faster and less tedious and reducing errors. As draft legislation went digital, so too, soon after, did tracking of potential laws as they made their way from conception to final passage. By 1972, over thirty states used this method for tracking and scheduling legislative activities, much the way court clerks tracked and scheduled cases. As with other state applications of IT, the largest states began first, with legislatures in New York, Washington, and Illinois, for example, experimenting early in the 1960s.29 By the end of the 1970s, thirty-four states wrote, edited, and managed legislative actions via computer, making it possible for staffs to handle larger volumes of legislative work and to respond rapidly to iterative changes. Software products to help legislatures began appearing on the market. In addition, budgetary information derived from state financial and accounting departments became available through batch and online access to computerized files. A few states also experimented with voting via computer at a legislator’s desk.30 By the end of the 1970s, an estimated forty-four states used computers for one application or another.31 The introduction of computers had its greatest effect on legislative staffs; it was this community that seized the greatest initiative in persuading their legislators to invest in computing.
One survey looking at the period of the 1970s and 1980s
made the crucial observation: “They (staff) wanted word processing, automated bill production, legislative status tracking, and legal document retrieval systems to make their work easier. The driving requirement was the usually impossible deadline of incorporating amendments into the budget bill and having it on the floor for debate within a few hours.”32 Table 7.3 shows the results of their efforts to integrate digital tools into the work of legislatures. While the extent of deployment may seem small, recall that this represented the cumulative results of about one decade’s worth of deployment and involved over half the states. In turn, use of computing changed the role of staffs, requiring them to learn how to search for material online, to understand how budgets work, and to be able to read and model budgetary options, in addition to running queries much as lawyers and their usually much younger, computer-savvy legal aides did. Clerical staff had to understand how to use word processing, track the status of bills, retrieve statutes, use e-mail, and interact with online calendaring. Increasingly in the 1970s and 1980s, computer skills became a prerequisite for being hired. It was, therefore, not uncommon for legislators and their staffs to use online systems in the 1980s.33 In that decade, distributed processing expanded as legislatures acquired internal networks and sometimes their own computer systems. E-mail and other online delivery of legislative material to staff and legislators became increasingly common. Key applications that built on earlier ones of the 1970s included updated word processing, electronic mail, file management, and telecommunications, usually linked to district offices of legislators. Access to pending legislation and existing laws via terminals became increasingly normal practice across the country.34 By the end of the 1980s, it was not uncommon for legislators to be familiar and facile with online query systems and to be direct users of their legislature’s digital tools, deployed in the state capitol building, in their district offices, and at home via dial-up telephone communications. They also became early adopters of personal computers and, in the 1990s, laptops, carrying the latter into committee hearings to take their own notes or to look up
Table 7.3 State Legislative Applications in Use by 1977

Application              Number of States Using
Bill status tracking                32
Fiscal-budgetary                    25
Bill drafting                        4
Statutory retrieval                  3
Bill indexing                        3
Modeling                             2
Source: M. Glenn Newkirk, “Trends in State Legislative Information Technology,” Government Information Quarterly 8, no. 3 (1991): 264.
information. By the late 1980s, legislators more than their staffs were also the individuals requesting new digital tools, such as spreadsheets, local area networks on the floor of the legislatures, and economic and financial modeling capabilities.35 Finally, as pending bills made their way into computers and online access more common, legislatures began making these files available to other state officials, reporters, and interested parties via telecommunications. Alaska, Illinois, and Virginia had led the way in the 1970s, and by the mid-1980s, nearly a dozen permitted such access. By 1990, the number of states had nearly doubled, and by the end of the century it was ubiquitous.36 In short, as early as the end of the 1980s, use of computing had largely been woven intricately into the fabric of the daily work of state legislatures. Lest one think that the U.S. Congress eschewed computing, nothing could be farther from the truth. Just as staff at the states had pushed for computing in the 1970s and 1980s, a similar process had taken place at the Congress even earlier, in the 1960s as a matter of fact. By the 1970s, it had become an extensive user of IT, for the same reasons as at the state level. Patterns of adoption of computing evident at the state level were similar in Washington, D.C., although, as with many other federal organizations, they began earlier, in the 1960s, and often were only slowly modernized in the 1970s and 1980s. But use proved extensive with the very large staffs and volume of work characteristic of the Congress and its hundreds of district offices around the nation.37 Switching back to the executive branches of state government, deployment of computing in the later 1980s and early 1990s continued unabated, morphing to account for the use of distributed processing (thanks largely to cheap PCs and communications) and knowledge gained from using earlier versions of digital tools. Imaging applications became important by the early 1990s to digitize large bodies of records used in daily work, such as land records and birth certificates, and, of course, tax returns and Medicaid documentation. Optical scanning of documents came into the states at the same time as in the Insurance Industry, also a large community of workers inundated by vast quantities of documents. One report on this new application described why it was an attractive use of computing: “By creating electronic images of paper documents for computerized manipulation, storage and retrieval, the technology cuts down on space needed for filing cabinets, ensures accuracy of information, simplifies and speeds up employee access to information and thereby saves time, labor and money.”38 It was, however, a primitive technology in the early 1990s; but, by the late 1990s, it had stabilized into a very cost-effective, relatively easy-to-use application.39 Overall, by the time the Internet became a factor, beginning in the middle of the 1990s, states were collectively spending some $30 billion annually on computing, and nearly every office-bound employee interacted with the digital in one fashion or another.40 Large systems continued to be implemented, some very successfully while others were multimillion-dollar disasters, such as California’s failed welfare system of the 1990s, which officials had to abandon and start over with a new one.41 Applications ranged from department of motor vehicle systems to others in support of welfare and Medicaid. Some of these systems were
designed in response to federal mandates that states take over functions originally handled by federal agencies. This migration of responsibilities proved difficult to accomplish because all states tended to build the same kinds of systems simultaneously to meet federal deadlines for compliance (thereby absorbing all the consulting knowledge about a particular application available in the United States), and because the systems had to be designed according to federal guidelines or laws stipulating what services to provide or what rules to enforce. One frustrated state employee commented that “the notion that you can really develop systems from Washington under federal guidelines is akin to playing Mozart with mittens on.”42 On the other hand, there were also success stories of new generations of large applications being installed for welfare, Medicaid, and education, to name a few.43 As tax revenues increased in the 1990s during the expansion of the nation’s economy, states increased their budgets for IT, modernized old applications, added many new ones (such as GIS), distributed IT processing, and replaced old hardware and software.44 Collectively, state expenditures for IT rose from nearly $35 billion in mid-decade to well over $50 billion annually by the end of the decade. The largest proportions went to administrative and financial functions, followed by human services, transportation, and public safety. Health, criminal justice, environment, and public works also invested increasing amounts in IT all through the decade.45 By the late 1990s, state governments had been aggressively injecting computing into all manner of work across each department and branch. Areas of greatest use that governors and their staffs focused on in the mid- to late 1990s included higher education, elementary and high school education (K–12), business regulation, taxation, social services, law enforcement, and the courts. Simultaneously, more software to help with decision making at all levels of government kept being installed on every kind of platform, from laptops to mainframes, running open and proprietary software and networks.46

Recent Trends

Much of the activity involving IT since the mid-1990s centered on the deployment of Internet-based uses of computing and communications. Like the federal and local governments, states followed similar patterns of applications and deployment. But, unlike the federal government, state governments went farther in creating applications increasingly labeled e-democracy, through which citizens were beginning to participate more frequently in public policy decision making by way of the Internet. It was a process that was just becoming evident in the early years of the new century,47 and it was also reigniting a debate about the virtues and problems of direct versus representative democracy. These were issues that had first been discussed by the political leaders of the 1770s and 1780s when they created the structure of the federal and state governments.48 State agencies began creating Web sites as early as the mid-1990s. A decade later, observers noted a pattern discussed throughout this book in which states went through several generations of Web sites rather quickly, in a matter of a few years per generation instead of the decade or more per generation of computing
more normal with mainframe, mini-, and PC-based computing from the 1950s to the end of the 1980s. One student of the states’ process, Darrell M. West, in 2004 identified four stages of evolution: “billboard stage,” “partial-service-delivery stage,” “portal stage,” and “interactive democracy with public outreach and accountability enhancing features” stage.49 The first involved posting information that described an organization’s mission, largely providing addresses and telephone numbers. The second enabled citizens to begin interacting with an agency, such as sending e-mail and filing applications for permits. The third organized menus of services not by departments but by type, also providing navigation tools to search out topics independent of any particular agency. The last stage is about e-democracy—discussed in more detail below—which states were just entering by the middle of the first decade of the new century. Roughly generalizing, states went through the first two stages at various speeds between the mid-1990s and about 1998/99. The third began about 1998 and continued into the new century, while the fourth was just starting as this book went to press in 2007.50 Early uses of the Internet involved linking state and local government agencies together so that they could collaborate on problems, such as public emergencies, and creating bulletin boards to inform citizens. Early examples included UtahNet, Info/Kansas, and HawaiiFYI.51 Pre-Internet communications with the public had included use of dial-up access via PCs and kiosks; officials now began to migrate to Internet-based platforms.52 Evolution to the second stage involved a large variety of state agencies working in an incremental fashion, adding more content and then, slowly, the ability to communicate and conduct a very limited number of transactions.53 Within a year, more than half the states were clearly implementing second-stage applications, and officials were largely convinced by then that they could use IT to realize another round of efficiency and cost savings in state operations, thanks to the Internet. As one report on this attitude noted in 1999, “The Web’s cheap, friendly, flexible interface is fast becoming another face of government.”54 By the late 1990s, over half the households in the U.S., and a slightly higher percentage of workers, had access to the Internet, providing an environment that state officials felt encouraged to use as a new way to deliver their information and services to the public at large. Furthermore, residents of the United States were becoming accustomed in their nongovernmental activities to having access to businesses and information twenty-four hours a day, seven days a week, and began expecting the same from their federal, state, and local governments. The states in turn saw provision of such services as a way of improving their response to the public’s will while containing operating costs. It seemed like an attractive “win-win” arrangement for both. With interactive services just starting to become available by the end of the century, citizens in many states could obtain fishing licenses online, register their cars, and so forth.
However, even as late as the dawn of the new century, the list of online, or in the parlance of the day “e-commerce,” applications remained short and grew only incrementally, department by department.55 Officials, observers, and citizens increasingly recognized by the end of the century that state and local government operations had entered a period of
significant, if ill-defined, transformation. Restructuring of public expenditures was one transformation, as was the way in which citizens would interact with government officials. But it was also becoming clear that traditional organizational structures could potentially change as similar services of various agencies were bundled together or as multiple agencies dealt with a citizen in a more coordinated manner, using case-management techniques already widely deployed by the Insurance Industry and law enforcement and increasingly by welfare agencies.56 As with earlier adoptions of IT, almost from the beginning of the Internet’s incursion into state operations, publications and organizations tracked deployment, reported results, and influenced (indeed encouraged) officials to move further in using the Internet, beginning largely in the late 1990s.57 By 2001, all fifty states had at least arrived at a second stage of deployment, and over a fourth were deep into the next one, as we saw, for example, with the use of online tax filing.58 Yet deployment of specific uses remained uneven across the nation. Those functions that facilitated generation of taxes and fees seemed to be the earliest to be converted into second- or third-generation Internet applications. Table 7.4 lists common examples of this process as of late 2002. Use of portals made it easier for state employees, private sector managers, and citizens to navigate through myriad state Web sites for information and services. An early example from Michigan illustrates the issues faced by public officials concerning the Internet and why they began deploying portals. Its circumstances existed in essentially similar form all over the country. A contemporary report (2001) documented the issues: “The tangle of Web services available until recently grew up in individual agencies with limited resources. It was
Table 7.4 Widely Available State Government Internet Applications and Number of States Offering These Services, 2002

For Tax Preparation and Filing:
• Downloading forms                               42
• Tax advice                                      38
• Filing                                          35
For Registering Vehicles:
• Downloading forms                               11
• Completing registration online                  16
For Obtaining Professional Licenses:
• Downloading forms and obtaining information     50
• Partial online registration                     25
• Totally online registration                      2
Source: Ellen Perlman, “The People Connection,” Governing 15, no. 12 (September 2002): 32.
confusing to users because there was no common navigation, no search function across agencies and no common look and feel. Users had to know the name of the agency to find the service they were looking for. At the new Web portal, which was launched July 11, services are arranged by theme.”59 Michigan had long been an extensive and early adopter of digital tools, along with such others as Illinois, New York, and California, so it ran into these sorts of problems earlier than most states. How it resolved them provided guiding practices used by many other public officials and their IT staffs.60 As occurred in the private sector, issues concerning data security and privacy also proved troublesome to both citizens and public officials, beginning in the mid-1990s, and were not fully resolved in the early years of the 2000s.61 State governments had two classes of expenditures in the 1990s and early 2000s that affected their move to the Internet. The first involved those for hardware, software, and staffs to support, update, and run pre-Internet applications, such as the many back office uses developed in the 1970s and 1980s. A second cluster of expenses involved the creation, operation, and innovation of Web sites. In the late 1990s, state officials were generally able to spend between 1 and 2 percent of their state budgets on IT. Some states, like Michigan, spent more, as did others that enjoyed additional tax revenues and fees during the booming economy of the post-1995 period and before the arrival of the national economic recession of the early 2000s. By about 2003, while the absolute amount of dollars expended on all types of IT continued to rise, it was quite clear that collective priorities had settled on four sets of initiatives: to improve homeland security, continue expanding use of the Internet in “e-government” initiatives, support agency-specific applications, and outsource some IT work to the private sector.62 Governors had to cut back expenditures on projects to upgrade existing IT infrastructures due to the lingering national recession that was reducing the inflow of tax revenue to the states. Did citizens use Internet tools? There is some evidence that helps us answer the question. A survey conducted in 2003 suggests that citizens were extensively using these Web sites. Officials reported that substantial use of their Web sites occurred in over half the states but varied by application, with lower usage for employment and public safety and very high usage for professional licenses; in states where online applications for licenses existed, for example, over 80 percent of all nurses used the Internet for licensing.63 Another survey reported that some 68 million people had accessed government Web sites of all kinds (federal, state, local) by 2003.64 So, extant evidence suggests that the answer is “yes, they did.” The next stage in the evolution of Internet-based computing involved effects the digital hand was just beginning to wield on democracy and the practice of government as this book was going to the publisher (2007). At the risk of dealing with the issue too briefly, since so many other commentators have treated it so extensively, the historic debate centers on whether citizens should vote directly, for example, on measures before a legislature (such as on the famous propositions constantly put before voters in California) or allow elected representatives
to do that on their behalf.65 The first form of democracy is often characterized as direct, the second as representative. The founding fathers chose unequivocally the latter form, largely in order to prevent the passions of the day from causing ill-conceived laws to be passed and to increase the possibility that well-informed legislators would use sound judgment on behalf of the people’s interests. The issue is complicated by the fact that modern democracies are predicated on the availability to citizens of large amounts of information. Enter electronic or e-democracy. As one student of the subject neatly defined it, “electronic democracy can be understood as the capacity of the new communications environment to enhance the degree and quality of public participation in government,” such as allowing citizens to vote using the Internet, or to do real-time polling of public opinions.66 The debate on the advantages and disadvantages of such use of technology is extensive. It appears that there is also an emerging new twist, namely, the growing involvement of nonprofit organizations, the media, experts, and others seeking a role in the governance of the nation and actively engaged in providing specific services to agencies.67 The issue has not led to any consensus and promises to become of greater concern in the years to come as the further deployment and use of the Internet takes place in public administration and in support of democratic practices. More narrowly, within the confines of a government agency, there is the concern that technology would affect the way citizens and officials interacted and influenced each other, a subject not yet clear to anyone at the dawn of the new century. Will chat rooms affect how legislators vote? Will near-instant polls affect decisions made by elected and career officials? What effect will the vastly increased amount of information and access to government have on the level and role of public trust in public administration?68 These issues go to the very heart of how democratic government functions in the United States. There is one issue related to IT in general, and not simply about the Internet, that has become far more public in recent years than the effects technology will have on democracy: voting. In the United States, state governments have the greatest hands-on responsibility for conducting elections for federal, state, and many local offices, although county and municipal governments do too. The two key applications involving any form of technology concern how to cast and tabulate ballots. Since the 1950s, many states had used punched cards and IBM tabulating equipment to speed up tabulation and to increase accuracy of the work, all the while providing audit trails. Specialized voting equipment became available as well, and states and counties deployed these widely around the country throughout the 1960s and 1970s. In the 1970s and 1980s, various optical scanning systems began appearing, while older punched-card systems also remained in use.69 Nobody living in the United States during the elections of 2000 could possibly have avoided hearing about the technical problems faced by officials in Florida with punched-card voting that ultimately led to the U.S. Supreme Court declaring George W. Bush the winner in the national election for president over Albert Gore. During that election, approximately 32 percent of all polling
places in the nation still used punched-card ballots, another 27 percent relied on optical scanners, and 18 percent on old-fashioned lever-operated machines. Only 9 percent had deployed touch screen (digital) systems. An additional few states still had paper ballots. At the time, only twenty-two states had established official guidelines on how to use digitally based voting systems.70 Largely as a result of the problems experienced by Florida, an uptick occurred in the deployment of digital voting systems. By 2002, thirty-one states reported that roughly half their precincts were using digital means to cast and tabulate votes, while eight remained committed to manual, paper ballots.71 Deployment was a function of what local and state governments did independently within a state, and of what states did independently of each other.72 Technical problems and lack of budgets for new systems constrained deployment of digital voting systems. As of 2004, for example, paper and punched-card systems had beaten a hasty retreat, largely supplanted by optical scanners, which still predominated, and by some digital systems, the latter mainly in Nevada, southern California, and various parts of the southern United States.73 Only about one third of all votes in 2005–2007 were cast electronically. Thus, we can conclude that deployment of digital voting tools remained relatively low and that adoption had been slow, incremental, and fragmented. Closely tied to voting issues, and directly an Internet application, were campaign Web sites, which became popular with campaign organizations in the late 1990s, especially for gubernatorial, state, and federal legislative elections, and somewhat less so for county and municipal campaigning. They became standard fare during the early years of the next decade. Sites varied widely in the amount of information they provided and in the ability of visitors to conduct transactions (such as sending e-mail and making monetary contributions).74 During the national elections of 2004, however, the Democratic Party raised hundreds of millions of dollars over the Internet. In that election, roughly one third of the electorate used Internet sites to obtain information about candidates for local, state, and federal positions, up by a third from the off-year campaigns of 1998. By 2002, organizations had begun codifying best practices for use by candidates under the assumption that no one could now run a campaign without effectively using Web sites.75 The press, political scientists, and others also were now extensive users of the Internet for information related to political campaigns and voting.76
County Governments and the Digital Hand County governments play a special role in American society. On a more local basis, they carry out duties similar to a state, such as to provide law enforcement, manage voter registration rolls, conduct elections, protect the environment, maintain roads and bridges, and often are an important employer in the community. They also share jurisdictional responsibilities with municipalities as well, doing similar duties, such as running school systems and social welfare programs. They range in size from the very large Los Angeles County, which is
larger in population and budget than many states, all the way to very small ones in Delaware and Rhode Island. Some are quite urban, such as those that share the geographic footprint of Atlanta, Nashville, and Chicago, while others are quite rural, as is the case across much of Montana, Wyoming, and Alaska. In short, they vary a great deal. Most studies of computing at the local level invariably merge practices of county and municipal governments into one milieu under the assumption that their roles are more similar than different. However, extant evidence on differences between the two types of governments suggests that, at least insofar as the adoption of IT is concerned, there are sufficient variations that should be recognized. To the extent possible, it helps to define better the exact role of computing and telecommunications at the local level, across the many thousands of towns, cities, and counties in fifty very different states.

Early Uses of Digital Tools, 1950s–Mid-1990s

As was so evident with states, cities, and companies, the largest counties often were the earliest to use computers, and for the same reasons. They had adequate scale and budgetary and staffing wherewithal, particularly in the beginning when computers were very expensive and required many other resources to implement. Large counties also had installed earlier information processing equipment, such as IBM punched-card tabulators and billing equipment made by Burroughs. They even used cash registers in support of financial transactions. One of the largest in the 1950s was Los Angeles County, home to some 5 million people and fifty-five cities, including the city of Los Angeles and such other well-known communities as Pasadena, Glendale, Long Beach, Burbank, and Santa Monica. The county had in excess of 35,000 employees and a payroll bigger than that of forty states. In the early 1950s, it had approximately sixty administrative departments and other organizations and several data centers. So, it should come as no surprise that officials in this county became some of the earliest public servants to study the feasibility of using computers, beginning their exploration in 1953 and ending with the installation of a Datamatic 1000 in the fall of 1958. Over the next several years, they moved work done on such older calculators as an IBM 604 over to this system. Applications included assessments and collection of taxes, centralized payroll preparation and accounting, maintenance of voter records, records of grantors and grantees, preparation of social welfare statistics and claims for the state, accounting for the County Hospital, creation of utility bills and records, and support for the Road Department. An IBM 650 system, housed at the county’s Air Pollution Control District, did scientific analysis of wind currents and air pollution.77 Other large counties followed suit, although at a slower pace than Los Angeles County. Counties began with finance, payroll, billing, accounting, and tax applications, automating and speeding up existing processes for these core functions. One survey conducted in the early 1960s reported that there might have been roughly thirty computer systems installed in county governments, although the survey was not precise because data on cities and
counties were often mixed together. However, what is not in question is the fact that counties came to computing later than states.78 By the mid-1960s, all large counties saw computers as the next wave of technology they would soon embrace if they had not yet done so. As the cost of IT dropped during the 1960s, ever smaller counties, with their more diminutive budgets and populations to serve, emulated their larger brethren.79 Because many counties were first-time users of computers in the late 1960s and 1970s, they acquired state-of-the-art equipment, such as IBM System 360s in the second half of the 1960s and IBM’s System 370 in the early years of the next decade, free of the burden of second-generation computers requiring major conversions, a problem faced by such early adopters as the IRS and some very large state governments.80 By the mid-1970s, approximately 25 percent of all counties with populations of over 10,000 residents were using computers. The largest counties, that is to say, those with populations of over 250,000, were largely invested in the technology: at least 97 percent of them, to be more precise. These also had the largest number of well-trained, professional managers and technical staffs who could install and use such technologies, which were far advanced over tabulating and accounting/calculating equipment. Surveys of the period also showed that counties proved slower than cities to adopt computing, statistically behind (as measured by size of population) by roughly five years. That placed the “take-off” in use by the largest counties in the early 1960s, midsized counties (say, with populations of between 50,000 and 100,000) in mid-decade, and the rest in the 1970s. Counties of all sizes, however, often ran computing applications at service bureaus or shared resources with local cities or nearby counties, thereby avoiding, or lowering, the costs of having their own systems and staffs. Even counties that had systems of their own also outsourced some work, such as payroll or ad hoc jobs, to a specialized public regional or private service bureau. At the same time (the 1960s and 1970s), when counties did use computing, the percent of budget allocated to this endeavor often exceeded that of cities of comparable size because of the nature of their work. Counties ran large digital applications to prepare voter registration lists and property tax assessments and to support welfare and healthcare, often in excess of similar work assigned to many municipal computers. Counties also were more geographically dispersed than cities and frequently used computers to provide administrative integration of services, practices, data collection, and so forth. By the late 1970s, on average counties had nearly fifty different uses (applications) running on their computers, largely automation of routine tasks, such as record keeping, calculating, and printing, with primary emphasis on financial and accounting controls and procedures. Close behind these applications were those, of course, for law enforcement of the type discussed in chapter 4. In short, many county officials focused on generating revenue and supporting law enforcement.81 Use of computers, therefore, supported internal, inward-looking applications. Not until the late 1980s, and more extensively after wider deployment of the Internet, would citizen-centered applications be installed by their county
governments. About the only way citizens saw evidence of county IT at work in the 1960s and 1970s was through documents printed by a computer and mailed to them, such as utility and tax bills, or notification of voter status. Surveys of the period demonstrated that just having a computer did not lower operating costs or staffing requirements. To be sure, low-level clerical work sometimes went away, automated by the “giant brains,” but at the same time, counties had to hire more expensive, technically competent staffs to run these systems. But there were offsetting benefits. These included the ability to process large, complicated files (such as land records for assessing taxes), to support uses requiring frequent searching and updating of records (such as wanted persons or stolen vehicle files for sheriffs and police), and to deliver information to geographically dispersed offices from centrally controlled records. Officials commented in the 1970s that such functions did make it easier to do more work faster and that IT helped them to avoid future budget increases. All of these considerations grew in importance during the second half of the 1960s when the federal government began requiring growing quantities of data from all local governments concerning crimes, education, welfare, and employment.82 In the late 1970s and early 1980s, increasing numbers of counties acquired computers for the kinds of applications that had been embraced by earlier adopters of digital tools. In the period from the mid-1970s to the early 1980s, the cost of technology continued to drop while the availability of software tools increased. It was also in this period that personal computers became available and managers all over the economy acquired the greatest number of minicomputers. However, the smallest counties that could afford computing did not seem as eager to rely on it as larger counties. The same applied to smaller cities. One survey done in the mid-1980s demonstrated that both types of government had only increased their use of computers by 17 percent over the surveys done in the early to mid-1970s, but about 53 percent of all counties and cities combined now used computers (their own or through some service bureau or shared arrangement) to do some or a great deal of computing. The data demonstrate that counties with “professional management” were more inclined to use computers than otherwise. More precisely, the 53 percent broke out as 36 percent of all counties and over 67 percent of all cities. Thus, as a group, all counties tended to lag behind cities in their use of computing. One of the authors of the survey, Professor Donald F. Norris, opined that the rate of adoption turned on the question of the quality of management.83 However, while that was clearly the case, scale economic factors also played an enormous role, such as the size of a county, the volume of transactions it needed to perform, and the cost of technology. Less understood, but clearly a factor, was the additional issue of the availability of technical staff conversant in IT. The smaller the county one looked at, the more likely it was to have little or no staff familiar with computer technology. By the mid-1980s, the most widespread uses of computers were in support of payroll, accounting, budgeting, utility billing, tax assessments and billing, personnel records, law enforcement, and the tracking of inventory
and voters. These remained core applications for both counties and cities for the rest of the century, regardless of the size of the governmental body. Use of computing facilitated the performance of existing work but did not fundamentally change the kind of activities performed by public employees, nor the structure (organization) of local government in this period. Thus, the effects (so far) were more on operational tasks, where work could be sped up, automated, or done more accurately and less expensively. Computers had yet to affect the management and decision making of county officials, an outcome professional supervisors and advocates of computing hoped would occur.84 The story of the use of computing in counties between the second half of the 1980s and the mid-1990s is a tale of embracing new technologies, such as the shift from purely batch to online systems and the adoption of personal computers in increasing numbers. The number of counties relying on computers also increased as the technology became more affordable and accessible, so that by the mid-1990s one would have been hard pressed to find a county that did not use computers for multiple applications.85

Recent Trends

Like city and state governments, counties upgraded software and hardware throughout the late 1980s and right through the 1990s. Online systems expanded, while remote locations were increasingly linked together through private and public networks. From the perspective of their daily work, counties incrementally expanded use of computing to new areas, such as digital mapping (GIS), work scheduling, economic growth planning, and infrastructure maintenance and development. PCs proliferated, while work that had been done on service bureau computers often came back in-house, run by county employees. Early adopters from the 1960s and 1970s faced Y2K remediation in the late 1990s to get ready for the year 2000, while those who came to computers later faced less of a potential problem since by the late 1990s a great many software products (and updated microcode in hardware) were already written to reflect the change of date that came on January 1, 2000. While counties were slow to establish a presence on the Web, they did so, with their major take-off occurring in the early 2000s. So where were counties at the end of the century with computing? Survey data from the period suggested that over 80 percent of all departments in county governments had access to computing in one form or another, with almost half of all employees having access to PCs as part of their normal work. Less than 10 percent of all counties reported not using computers. Access to the Internet by employees remained low as of 1999–2000, at just under 10 percent, yet half used internal e-mail. The best evidence suggests that a third of all counties now had an intranet site, but only about 13 percent of all county employees had access to the Internet at work. Forty-two percent of all counties reported having a Web site. The earliest services provided over the Internet existed in less than 10 percent of counties. When they were available, these included processing open records requests, providing copies of vital records, and handling voter registration—all offered mainly by very large counties. Two
percent or less had as yet made available to citizens the ability to register motor vehicles over the Net, or, for that matter, to obtain building permits, make tax payments, pay fees, acquire licenses, or settle fines.86 In short, counties were just getting started on their e-government journey. As one reporter covering local governments wrote in early 2001, “Counties typically haven’t been viewed as hotbeds of cutting-edge e-government innovation.”87 Large counties, however, were in the process of hiring chief information officers (CIOs) and consolidating their IT operations into IT departments that took advantage of professional management to leverage the technology. Press commentary in subsequent years increasingly reported that county governments were finally embracing the Internet and transforming how they did daily business. However, the rhetoric seemed to get ahead of the realities cited above. For example, one industry watcher declared in 2003 that “the Digital Counties Survey shows that technology is truly transforming government as we know it at the county level,” yet the data in that study offered minimal evidence on the extent of deployment.88 Most of the instances described were from large counties. However, this survey did indicate that by early 2003, over 82 percent of all counties had a Web site, and that half of these made it possible for citizens to explore information online, such as job openings in their county. Nearly all also provided e-mail for their employees and elected leaders.89 The same survey was conducted the next year, and the hubris continued. The most important change reported in this survey was less about how many more counties had Web sites—that number kept growing—than about the move to county portals (61 percent). This growing trend proved important because portals made it easier for citizens to conduct business online with their county, and we know from the experiences of state and municipal uses of portals that over time these did effect changes as governments focused more on the needs and desires of citizens and less on their own departmental, hence internal, priorities. Counties expanded the ability of citizens to reach individual officials by way of e-mail, and Web-casting of governing body meetings had started, building on the prior experience of local public television broadcasting such meetings as early as the 1970s.90 What was happening in smaller counties? The only data we have are for a category of counties with fewer than 150,000 residents, which means that what we know applies more to those closer to the upper end of that range than to others that, for example, had populations of less than 25,000. Yet even in this broad category of counties, by 2004 nearly 90 percent had Internet sites and gave citizens access to staff, management, and elected officials via e-mail. A third implemented IT using a strategic technology plan to guide their activities, and a similar number had created portals, which tells us that there were multiple Web sites in many counties that needed to be integrated. Online transaction processing was still rare (roughly 2 percent), while most made forms available over the Net. In contrast, a third reported having a broad selection of digital tools available to law enforcement, no doubt funded largely by grants from the federal government.91
Observers of the adoption of computing at the local level noticed, however, that in the early 2000s the public was willing to invest tax dollars in creating Internet-based e-government services. Whether a town, city, or county, it seemed that the proverbial “everyone” was expanding its Internet-based services and operations, and early adopters were moving through the second, third, or even fourth stages of evolution, from simply posting information to slowly starting to offer transaction processing with the public. However, when compared to services in the private sector, they remained unsophisticated, with most still in very early stages of evolution more typical of the private sector of the mid- to late 1990s.92 Barriers to more rapid and sophisticated adoption and use of the Internet varied from one county to another. However, common issues most faced in the late 1990s and early 2000s included lack of staff knowledgeable about creating and managing Internet sites, insufficient budget to allocate to such initiatives, concerns regarding data security and privacy, and more pressing needs to upgrade earlier installed IT applications.93 In short, the issues had less to do with desire or awareness of the potential benefits of computing, and more to do with what had always constrained adoption of digital tools in all governments over the past half century. Furthermore, these concerns turned less on the functionality of technology or, over time, even on the relative costs, and more on the absolute costs and the availability of staff. In other words, even if the price of a computer declined from one generation of hardware to another, it still did not necessarily make it attractive if there were insufficient dollars in a budget to pay for the less expensive machinery and software. Earlier adopters also faced problems evident in large federal agencies. As several observers of the scene pointed out in 2003, “For those local governments that have historically adopted technology earlier, they may experience the ambivalence first as they are more likely to report both positive and negative impacts,” with the consequence that they “are likely to find that e-government is a mixed blessing and that expanding and enhancing electronic services may be further stretching already stressed IT staff resources.”94
Local (Municipal) Governments and the Digital Hand In the United States, it is commonly accepted that the phrase “local governments” refers to municipalities: towns, villages, and cities, not to unincorporated communities, and frequently counties as well. Municipal governments are normally run by elected mayors and town or city councils, although any community of size may also employ professional managers who are not elected, such as town managers and CIOs. Urban centers are of extraordinary importance, indeed, often far more so than counties, because the majority of the people living in the United States reside and work in towns and cities. Urbanites have outnumbered those living in rural communities since the 1920s, and by 1950 by nearly 2 to 1. The trend of living in urban centers continued right into the new century. In 2000, for instance, 79 percent lived in urban communities.95 If for no other
reason than where people lived, the role of towns and cities is of great importance in any discussion of life in modern America. Use of digital tools by these local governments, therefore, takes on greater importance than their use by counties, because citizens often interacted with municipalities more frequently than with state officials. One could argue that IT in towns and cities was more visible and had a significant, if not greater, direct effect on the daily life and work of residents in this country than that of county, state, and federal governments. While local municipalities had issues specific to their circumstances, they shared similar concerns and roles with counties and state governments regarding the management of public institutions. Budgets remained tight for all public officials in periods of economic growth and recession. This meant officials found it difficult to free up funding for new ventures, such as deployment of IT, the same problem faced by police departments. The problem proved most acute in small municipalities where budgets were minimal. As with all governments, including federal agencies, senior public officials knew little or nothing about computers, their value, cost, or how to manage their use and funding. Scores of observers commenting on the use of computing at the local level considered this problem greater than the lack of appropriate technology or sufficient budget.96 A third commonly shared issue, although not always a problem, concerned the public nature of decisions and actions. Officials needed to demonstrate to the public their wise use of tax dollars; for IT that often meant improving services for citizens while driving down operating costs. For cities and counties, taxpayers and voters wanted to see improvements in the speed, effectiveness, and accuracy of work done. These criteria proved important in the selection of digital applications. This reality never changed, even once local officials passed through the initial stages of automating such internal operations as accounting, finance, and basic personnel and inventory management functions.

Early Uses of Digital Tools

While public officials’ knowledge of computing generally diminished the smaller the municipality one looked at, we should not conclude that they lacked interest in using information technology to improve the productivity and effectiveness of their governments. Quite to the contrary, throughout the twentieth century, municipalities of all sizes used typewriters, adding and calculating machinery, billing equipment, telephones, telecommunications, and other devices. In fact, that trend continued unabated after the arrival of the computer. Put another way, officials continued to acquire precomputer information technology equipment and computers, the latter, however, only after they became affordable. Obviously, larger communities embraced computers earlier than smaller ones because they could better afford the high initial entry costs into the world of digital computing. But both continued adding mechanical aids to data collection and handling even after computers became available. Reading the professional literature of municipal officials for the 1950s and early 1960s, such as American City, one would think that computers had not been invented. Billing machines
were popular for producing utility (water, sewer, and electricity) bills as well as tax assessments and tax bills in communities of all sizes. The largest cities had already integrated various technologies into these processes prior to World War II, while in the postwar period smaller communities that had not done so upgraded their prior IT tools.97 Their trade press celebrated the further deployment of other accounting applications, such as cash management, payroll, and accounts receivable, touting the increases in speed and accuracy with which existing work streams were conducted.98 Large cities modernized their precomputer systems and added new uses, such as New York with its traffic violations process in the early 1950s.99 As occurred with county and state governments, large cities were extensively committed to using the modern technologies of the day, while midsized and smaller communities did so to the extent they could cost-justify them.100 All, however, installed new equipment and changed their processes for doing work in waves, adding applications of these technologies year over year, to the extent that one writer in 1959 declared, “the general trend toward office automation has been accomplished by widespread and increasing municipal use of electronic and mechanical data processing equipment.”101 In fact, communities continued to install additional older technologies until the mid-1960s.102 Large cities took their initial plunge with computers in the second half of the 1950s. New York, with a massive payroll of nearly 200,000 employees, installed a Remington Rand Univac 60 computer in 1957 to improve the speed and volume of work associated with its partially automated earlier payroll system, while the city’s Transit Authority installed a Univac 120 the same year to help manage its inventory.103 On the other side of the country, Los Angeles installed an IBM 650 in its Department of Water and Power, initially to handle the payroll of its 11,000 employees, who made up the largest municipally owned water and electric utility in the nation.104 Other cities followed suit. But why did cities begin showing interest in computers? The answer mimicked that of state and federal agencies. As one commentator in 1957 put it, “alarmed by the staggering amounts of paperwork necessary for municipal government operation, city administrators across the nation are looking to electronics to solve their problems.”105 The shift to computers by the largest cities came slowly, really not becoming a recognizable trend until the 1960s. Officials first needed to learn about the new technology. Next, those who embraced the technology early were asked to explain why. For example, when Boston installed a Univac 60 in 1961, the city auditor, Joseph P. Lally, discussed his city’s decision in print. Changes in state tax laws complicated payroll calculations for Boston, making existing technologies too slow and cumbersome to do the work in a timely fashion. Additional accounting applications were now also “speeded 30 to 40%,” saving “thousands of dollars in overtime work—about 250 hours in the tabulating department alone, to say nothing of payroll and other departments,” while industry publications began educating officials on when and how to acquire computers.106 The complexity of city work became another reason to modify work streams using computers.
A data processing manager at Tulsa, Oklahoma, explained the problem addressed by his city: “As population and services expand, administrators require data systems that
provide information to support highly complex decisions on comprehensive land use planning; community and urban renewal; school facilities planning; resource planning for police, fire, and public works departments; and for analysis of services provided by health, building, and sanitation departments.”107 As with state governments, the initial focus rested on accounting and financial applications, and the new systems were installed in those departments, often the centers of whatever deep knowledge municipalities had of prior IT systems, such as tabulators and billing equipment.108 So processing speed and cost avoidance were early attractions for public officials, but by mid-decade, so, too, was the growing capability of computers to handle large volumes of data. By mid-decade, almost half of all cities with populations of over 25,000 either had installed a computer, rented time on one at a service bureau, or otherwise were engaged in making decisions to acquire such technology. Local governments celebrated every installation of computers in municipalities in the 1960s as a sign of progressive management at work. Scores of public officials rushed into print to announce their new tools in laudatory fashion. They began reporting how computers were beginning to change their work. For example, village officials at Brookfield, Illinois, reported saving the operating costs of renting precomputer equipment and freeing up labor for other work when they moved billing operations to an IBM system in 1965.109 Officials in New York declared that in addition to saving time by automating purchasing, “the computer enabled the department to start several improvement programs previously not considered possible.”110 Officials in San Jose, California, began using computing to manage the flow of traffic and did not hesitate to laud the benefits of their new system: “The computer has reduced wait time at signal lights by 14 percent, eliminated 50,000 stops a day, and shaved one minute from every 10 previously spent in traffic along the route.”111 To be sure, there were those who criticized the rate of acceptance of computers by municipalities, but they were in the minority. One example will have to suffice, penned by a professor watching the process at work: “By and large, the nation’s medium sized municipalities have either not faced the computer issue or have been content to blunder into the acquisition of hardware with its use dictated by old wives’ tales.”112 When they did start using their computers, they did so by converting “from existing systems for computer processing.”113 Nonetheless, the variety of uses of computers by both large and medium-sized cities was impressive. Table 7.5 catalogs this growing variety of uses for computing that occurred in the 1960s, providing evidence that cities were going beyond accounting and increasingly using computers in support of work unique to urban centers. On balance, however, during the 1960s cities of all sizes focused initially on automating or improving previously semi-automated accounting and financial functions; only after migrating those practices to computers did they add functions not possible before, beginning in the late 1960s and more frequently during the 1970s. Later in this chapter, there is a discussion about waves of deployment of computers to large cities and subsequently into ever smaller communities. But to understand how the nature of work in these cities transformed over time, short
Table 7.5 Municipal Uses of Computers, 1960s

Applications Building on Older Uses of Data Processing:
Payroll
Accounting
Budgeting
Tax billing
Inventory management

Applications New to Computing:
Scheduling traffic light repairs
Road and city engineering
Law enforcement record keeping
Land use planning
Voter services
Planning activities
Traffic control
Online information availability

Source: Various articles in American City, 1960–1967.
histories of several uses of the digital hand critical to city and town governments illustrate the process at work. Evolution of Accounting and Financial Applications Myriad accounting activities became the first to be computerized, as one would expect, because these were the uses already most automated and structured before the arrival of the computer. They lent themselves to further structuring because of the characteristics of the new technology, mainly its ability to handle routine, repeatable actions rapidly, accurately, and in large quantity, and to do so, increasingly over time, more affordably than earlier approaches, using preexisting input and output, such as cards and paper reports. The story mirrors what happened with accounting and financial operations in all industries and other governmental entities. In the 1950s, 1960s, and 1970s, officials processed more, faster, and often less expensively on ever newer, more reliable computers, either at in-house data centers, through the services of service bureaus, or with data processing systems shared with other governmental agencies. Over time, older, precomputer methods and tools for computation gave way to digital versions. In the 1980s, online systems and growing use of integrated files made it possible for accounting to be done very quickly across multiple charts of accounts. These systems provided up-to-date transaction data and “to date” running totals through printed reports, or online via a terminal. By the end of the 1980s, such capabilities also were widely available over telephone lines using PCs. The contemporary literature provides overwhelming evidence of this process at work, most notably such publications as American City and academic studies.114 But what was the effect on work? In many ways, use of digital accounting paralleled what happened with telephone usage throughout the twentieth century. Like telephones, people used computerized accounting to speed up or make more efficient existing processes and policies, only modifying them incrementally as
specific characteristics of machines, software, telecommunications, and changing managerial practices and policies warranted. But again like the telephone, the technology generally complemented, rather than displaced, prior practices.115 To a large extent, this pattern can be attributed to the accounting and financial practices of the nation, which were governed more by law, for example, than by the proclivities of some modern technology. The technology made possible, however, incremental integration of what used to be separate and distinct operations; payroll is one instance, receiving fees and taxes another, and so forth, bringing them all together over time. By the end of the 1980s, these had evolved into integrated systems. In turn, that situation made it possible for management to have a better and more current understanding of how funds were flowing in and out of government; they could track transactions in a more automated, current, and accurate manner, while driving down the costs of these activities. They were also better able to plan future activities involving budgets and cash flows. The ability to manage integrated flows of transactions, and to plan, represented a fundamental, even quantum, change in fiscal operations by the end of the 1980s over what they had been in earlier decades. This general observation applies as well to counties and state governments, although far less to the large, highly siloed, often poorly integrated federal accounting and financial systems, which even at the dawn of the new century mimicked many of those installed in earlier decades that state and local governments had since eschewed. These patterns of use involved all local uses of accounting, including those unique to local communities, such as billing for utilities, exceptional tax assessments for repairs of sidewalks, and budgets needed by fire departments, emergency medical services, police, and even dog catchers. In short, by the end of the 1980s, accounting at the local level was sophisticated, easier to perform than in earlier decades, and iterative. In addition, communities found that they could track expenditures (transactions) more precisely, thereby enhancing their ability to audit “the books” much more easily and less expensively as time passed. Accounting systems improved both their comprehensive coverage of accounting and financial applications and the quality of that work, notably accuracy. Coming back to the analogy of the telephone, just as the earlier communicating device reinforced personal relationships, that is to say, how people had long interacted with each other, so, too, the computer reinforced the growing dependence of public officials and employees on data and accounting information with which to run their departments and municipalities, a process under way since the dawn of the twentieth century. At both the municipal and county level, it would have been hard to imagine publicly held meetings of even small town councils or county boards of supervisors without the ubiquitous spreadsheets and budget items on the agenda. Computing and Public Works An important function in any municipality, from tiny to big, is the maintenance of various infrastructures. These include water works, streets, public buildings,
sewer lines, traffic lights, street lamps, roads, and sidewalks. A great deal of this work involves engineering, construction, scheduling work, and charging expenses to municipal budgets. After payroll and education expenses, public works normally represents the third largest collection of expenditures of money, personnel, and management’s time. What constitutes public works also varies from one municipality to another; nonetheless, they all share similar roles. Management has to acquire supplies of an enormous variety, from asphalt and pipes to paper and pencils; manage work crews whose skills vary from water and sewer engineers to carpenters and garbage collectors; and plan expenditures ranging from capital investments (such as buildings and vehicles) to myriad payrolls and operating expenditures (such as fuel or subcontractors). In some communities, they also conduct transactions with the public (such as requests for repairs of sidewalks and street lamps). In short, all are functions at the core of what municipalities do in addition to running schools, police departments, and maintaining public safety. So, it would be of no surprise that as cities and towns began using computers, they would attempt to use these new tools in public works. Earliest applications were accounting oriented, specifically, for inventory control, purchasing, and payroll. That set of applications began spreading to the largest cities in the late 1950s, next across midsized communities in the 1960s and 1970s, then to almost all the rest in the 1980s and 1990s. Beginning in the 1960s, larger cities also went beyond accounting to create tracking systems for water-meter reading, evolving over time from punched-card systems of the 1960s and 1970s to handheld units that could turn over machine-readable data to computers in the 1980s and 1990s. Water works in particular were early adopters of computers since they had thousands or millions of customers whose water usage they monitored, usually on a monthly basis, and who were billed, just as electrical utilities and telephone companies did.116 Scheduling work became a crucial application in the 1970s to optimize use of labor, beginning with collection of refuse but later extending to road, sewer, and other maintenance activities, and to monitoring the quality of water and the nature of sewage.117 Scheduling the use and maintenance of equipment and vehicles became another popular candidate for automation in the 1970s and 1980s in a successful bid to control costs and to extend the life of existing vehicles and other equipment.118 As the cost of CAD software and hardware began to drop in the 1970s and 1980s, municipal engineers started to use this technology for the same reasons evident in nearly all manufacturing industries. Various alternative designs for work on roads and buildings, for example, could be modeled and then committed to, and work schedules produced and priced. As high performance workstations and PCs became available with CAD software in the 1980s, even smaller communities were able to justify use of this application. The “take-off” in the use of this application occurred in the second half of the 1980s.119 By the mid-1980s, about a third of all cities and towns were using computers in support of maintenance of utilities and police and fire equipment, while parks managers were just beginning to do the same.120 Even groundwater in such wet parts of the United States as Florida was being monitored with
computers. Thus, by the early 1990s, use of computing had spread widely across the nation’s municipalities. However, it should be noted that until the arrival of application software products and less expensive hardware in the 1970s and 1980s, adoption had been largely limited to big cities that could afford to write mainframe-based applications that could be used cost effectively by large staffs. In addition, many public works supervisors remained wary of using computers until the 1980s, having grown up in their professions without using digital tools, unlike many of their colleagues in accounting and finance, who had relied on the digital hand for over two decades.121 Computing and Traffic Control Nothing seems as local as the management of traffic, operating traffic lights, repairing streets, and removing snow. Traffic control using computers began in the 1960s, and by the end of the 1970s it was highly computerized in many sizeable communities. Bill Gates, cofounder of that icon of late-twentieth-century American business, the software firm Microsoft, even wrote software for this function while in high school. Years later, he recalled that he and his friend Paul Allen (cofounder of Microsoft) figured “out a way to use the little chip to power a machine that could analyze the information counted by traffic monitors on city streets,” the little rubber hoses across streets that were still used even in the new century. Before computers, vehicles drove over a rubber hose, with an electrified wire in the hose sending an electrical signal to punch-paper tape housed in a little box on the side of the road. Gates and Allen devised a way to have a computer chip read the tape and produce charts and data. This led to the creation of his first company, Traf-O-Data, in the early 1970s; he recalled, “At the time it sounded like poetry.”122 By the end of the 1960s, specialized computer equipment began appearing in large and midsized municipalities all over the country to monitor traffic flows. These systems adjusted traffic lights to improve the movement of vehicles by relying on various types of sensors that cars drove over; the sensors fed data on traffic volumes to software programs, which in turn sent instructions to traffic lights to change color. Prior to such systems, employees had to be stationed on the side of a road to observe traffic and manually adjust traffic signals; often these individuals were police officers. Ever smaller communities installed such systems all through the 1970s and 1980s. This use of computers provided data as well to officials involved in the much larger mission of planning what roads to repair or expand, and in support of other land use decisions, often linking traffic data to such other digital applications as GIS and budget modeling. County and state highway officials also used this type of information for similar purposes; both groups often collaborated on future plans for road construction.123 Geographic Information Systems (GIS) GIS became one of the most important uses of computers by local and county governments by the 1990s, one used extensively as well by state and federal
agencies, and which also became useful to private sector firms by the end of the century. GIS was poised to become available to citizens over the Internet by the end of the first decade of the new century. Developers of this software added continuously to its functionality while users found myriad applications for it. GIS is also one of those uses of the digital hand that became increasingly affordable as the cost of IT dropped and hardware’s capacity grew to handle larger files and more complex calculations. By the end of the century, nearly as many departments in any size municipality or county government could use this digital tool as had long been the case with accounting and budgetary software. We could have discussed GIS earlier, when reviewing uses of IT by state or county governments. However, because large and midsized cities were the earliest to adopt GIS, it made sense to emphasize the central role this application played in city government. But keep in mind one other trend evident by the 1990s, namely, that municipalities and county governments collaborated increasingly on land use strategies and road planning by sharing GIS data and even working with one copy of the software nested in a shared computer. But first, because of its various applications and forms, we need a working definition of GIS. A useful one defines GIS as software with information that “describes the locations, characteristics, and shapes of features and phenomena on the surface of the earth.”124 For nearly two thousand years, urban officials have used maps as tools for planning the maintenance and development of their communities. Since the early decades of the twentieth century, American communities used paper maps and later clear Mylar sheets with thematic data, such as one sheet just showing buildings and roads and another that could be overlaid on the first to picture the location of water and sewer pipes, and so forth, thereby revealing ever increasing amounts of detail about a particular area. Over the years, communities added the information they collected to their GIS databases, in layers so to speak, because one paper map could never hold all the desired information or present it in a way that was easy to understand. GIS software automated that prior use of maps, thereby making it possible to add and delete information relatively quickly, to model possible changes to the infrastructure, and to share information either on a laptop in a city truck, whose crew is trying to identify the source of a water leak, or with urban planners meeting in a conference room assessing various scenarios for designing the renewal of a blighted community. Earliest work on the development of GIS tools began in Canada in the 1950s and 1960s but soon also took place at the U.S. Bureau of the Census with its collection of demographic data (TIGER files) and at the National Geodetic Survey. Additional work in the 1960s and 1970s at various American universities and by a few state governments added to the body of software and knowledge about this application of the digital hand. Earliest uses of GIS involved collecting demographic and census data on where people were located (the focus of much federal government work) and documenting locations of parcels of land to help communities in the sale and tax assessment of properties. By the end of the 1970s, utility companies and local governments were adding facilities and infrastructure to these layers of data, such as the location of underground pipes and
electrical wires. Subsequently, they added information about specific buildings and other above-ground infrastructures. As one could imagine, the amount of data required in digital form far exceeded what one needed to supply a digital application in accounting or word processing. Early GIS systems of the 1970s proved expensive, often costing over $1 million to acquire (hardware and software) and also costly to populate with data. As time passed, however, these systems declined in cost, particularly in the 1980s, when the price of hardware dropped dramatically, while the capacity of computers to hold these very large files became available.125 By the end of the decade, cities and counties all over the United States began equipping their staffs with this new use of the digital hand.126 As with every new digital application, in the late 1980s hype preceded wide deployment: “The trend toward geobased mapping is taking municipalities nationwide by storm, as the equipment becomes more widely available and prices begin to decrease.”127 Trade publications of the late 1980s began documenting installation and use of GIS, however, and made clear that large cities and counties were again the first to use this new application, followed by waves of ever smaller communities.128 The earliest users in the 1970s and 1980s concentrated on property recording, tax assessments, environmental management, and regional or local development. One of the first surveys of deployment of GIS, conducted in the early 1990s, demonstrated that counties and municipalities generally used GIS for similar purposes. In descending order of use from highest to lowest, these included demographic mapping, land parcel mapping, facility management, planning, and zoning. But already in that period uses that became widespread in the 1990s included land use, natural resources analysis (such as hydrographic mapping), transportation modeling, capital improvement planning, and permit tracking, to mention a few. Over 10 percent of all communities used GIS by the late 1980s,129 and over half did by the end of the century. By the late 1980s, communities began integrating separate digital maps and data into larger views of their communities. In the 1990s in particular, departments within a local government collaborated more, whether out of necessity to share information or out of desire, since coordinated work could now be done more effectively than in the past. These activities included, for example, coordinating repaving of a street with replacement of underground pipes, electrical wires, and TV and telephone cables.130 New users now included law enforcement, fire departments, social welfare agencies, and school districts tracking the ebb and flow of students through communities as they came and went, or grew up.131 Thus, one could see GIS applications evolving through three general phases at the county, city, and even state levels: first, applications that provided inventories of existing structures, infrastructures, and parcels of land; second, a phase in which officials used GIS tools to analyze and plan activities (such as renovations of communities and economic and social development using “what if” scenario planning methods); and third, a phase clearly evident by the early 1990s of using GIS to perform and monitor managerial and operational activities, much like project management software. Investments in
GIS became some of the largest in the public sector, reaching annual expenditures of over $730 million by the end of 1992, a figure that continued to grow through the rest of the decade. All communities of over 100,000 residents used GIS applications fairly extensively by mid-decade.132 Because we will not discuss this application any further, it makes sense to take its history briefly into the early 2000s. Local systems became increasingly linked to county and then to state GIS databases; new layers of information were added to account for changing needs and the replacement of existing buildings and infrastructures, and coverage also expanded to include rural areas, such as mountains and vegetation. GPS sensing became available in the mid-1980s, but not until later did this system of satellites augment the accuracy of mapping. Satellite photography added more details such that, by the early 2000s, cities even had photographs of the exterior of one’s private home and street tied into their local GIS databases. Reports from the early years of the new century documented the extensive integration of GIS software in a vast array of planning and operational functions of all local governments. When the Internet became a viable tool for local government (discussed below), communities began making GIS data available online, a tool useful for building contractors and other firms doing work for municipalities, such as utility and construction companies.133 By 2005, the most extensive users of GIS in local government were administrative services, code enforcers, those involved in community development, public works, law enforcement, school districts, and regional governments. Uses had spread to a variety of functions such as agriculture, environmental management, health and human services, homeland security, land records, law enforcement, public safety, elections, economic development, library management, mapping of sex offenders, public utilities, transportation, telecommunications, water resources, and water and wastewater management. Deployment of IT in Municipalities, 1950s–Mid-1990s A brief discussion of the patterns of deployment that took place over a long period of time brings sense and order to how so many thousands of communities of different sizes and personalities came to embrace computing. At the outset, we should acknowledge that many other applications of IT, such as the expanding role of telecommunications and voter systems, were not discussed above because, while important, they do not add substantially to our understanding of how municipalities embraced computing. In the 1950s, the largest cities in America were not only the ones best able to afford computers but also the ones with the biggest problems to solve. An industry magazine reporter wrote in 1957 that “alarmed by the staggering amounts of paper-work necessary for municipal government operations, city administrators across the nation are looking to electronics to solve their problems.”134 By the mid-1960s, officials had come to understand that computers could do more than accounting and billing, that “the great advantage of EDP” lay in its “ability to record large quantities of data and to readily provide this information in a form useful to
decision-makers.”135 Then the hunt was on to identify potential uses, cost justify them, and implement them. Surveys of municipalities indicated that by the end of the 1960s, just over half used EDP in some form, whether with their own systems, with systems shared with other agencies, or through use of a service bureau. Cities with over 500,000 residents spent on average some $1.8 million a year on EDP, a figure that fell to just over $40,000 for cities with populations of 25,000 to 50,000 residents.136 During the 1970s, all large cities used computers, and the story then was about their expanding portfolio of uses. The smallest local governments were in no rush to embrace computing, largely because of the technology’s relatively high cost and their lack of technical staff to implement these new devices and software. By the early 1980s, the most widely deployed applications in descending order of use were payroll, accounting, budgeting, utility billing, tax assessment, tax billing, personnel, law enforcement, inventory, and voter registration, ranging from over 85 percent of municipalities using IT for payroll to 16 percent for voter registration.137 Other surveys in the mid-1980s confirmed this general pattern, although with slightly varying statistics; the trend was clear. New uses were added, particularly as deployment of PCs occurred in the 1980s. These applications included word processing, financial planning, and even fleet management. As software packages targeted at local governments became increasingly available commercially in the late 1980s, use of these smaller systems by small communities expanded rapidly.138 A Gartner study done in 1989 looking at the number of workers per workstation (including PCs) provides surrogate statistical evidence of the relative deployment and availability of IT tools to workers. In state government, there were 3.79 workers per workstation, while in county and municipal agencies, there were 6.10 employees per machine. Federal workers appeared to be the most automated as they had 2.56 workers per machine.139 One other contemporary survey reported that “virtually” all cities and counties used computers, and if one added federal usage, this resulted in the deployment of over 450 different applications of IT. As with state and county governments, however, local communities implemented most applications to reinforce existing functions and roles. So, no major changes in organizational and managerial power occurred as a result of using computers.140 Recent Trends Cities and towns continued to embrace new technologies, such as the Internet, integrating them into their daily work. Officials increasingly knew to do this carefully, because successful diffusion of any IT tool required technical expertise to implement and maintain, a lesson many learned again when they went through two to three generations of Web sites in the late 1990s and early 2000s. Many came into the late 1990s with myriad systems, ranging from large mainframe applications to relatively new ones, such as those housed in PCs or GIS systems. In recent times, PCs have played an important role in the lives of public officials. As two experts on IT in the public sector recorded, “the PC revolution came
rapidly and recently. It began less than a decade ago for most cities, and unless in the context of a centralized system, PCs usually came with few of the supports provided with the earlier technology”; instead they came with “general purpose packaged software” hardly tailored to their needs and with “minimal of support staff.”141 In 1995, a team of academic experts noted that the managerial and political environment officials operated in was not necessarily conducive to further adoption of IT: A number of major changes have taken place in the United States in the past decade that have heightened the tension inherent in the dilemma: the aftermath of expanded services provision spawned by federally supported programs; the cutbacks in funding sources for urban governments in many locales; a generally skeptical and critical attitude of citizens toward government; and a tendency to push an increasing number of responsibilities that had drifted to the state and federal levels back down to the local level.142
Meanwhile, as officials were still trying to figure out how best to use PCs and such new applications as GIS, industry reporters and citizens were asking municipalities what role they expected the Internet to play, including their participation in the “Super Highway” initiative of the federal government, more elegantly known as the National Information Infrastructure (NII).143 Between roughly 1995 and the start of 2000, municipalities also had three sets of issues to deal with concerning IT. The first was the normal deployment of new uses of IT and more current technologies, much along the lines described earlier about their activities in the 1970s through the early 1990s. The second concerned the immediate problem of Y2K, while the third, and most convoluted, involved a host of telecommunications issues ranging from understanding what they could do after passage of the Telecommunications Act of 1996 to how to respond to the arrival of the Internet. Y2K can be dispensed with quickly. Local governments were not immune to the issue; they read about it in their industry publications and heard about it at conferences, with interest building in 1997. They succumbed to the hype; for example, an industry “expert,” Peter de Jager, appeared in American City and County predicting “widespread failures,” although in fact that did not happen since Apple and many Microsoft-based software products were sufficiently upgraded in time by the vendors, as also occurred with much mainframe software.144 Yet industry publications kept reporting that cities were not ready for Y2K; many communities busily worked to remedy their old systems.145 Simultaneously, officials had to concern themselves with the very complicated Telecommunications Act of 1996, discussed more fully in volume 2 of The Digital Hand.146 Concerns during congressional work on the law focused on the effects it might have on public rights-of-way (ROWs) and on zoning, particularly regarding TV cable systems, rights that the law generally protected. However, beginning in 1996 and extending to the end of the century, officials waited for the Federal Communications Commission (FCC) to publish enabling regulations regarding such matters as TV, cable, and telephone, all important issues to large and midsized metropolitan communities.147
In addition, there was the Internet. Communities began looking at how to use this new form of telecommunications in the early 1990s. Some fifty communities had established networks to provide citizens with information prior to the availability of the Internet; but this small body of experience had little influence on the thousands of other municipalities that were just learning about the Net at the same time as their citizens.148 Early adopters concluded that the technology would make it easier to provide citizen-centered services, and their installation of Web sites captured the attention of the industry press,149 spurred on in some cases with federal funding.150 As local governments approached 2000, many turned their attention to Y2K, momentarily putting Internet-based projects aside, such as those that could enhance purchasing or delivery of information to citizens.151 Nonetheless, large cities and many midsized communities established their presence on the Web by the end of the century, in part because citizens who had learned to buy goods online twenty-four hours a day, seven days a week, wanted to do the same with their governments, such as paying for parking tickets. Thus, pressure on officials came from citizens rather than from the more traditional sources of requests for IT, municipal staffs and industry associations.152 With the Y2K scare behind them, although now saddled with a sagging economy and the clear expectation of shrinking tax revenues, governments began to move into an era of extensive use of the Internet. Surveys indicated that citizens were pleased when their local officials set up Web sites. Those municipalities that did this also found out that they had to reengineer processes and add resources to maintain these sites. In a survey of nearly 1,500 responding municipal governments, some 85 percent said they had a Web site; it is difficult to know, however, whether those that had not responded to the survey also did. Nonetheless, the evidence indicates that many had Web sites and that this number had increased by at least 50 percent since an earlier survey conducted in 1997.153 Very few provided any interactive, e-government services, largely because they knew so little at this point about e-commerce and had issues concerning security and privacy.154 Observers noted, however, that as officials learned about this new use of IT, this technology was energizing local governments into improving and changing old operational practices and creating a more collaborative environment involving firms and citizens, much as happened among departments using GIS.155 That change over earlier practices represented the start of a significant transformation in how local governments worked that should not be minimized or ignored. Applications paralleled those of county and state agencies and fell into three general types: government to citizens applications (G2C), such as making forms available, e-mail, and applying for permits online; government to business (G2B), such as some shopping, corporate tax filing, and acquiring permits and licenses; and business to government (B2G), such as simple purchases.156 But as of 2001–2002, these uses were in their embryonic stage of development. Nonetheless, it was already becoming evident that a paradigm shift was under way similar to what was occurring in state and county governments, away from a largely bureaucratic, inwardly facing perspective for doing work and
increasingly, if slowly, toward a citizen-services-focused view of work. This subtle change was first noticed in medium and large cities, particularly those on the West Coast that had large computer-savvy populations and government officials.157 Adoption of this evolving way of working continued to be impeded by lack of sufficient technical staff and budgets, perhaps cultural inertia, and the need to redesign internal processes in order to optimize use of the Internet.158 By 2003, it was becoming evident, however, that public officials were overwhelmingly inclined to use the Internet, bolstered by positive early experiences of others, effusive press accounts of successes, and wide support from citizens who themselves were becoming extensive and enthusiastic users of the Internet. Early adopters were also going through a second round of upgrades of their Web sites, to which they were adding e-business functions, such as the ability to conduct some transactions with local government.159 Two important applications emerged that proved very relevant to municipal governments: procurement and building permits. In one important survey done in 2004, 75 percent of the responding municipal governments reported using electronic forms to speed up the process by which firms and contractors could obtain building permits. This application supported generation of revenue and, of course, economic development in a community. Purchasing online also proved popular, with 77 percent reporting that they relied on the Internet in support of this use. Online bill payments, however, although up year over year, remained at about 40 percent, and even then only for limited items, such as parking tickets and parks and recreational passes. Online applications for city jobs also grew, from only 28 percent making that possible in 2001 to 44 percent in 2004. On balance, we can conclude that the “takeoff” in use of the Internet by municipalities occurred sometime after 2001.160 As this chapter was being written in 2006, wireless connections to the Internet for citizens and to municipal Web sites were just beginning, creating a whole new service and source of fees a community could charge. More important, officials saw quickly that having these “wi-fi” connections would make their cities attractive places for high-tech companies to locate in and for computer-savvy citizens to do the same.161
Conclusions A team of experts who had long observed the workings of state and local governments concluded in 1993 that over half the public sector managers they had interviewed for a survey found “themselves as very dependent upon computing” and two-thirds concluded that the digital hand was proving important in carrying out their work.162 The dozens of surveys that preceded this one and others done over the next decade or so consistently confirmed that public sector managers and users enjoyed the same benefits and similar challenges in using the digital hand as those evident in many industries. They also avoided many of the difficulties faced by very large users of IT in the federal government, particularly in the 1970s and 1980s. Unlike so many of their federal colleagues, at all levels
of local government, officials had either newer systems that were not in dire need of replacement or systems small enough that they could find the necessary wherewithal and technical staff to keep them current. In the case of small state, county, and municipal governments, the reason lay elsewhere; they just did not yet have computers integrated into the core work of their departments. When these smaller agencies computerized work, they did it on more modern systems, often using software designed by vendors for specific applications in public administration, or shared services put together by local governments more experienced in these matters, such as state agencies or very large counties and cities. State and local governments shared with federal departments and agencies, and almost every private sector industry, a technological resurgence, beginning in about 1997–1998 with their adoption of Internet-based services. Perhaps because the costs for Internet applications were often less, and the risk of failure minimized by the incremental nature with which Web sites (and their services) were created, one senses the kind of enthusiasm and optimism for improved productivity in governmental operations experienced when the first mainframes appeared in the American economy in the 1950s and 1960s. Press hubris was similar in both periods; managers rushed into print to brag about their newly installed systems and how much they expected of them, while citizens were just as enthusiastic and hence politically supportive of “e-government.” For many local governments, education represented the largest single line item in their budgets, and often also their biggest group of employees. Therefore, to complete our understanding of the role of the digital hand at the local level, we need to turn our attention to students, teachers, and their school districts. Theirs is a different story from the one just told.
8 Digital Applications in Schools

Computers have become one of the expected trappings of today’s classroom, and schools have exhibited an insatiable appetite for hardware; but systemic curricular integration of computers is still more of a promise than a reality.
—William R. Jordan, 1993
No aspect of American society seems to engage so many individuals, industries, and communities as schools. Children, their parents, community leaders, business executives, teachers, public officials, techno-enthusiasts, and naysayers have made the world of schools so public and of such concern. Only wars have drawn greater public attention. The reason for education’s public nature is easy to find. Training future participants in how to succeed in America’s democratic society and to thrive in an “advanced” economy is central to the mission of American schools. The proverbial “everyone” has a perspective and most have personal experience with the American system.1 So, it should be of no surprise that as American society embraced computing and telecommunications, a major focus for this kind of activity would be the school house. From preschool through high school, students, teachers, and administrators came face to face with computers, raising questions about how best to use this technology, how to fit it into the fundamental missions of schools, and how it could best support teaching and learning. While billions of dollars have been spent on IT in education, this sector of the economy (and society) has been the subject of much debate and controversy, with pro-technology enthusiasts touting the benefits of IT and overselling these in the minds of an equally loud community of critics. However, the epigraph opening this chapter typified the experiences of many educators, this one from a teacher in Florida who had pioneered use of computing in his school.2 The
debate about the benefits of using such technology is vociferous and unresolved despite what has clearly been a massive investment in IT in education. The debate has also spun off as many articles and books as all the literature combined for all manner of computing across all of the public sector agencies discussed in this book. The result is a conflicted, still ambiguous story about the role and value of information technology in education. It is with far less certainty that we can conclude that IT created a new style of operating in education, as it did, in varying forms, in such areas as the administration of government agencies and the military. It is that lack of conversion to a new style of teaching that makes the story of education somewhat different from the experiences of other public institutions, although much of the story told in this chapter will sound familiar: the introduction of computing, rates of deployment, and applications of the technology. However, we will also see that in this industry, as in all others (public and private), technology is embraced and effective when it supports and aligns with the core work and values of its users. The extent to which IT did or did not do this in education is where the core lessons lie for our understanding of how computing affected teaching and learning. As a distinguished student of the history of education and its use of various technologies recently pointed out: “Without a critical examination of the assumptions of techno-promoters, a return to the historic civil and social mission of schooling in America, and a rebuilding of social capital in our schools, our passion for school-based technology, driven by dreams of increased economic productivity and the demands of the workplace, will remain an expensive, narrowly conceived innovation.”3 In short, Larry Cuban reminds us that the story of computing in education is both about the continuous adoption of IT and the consequences of such actions, which are not as clear, or even as positive, as in other parts of the public sector—and this forty years after computers were first used by educators. Understanding how education fits into the broader story of the public sector’s use of computing is thus essential, and it is the central focus of this chapter. To accomplish that task, we need to view applications and deployment as tactically as in other chapters, through the organizations that acquired and used these technologies. In the case of education, its world is divided into two fundamental halves: administration and teaching. Administration concerns the managerial staffs in schools, such as principals and the people we, as students, saw working in offices or in support roles within school buildings, and the staffs employed in school district offices. They hired and fired teachers and staffs; built and maintained school buildings; acquired text books; set, spent, and monitored budgets; collected grades and tracked academic performances.4 Teachers did this too, grading and reporting student performance, while communicating with their principals and parents. The second half of the educational organization—and the most populous—was the collection of teachers and students who were the ones most directly involved in the actual learning activities. That community was organized into essentially three types of schools: preschool through elementary (kindergarten through roughly the third or fourth grade),
middle school (normally fourth through seventh or eighth grade), and high school. Each type had different pedagogical issues to deal with, such as children of different ages and social backgrounds; each also varied in size and had a correspondingly different sized budget. In short, while we will generalize frequently about the deployment of computers across schools, it is important to note that when one looks at specific uses of computing, they vary between what a five-year-old can do and what a sixteen-year-old does. It is an important observation to keep in mind because if this chapter were to be a full-length book, we would have to discuss in detail how computing affected teaching in each grade and subject, something that space does not permit here. Furthermore, there is a vast literature on this theme of differences.5 To complete the statement of scope: this chapter focuses only on public education, that is to say, on schools run by local governments and funded by taxes. They represent the overwhelming majority of schools in America for the entire half century in question, and while private schools were also extensive users of computing, their experiences either mirrored those of public schools or were unique enough that to discuss them would take our focus away from what happened generally across the American educational landscape. Finally, keep in mind that the chapter after this one tells the story of computing in higher education, thereby further rounding out our discussion of the digital hand in education.
Computing in the Administration of Education, 1950s–2000s When people think of computing and education, they turn immediately to the use of IT by teachers and students in classrooms. However, use of IT in the administration of education proved far more influential than technology in teaching (so far). Deployment in administration mirrored much that took place in the deployment of computing and telecommunications in many other public sector agencies and in the private sector. Indeed, despite an enormous body of literature and debate about computing and teaching, the technology had less of an effect on teachers than one would have been led to believe.6 When looking at all the participants in education—students, parents, teachers, school administrators, and district staffs—the least amount of digital activity occurred in the classroom, although at home teachers used computers much as did parents and students, while school administrators of all stripes relied on the technology to do their daily work. For that reason, we start our discussion about the most important application of information technology in education on the administrative side of the equation. It should be of no surprise that the largest school districts in the country were often the ones that most used precomputer information technologies to track students, schedule classes, manage payrolls, accounting, budgets, purchases, and inventories (such as furniture and text books), account for grades, and assign students to specific schedules, teachers, and classrooms. Paul Serote, in
charge of data processing for the Los Angeles City Schools, the second largest school district in the United States in the 1950s and 1960s, ran an operation that had been an extensive user of precomputer information technologies and, when computers arrived, became an early user of them. He described the case for using IT in support of his 700 schools: Education is big business, and you realize this when you look at your property tax bills and see that the largest portion goes to education. Behind the scenes in the education of a child is a vast machinery involved with the construction of buildings to house the student; instruction of teachers and the eventual selection of teachers to teach the students; purchasing of supplies, equipment, desks, furniture, laboratory equipment, magazines, periodicals, books, etc.; planting of grass, the development of athletic fields, the maintenance of such fields, and the cleaning of the classroom.7
He went on to describe the various functions of any school and district, arguing that the collection of functions was as complex as those of a large business and that was why his district had become such an extensive user of computers in most functional areas.8 When computers became available, superintendents in particular, but some principals as well, began looking at this new technology to determine what value it might bring to the administration of education.9 While teachers began using computers in measurable quantities to assist their teaching in the 1970s and 1980s, administrators had been relying on all manner of IT since the early decades of the twentieth century in support of office work. As in other public sector agencies, they adopted ever newer forms of office equipment as they became available. Similarly, they also had high expectations that technology could improve their productivity, thereby freeing them to do the core work of educators. Take the very typical example of the high school in Evanston, Illinois, in the late 1950s, which acquired an IBM 402 Accounting Machine—not yet a computer but a calculator—which it used for a variety of accounting applications. Its officials had high hopes for the machine: “Punched card processing equipment handles clerical problems at Evanston Township High School with such speed and efficiency that school administrators have time to give more personal attention to students, especially during scheduling difficulties.”10 This case is also a useful reminder that before computers, applications of information processing equipment had clarified which uses of IT made sense, so that when computers came along, administrators learned through prior experience with “office appliances” what potential uses there were for the new technology. That circumstance goes far to explain why the largest school districts were the first to enlist the help of the digital hand. In Evanston’s case, its 402 mimicked what computers were used for years later: to schedule classes efficiently, thoroughly, and accurately, track grades, produce statistical reports, and to do so quickly and with fewer employees.11 For many of the reasons experienced by other public sector agencies, however, most schools and school districts could not cost justify use of computers until the 1960s or 1970s, but by the mid-1970s,
almost all large and even midsized school districts used computing for accounting, administrative, and statistical applications. They acquired their earliest computing in one of three ways: through work done at one school that spread to others, as in Evanston’s case; through district installation of computers and implementation of standardized administrative functions in their schools (the case of all major urban school districts); or through service bureaus often made available by state or county governments. The earliest uses involved payroll, accounting, and financial reporting, just as across most public and private sector organizations. It was also largely a story of the 1960s and early 1970s. The next wave of uses was administrative, involving personnel records, inventory control, class rolls, recording and reporting of grades, and student scheduling. In the 1960s, data entry involved punched cards, but by the 1970s other input devices were used, such as magnetic cards and online terminals, while data storage had moved from cards to magnetic tape, later to disk storage. As computers became easier to use and software packages designed for education administration appeared on the market, larger school districts embraced these machines, such as IBM’s System 360 and 370. The evolution of technology also allowed school districts to take independent applications and integrate them into accounting or student systems, largely beginning in the 1970s.12 Lest we overstate the use of computing, professors studying K–12, writing in the mid-1960s, observed that “primitive paper and pencil techniques still prevail for most educational information processing,” while use of computing remained “limited” and its application to teaching “constitutes a frontier being explored only at the research and experimental levels.”13 Even as late as 1965–1966, uses involved business accounting, student records, and general administration. In states like California and New York, and in large cities such as Chicago and Philadelphia, these applications became increasingly available during this decade. Chicago’s IBM 305 RAMAC received considerable publicity as a state-of-the-art system; later, other role models emerged in Los Angeles, Atlanta, and elsewhere. One use of computers in very large school systems of particular interest in the 1960s concerned maintaining student census records to project changing demands for classrooms and teachers by grade and neighborhood. That function facilitated planning for class schedules, types and quantities of teachers needed, budget forecasting, and even planning bus routes.14 Inevitably we must ask, how many school districts actually used computers in the 1960s and early 1970s? One survey done in 1972 provides some insight. Of 12,400 respondents to the survey, just over 30 percent reported use of computing for administrative functions, while just over 65 percent had yet to use computers at all. Fewer than 4 percent acknowledged teachers using computers in instruction.15 Substantially increased use came in the 1970s, again more in administrative applications than in teaching. However, all the studies and surveys from the early 1970s reported that uses began in the 1960s and continued into the 1970s, with applications increasingly moving from card tub files to direct access memory systems, which were essential for the use of CRTs as well.16 With the
evolution of digital technologies in the late 1960s and 1970s, school districts endorsed the concept of management information systems (MIS), which called for integrated systems and the management of IT through centralized operations, embracing a trend evident across many industries and government agencies at that time. Information processing managers and the administrators who they worked for began integrating systems that planned programs in school districts, accounted for monies spent and budgeted, tracked and assigned people (students, teachers, administrators), documented student development, planned maintenance for buildings, and monitored inventories all while school districts, and many individual schools as well, grew in size and operational complexity in the 1970s and 1980s.17 The result of these various initiatives was that by the late 1980s, a wide range of applications of digital computing existed in school districts around the United States. Table 8.1 lists many of the more widely deployed uses. Note the variety, reflecting both the pattern evident in so many industries of using IT in accounting, personnel, and financial operations, but also those unique to education, such as in the functioning of libraries and schools. Survey data from the earliest days of computing in education suggest the speed with which digital tools were embraced. One study of 1,360 school districts conducted in 1981 looked at when they began deploying this technology. Generally, about 73 percent of school districts with 25,000 or more students began using computers between the late 1950s and 1969. About 59 percent of all districts with 10,000 to 24,900 students embraced computing between 1960 and 1970. School
Table 8.1 Common Administrative Uses of Computing in K–12 Education, Late 1980s

General Type: Examples
Financial systems: Budgeting, accounting, purchasing, salaries
Office applications: Word processing, filing, desktop publishing
Personnel systems: Payroll, personnel records, faculty assignments, health records, tax records, benefits management
Asset applications: Space utilization and room assignments, inventory management, maintenance planning, energy utilization
Research and planning: Budget analysis, bus routing, statistical studies, testing and evaluation, project planning and control, enrollment analysis and forecasting
Student applications: Scheduling, class registration, grade reporting, attendance accounting, student demographics, health records, test scoring, class lists
Library systems: Circulation, catalogs, online database searches, purchasing

Source: Adapted from William C. Bozeman, Stephen M. Raucher, and Dennis W. Spuck, “Application of Computer Technology to Educational Administration in the United States,” Journal of Research on Computing in Education 24, no. 1 (fall 1991): 66–68.
districts with less than 10,000 students began using computers after 1965, with about 58 percent participating for the first time during the 1960s or early 1970s.18 To put this data in context, the nearly 1,400 school districts participating in the survey—all of which used computers by 1981—represented only about 9 percent of the existing 16,000 school districts in the nation. So, it is not clear how many others might have used computers. There is enough data here, however, to surmise that districts learned about computing and adopted it as a tool in roughly the same time period, beginning in the 1960s, increasingly so after 1965, and extensively in smaller districts in the 1970s. The survey confirmed use of these applications listed in table 8.1. A second survey study conducted later in the 1980s suggested how quickly school districts had embraced computing in the 1980s. In this instance, in a sample of 20 percent of all districts in the United States, nearly 95 percent used computers for such applications as those listed in table 8.1. Sixty-two percent had a computer on their premises, another 16 percent relied on service bureaus, and the rest on systems in other districts. The data also suggest that the types of uses to which administrators put computers remained remarkably consistent with those listed in table 8.1 from the 1960s into the 1990s. The pattern was essentially about increasing numbers of districts using computers, with the smallest ones coming late to the process, and often only when they could integrate PCs into their daily work.19 No technology so affected education as the personal computer, about which we will have much to say below in discussing its role in teaching. What role did it play in the administration of schools and districts? In large school districts, already accustomed to using mainframes and equipped previously with software tools with which to do their work, administrators resisted switching to PCs or adding new applications onto personal computers when they could use existing computing power in their data centers and also avoid costs of converting existing software tools and processes. Therefore, small districts became extensive users of this new technology while others often did not come to appreciate the importance of these smaller devices for education until school principals, and their teachers, began talking about these new technologies in the 1980s.20 By the end of the 1980s, however, districts were integrating this class of technology into their inventory of applications, particularly as they linked networks from schools to district mainframes. Thus, by the mid-1990s, with the availability of the Internet just then spreading, PCs were appearing all across districts, from Apple Computer, which had captured the lion’s share of the teaching market largely at the elementary school level in the 1980s, and increasingly now, from IBM and other suppliers eager to cash in on the expanding market for first-time users and to replace the older Apples, which did not have the technical capability of attaching to networks, such as the Internet, or sufficient capacity to operate the latest software, much of which began using graphics and other presentation formats.21 Since the rest of this chapter is devoted to instructional uses of computing, the story of districts and computing in the Internet period makes sense to discuss at this point. 
In the second half of the 1990s, school districts began creating Web sites to inform parents and communities about schools, their programs, and
meetings. Individual schools did the same, such that by the early 2000s, hardly any school or district existed without a Web site. Principals used these to communicate with students and communities, and intranets to transmit information back and forth between districts and their schools. Districts did much the same, following familiar patterns evident in law enforcement and municipal and county administration, moving from early sites that only had passive information to later versions that provided the ability to communicate back and forth. In summary, the historical record shows that initial uses of computing were in direct support of existing administrative functions, such as payroll and other accounting and managerial operations. Doing these things became cheaper, faster, and better as technologies evolved, which also meant more accurately and for larger numbers of students, teachers, and staffs. By the 1980s, administrative functions increasingly merged, made possible by software that interconnected processes, such as accounting and records keeping. Like other agencies, management became increasingly able to rely on quantitative data with which to make operational decisions. In the case of principals, school districts, and their boards, they could model possible operational differences, just as legislators were able to do in the same period. Modeling made planning, particularly scenario planning, easier and better to do in the 1980s and 1990s. In short, the administrative side of education increasingly functioned like other agencies that used computers, integrating them into the fabric of all their major functions. At the school level, use of IT even extended to sporting events where coaches used PCs in the 1990s to analyze the performance of their athletes. Then there was teaching, presenting a far different story of adoption.
Computing Comes to Teaching, 1960s–1980s

So far, in agencies described in this book, use of computing emerged as a tale of a new technology fitting conveniently into existing work of organizations and then later altering these functions to account for the capabilities of new technology. In other words, the story supports nicely our argument that what happened in the late twentieth century was the emergence of a new style of doing work. With K–12, however, and specifically with teachers, we may have an exception to the pattern. Many teachers used computers, particularly PCs, encouraged strongly to do so by parents and administrators. But the benefits of using computers to teach were not always as clear as with other applications, creating a conundrum that, fifty years after the introduction of computing, had not yet been resolved to the satisfaction of administrators, public officials, parents, and especially teachers. At the risk of doing terrible injustice to the fundamentals of pedagogy, it makes sense to summarize briefly a few basic values and styles of work of teachers that animated their attitudes and actions regarding computing. These have hardly changed in decades, indeed, in well over a century in the United States. Perhaps the most obvious is that for generations teachers embraced the concept
of “teacher-centered instruction,” which, simply put, meant a teacher taught. He or she was the center of attention, standing in front of a room full of students lecturing and orchestrating instructional activities. Teachers talked (taught) and students listened (learned). Classrooms often looked very similar in 2000 to those of 1900. To be sure, these varied with younger children often clustered into learning groups within a classroom, or with high school students moving from one room to another for the ubiquitous fifty-five-minute class on a specific subject. Classroom behavior, replete with raising hands, a code of conduct regarding who talked when, and how tests were administered, and so forth, remained relatively constant throughout the twentieth century. Teachers also saw it as their role to impart widely embraced social values, such as those of democracy in America; to sort out the bright from the less gifted young scholars; and to provide a modicum of social control. In short, the majority of the action was in the classroom. Most commentators on education who are either critical or supportive of this approach to teaching acknowledge that teachers share an ethos of conservatism, which values delight in teaching and the thrill of a student learning, but normally through a process that has changed little from one decade to another. This culture of teaching reinforces commonly shared practices from old to young teachers. Their beliefs in how students learn also are essential influences. Given the way teachers are managed, once alone in a classroom, they could essentially do as they wanted pedagogically. That situation did not begin to change in any appreciable way until the 1970s when local, state, and federal officials began mandating that students achieve specific levels of performance as tested by state or national examinations. By the late 1990s, federal aid to schools was linked directly to such results. But for most of the century, teachers had enormous freedom to act as they willed in their classrooms.22 The historical record demonstrates clearly that teachers were not opposed to using innovations in their classrooms. The blackboard in the nineteenth century is perhaps the most important innovation in teaching of that century.23 All through the first half of the twentieth century, teachers and their school administrators experimented with other technologies, from radios to television, and, of course, the ubiquitous “film strips” and slides that animated so many lectures from the 1930s until nearly the end of the century. “Teaching machines” have been discussed, built, and used since the 1920s. Automation was always the Holy Grail, holding out the possibility for administrators to teach more students with fewer teachers, an important concern because teaching was a profession constantly projected to have insufficient numbers of members. Teachers saw these devices as possible enhancements to their core work in classroom-centered teaching.24 But what are “teaching machines”? These are devices that can be used by students to learn skills or rote subjects.25 They originated in the 1920s with the work of Sidney L. Pressey, who developed objective self-scoring methods and standard tests, efforts which next led to his designing mechanical devices for self-instruction. In the 1950s, a behavioral psychologist, B. F. Skinner, also expressed
interest in learning devices to teach humans much the way he had learned to do with animals. With both Pressey and Skinner, machines made it possible for individuals to control what they learned and the rate at which they accomplished such tasks, and made available an assessment of what they had mastered. By the 1960s, Skinner’s work had ignited modern interest in using some form of learning technology in teaching, although, as we saw with the military, training devices had been developed by the armed services as far back as World War II, often called military knowledge trainers. Pressey thought of his devices and techniques as augmentations to human teaching, while Skinner valued students’ providing the correct answer to a question and having that correct activity reinforced. Skinner’s approach influenced the design of teaching machines made, for example, to help children learn mathematics in the 1950s and 1960s. A less technical approach familiar to all students in any period since the 1940s was the ubiquitous workbook, also called in the education trade “programmed books.” Students were provided material, they answered questions about it, and then, and only then, were they permitted to move on to another topic. This became the basis of much early learning software, beginning in the 1980s. One further paper-based and learning-machine-based development became an additional precursor to the digital teaching tools of the 1980s and beyond: “branching programs.” Initially developed in the late 1950s, these built on developments in military training devices of the 1940s and 1950s. A student was given information, and the software required the young scholar to make a decision or provide an answer, and depending on how the student responded, the machine presented additional material. That new information varied depending on the answer. This was a fundamental principle behind video games and much modern educational training software, although in the 1950s and early 1960s it was based on electromechanical devices. Many of these various classes of machines, from Pressey’s in the 1920s to Skinner’s and others’ of the 1950s–1970s, were used in adult education, while educators also experimented with elementary, high school, and college students during the rest of the century.26 While hard data on how extensively such devices were used in K–12 are difficult to come by, extant evidence demonstrates that there were dozens of such devices available in the marketplace. Since many of those units each cost between $100 and $500 in the early 1960s—a great deal of money—we can safely assume that the number of teachers who had access to such tools remained few.27 Yet teachers trained in the late 1950s, 1960s, and 1970s would have known about the existence of such equipment and, of course, about the work of at least B. F. Skinner, if not also of Pressey, and others. The next chapter in the history of teaching machines involved use of computers. The story is normally understood to consist of two eras, both defined by hardware: pre- and post-microcomputers. Prior to the existence of personal computers (beginning in the second half of the 1970s), use of IT in the classroom for teaching was almost nonexistent; afterward it began a long and slow expansion to the present, and with much uncertainty about the effectiveness of its use.
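The branching logic just described, in which material is presented, a question is posed, and the next item is chosen according to the learner’s answer, is simple enough to sketch in a few lines of modern code. The following Python fragment is purely illustrative, a hypothetical sketch rather than a reconstruction of any historical teaching machine or commercial program, but it captures the routing idea, with correct answers reinforced and incorrect ones sent to remedial material:

    # A minimal, hypothetical sketch of a "branching program": present an item,
    # pose a question, and route the learner to different follow-up material
    # depending on the answer. Illustrative only; not any specific product.
    FRAMES = {
        "start": {
            "prompt": "7 x 8 = ?",
            "answer": "56",
            "if_right": "harder",   # correct answers branch to new material
            "if_wrong": "review",   # wrong answers branch to remedial material
        },
        "review": {
            "prompt": "7 x 8 means adding 7 together 8 times. Try again: 7 x 8 = ?",
            "answer": "56",
            "if_right": "harder",
            "if_wrong": "review",   # keep drilling until the answer is correct
        },
        "harder": {
            "prompt": "12 x 8 = ?",
            "answer": "96",
            "if_right": None,       # end of this short sequence
            "if_wrong": "review",
        },
    }

    def run(frames, start="start"):
        frame_id = start
        while frame_id is not None:
            frame = frames[frame_id]
            reply = input(frame["prompt"] + " ").strip()
            if reply == frame["answer"]:
                print("Correct.")            # immediate reinforcement
                frame_id = frame["if_right"]
            else:
                frame_id = frame["if_wrong"]

    if __name__ == "__main__":
        run(FRAMES)

Running the sketch at a terminal steps a student through the three frames, looping through the remedial frame until the right answer is given.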
Taking a hardware-centric view of the history of computing in education, while a useful shorthand for suggesting the extent of deployment, is not sufficient, but it is a place to start in understanding the role of IT. Seymour Papert, one of the earliest students of the use of computing in education, recalled that in the 1960s, “we were a small handful” of researchers studying the use of computers in teaching, a group that increased to a “larger handful” in the early 1970s. He recollected that “the big break came with the advent of the microcomputer in the middle of the decade. By the early 1980s the numbers of people who devoted a significant part of their professional time to computers and education had shot up from a few hundred to tens of thousands,” a group made up largely of researchers in higher education and teachers in K–12.28 His observations point out that for all intents and purposes, computing was not used by teachers in the 1950s, remaining largely an experimental process in the 1960s centered at universities using mainframes. As time-sharing began evolving from its experimental forms into stable production systems at the end of the 1950s and early 1960s, various projects emerged, most notably development of PLATO. Led by Donald Bitzer at the University of Illinois, an interdisciplinary group of educators, psychologists, physicists, and engineers developed software to conduct individualized instruction, beginning in 1959. During the 1960s, it flowered into a usable system applied in teaching adults and children. In 1974, Control Data Corporation (CDC) acquired the rights to sell it as a product.29 Students gained online access to PLATO on a mainframe over a network; beginning in 1977, a student could use microcomputers and a new version of the software called Micro-PLATO. The system remained in use all through the 1980s and 1990s.30 A half dozen other mainframe-based projects for K–12 or higher education also existed in the 1960s–1990s, mostly centered at universities, and on occasion in collaboration with a school district, such as Chicago’s in the 1970s, which experimented with online teaching.31 While deployment in the K–12 environment remained minuscule in the 1960s, interest in the potential uses of computing grew and many educators debated the issue and anticipated its use. Educators recognized very quickly that teaching applications could include drill-and-practice exercises, instructing students on basic arithmetic, for example. Tutorial applications also seemed appropriate, advising students on the definition of a word, for instance, and examples of its use, followed by testing to confirm that the material had been mastered. Simulations of various experiences could also be created as a form of teaching (for example, what could be done with online games beginning in the 1980s). Educators expected that problem-solving software could be developed to help students think, reason, and solve problems, a variation of which could also be handled using game applications. Computers could also be used as a tool, much like a pencil or calculator, and for such things as word processing and mathematics (spreadsheets and calculators). Thousands of articles and hundreds of books waxed eloquent about these various potential uses of computing all through the second half of the twentieth century.32 Looked at more strictly from a pedagogical perspective, expectations always remained high that such
technology would eventually be used to impart knowledge of specific subjects, help train people to think and act creatively, to develop specific skills and be able to use techniques, and indirectly acquire desirable attitudes (for instance, a love of learning). An early and widely held aspiration for decades was the potential held out by computers to allow children to learn at different speeds and ways. Advocates of the use of computing in education believed by the early 1960s that students should progress independently, based on their individual abilities. Largely beginning in the late 1960s, and extending into the early 1970s, computers began appearing in schools, albeit slowly. Some teachers experimented with the technology, using computers acquired for administrative functions, or as part of some university-based research on IT in teaching. The high cost of mainframes, the lack of educational software, and teachers not versed in the technology combined as a set of factors holding back even experimentation with the new machines. Nonetheless, if one were to read the contemporary literature of the day, you could conclude that a great deal was going on. We read that high school students in Hinsdale, Illinois, used an IBM 1130 system to learn how to program software in 1969 and about a computer for tutorials in the early 1970s in Cleveland, Ohio. One sees reports of students learning about computers and how they worked in Memphis, Kansas City, Newark, New Jersey, Poughkeepsie, New York, or in Columbus, Ohio.33 Teachers were exposed to computers in various training sessions, even out of a mobile van from IBM in Appalachia in the early 1970s.34 Companies like IBM and CDC, eager to sell what appeared to them a large emerging market in education, promoted extensively multiple uses of computers.35 Early on, the federal government became an advocate of the new technology as well, essentially cataloguing “war stories” from around the country on the use of IT in classrooms and in school offices.36 One important report reflecting trends of the late 1970s and early 1980s assessed the situation as positive and expanding: “Four factors have more recently revived interest in interactive instruction: 1) the rapidly declining costs of computers and the advent of the desktop computer, 2) escalating labor-intensive costs of traditional schooling, 3) an improved understanding of how to create instructional packages, and 4) the development of alternative delivery mechanisms that link the computer with other technologies, such as video disk and interactive cable.”37 Government agencies funded research to figure out how computers could be used in the classroom or through remote computing, and dutifully reported results in the 1970s and 1980s.38 But its own assessments documented the slow deployment of computing in teaching in the 1970s, a decade that saw massive growth in the use of computers across the American economy and by many government agencies. As late as 1980, roughly one-half of all school districts had one or more terminals available for students and of those that did have one, only a quarter dedicated the terminal(s) for use by students. Large school districts tended to be early (but limited) adopters, while small districts reported not even having any intentions of using such technology in the near future.39 Those few devices in schools were used to teach computer literacy (that is to say, how computers worked and some
software programming), followed, in descending volumes of use, by some drill-and-practice and simulation exercises. School administrators pointed out that they needed funding for machines, technical support, software designed for teaching, and training on the technology for their teachers.40 All of these factors remained chronic obstacles to the use of computing in education throughout the late twentieth century. Emphasis on one issue or another increased or declined over time, but they all remained important. Traditional accounts of the use of computing in education could lead us to believe that with the arrival of the microcomputer, deployment in classrooms expanded dramatically, and that educators had begun embracing computing in sharply increasing numbers. To be sure, when compared to the 1960s and 1970s, adoption of computing in the 1980s and 1990s proved numerically massive, and by the end of the century the proverbial “everyone” had computers in their schools. But the problems cited above, and chronic pedagogical issues (discussed later in this chapter), did not go away, remaining right into the new century. But, before dealing with those issues, a short overview of educational activities after the arrival of the PC sets the context for the extent of the debate. When microcomputers first came into the economy in the mid-1970s, vendors quickly eyed schools as an attractive market. Early entrants included Apple, Radio Shack, and by the early 1980s, IBM. Schools first began using these smaller, less expensive forms of computers around 1977, and over the next half dozen years, software for educational purposes came out of universities, schools, and vendors.41 Two sets of activities became widespread in the 1980s: one to find ways to use PCs to teach (or assist teachers) and a second to educate students about how computers could be used in general, the latter even being called the “computer literacy movement.”42 School administrators, government officials, parents, and politicians often criticized the quality of education in general in the 1970s and 1980s, and their criticism culminated in the publication in 1983 of a report by the National Commission on Excellence in Education, A Nation at Risk.43 The report was a seminal event in the history of modern American education because it galvanized opinions and led to various federal and state programs to “improve” education. One major criticism in the report concerned the fact that children were not being trained to work in a modern global economy characterized by extensive use of technologies of all kinds and in many automated functions. That concern translated quickly into a growing advocacy around the nation for more extensively exposing children to computers, while improving the overall efficiencies and effectiveness of education.44 As historian William J. Reese documented, at the same time as computing gained attention, and federal reports commented on the quality of education, the entire educational system was in churn: new forms of teaching came into use, even experimenting with such basic elements as the physical construction of classrooms (for instance, the open classroom movement), demographics were changing, and the Civil Rights Movement brought the important aspiration to improve the quality of education of African American students.45
PCs were small, portable, could be put in classrooms, and cost far less than mainframes. Just as they seeped into various corners of American work and home life, so, too, did PCs appear in schools. Counting both terminals attached to mainframes and personal computers, by 1982 there were in the range of 200,000 to 300,000 machines available to some 45 million students attending classes in 100,000 schools. Deployment of microcomputers had roughly doubled every year since the invention of the machine. If we accepted these statistics as a surrogate measure of their use, we could conclude that deployment had a relatively fast start. Just having the machines did not necessarily mean they were used, let alone effectively. Nonetheless, the process of using them had started at all grade levels. Some teachers found that these devices proved useful in teaching basic mathematics and early software tools appeared in support of these kinds of drill-and-practice instruction. In these early years, parents, teachers, and administrators wanted children exposed to how computers worked, serving as the most widely embraced reason—and application—for microcomputers.46 Elementary schools proved most aggressive in using the new technology, followed by secondary and high schools. Computer literacy and programming instruction were popular uses; elementary schools used the technology for drill-and-practice applications at nearly twice the rate of secondary schools, while the reverse was true for word processing. Mathematics teachers in secondary schools were some of the earliest users of PCs, but by the early 1980s, teachers were beginning to experiment with the technology in teaching language, science, and social studies and to write special instructional programs for exceptional students. The earliest surveys on use of computers began to suggest that students liked using the new technology.47 Impetus for deployment came from various sources: politicians, parents, education administrators, and some teachers. But it also came from vendors, as noted earlier. The most important of these was Apple Computer, which made an early and strategic commitment to this market far in excess of all other suppliers, with the result that by the early 1980s, it owned over 56 percent of the public school market, and when combined with private schools, roughly half the national education market. Other brands included Atari and Radio Shack; IBM was not yet a major supplier in this market. All that said, it was still a developing market because only one terminal or PC was installed for roughly every 144 students. This meant that very few students had any access while in school, let alone meaningful amounts of experience working with computers, in the late 1970s or early 1980s. However, when they were used in schools, teachers and administrators tried to optimize deployment by putting all that they had into one or a few rooms, called computer labs, to which students were brought just to work with PCs. In other words, computing had yet to be integrated into the teaching of courses taking place in regular classrooms. Drill-and-practice remained an adjunct function provided by these early systems. It was the oldest, and also easiest, form of computing in education to create. Critics noted that this was no different from the previous page-turning of workbooks and that the technology was used with only slower-learning students who
needed more attention.49 By this time—1982–1984—software available to teachers to use with PCs began appearing in forms that were of some use, although vendors began accusing teachers of copying software without paying for it—calling it piracy—which led some software vendors to remain wary about entering the education market. It was an important issue because by the late 1980s, schools and their teachers had shifted their attention away from just getting machines into their buildings and now both were on the hunt for practical educational software. Interest grew in using software as an integral component of mainstream instruction. By now IBM had entered the market with its PCs and its Writing to Read software that taught young children reading skills.50 IBM, like so many vendors, aimed its selling efforts at school boards, districts, and principals as they were the ones with budgets and the ability to acquire multiple machines. This practice set in place a pattern of acquisition that remained right into the new century, causing many teachers frustration by their exclusion from decisions about what hardware and software others selected for their use.51 In the mid-1980s, commentators were beginning to question whether all these systems were being used, and how effective they were. However, defenders noted that “most of the available software, designed in the main to provide for drill and practice in the basic skills, was in fact dreadful, but no worse, it turned out, than much of the print materials schools had been using for the same purposes without complaints for years.”52 That observation should not be so surprising, since this early software was often created using already available printed materials that teachers and publishers had written and used for years. Meanwhile, all through the 1980s schools continued to install computers. Unfortunately, existing data are not precise as to how many were installed per school; the best information is in the form of the percentage of schools with at least one. From that perspective it would appear that a great deal was done, with 18 percent of all schools having at least one machine in 1981, to the point where by 1987–1988, over 95 percent claimed to have at least one.53 One can reasonably assume, however, that there were thousands of large schools with a dozen or more machines for their computer labs, while by now some classes would have started to obtain their own machines. Data on the number of students per microcomputer do exist, demonstrating that in 1983, the ratio was about 125 students per microcomputer; it then dropped dramatically over the rest of the decade, falling within a year or so to 75 students per machine by 1985. The number of PCs increased such that by 1990, the ratio was closer to 22 students per microcomputer.54 Because advocates of computing in schools normally criticized teachers for resisting use of computers, it is helpful to note that by the late 1980s, survey data provided clear evidence that the charges were not framed well. Table 8.2 provides evidence of a broad set of applications of computing in use by a large group of teachers who were already users of IT, although admittedly, not as widely for actual teaching, an issue explored in greater detail later in this chapter. In other words, if a teacher used a computer, this table suggested what he or she did with that system. Consensus among educators at the end of the decade held
Table 8.2 IT Applications When Teachers Used Computers, circa 1989–1990 (Percentages)

Text processing: 95
Instructional software: 89
Analytical and information: 87
Programming: 84
Games and simulations: 81
Graphics and operating tools: 81
Communications: 49
Multimedia: 8

Source: Karen Sheingold and Martha Hadley, Accomplished Teachers: Integrating Computers into Classroom Practice (New York: Center for Technology in Education, 1990): 8.
that while use of computers had expanded in schools, they were “still generally considered as an add-on rather than an integral part of the curriculum and day-to-day instruction.”55 The discussion of hardware and software, their deployment, and what they were used for blends into the larger story of how teachers used computers in the 1970s and 1980s. One student of the process stated bluntly “that the vast majority of U.S. teachers were nonusers of computers in their classrooms,” reporting that only one in four were even casual users, and only 10 percent serious users (such as one or more times per week).56 By standards of use evident in the 1980s in the private sector and in government, even teachers in that second group must be considered casual users. Most surveys on the number of machines installed in schools, and about what they were used for, are highly misleading and confusing, and thus one can suspect that the extent of deployment and utilization was probably lower than reported. Documented cases showed drill-and-practice applications at all levels in K–12, particularly for mathematics, reading, and spelling; and programming largely in high schools.57 Marc S. Tucker, executive director of the Carnegie Forum on Education and the Economy, in 1985 aptly titled an article on the situation of computer use by teachers, “Computers in the Schools: What Revolution?” while defending their low use of the technology.58 Given what all the extant evidence demonstrates, one must conclude that right into the early 1990s, use of computing in schools lagged behind what was happening across many other parts of American society. It is an important perspective to keep in mind because as parents and public officials became more accustomed to using this technology in their work lives and even at home, they increased pressure on teachers, school administrators, school boards, and state legislatures to force teachers to use the technology. This situation stood in sharp contrast to all other sectors of the economy where it became relatively easier to appreciate the benefits of using computers in earlier
decades. So we should ask, why did a dichotomy exist in what otherwise was a substantial trend in integrating computing into the work lives of scores of millions of American workers? By exploring the answer to this question, we can better appreciate the substantially expanded use of computing in education that began to occur in the 1990s and that was also being affected by access to the Internet. For the truth is, it was in the 1990s that teachers began using computers in quantity, partially mimicking patterns of behavior that had been evident in other industries and professions as early as the 1960s and 1970s, although not to the same degree of deployment.
Debate about the Role of Computing in Education

Schools make up the one set of government institutions in American society most controlled at the local level; perhaps only churches are equally managed at the grassroots level. Throughout the history of the United States, local communities and school boards hired teachers and worked to have them reflect the community’s interests and values in what happened in schools. To be sure, as the twentieth century moved along, state boards of public education established various standards, generally with support of the electorate. Increasingly during the second half of the twentieth century, the federal government as well sought to influence the course of education in the United States through its funding of programs and support, much as it did with law enforcement, for example. Both in theory and practice, the electorate influenced the behavior of school principals, boards of education, and government agencies since these were the entities that trained, hired, and fired teachers, and that, in theory, could mandate what happened in the classroom. It was theory to a large extent because simultaneously teachers exercised considerable autonomy in their classrooms, a situation that did not begin changing until the 1980s as state and, subsequently, federal agencies began mandating standards regarding what they taught and to whom in any particular grade.59 However, relatively local control of schools remained in place for the entire second half of the twentieth century and that meant the views of local leaders and parents (often one and the same) played far more important and immediate roles than in many other public institutions. One could argue that this situation held because there were more parents and students in any given community than there were members of more niche groups, such as accountants, firemen, Republicans, or Democrats. In other words, schooling was an issue of interest to the largest segment of any community. Furthermore, citizens had long believed that education was important for the development of their children, and for the success of the American democracy and economy. As one student of American education put it, there was “a popular faith in using schools as a lever of social progress,” a theme “reinforced by the rising expectations of the post–World War II era.”60 Americans saw education as the path for personal and community improvements, the way forward toward entrance into the middle class, and access to financial well-being
and happiness. These values remain still strongly held by the public, dating as far back as the dawn of the United States. The same scholar quoted above, William J. Reese, reported that “since the 1980s, countless reform-minded citizens still placed the schools at the center of broad discussions of the good life and the future direction of the nation.”61 These were the same people who came to use computers at work and concluded that their children—and those of the nation as a whole—had to become familiar with computers if they, too, were someday to be successful in what was rapidly being called the Information Age or Information Economy. They also were joined by CEOs of corporations who could easily advocate investing in education for its modernization either in the belief that this position really did make sense, or out of some cynical notion that it would be a popular position to take with their customers and employees. The combination of constituencies in American society had joined forces with IT vendors and proponents of the use of computing in schools to demand that this technology not only be made available and familiar to students, but that IT also be integrated into the educational process. Larry Cuban rightly described all these advocates as “a loosely tied national coalition” that became vocal by the early 1980s.62 In short, there was a growing call for use of IT in teaching that built up, beginning in the 1970s and expanding to the present. To be sure, there were contrarian points of view that challenged widely held beliefs that computers should be part of the curriculum. These voices were equally articulate, expressed their views throughout the second half of the century, but remained in the minority because of the overwhelming support for the use of computing so widely held by parents, the public at large, legislators, and government officials. Often critics challenged conventional expectations that the digital hand could assist teachers in doing their jobs better and cheaper, even proposing that the quality of education would decline as a result of using this technology. Because of the enormous investment made in bringing the Internet and PCs into classrooms in the 1990s, a small surge of criticism emerged at the end of the century that challenged the wisdom of diverting funds from teaching art, music, and physical education in order to fund IT. Critics questioned the value of the cost of technology and its lack of effectiveness, or simply argued that what was done provided more than was needed to make future workers productive in the economy.63 Within the educational establishment, there were advocates of computing. Principals and administrators had early been exposed to the value of computers in handling budgets and accounting, in scheduling students, teachers, and classrooms, and so forth. So, one could reasonably expect them to be inclined to see the value of computers. Elected members of school boards used computers in their private lives and at work as well and so recognized many of the attributes and limitations of the technology as it evolved over time. Advocates of the technology in the greater society were also writing about its need in schools, from Robert P. Taylor and Seymour Papert within the educational establishment to such broader commentators as Alvin Toffler, the author of the hugely successful
vision of the future, Future Shock (published 1970), to the recent and also highly influential book by Thomas L. Friedman, The World Is Flat.64 So, teachers were constantly accused of being slow to embrace computers, dull witted Luddites holding back education and children from being properly trained to function in the modern information age. Most computers came into schools thanks to decisions about how to use them proposed by vendors and parents, and made by principals and boards of education. Often teachers were not consulted, although there always existed a small minority of them who were eager to experiment with the technology; typically they were self-taught and frequently used the machines at home the same way as other Americans, particularly after the arrival of the Internet. The historical record reflects that for over a century, teachers as a community embraced new technologies when they enhanced the way they taught, such as blackboards in the nineteenth century, and film strips in the twentieth. Rarely, however, did a new technology lead them to transform how they taught. But why was this so? Recent students of the issue suggest that the answer may lie in the way teaching is fundamentally done. Teachers use classrooms as the center of the teaching/learning experience. Add in what they are expected to do—maintain order and ensure students learn from increasingly standard curriculums—and we have the makings of a situation in which no technology would be used unless it fit into this way that teachers worked. As two historians of the role of technology in teaching pointed out: “Teachers have been willing, even eager, to adopt innovations such as chalkboards or overhead projectors that help them do their regular work more efficiently and that are simple, durable, flexible, and responsive to the way they define their tasks. But they have often regarded teaching by machines as extraneous to their central mission.”65 Computers in the 1970s and 1980s did not replace classroom-centered teaching, because the technology did not lend itself to that (such as lack of compelling software), or there were not enough of them to make a difference, or teachers did not receive adequate training on how to use the technology. Yet teachers used PCs to collect and report grades, and e-mail to communicate with each other, parents, and school administrators. The argument that they did not have enough machines failed the test of time and change. Just between 1984 and 1992, for example, the nation invested over $1 billion in providing them with digital tools, and as we saw earlier, the ratio of students to machines dropped dramatically such that in 1993, schools had one machine for every fourteen students, a ratio that continued to shrink all through the 1990s.66 Studies done in the 1990s on IT and education acknowledged that the issue might have deeper roots: the lack of sufficient complementarity between how the technology worked and teachers did their jobs. One study of the use of computing in Silicon Valley, where one would expect to see the most strident, extensive use of computing in the heart of America’s software business, found the opposite: “The teachers we studied adapted computers to sustain, rather than transform, their philosophy that the whole child develops best when both work and play are cultivated and ‘developmentally appropriate’ tasks and activities are offered.”67
The final conclusion of these researchers was clear to them: elementary school teachers sought “to conserve traditional civic, academic, and social values rather than turn children into future Net-workers.”68 But then did that mean critics were right, that teachers refused to change with the times and that they were dim-witted and not simply ignorant of the technology? The answer encompasses a complex of issues. For one thing, teachers and administrators agreed that neither had sufficient time in their daily routines to learn about new technologies or to evaluate how they might be used effectively in classrooms, let alone in distance learning or other models of education. Those teachers making the argument were often the same individuals who used computers in their homes and wanted training and time to do the same at work. They also had a history of making incremental changes in how they taught and what materials they used, just as workers did in every other industry studied for the three volumes of The Digital Hand. Teachers resisted radical changes in the classroom-centered, teacher-centered style of education that was nearly universally their practice in the nineteenth and twentieth centuries, just as workers in other industries also objected to fundamental, indeed, radical change to their work practices.69 In short, as I have argued in all three volumes, transformation in work practices was a massively incremental process, and the experiences of teachers and their administrators reflected that experience. This broad pattern of behavior remained in force right into the new century. Core teaching practices were simply not yet being changed by digital technologies. Later we come back to why that was the case. Seymour Papert noticed that in the 1970s and 1980s there was an additional unintended consequence at work slowing adoption of computing in teaching. By putting what few computers a school had together in one room—usually called the computer lab—principals believed they could optimize use of the technology, having students come in from various classes to use the equipment rather than dispersing the machines to individual classrooms, where they would sit used far less than if concentrated into a computer lab. Papert noticed that teachers did not attempt to integrate computers into their daily teaching activities; learning about computers naturally became a separate subject, one dealt with in a separate classroom, even with its own name (computer lab), and that someone else taught: “Instead of cutting across and so challenging the very idea of subject boundaries, the computer now defined a new subject; instead of changing the emphasis from impersonal curriculum to excited live exploration by students, the computer was now used to reinforce School’s ways. What had started as a subversive instrument of change was neutralized by the system and converted into an instrument of consolidation.”70 His observation fit perfectly into the paradigm discussed in earlier paragraphs. Meanwhile, schools continued to invest in computing, indeed, more so in the 1990s than in earlier decades. This segregation of computing was unique in that I could not find a similar practice in any other industry. Everywhere else, regardless of the location of hardware, computers were integrated into daily work. Otherwise, often the
machines simply stayed in their original shipping containers in some closet, storage area, or even simply stacked in hallways in full view of all who passed by. In most instances, equipment and software came into an organization once IT experts (and their consultants), or potential users, had figured out how to integrate the technology into their work and had established expectations, implementation plans, and training schedules. None of these kinds of activities were as evident in schools. This is not to say this industry did not plan with teachers how best to use the technology, rather that the normal way of acquiring and adopting the technology was less in evidence here and may account for why so many teachers complained over so many decades about the lack of time and training to use IT. That circumstance would also contribute to why teachers, therefore, might have faulted software they did see as inappropriate for integrating into their teaching methods. It is a supposition, but based on the widely seen dichotomy between what educational software seemed to be offering and what teachers were actually doing. The issues of concern to teachers and administrators had not changed fundamentally since the dawn of the computer. An early student of the role of computing in education, Anthony G. Oettinger at Harvard University, explored how students, teachers, and administrators reacted to the computer in the 1960s. Like so many other observers, he noted the resistance to use of computers in education, and the situation he noticed and described in the late 1960s could just as easily have been seen in the 1980s: The prospects for change are also depressing when one looks at schools district by district and sees a tangle of authorities who feel mostly threatened, conservative, and broke. School administrators and teachers are fearful, filled with legitimate concerns for the safety of their jobs and of their persons. These key people are ill-disposed, by both background and training, to innovation, with or without the devices of new technology.71
He forecasted accurately that during the 1970s, there would be little change in this circumstance, despite expectations that digital tools would improve in the new decade, as they did indeed with the arrival of the microcomputer.72 Part of this state of affairs lay in the highly decentralized, fragmented approach to the adoption of computing. Designing educational software was and is a complicated process, as it is for any major software system, so expecting individual teachers, or even small specialized software firms or school districts, to do so was probably unrealistic, but often the reality faced by educators, particularly in the 1960s–1980s. As late as the early 1990s, even school districts were only just starting to look at the problem and getting involved, while survey evidence from that period suggested that teachers often still made the ultimate decisions on when and how to use computers, regardless of who acquired the equipment and software. The one use of technology, however, that did require coordination by teachers, schools, and often state education departments was distance learning, often bringing educational material to students as a complement to existing classroom-centered teaching by way of telecommunications and satellite. This use of
IT had been evident in some schools in the 1960s and 1970s, and had expanded slowly during the 1980s such that by about 1990–1991, some 1,500 school districts participated in some form of distance learning with about 25 percent of the nation’s students exposed to it in one form or another. So before the arrival of the Internet, schools had some experience with distance learning, albeit very limited, mostly using local television networks.73 One might ask at this point about what teachers were even being taught about the general subject of computers in their various undergraduate, graduate, and certification programs. Not until the 1980s does one even begin to see much commentary about the digital hand in any of their basic textbooks. The earliest textbooks devoted solely to the subject began appearing in the mid-1980s, such as those of Paul F. Merrill and his colleagues, and another by James Lockard and Peter D. Abrams.74 By the late 1990s, students in education were routinely exposed to basic information about the Internet and key instructional uses of digital and telecommunications technology.75 Meanwhile, their professors were simultaneously investigating the subject, often conducting basic research on the general theme of computing and education, or learning from consultants.76 It is an indication of how the technology was evolving, and new uses emerging, that widely used texts often appeared in new editions about every three years. But in each instance, they still had to explain why computers were controversial and to defend their use before explaining their nature and application to education.
Role of IT in Education, 1990s–Early 2000s

The start of the Clinton administration in January 1993 heralded a new era of federal initiatives to get all students onto the “Information Highway,” reflecting the same interest officials entertained in using technology to improve government and the economy, described in earlier chapters. Critical to the federal government’s strategy was the further deployment of IT into the public school systems of the nation, particularly classrooms, building on earlier efforts of the prior Bush administration. In the year prior to the arrival of the new Democratic administration, the number of microcomputers in K–12 had surpassed 2.5 million, and by the end of the first six months of 1993, climbed to some 3.9 million units. The number of computers devoted to instructional purposes in schools continued to increase over the next several years, reaching nearly nine million by the end of 1998, bringing the ratio of students to machines down from about nineteen per PC in 1989–1990 to six per machine by 1998.77 By the end of the decade, some 99 percent of all schools now claimed to have one or more computers, although that figure appears to be an overstatement, while the ratio of students to computers had improved such that for every fifteen students up through middle schools there was one system and for high school students one for every ten. Regardless of effectiveness, or of the much-publicized emphasis the Clinton administration placed on educational computing, there had already been much activity in the 1980s and early 1990s. Local area networks were being
implemented to connect libraries and media centers to classrooms, and some software appeared that teachers used to create interactive learning environments, but all still in early stages of development.78 Federal officials expanded their assessments of the status of educational computing, funded various initiatives, and, as the Internet became widely available in the mid-1990s, pushed aggressively to make it accessible in classrooms all over the nation. Reliable data from the period 1994–1995 demonstrates that still only 75 percent of public schools had at least one computer, but already 35 percent had some access to the Internet, although it was not clear how it was used or how frequently. More telling, only 3 percent of all classrooms were “wired” into the Net. Public officials reported that teachers still hardly used computers for instruction, and that they needed a vision of what was possible, training, and support if they were to leverage the technology. The federal government began recommending, promoting, and funding efforts to integrate telecommunications, microcomputers, and curriculums, an effort that remained a constantly supported initiative of President Clinton’s domestic programs throughout his eight years in office.79 This all began with passage in 1994 of the Goals 2000: Educate America Act, which called on the U.S. Department of Education to create a national long-range plan for use of technology in schools and to implement programs to execute this initiative. Federal officials recognized realistically that the nation’s teachers needed training, effective software, and that their schools faced budget limitations similar to what police departments, for example, did as well. But the emphasis of schools, districts, state and federal governments had largely concentrated on getting equipment into the schools, and as we can see from the census data on PCs, the number tripled in one decade. In addition, they also focused on establishing connections to the Internet, a complementary initiative under way side-by-side with the installation of computers in classrooms. As the capability of newer machines increased to present graphical data, a mix of sound, images, and text, it became evident that these machines made application of IT more attractive than earlier devices that did not have these functions. When combined with what became available in the way of images, sound, and text on the Internet, the extensive inventory of old machines became a problem. Schools began learning how to replace older generations of equipment in the 1990s, such that by the end of the decade, nearly 2.5 percent of school budgets were routinely going to IT. Schools in economically depressed communities spent less, others in economically advantaged districts spent more. High schools tended to get the newest machines and cascaded their older devices to middle and elementary schools. About half of all computers made it into classrooms in the 1990s, and a nearly similar percentage still into computer labs. The rest were either in school libraries or in the office of administrators.80 What software did teachers have access to on all these machines? There were two classes of applications, one a collection of general uses that one might find in any industry, such as word processing and graphics, and then another group that was specifically instructional. Table 8.3 provides a catalog of the five most deployed general applications available on school machines expressed as a
percent of computers devoted to instructional purposes. Not listed were others such as image editing, multimedia development, Web development tools, and programming languages, all of which were on less than half the machines. Table 8.4 lists the most widely available instructional software. Elementary and middle schools had the greatest variety of specialized course software. However, given the age of the installed computers, many schools could not yet take advantage of the multimedia software beginning to appear on the market in support of digital instruction.

Table 8.3 Percentage of Instructional Computers with Access to Top Five Most Widely Deployed General Applications Software, circa 1998

Application                              Elementary   Middle   High School   Total
Word processing                                  96       97            95      96
Spreadsheet                                      79       91            88      83
Database                                         79       87            86      81
Drawing or painting                              81       82            72      80
Desktop publishing and presentations             55       56            57      56

Source: Ronald E. Anderson and Amy Ronnkvist, The Presence of Computers in American Schools (Irvine, Calif.: Center for Research on Information Technology and Organizations, June 1999): 9.

Table 8.4 Percentage of Instructional Computers with Access to Top Seven Most Widely Deployed Teaching Applications Software, circa 1998

Application                  Elementary   Middle   High School   Total
Math-specific                        62       39            22      50
Science                              35       29            21      31
English language                     56       29            23      45
Social studies                       44       28            18      36
Foreign language                     11       13            15      12
Typing                               59       51            30      52
CAD-CAM industrial arts               3        6            10       3

Source: Ronald E. Anderson and Amy Ronnkvist, The Presence of Computers in American Schools (Irvine, Calif.: Center for Research on Information Technology and Organizations, June 1999): 9.

By the late 1990s, there had been enough computers installed in schools to begin asking questions of the historical record about patterns of adoption and use since the arrival of personal computers. A team of education researchers at the North Central Regional Education Laboratory identified three phases of use that help situate the history of computing in teaching. The first phase reflected
the behavioral-based pedagogy favored in the 1970s and 1980s for such activities as drill-and-practice to teach specific skills and content. Software was relatively primitive, even throughout most of the 1980s; it often simply digitized printed texts and too frequently was written by programmers with no educational experience. When teachers used these tools, students were normally sent to computer labs for their drill and practice exercises. The software thus reflected the pedagogy of the day, in which small bits of information were taught and students were rewarded for learning; it was prescriptive in what it asked students to do and in what the answers to specific questions were to be. As the authors noted, teaching moved from teacher to software, leading many educators to be hostile to technology because they did not control the experience. Nearly a thousand studies of this use of IT suggested that the technology did have some positive effects on learning, as measured by the results of standardized tests. Use of this kind of software proved practical, however, where a teacher’s personal knowledge of the subject being taught was low.81 During the second phase, software could be used to introduce learner-centered activities, which made it possible to have students work in groups, for example. That phase began in the late 1980s but was really characteristic of much of the new instructional practices of the 1990s, and expanded after machines could be networked. Teachers using this new generation of software focused more on the quality of the learning experienced by students. This new software used hypertext, interactive exercises, sound, and multimedia formats. Word processing, desktop publishing, and access to databases of information became available, providing a large increase in the volume and variety of facts, often delivered on CD-ROMs (and late in the decade over the Internet). Software began incorporating sound, video, graphics, charts, and animation, all made possible by more powerful and faster machines, but unusable if teachers still relied on Phase One hardware of the 1980s. In this second phase, however, teachers and students could interact online among themselves and expand problem solving beyond just the math drill-and-practice teaching of the 1980s. Also, access to large amounts of information made it possible to design exercises in which students conducted research, organized data, and presented it in logical forms, thereby testing hypotheses, for example. Students began reporting that computing at school was motivating and fun. Networking made group thinking and collaboration possible as well. If students had access to the Internet, they could download information quickly, avoiding a time-consuming trip to the school library. Students and teachers reported overall increases in productivity.82 This phase can be characterized as one in which computers facilitated learning across a wide range of subjects and proved capable of performing specific types of teaching: tutoring, exploration, use of tools (e.g., word processors), and communications. However, some educators who observed the emergence of Phase Two noted that schools focused primarily on extending deployment of these new tools “with little or no attention given to using technology to restructure schools or to teach higher-order thinking.”83 Another group of observers pointed out that “many teachers
were unprepared for” the challenge of using software “to help students develop higher-order thinking skills.”84 The third phase emphasized more data-driven instructional experiences. This phase began to take shape after teachers began using the Internet, a circumstance really of the late 1990s and beyond. Access to vast amounts of data through the World Wide Web meant that short bits of information presented in drills could no longer dominate the use of computing. Multimedia formats were emerging, and all of the new software required networked classrooms and systems within rooms, across the school, and into the Internet, an increasingly expensive proposition for schools and one that required teachers to know more about IT than ever before. Teachers could download information for use in their teaching, while students began doing the same to do their class work and homework. Teachers were acquiring access to the Internet at home, with surveys in the period showing that anywhere from 25 to nearly 60 percent had such access and used it by the late 1990s.85 By the end of the century, it had become more evident than ever before to teachers, school administrators, and state and federal officials that teachers needed instruction on the technology and its uses, and time to learn how to apply it in class. In short, these were the same lessons managers had learned in other industries years earlier. Larry Cuban has pointed to the fact that some teachers used the Internet on their own initiative as evidence that they were not so much hostile to digital technology as interested in finding ways to use it in support of their preexisting, teacher-centered classroom practices. Hundreds of surveys and studies done on how teachers used the Internet in the 1990s provided evidence that he was correct in his assessment, at least in regard to how they used the Internet.86 The story mimicked what occurred in other government agencies and across many industries: initial introduction to the Internet in the early to mid-1990s was slow and limited, but use climbed during the second half of the decade as individuals acquired access at work or home, as content increased, and as the technology became easier to use. By 1999, about a third of all teachers who had access to the Internet in class used it to create instructional materials, to do administrative record keeping, and to communicate with other teachers and colleagues. Less than 10 percent communicated with students, did research, created multimedia presentations, or modified lesson plans, while the youngest teachers tended to use the Internet the most, probably because they had become familiar with the technology at home at an early age. When used in class it was evident that the range of applications mirrored what had been done with PCs just before the arrival of the Net: in-class instruction, practice drills, problem solving, research, producing multimedia reports, and conducting simulations. But to put these applications in perspective, for most of them only about a third of teachers reported using the Internet.87 Slow embrace of the Internet, however, was partially a function of how fast the new network became available in schools. Table 8.5 provides a snapshot of the percentage of schools with Internet access in at least one classroom; over time, access increasingly became available in many classrooms within any given school.
By the early years of the new century, the percentage had nearly reached 100. Just as important is understanding how many students had access to the Internet, and for that answer, we have the data in table 8.6, which confirms that accessibility expanded rapidly during the second half of the decade, right into the new century. To understand the data correctly, remember that the smaller the ratio, the more access there was; thus for 2003, the data suggested there was a terminal or PC with the capability of accessing the Internet for roughly every four-plus students, suggesting that they had twice as much access as their cohorts just four years earlier. Results depicted in the tables hinted at the enormous investment made in this technology. Using 1998—the year table 8.5 shows a massive increase in deployment of the technology—expenditures for all manner of IT by public schools hovered at $7.2 billion, or roughly 2.7 percent of all expenditures that year (salaries, maintenance, construction of schools, books, and so forth). However, as impressive as all those expenditures were, they only amounted to an average of $113 per student, a pittance when compared to what other government agencies and companies had spent on their employees.88

Table 8.5 Percentage of Public School Instructional Rooms with Internet Access, 1994–2003

Year    Percent
1994          3
1996         14
1998         51
2000         77
2002         92
2003         93

Source: E. B. Tab, Internet Access in U.S. Public Schools and Classrooms: 1994–2003 (Washington, D.C.: U.S. Department of Education, February 2005): 4.

Table 8.6 Ratio of Public School Students to Instructional Computers with Internet Access, 1998–2003

Year    Ratio
1998     12.1
1999      9.1
2000      6.6
2001      5.4
2002      4.8
2003      4.4

Source: E. B. Tab, Internet Access in U.S. Public Schools and Classrooms: 1994–2003 (Washington, D.C.: U.S. Department of Education, February 2005): 8.
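The “students per computer” ratio in table 8.6 is easy to misread, since a smaller number means more access. A minimal Python sketch, using only the figures reported in the table, shows the conversion to connected computers per hundred students and the roughly twofold improvement between 1999 and 2003:

    # Illustrative sketch: interpreting the student-to-computer ratios in table 8.6.
    # A smaller ratio means more access; inverting it gives Internet-connected
    # instructional computers per 100 students.
    ratios = {1998: 12.1, 1999: 9.1, 2000: 6.6, 2001: 5.4, 2002: 4.8, 2003: 4.4}

    for year, students_per_computer in sorted(ratios.items()):
        per_100_students = 100 / students_per_computer
        print(f"{year}: {per_100_students:.1f} connected computers per 100 students")

    # Comparing 2003 with 1999 shows roughly a doubling of access per student.
    print(f"Improvement, 1999-2003: {ratios[1999] / ratios[2003]:.1f}x")  # about 2.1x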
Another way to see how teaching and learning were changing at the dawn of the new century was by asking the children to comment on what they experienced and thought. Extant evidence from 2002 provides an important window into their perspectives. As should be of no surprise, some 60 percent of all children under the age of eighteen accessed the Internet. They used the Internet to help them do homework, because it facilitated their working more quickly, to rely on more
up-to-date information, and it allowed them to balance school and extracurricular activities. They used the Internet to do research for writing papers and completing exercises, to correspond with friends, and to share tips on homework and useful sites. They viewed the Internet much like a virtual textbook, reference library, tutor, study group, guidance counselor, and notebook. They also reported that they used the Internet in and out of school in enormously different ways. They attributed this disconnect in ways of using the Internet to several factors: school administrators dictating how the Internet was used at school; wide variety in teacher policies regarding how the Internet could be used within the same school (e.g., whether to allow its use for research or not, use in or outside the classroom); and that assignments by teachers tended to be the least engaging uses to which they put the Internet. Students called for more imaginative uses of the technology to improve their attitude toward school and learning. They reported that the biggest barrier to further use of the Internet was quality of access, such as times of the day when they could use it, physical location of terminals, and the need for permission. Blocking and filtering software also constrained their use of the Internet; although they recognized the need to protect young children from inappropriate sites, they wanted more assignments that relied on the technology. Finally, when asked what schools should do, their recommendations mirrored those of many teachers, officials, and observers: increase the quality of the access; teach teachers how to use the technology and provide them with technical support to fix problems; teach students keyboarding and Internet literacy skills; make high quality information freely available, accessible, and appropriate for each age of a child; and take seriously the digital divide that limited access to the technology to wealthier students and schools. In short, they were telling schools to give them the same amount of access to the Internet as they had outside of school and let them use it more as a tool than their teachers currently permitted.89 All through the period discussed in this chapter, we saw teachers and their supporters quite defensive in explaining why they did not use computers more in their teaching, even though they certainly used them at home to the extent evident among people of all walks of life across the nation. To be sure, teachers and their defenders complained a great deal about the limitations of the technology and their circumstances, far in excess of what occurred in all other industries. But had they changed their attitudes and concerns at the same time as students were demanding more use of the Internet? In a survey done in 2004 on behalf of the teachers’ own professional association, the National Education Association (NEA), we find a few answers. It reported that almost all educators now had access to the Internet both in and out of school and were “making valiant attempts to use educational technology as an instructional tool,” although they were still “plagued with numerous problems,” such as old hardware, too few computers, and insufficient technical support and training.90 The report made it clear through its language that use and support were quite thin, while the NEA made no mention of teachers trying to fix the problems by working through their schools and districts.
The NEA recommended, however, that schools acquire more computers, technical support, and training in the use of IT, and that they allow teachers to participate in decisions made by school administrators on what digital tools to acquire and how
best to manage them. Like the students, the NEA called for bridging digital divides among ethnic and socioeconomic cohorts. But quite telling in terms of the core issue of how teaching could be done with computers, it appeared that very little progress had been made on the pedagogical front: “The NEA strongly urges further research and development on effective technology programs to help inform the debate on the ‘value’ of technology in education . . . to help document direct links between school technology and student achievement.”91 In short, teachers and their association essentially still questioned the value of using IT at all, just as they had in earlier decades, while their students had already started to embrace the new tools and were becoming impatient with their teachers. The U.S. Department of Education noted in its report on the status of education in 2005 that “today’s students are very technology-savvy,” citing various uses and noting, importantly, that over 90 percent of children above the age of five used computing in one form or another. Children wanted to have their own personal machines, access to the Internet around the clock, and unfettered use in school, reporting that they were still frustrated with the constraints their schools put on their use of IT.92 The U.S. Secretary of Education, Rod Paige, blasted the teaching profession and school administrators on this issue: “Education is the only business still debating the usefulness of technology. Schools remain unchanged for the most part, despite numerous reforms and increased investments in computers and networks.”93 Social commentators in the early years of the new century were still kibitzing about the changes in education required to produce the workforce of the twenty-first century, such as Alvin Toffler in his latest book, Revolutionary Wealth (2006), and Thomas Friedman in The World Is Flat (2005). Such commentators warned that if things did not change, traditional schools would be forced by parents and policy makers to transform, citing the rise in alternative schools, magnet schools, and home teaching as examples of changes already under way. However, schools were also creating new models for instruction, borrowing, in effect, from emerging practices in higher education. Specifically, in the early 2000s, “virtual schools,” as they came to be known (offering what higher education and the private sector called “distance learning”), began appearing around the country. These offer classes taken over the Internet that carry credit toward fulfillment of graduation requirements. By 2005, virtual schooling existed in over fifteen states, and hundreds of thousands of students participated. Some 25 percent of all K–12 public schools participated in such offerings by 2005, with high schools as the largest participants.94 This development could provide a path for leveraging technology and innovating pedagogy that then might transform how schools evolved in the years to come.
Conclusions

In the meantime, the process of digital aids to education coming into schools continues, and another round appears to be under way as the costs of computing devices keep declining, with devices making their way through American society in the form of
ever smaller, more portable digitalia. In 2005, for example, handheld computers (usually called PDAs or personal digital assistants) and laptops spread widely, with about 28 percent of all school districts making these available to students and teachers. One in four PCs in schools was now a laptop, reflecting their rapid decline in cost over the previous several years and their convenience and capacity. By 2007, students and teachers were equipped extensively with such devices as hand-held calculators (first introduced into schools in the late 1970s) and cell telephones, but neither of these mainstream digital technologies was used as a teaching machine. But the concern about results had not gone away. For example, as late as December 2005, one could hear concerns such as those of Robin Raskin, founder and former editor of FamilyPC magazine: “Despite the fact that we have spent gazillions of dollars in schools on technology, it’s still just a leap of faith that kids are better educated because of that.”95 Recent experiences with PDAs and laptops suggest a remarkable consistency in the experience of the Education Industry over the past several decades. On the administrative side, there is little to discuss: the experience of administrators mimicked that of the rest of the American economy, because digital technology evolved early and effectively into affordable forms that made sense to install and use. It was on the teaching side that the technology took far longer to evolve and, one might argue, still has far to go before making it possible to “automate” learning and teaching, although that seems to be a process just now getting under way. That slow evolution of the technology can be attributed to several factors. First, effective teaching automation (or partial displacement of teachers) requires a far more sophisticated form of computing than was available for all of the twentieth century. One could argue that companies, teachers, and administrators in schools and districts failed to collaborate in creating appropriate technologies, while every other industry did that with suppliers of computers and software. The experience of CDC and PLATO notwithstanding, the criticism would be reasonable to levy against all concerned. But if one is eager to find fault with why schools did not do more with computers, there is plenty of blame to go around. First, districts and principals failed to do what managers in over 100 industries routinely did: buy enough current technology, teach their employees how to use it, provide enough time to master the tools, and then tough-mindedly order them to use it or face dismissal. Those tasks were rarely done. In short, poor management practices were widespread when it came to the implementation of technology. But one could ask, “What about all the computers that were installed, particularly during the Clinton years?” The obsession government agencies and schools had with getting equipment physically into classrooms and networked was reasonable to expect, although disappointing in that this initiative was not accompanied by sound implementation practices. So much equipment was wasted. Furthermore, teachers apparently proved quite intransigent about changing their teaching methods, regardless of whether one is discussing digital tools or not.
That is not to say they failed to embrace new technologies—we saw in this chapter that they did—just not to the extent evident in other walks of life, and in part
because of the way they taught and also due to the primitive state of much teaching software. One could quibble about the “right” and the “wrong” of this reality; nonetheless, the fact remains that at the dawn of the new century, teachers taught in ways very similar to those of teachers working at the start of the 1900s. Rather than just fill up schools with computers, one might have hoped that federal and state officials would have spent some of the available funds on teaching teachers about the technology and on creating effective software tools. On the other hand, there is much for which to compliment all the parties involved. For one thing, students, teachers, and administrators used the technology when it made obvious sense to do so: word processing, e-mail, text messaging, tabulation of grades, budgeting, and so forth. Teachers used this technology as a tool in support of their administrative functions and to develop aids to teaching, such as PowerPoint presentations and replacements for old mimeographed teaching aids. So, while some sort of a digital revolution did not occur in education, the digital hand did assist schools. As far back as 1988, one educator predicted that computers would “be a transforming force but one that retains its place as an educational tool rather than as a technology that dictates the patterns of learning and social interaction.”96 He got it just right. Educators responded in a responsible manner, suggesting that, more than in most industries, expectations of the computer proved far more exuberant than warranted by the realities of its ever expanding capabilities. As the experience described in the next chapter demonstrates, IT in education was a tool, not yet a replacement for instructors, despite the aspirations of some commentators and administrators. But were the parents and the public right in demanding that children learn about computers? They wanted children to understand how to function well in a world rapidly filling up with digital tools. It turned out that they did, thanks to the noneducational activities of parents and others. Children used the computers their parents had at home, acquired digital games, adopted hand calculators and PDAs, and made cell phones integral to their social lives. They used the Internet as a convenience and entertainment vehicle and downloaded music to the consternation of the Recorded Music Industry, all the while text messaging and gossiping in chat rooms and, most recently, in blogs. In short, and regardless of what teachers and principals did, students became very tech savvy, so much so that they now are the latest to advocate that schools “get with it” and use digital and telecommunications technologies more effectively. Students learned what they had to in order to thrive in the Information Age, leaving unanswered only the question of whether they were also learning the other bodies of knowledge and skills necessary to hold down jobs and live in a democracy-centric modern society. The Education Industry, however, represents a set of experiences apart from those of other sectors of the American economy and, more specifically, of the public sector. We do not need to rehash that point. However, what we can observe is that no industry was immune from the influence of the computer’s existence in American society. As in every other case examined in The Digital Hand, influences seeped in from other industries, vendors, and people. Important sources of influence included academic researchers and teachers of
teachers, the experiences of others with training tools (such as those of the military), vendors of hardware and software (off and on, depending on their perception of the attractiveness of the education market), and that small group of administrators and teachers who were early adopters of the technology. Parents and public officials had a remarkably small influence on the use of IT in this industry; corporations that donated equipment and software even less so. To be sure, the Education Industry proved strongly resistant to change, almost as reluctant to evolve as the Higher Education Industry in the way its institutions operated and teaching was done, but it did listen to persuasion when new tools fit into the existing norms for doing things—the same criteria used by people in so many other industries for determining if a new digital tool made sense to use. The conclusion we can reach in the most general of terms is that this industry demonstrated some of the limits of the nation’s transformation to a digitally rich economy. The technology had to enhance, then transform, how the daily core tasks of an industry were done. When that was not the case, deployment proved limited. The cost of units of technology had to match the size of available budgets. By this, I mean that if an educator could only spend, say, $2,000 per transaction, then IT had to be for sale in increments of about that amount or less. Until the arrival of the PC, few schools and districts could afford computers, even though the cost per transaction had been declining for years. But if one could only buy machines costing hundreds of thousands of dollars (or more), then the technology would be out of reach of teachers, principals, and many school boards. The nation collectively spent billions of dollars on computing for this industry, but it expended the vast majority of those funds in small amounts, across many transactions, in a long incremental process. PCs, for example, came into this industry in a piecemeal fashion, not literally by the truckload as was the case in so many other public agencies and companies, a form of deployment that proved essential if machines were to be networked or if whole communities of teachers and students were to become dependent on the technology. Only administrators had budgets large enough to make the acquisition of larger, more expensive systems possible for their administrative needs, normally by leveraging the economies of scale of a school district’s aggregated budget. The effect of highly fragmented budgets on the deployment of IT awaits its historian; nonetheless, the problem existed in both K–12 and higher education to an extent far beyond anything evident in the other industries examined for The Digital Hand. Even when a specific form of software or hardware proved eminently useful for teachers, it might not be adopted, as we saw with the new generation of PCs in the early 1990s that was far more suited to the needs of schools than the old Apple IIs of the 1980s but was acquired slowly. The problem was often budgetary: the lack of sufficient funds, in large enough amounts and in a timely manner, to have the kind of effects on a school or district so evident in other industries. The flow of budgets has to be a factor that needs further attention; it is also one almost totally ignored by all the scholars who have looked at the role of IT in education.
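The budget-granularity argument can be made concrete with a small worked example. This is only an illustrative Python sketch; the figures are hypothetical round numbers (the $2,000 approval limit echoes the figure used above), not data from the sources cited in this chapter:

    # Illustrative sketch of the budget-granularity argument: a technology becomes
    # adoptable only when the price of one indivisible unit fits within the size of
    # purchase a given buyer can approve. All figures are hypothetical round numbers.
    teacher_purchase_limit = 2_000       # what a teacher or principal might approve
    district_it_budget = 250_000         # a district's aggregated annual IT budget
    minicomputer_price = 200_000         # one indivisible unit
    pc_price = 1_800                     # one indivisible unit

    def units_affordable(unit_price: int, budget: int) -> int:
        """How many whole units a budget can buy; zero if even one unit does not fit."""
        return budget // unit_price

    print("Minicomputers a teacher can buy:", units_affordable(minicomputer_price, teacher_purchase_limit))  # 0
    print("PCs a teacher can buy:", units_affordable(pc_price, teacher_purchase_limit))                      # 1
    print("Minicomputers a district can buy:", units_affordable(minicomputer_price, district_it_budget))     # 1
    print("PCs a district can buy:", units_affordable(pc_price, district_it_budget))                         # 138

The asymmetry is the point: the PC’s unit price fell under the approval threshold of individual teachers and principals, while larger systems only ever fit within an aggregated district or administrative budget.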
What does higher education have to teach us about the role of the digital hand in education? With some 25 percent of the American public being graduates of such institutions, it is a major collection of institutions in modern society that cannot be ignored. It is also an industry that played a profoundly important role both in the development of this technology and in its introduction to all industries, including government agencies. It is for these reasons that we need to understand its pivotal role in the history of modern computing in America.
9

Digital Applications in Higher Education

In the field of scholarship and education there is hardly an area that is not now using digital computing.
—President’s Science Advisory Committee, 1967
American universities played the central role in the development of computers in the 1930s and 1940s, doing much of the hard work of designing and experimenting with the first systems and then, in the 1950s and 1960s, creating the field of computer science. In subsequent decades, they trained tens of thousands of computer scientists, engineers, and future business executives who went on to create what many commentators like to call the Information Age. All through the second half of the century, science and engineering professors improved information technology, even as the locus for most R&D moved substantively from their campuses to the private sector, beginning in the late 1950s. The story of the role of academia in creating the computer is well known and need not detain us here.1 However, other aspects of the story of computing in higher education have not been so well studied. For instance, less understood is the story of how higher education used computers and telecommunications for its own internal operations. As a general observation, two- and four-year colleges, and more so universities, became voracious users of the digital hand; but, as described below, in many uneven ways. Like businesses, for example, administrators used the digital hand early on in support of administrative and accounting functions. Even more than K–12 teachers, professors proved slow to use computing in their classrooms in direct instructional capacities, but found the technology very useful in support of such classroom-related activities as e-mailing students,
posting reading materials, obtaining information, and conducting research. Indeed, they used this technology extensively in research, particularly in the physical sciences, later in the social sciences, and only recently in the humanities. Telling the story of computing in higher education is daunting and complex, compounded by the fact that throughout the second half of the twentieth century, the number and variety of institutions grew from just over 1,800 (1950) to more than 9,000 (early 2000s).2 Also, the history of the use of IT has to be set into the context of this industry’s enormous expansion within the American economy. In 1960, for example, there were 3 million students and the nation spent $7 billion on them. At the end of the century, over 15 million students were enrolled in postsecondary education at a cost of $237 billion per year. Public colleges and universities absorbed the majority of that growth; in fact, 80 percent attended these institutions by the end of the 1990s. Put in more formal economic terms, higher education accounted for 2.6 percent of the nation’s gross domestic product (GDP).3 In short, higher education played an important role in American society. Patterns in the adoption and use of digital tools, and the degree to which they were used, can be understood by looking at several types of use throughout the period. This chapter describes applications of IT in administration, teaching, research, and libraries—four major foci for the use of computing in higher education. I do not discuss the training of computer scientists and the rise and evolution of computer science departments.4

Figure 9.1. In the 1940s and 1950s it was common for many large universities to build their own systems. This is the University of Wisconsin’s WISC system, 1957. (Courtesy University of Wisconsin Archives)

However, because of the extraordinary
presence of PCs and the Internet in higher education, we also need to understand their role in this industry. Higher education functioned within the context of a self-enclosed, highly articulated culture that exhibited patterns of behavior unique unto itself. There has been a massive and extended dialogue about the character of its institutions, and also regarding their resistance to change, and about the need for their uniqueness, reflecting many patterns of discussion and controversies evident in K–12. This colloquy intensified over the course of many years as the number of college graduates increased in society and as the cost of higher education to the nation did too. These events occurred during an era when the role of education and knowledge became increasingly important to the welfare and workings of modern society and as most industries changed profoundly. For our purposes, what is useful to understand, however, is less the merits of one point of view or another in this debate and more about what role information technology played in this industry. Let there be no misunderstanding, however; higher education behaved more like an industry than many other public and private corners of the economy. To understand the story of computing in American higher education, one should keep in mind that colleges and universities were rarely monolithic institutions with strong command-and-control cultures as more evident in private sector firms. Rather, they were, and still are, various quasi-independent communities that shared a campus, but operated in a relatively decentralized fashion, to the extent that one could find contradictions here in roles and purposes all through the history of computing. As one observer of the scene noted in 1962, “Universities are not only customers for large scale computation facilities but are also in the rather unique position of applying and teaching computation techniques developed for research,” yet with few communities on campus collaborating or coordinating activities to the extent seen in general, for example, in the private sector.5 Thus, professors may choose to use (or not) computing, while down the street a group of administrators may be investing heavily in digital technology to streamline their work. Generalizing about IT in higher education thus must be tempered by recognition of the reality that higher education does not have the same cohesive institutional structures evident in other parts of the public sector, and that reality often affected how members of this industry used digital tools.6 The story told below is about the extensive use of IT and telecommunications in many ways, but at the end of this discussion we will be left with the conclusion that despite such use, when compared to corporations and some other public agencies, higher education fundamentally changed less its structure, culture, and role. In fact, the normal results one would have expected—increased productivity, more effectiveness, lower operating costs, fundamental restructuring, and so forth—did not materialize to the extent its critics expected. Observers of the industry noted as recently as 2002 that in the prior two decades, tuition had doubled at twice the nation’s rate of inflation, while enrollments in four-year institutions expanded at one-half of a percent and ended the century with severe
budget crises, while some 500 had gone out of existence. Meanwhile, corporate and for-profit educational institutions prospered and expanded. Some 2,000 corporate universities operated, up some 1,600 since the start of the 1990s.7 While these new forms of education thrived in number, largely with highly innovative services, traditional higher education institutions maintained the same “look-and-feel” as decades earlier. Task forces and industry organizations pled constantly for change, recounting similar urgings over the years and warning of dire changes to come if no organizational and cultural transformations occurred, but all to no avail.8 So, on the one hand, we see that higher education had an insatiable appetite for IT, but on the other hand, its use of the digital hand had not yet triggered the transformative effects on this industry so evident in many other parts of the economy. This is a feature shared with many government agencies and departments. Why those effects were less evident in the public sector in general is discussed in the final chapter of this book.
Administrative Uses

Across every commercial industry and corner of the public sector, use of computing in support of administrative operations represented some of the earliest uses of computers, particularly mainframe-based applications. Higher education proved to be no exception and for the same reasons: the technology lent itself best to simple mathematical calculations and data collection, sorting, and storage. It worked well in large institutions with substantial amounts of clerical and accounting work, where administrators could take advantage of economies of scale, while initially providing incremental new ways of looking at data in support of managerial planning and decision making. As two observers of computing in higher education noted in the early 1970s, “just as computers have proved themselves useful, and sometimes indispensable, in the clerical tasks of business, they have demonstrated their value in the related tasks of educational institutions.”9 For the same reasons that managers in other industries used IT throughout the second half of the century, managers in academic administrative offices also applied these tools in their daily operations: to satisfy the need for more formal, fact-based managerial decisions, to handle larger volumes of transactions (due to greater numbers of students), and to respond to growing demands of governments and citizens to account for results. While administrators in the 1950s began using computers for their internal purposes, by the end of the century they were heavily involved in the administration of massive networks in support of all members of an academic community. They managed institution-wide IT budgets that proved difficult to contain as demand for IT grew and that came to exceed 5 to 8 percent of the overall budgets of their institutions. By the dawn of the new century, administrators were beginning to recognize that perhaps community colleges, four-year institutions, and universities were finally beginning to change far more than in the recent past and that they were engaged in that process, driven as much by the growing uses of IT as by competitive forces, such as corporate and for-profit universities.10
Table 9.1 lists areas of administrative activities that administrators supported by using computers. As in so many other parts of the economy, they deployed computing incrementally and iteratively over time across the broad areas of operation listed in the table. Initially they used batch applications to collect accounting and financial data, next to collate and sort information, and to produce reports. Online interactive applications that provided services to the entire academic community came in the second half of the 1960s to some institutions and to most others in the 1970s, while new uses of IT in nonfinancial operations continued to expand. During the last quarter of the century, PCs began to play a far more significant role in higher education than in K–12, much along the lines evident in other industries, although more intensely. Simultaneously, administrations began to network their campuses. Nothing proved so important in the last ten to fifteen years of the century as use of the Internet in combination with PCs and laptops in the work of professors and students, although less so to administrators, who continued to rely mainly on centralized computing; but it was the latter who had to build, maintain, and improve their institutions’ networks.

Table 9.1 Collections of Typical Higher Education Administrative Processes That Used Digital and Telecommunications Tools, 1950s–2000s

Financial processes (e.g., cash receipts, purchase orders, financial statements)
Personnel processes (e.g., payroll reports and disbursements, benefits administration, hiring of faculty and staff)
Student processes (e.g., issuance of transcripts, maintaining grades, course enrollments, tuition and fees administration)
Grants management processes (e.g., grants reporting, time and effort reports, budget tracking, proposal management)
Management information and analysis processes (e.g., research management, enrollment management information, workforce analysis, sources and uses of funds)

Source: Judith Borreson Caruso, Good Enough! IT Investment and Business Process Performance in Higher Education (Boulder, Colo.: Educause, June 2005): 1–14.

Administrative staffs of colleges and universities had long used precomputer information-handling tools, such as tabulators, adding and calculating machinery, and typewriters, and had partially integrated these office appliances into their daily work in accounting, financial management, personnel practices, student records, and so forth. When computers became useable as administrative tools by the mid- to late 1950s, and initially began to replace older punched-card tabulating systems installed originally in the 1930s and 1940s, the very largest universities and colleges were often the first to explore the possibility of automating daily tasks, much as evidenced in most industries and large government agencies. Some even had large data processing staffs poised to implement new uses of information technologies. For example, Pennsylvania State University had been using precomputer IT since the 1930s and by the mid-1960s had a staff of
seventy running a combination of older computers (such as an IBM 7074 and a 1460), and soon operating the newer IBM S/360.11 Like so many large schools, this one had first started using computers for administrative purposes in the second half of the 1950s and, as newer computers became available, moved to these devices and to more current releases of software. Accounting functions were first automated incrementally; next came services for students, which, when combined with administrative applications, accounted for about 80 percent of all the work done by the data processing organization of the mid-1960s. Students and faculty used excess computing power for their research and studies.12 Other major universities followed Penn State’s example, such as the University of California at Irvine, which moved from batch to online systems in the 1960s. Growing availability of software, computer memory, and data storage made it possible to change systems, moving increasingly from sequential files on cards or tape to direct access on disk drives, thus creating more uses as the technology evolved, adding applications, for example, in support of early online enrollment systems and libraries.13 By the end of the first decade of administrative computing—approximately the mid-1960s—to one extent or another over 70 percent of all public four-year institutions used computers in direct support of administrative operations, as did 99 percent of all universities with enrollments of over 7,000 students.14 Administrators faced growing volumes of work in the 1960s, which provided them with impetus to rely increasingly on this technology, largely to process work much along the lines they had with tabulating equipment in the 1940s and 1950s. They changed slowly as the technology evolved and when it became more obvious that it could be used in new ways, such as in searching online for data.15 As a couple of observers at the time wrote, “with swelling enrollments and more complex and expensive facilities and with ever-growing requirements for documentation of everything, the college administrator . . . finds the computer to be the only hope for keeping abreast of his job.”16 Already, some effects of computers could be seen on campuses:

    Students may complain that the use of computers for admissions, registration, and the maintenance of records is impersonal; but the fact is that without the computer, registration lines would be longer, admissions would be slower, and the student’s range of alternatives would be smaller. It is not the computer that makes the system impersonal; it is simply the growing size and complexity of our institutions which tax our ability to devise workable and humane administrative bureaucracies.17
Major accounting and financial functions, and many student record management functions, were ported over to computers by numerous institutions in subsequent years. Beginning in the 1960s and early 1970s, most midsized and larger schools, and later increasingly community colleges and smaller four-year institutions, began using such applications. Roughly 30 percent of expenditures for digital tools were allocated to administrative support; the rest went to research (40 percent) or instructional purposes (30 percent), even as early as the mid-1960s.18
Introduction of computing was an iterative process, “a history of a gradual coming to terms between an old process and a new method,” as one observer described it in the early 1970s.19 As staffs learned about what computers could do, they automated records and calculations, later used terminals to access this data directly to answer questions, but still produced reports in batches as in the precomputer era. What were the earliest uses and why? The first applications focused on financial and accounting processes because they represented the most orderly and already structured operations, and hence were the easiest to move to computers. Close behind these processes came those of the registrar. In addition to sharing highly structured tasks, both engaged some of the largest numbers of sequential and routine operations in administration. They also involved some of the most paper- and labor-intensive administrative activities at a college or university.20 Administrators were motivated less by quests to reduce costs of operation than by their more urgent need to keep up with growing workloads. As one University of California administrator admitted, “Almost every system we have built is more expensive than the one it has replaced because we collect more data and generate more reports.”21 In the 1970s, while the cost of computing equipment dropped, administrators learned from each other how best to use such technology, and vendors became active in selling software packages and equipment, increasingly becoming knowledgeable about the needs of academic computing. The use of computers spread to additional functions, most notably to such student-related activities as creation of class rosters within admissions, while batch applications began evolving into online versions with approximately 15 percent of all uses of computing occurring in this new form by the middle of the decade.22 Table 9.2 documents key application areas of the mid-1970s. The list reflects a substantially larger number of uses than deployed a decade earlier when systems focused principally on accounting and financial reporting. Not cited, but also increasing in number, were uses of computing to collect, analyze, and report on alumni relations and financial aid administration. By now personnel records management had also become an increasingly important area subjected to partial automation.23

Table 9.2 Administrative Applications in Higher Education, circa 1976 (roughly ranked from most to least deployed)

Financial management
Admissions and records
General administrative services
Planning, managing, and institutional research
Logistical and related functions
Financial aid
Library operations
Physical plant operations
Hospital administration

Source: John W. Hamblen and Carolyn P. Landis, eds., The Fourth Inventory of Computers in Higher Education: An Interpretive Report (Boulder, Colo.: EDUCOM, 1980): 76–79.

As minicomputers began appearing on campuses, some administrative operations moved from large centralized data centers to decentralized facilities. In large universities, some of these applications moved to schools and colleges within an institution, and increasingly administrators converted these from batch to online systems. Case studies of successful deployment of networks, online administrative systems, student records systems, and other applications cited in Table 9.2 began appearing in the 1970s, encouraging other schools to follow their lead.24 The physical and spatial distribution of work at a campus also stimulated creation of early networks before the arrival of the Internet to connect various parts of an institution. Distribution of computing to various organizations within an institution also reinforced the preexisting decentralized structure and culture of higher education. In the 1970s, administrators used computers in essentially the same way as their counterparts in the private sector. In addition, however, faculty put
pressure on administrators to provide them with ever increasing amounts of computing capability to support their research, primarily at large universities, but also at smaller four-year institutions to enhance teaching, and for faculty to carry out their administrative duties, such as posting of grades and scheduling classes.25 One of the watershed technological events in higher education of the second half of the twentieth century was the arrival of microcomputers on campus, beginning in the late 1970s. They spread rapidly in the 1980s, achieving as close to ubiquitous status as one can imagine by the early 1990s. That trend will be discussed later in greater detail in this chapter, but for purposes of reviewing administrative uses of computing, one needs to recognize that this technology now began to play an important role with administrators as well. For the most part, however, computing systems in the late 1970s and early 1980s were highly centralized; they operated on large mainframes housed in centralized data processing centers. Data centers reported to administrative vice presidents or vice chancellors in most cases. Some administrative systems were also housed in colleges and professional schools using time sharing on an institution’s mainframe, a stand-alone mainframe (for example, in engineering schools), or housed in a minicomputer, often controlled by deans or directors. With the arrival of PCs, and their rapid spread across the entire Higher Education Industry, one would expect that beginning in the 1980s, administrators as well would begin to use this technology. In fact, they did. Like professors and students, they used these initially as word processors and to create and use spreadsheets, later to produce graphical representations of data, and by the end of the 1980s, to do e-mail. It should be noted that a small subset of the academic side of campus had been e-mailing via
what later came to be known as the Internet since the early 1970s, but the administrative staffs remained minor users of e-mail until the 1980s when the Internet and other networks spread across their institutions. Meanwhile, uses of PCs within administrative functions for more than just preparing word documents and spreadsheets began in the 1980s as software packages became available for use in their functional areas. One reliable list of administrative software products published in 1987 listed eighty-seven packages for admissions, accounting, financial contributions, grades, financial needs assessment, generating report cards, and student loan management among others. That same list catalogued twelve packages for use on minicomputers, and only three recent additions suitable for use with mainframes.26 Administrators shared with faculty and students alike some common experiences with microcomputers. As one observer from the period later recalled, “few people in the campus community (or elsewhere) anticipated the movement to freestanding desktop systems that began in the late 1970s and exploded throughout the early and mid 1980s.”27 But once they came, administrators too began to feel more personally enfranchised to use computing since micros were easier and more accessible than large mainframe systems. Senior management in administrative organizations, however, also saw a rise in demand for campus networks, beginning in the mid-1980s. Students, faculty, and their own staffs started asking that these devices be connected together into networks and that they get access to existing digital files resident on mainframes.28 In addition to word processing and graphics, administrative staffs began doing financial modeling, using spreadsheet software to inform their decisions. Economic justification of micros in administration, however, came by displacing pre-microcomputer word processing equipment with inexpensive, easy-to-use PCs, which in addition could be used to do financial modeling. Modeling represented a new activity, making it possible for administrative staffs to impose greater order on budget development processes than possible previously. They imposed standard budgetary templates that could be used by departments, deans, and the entire institution on an iterative basis. That capability spread to analyzing enrollments and patterns of expenditures and to comparing financial and other performance to that of other schools, in the process changing what data management relied upon to make decisions. That new practice led them to create more precise views of their institution’s situation couched in numerical terms before forming opinions, making decisions, or taking action.29 Another by-product of these new uses of the digital hand concerned a shift in status of individual staff members. By the late 1980s, those individuals who knew well how to use computing rose in importance and status in their offices, regardless of their title or rank; thus an undergraduate student working as a part-time employee, a lower-ranking clerk, or an employee providing financial analysis to a vice chancellor responsible for accounting could wield more influence on decisions than line management in some cases. However, it is not yet clear to historians how those newly acquired skills affected careers. While microcomputers spread through administrative offices, existing and new uses of computers largely involved relying on mainframes with an increasing
number of online applications. These were important, particularly for student services, because such uses of computing made it possible for administrative personnel to answer questions on the spot and to provide real-time services and reports to students, faculty, and other administrators without having to request data processing personnel to do the work overnight. Online systems reduced the need for clerical workers, and even more so by the 1990s, when self-service terminals and applications became available. Online systems began to improve productivity. As an example, online student enrollment—a popular new use of computing by the late 1980s that spread rapidly in the 1990s—illustrates how things changed for many applications and their users. In 1980, Tom Edmunds, Vice President for Student Affairs at Central Missouri State University, described the change:

    Previously, students met once with advisers to work out their programs, then had to return another day to have the program confirmed; it took that long to determine whether the course selections could be accommodated. If any of the selected courses were full, the student had to start over again. The system keeps track of the number of available seats in each course section, so advisors can see at a terminal if a section is full. This allows advisers to enroll students and confirm schedules during one advisory session.30
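The mechanics Edmunds describes, a running count of open seats per course section that an adviser can check and update at a terminal, can be illustrated with a minimal sketch. The section names, seat counts, and function below are hypothetical, offered only to show the logic; they are not drawn from Central Missouri State’s actual system:

    # Minimal sketch of the seat-tracking logic behind early online enrollment
    # systems. Section names and seat counts are hypothetical examples.
    open_seats = {"ENG 101-01": 3, "ENG 101-02": 0, "HIST 210-01": 12}

    def enroll(student: str, section: str) -> bool:
        """Enroll a student if the section still has an open seat."""
        if open_seats.get(section, 0) > 0:
            open_seats[section] -= 1      # the seat count is updated immediately
            print(f"{student} enrolled in {section}")
            return True
        print(f"{section} is full; choose another section")
        return False

    # An adviser can confirm a schedule in one sitting instead of waiting a day:
    enroll("J. Smith", "ENG 101-01")      # succeeds; open seats drop from 3 to 2
    enroll("J. Smith", "ENG 101-02")      # section is full; the adviser learns at once

The point is simply that the availability check and the enrollment update happen in the same interaction, which is what eliminated the overnight turnaround Edmunds recalls.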
During the 1980s, other uses of online computing resulted in similar descriptions about how tasks changed in content, speed, and efficiency, particularly once a software system stabilized and worked as intended, which often took as long to achieve as major application installations in the private sector or in government agencies.31 By the late 1980s, demand for computing across all campuses in the United States had intensified, making delivery of such capability one of the leading managerial issues administrators faced, one that grew in the 1990s with further deployment of PCs and, of course, the Internet. But already in the 1970s and 1980s, administrators were scrambling to add computing capacity onto mainframes, buying PCs, and finding ways to expand networks, because in addition to their own investments in IT, professors and students were acquiring systems that they wanted to attach to academic networks. Decisions regarding acquisition and use of IT had decentralized, sometimes leaving management in a weakened position to control demand, let alone expenses. Deans at various schools were being pressured by their charges to acquire more computing as well. Administrative computing remained largely centralized in this period (1970s–1980s). At least during the first half of the 1980s, administrators still consumed about half of all computing used at an institution, slightly less at a large research-oriented university with its many science-based researchers, and far more than 50–60 percent at community and four-year schools. This had been the case even more so in the 1960s and 1970s, when administrative dominance of computing led many departments to acquire their own computers (minis), and in the late 1970s and 1980s microcomputers in support of their own research. Those initiatives started the trend of distributive processing that
became such a widespread feature of many colleges and universities by the late 1980s, often resulting in a vast assortment of products from almost every hardware and software vendor, frequently hundreds of brands and models, some old, others new. Normally, they were incompatible with other brands or earlier models; that is to say, data and software on one machine could not be shared or used on another brand of software or hardware, making networking difficult to do in many cases without wholesale replacement of older devices and software.32 It would be difficult to exaggerate the difficulties this situation posed to IT organizations responsible for creating and maintaining campus-wide networks and in providing training and support to individual users. It was still a thorny problem in the early 2000s. Presidents and chancellors responded by creating IT organizations within administrative functions to start supporting the wider academic community with networks, technical standards, institution-wide purchase contracts, help desks, training in the use of IT and networks, and so forth. They also had to increase the percent of their budgets devoted to IT of all kinds. In the 1960s and 1970s, about 2–3 percent of a school’s budget might have gone to digital tools; by the end of the 1980s, these percentages had crept up to 5 percent, and for some technologically oriented institutions, exceeded 10 percent. All of this growth in expenditures occurred within roughly one decade and proved somewhat higher than in the rest of the economy.33 By the early 1990s, administrative functions embraced widely use of e-mail and began providing many of its services online and increasingly over the Internet, campus-wide intranets, or LANs. Indeed, one can report that by the end of the decade, the vast majority of colleges and universities used Web sites to deliver information and services. Administrative functions that became widely available (that is to say, offered by over half of all institutions) over the Internet included undergraduate applications, course catalogs, listing of program and graduation requirements, class registration, library catalogs, applications for transcripts, press and media information, and just beginning in less than a fourth of the institutions, some form of e-commerce (i.e., ability to buy and pay for goods and services online).34 At the beginning of the decade, administrators at only a third of all campuses had an IT deployment plan for extending their services online or for supporting the needs of their institutions as a whole; but by the end of the decade, that percentage exceeded two-thirds, a result of campus-wide pressure for more services and as a way to control the rising costs of IT during a decade when most institutions had to cut their overall operating budgets.35 Key trends, however, mimicked what one saw in other industries. For example, large institutions installed ERP systems to put financial operations on a more businesslike footing and to integrate independent tasks that needed to share common data sets, and so forth. Pressure to produce more reports, particularly about financial matters and on results for benchmarking and justification of budget requests, increased all through the 1980s and 1990s as well, stimulating interest in ERP systems. “Homegrown” software applications were thus slowly replaced with commercially available versions that integrated financial,
student, and human resource functions.36 As in other industries and parts of the public sector, early Internet sites provided information about various services, then evolved into interactive tools with which to conduct transactions, such as enrolling in classes online, albeit slowly in the 1990s. By the end of the century, one could, for example, use credit cards to pay for services online in less than 20 percent of all institutions while almost all major retail stores offered online purchasing. Online registration for classes could be done at just over half of all institutions as well, although students could identify what classes to take online at nearly all schools.37 The story of online offerings in the early years of the new century is more a history of expanded use; the real change from the old ways of doing business before the Internet had been essentially completed during the 1990s. Due to severe budget cuts in the early 2000s, administrators began converting to self-service online services, whereby individuals would enter their own data, for example, to reduce the need for staff and thereby keep down operating costs.38 Academic analytics became a new use of digital tools, beginning in the 1980s, expanding in the 1990s, and a major activity in the post-2000 period as budgets shrank and demands for accountability for results grew, especially from legislatures, foundations, parents, and students. Using software to collect information from existing databases, administrators used IT to understand student academic performance, report on their graduation rates by gender, race, age, and so forth, and to understand the deployment and demographics of faculty and expenditure of budgets. Such tools spread from central administrative functions to the offices of deans and department chairs, following the practice begun with the development of budgets using computers as early as the 1960s and common by the end of the 1970s. The most active users of academic analytical tools by the early 2000s worked in finance, admissions, and research. Department chairs (and their staffs) and human resource offices were the least users, but they, too, had to rely on such software to perform their work. As should be of no surprise, the most advanced uses occurred in budget and financial planning where half the institutions extracted and reported on some transaction-level activities, while some 20 percent analyzed how budgets were spent or had begun to simulate potential budgetary allocations. Business functions and fundraising followed similar patterns of use.39 In hindsight, such uses of computing in the 1990s seemed obvious. However, administrations faced numerous uncertainties. Richard N. Katz and his colleagues at EDUCAUSE described the reality faced by senior administrators in this period: “The national economy was in the tank, states were drastically reducing budget allocations to their namesake universities, federal research was flagging . . . financial aid budgets were falling short of meeting needs as the economy sapped families’ ability to pay high private college and university tuition, and enrollments declined at many institutions across the country.”40 Reflecting on this time, one in which he participated as a senior administrator in higher education, Katz observed that “presidents and chief financial and information officers came to believe that if American industry could transform itself into a more efficient
and competitive mode, then so could—indeed, so should—colleges and universities, to achieve efficiencies consistent with a diminished and limited resource base.”41 They also needed to replace old systems in response to the potential threats posed by Y2K as well. He recalled, however, that faculty and other groups on campus resisted many of the proposed changes offered up in response to the economic realities and demands being placed on higher education: “With each newly touted capability of an enterprise system came defenses of locally grown shadow systems for their unique service to a unique clientele,” creating “concerns of confidentiality” and “territorial disputes about data ownership and control.”42 And there were technical hurdles, not the least of which was that “the ‘e’ in e-commerce was easier promised than delivered because of incompatibilities across information technologies among vendors and universities,” leading additionally to widespread underestimations of the effort to transform large applications and institutions, much as occurred at the IRS, at DoD, and at so many other federal and state agencies.43 While more will be said about deployment below, to end this discussion about administrative uses of IT, we can ask, what were administrators spending their IT funds on late in the first decade of the new century? Administrative ERP and related information systems remained at the top of their list of investments in applications. IT infrastructure also proved critical to improve so as to deliver online education, course management tools, campus-wide e-mail, and security and identity management, to mention the most obvious. These priorities had remained essentially unchanged since 2000–2001.44
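To make the earlier point about academic analytics concrete, the fragment below sketches the basic pattern of such reporting: aggregate queries run against an existing student database. It is a toy illustration only; the table layout, column names, and figures are invented, and the actual systems of the period were built on commercial database and reporting products rather than hand-written queries like this one.

# Hypothetical sketch of an "academic analytics" query: graduation rates
# grouped by a demographic attribute. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE students (id INTEGER, gender TEXT, entry_year INTEGER,
                           graduated INTEGER);  -- 1 = graduated, 0 = not
    INSERT INTO students VALUES
        (1, 'F', 1995, 1), (2, 'F', 1995, 0),
        (3, 'M', 1995, 1), (4, 'M', 1995, 1);
""")

# Graduation rate by gender for the 1995 entering cohort.
rows = conn.execute("""
    SELECT gender,
           ROUND(100.0 * SUM(graduated) / COUNT(*), 1) AS grad_rate_pct
    FROM students
    WHERE entry_year = 1995
    GROUP BY gender
""").fetchall()

for gender, rate in rows:
    print(f"{gender}: {rate}% graduated")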
Teaching and Computers

Using computers in teaching brings the discussion of digital applications to a core mission of higher education. In many ways the issues mirrored those evident in K–12: classroom-centered instruction continuing throughout the century, lack of adequate faculty knowledge about computing, insufficient funding for equipment, not enough time to develop IT-based classes, and a paucity of incentives for professors to change their teaching methods. There were also critics of the value of IT in teaching, and other commentators who defended it as a new way of having students interact with the material being taught.45 There were some differences as well, most notably that by the end of the century, traditional higher education institutions were beginning to compete with for-profit universities and corporate training programs that did make extensive use of IT in teaching and in providing remote training known as “distance learning.” This development led many observers to begin questioning the long-term viability of existing colleges and universities at the dawn of the new century.46 In addition, both students and professors used computers in support of classroom work, for such things as research, writing, and communications, a combined set of uses of computing (most notably PCs) not as evident in K–12. But as in K–12, computing as a teaching tool came slowly to higher education and for the same reasons.
A U.S. presidential panel in 1967 catalogued a litany of reasons that computers were not used, largely due to inadequate funding, but reported that use had, nonetheless, started. It noted that as of 1965, less than 5 percent of all students even had access to computing for any purpose, and even those students were concentrated at large elite universities.47 Early uses of computing in teaching occurred in the hard sciences and in engineering to solve complex mathematical problems. Computing to simulate problems and solutions also began in the 1960s and early 1970s, with faculty and students using the technology in much the same way as in research. In those years, there were few large files that could be queried, a situation that changed as machine-readable data sets expanded in size and number all through the 1970s and 1980s, created by various government sources (e.g. census and economic) and by the physical and natural sciences. Early users of computer-aided instruction (CAI) focused on drilling exercises, much as in K–12, but these, too, were quite limited. Finally, we should acknowledge that students studying computer science were, of course, taught programming languages and about computer architectures and software.48 But use remained limited. One survey from the early 1970s suggested that less than 30 percent of computing budgets went to instructional uses, roughly the same amount as for administrative applications (nearly 28 percent), and less than spent on research (40 percent).49 Reasons noted for this distribution of use ranged from the research orientation of faculty at large universities to a reluctance to learn new skills (computing), to “laziness,” and “inherent faculty conservatism.”50 Examples of use of computing in teaching remained comparably sparse in the 1970s and when it did take place, relied on access to large mainframes normally devoted to administrative and research purposes.51 With the arrival of microcomputers in the late 1970s, the situation began to change, as it had in K–12 and in so many other industries. As one professor at the time observed, “for many students entering university in the 1980s their first acquaintance with computing is through a microcomputer.”52 This quote may qualify as the IT understatement of the half century in higher education because by the dawn of the new century, it would be reasonable to state that every college student either owned or used a PC or laptop in one fashion or another as part of their academic work. By the early 1980s, professors and students were relying on these machines to support their classroom, research, and administrative work, often acquiring the devices on their own, and increasingly through organized programs funded or managed by their institutions. A few academics experimented with computers in their teaching in the early 1980s, and by mid-decade, many schools were organizing efforts to extend deployment to ever larger percentages of students, from community colleges to universities. One observer from the period pointed out that professors were “not using software to deliver instruction.” Rather, they treated them as “tools for people—in this case students—to use, not electronic teachers to administer instruction.”53 In addition to word processing, spreadsheets, and graphics, students could conduct research for their class projects accessing large machine-readable databases, and other software tools as part of laboratory projects. These were the same tools used in
business and government, and in the same way, at any level of academic work. In short, less was done to create instructional software when compared to what was being attempted at the K–12 level. In that early period when PCs were just coming into higher education, say from the late 1970s to 1985, administrators continued to increase their use of centralized computing to automate functions in admissions and records, financial management, planning management, and in providing services to faculty, students, and staff, many indirectly related to teaching. Online versions exploded with growth. One survey suggested that for admissions and records and for various financial systems, they doubled between 1980 and 1985, with 60 percent of respondents having online versions; but essentially all major areas of applications that went online doubled in number.54 All through the 1980s, the number of students acquiring PCs increased. Campuses began creating networks that they could use to access online files and to communicate with faculty. Early experiments with chat rooms focused on specific courses first appeared in these years, but expanded exponentially after the arrival of browsers and the more user-friendly Internet of the late 1990s. By the early 1990s, however, surveys were reporting that use of computing had become relatively ubiquitous, often with over 90 percent of students and over 70 percent of faculty at least occasional users of PCs in support of classroom work.55 Table 9.3 lists types of uses reported in a national survey conducted in 1993. Software listed in this table was commercially available as products, such as WordPerfect and Word for word processing, Lotus and Excel for spreadsheets, or
Table 9.3 Uses of Commercially Available Software in Support of Class Work by Students, 1993 (listed as percentages in descending order of use)

Application              Students (%)
Word processing          75
Spreadsheets             26
Graphics                 22
Database packages        15
Desktop publishing       15
Instructional software   13
Statistical analysis     10
Engineering tools         8
Presentation tools        7
Authoring tools           6

Source: Extrapolated from Susan H. Russell, Manie H. Collier, and Mary Hancock, 1994 Study of Communications Technology in Higher Education (Menlo Park, Calif.: SRI International, 1994): 28–29.
Minitab for statistical analysis. The same survey demonstrated that roughly 80 percent of faculty and some 60 percent of students had access to computing tools, such as PCs of their own, others in computer labs, or access to terminals in laboratories and libraries. If anything, however, this survey probably understated the extent of access everyone had to the technology by then. Impediments to even greater use by students and faculty were similar to those reported by teachers in K–12: expense of the equipment, insufficient software, time required to learn how to use it, lack of access, with complexity of operation the most common complaint in the early to mid-1990s.56 In this period immediately preceding the arrival of browsers in the mid1990s, networks in support of all manner of instructional purposes and research were, however, well in place. Survey data from the period suggested that over 75 percent of faculty and over 33 percent of the students had access to some form of campus network. Use varied by type of institution, however. Faculty use proved most extensive at universities (greater than 77 percent) and least at community colleges (38 percent), while 65 percent of students at universities used these tools in networks, but only 25 percent did so at community colleges.57 Classrooms and laboratories that were hard-wired to a campus network existed in over 90 percent of all institutions, but just as with K–12 networking, this did not mean all classrooms. Less than 20 percent of all classrooms were wired this way, but nonetheless represented a substantial investment. Nearly 85 percent had connected one or more campus libraries to their school’s network.58 But as with student uses of computing, faculty and students used these networks as they wanted, mainly in support of traditional research and teaching activities, not as interactive distance learning tools. As occurred all over the nation in all industries, the availability of user-friendly browsers in the mid-1990s on the Internet led to a surge in the use of networked computing for communications, research, and entertainment. It is in this post1994 period that renewed intense discussion surfaced about the wisdom and weaknesses of distance learning, generating a body of literature that rivaled that reviewed in the last chapter about K–12. Cases for and against were similar and thus need not detain us here too long.59 However, there also were some differences. Student demographics had changed in the 1980s and 1990s as the number of older students going to school increased. This population did not necessarily go straight from high school to college, spend four to five years full time in school, and then enter the workforce. Rather, on average, these students were several years older and many worked full or part time while attending classes. Their new requirement was to balance work and school demands on their time and, therefore, they became a logical potential pool of users of distance learning. As the 1990s progressed, they demonstrated a willingness to combine distance learning and traditional education. By then, earlier uses of telecommunications and some distance learning classes had demonstrated the efficacy of this approach to teaching, although it remained a minor form of instruction at most institutions. Deploying distance learning via the Internet was really a development that only began in earnest during the 1990s. As noted above, professors used the
Internet initially to supplement their traditional face-to-face teaching methods in physical classrooms. But in the process of using the Internet in a supplemental fashion they and their students became more familiar with the technology and at the same time that their institutions were wiring their campuses. Both professors and students were investing personally in the technology for use in their dormitories, apartments, or homes. Students became important sources of demand that increasing amounts of their education be delivered in ways that allowed them to select the time when to study. To be more concrete on the volume of traffic involved, surveys of the period suggested that by around 1997, nearly a third of all college courses used e-mail as part of the teaching experience, up nearly a third in volume since 1995.60 By the end of the century, usage had escalated. Yet, as a method of delivering instruction, distance learning grew quite slowly in the late 1990s, with experimentation going on at various universities, such as at University of Texas, University of Houston, and at the Massachusetts Institute of Technology (MIT). As one report from the period also noted, professors were reluctant to use courses developed by other instructors, therefore even early distance learning materials spread slowly.61 Nonetheless, most institutions had some offerings of this type by around 1997–1998, largely for part-time students, and one could find increasing numbers of community and four-year colleges offering such classes; less so by elite research-oriented universities. Evidence from the period demonstrated that distance learning proved effective with college-aged students, and that often such courses cost students and higher education about half of that for classroom-based teaching.62 As in K–12, students needed self-motivation to make this form of education work, and it could deprive students of the opportunity to develop interpersonal relationships as one normally did in a traditional college setting.63 Deployment of distance learning in the late 1990s is a story of slow expansion, but not a tale of technology replacing traditional teaching practices or about creating the revolution in higher education that so many observers had predicted would certainly happen quickly. However, while colleges and universities began dealing with the advantages and disadvantages of distance learning—an effort that extended right into the next century—and how best to fit it into existing programs in evolutionary ways, for-profit providers of higher education were moving more aggressively in the direction of providing distance learning for the reasons students wanted: flexibility and cost performance, particularly for those who were working and also taking classes.64 While the amount of training done this way by corporations and for-profit universities (such as University of Phoenix) remained low in the 1990s, it was a rapidly expanding form of competition for traditional bricks-and-mortar public and private institutions. Some statistics suggest the scope involved. Between 1988 and 1998, for-profit institutions that offered degrees grew by 59 percent. Put another way, in 1998 these organizations had 28 percent of the two-year college market and 8 percent of the four-year market. Those shares of the student market grew each year into the next century.65 Influencing the role of traditional teaching, and that of the new segment of the market, were factors that went far beyond distance learning and other uses of
computing, but that made distance learning itself an important topic that varied depending on whether it was a school like the University of Phoenix or the University of Wisconsin delivering it. Let two experienced professors/administrators explain the differences: Traditional colleges and universities tend to focus on inputs such as entering student quality and metrics such as expenditure per student as well as upon process dictated by established student-to-faculty ratios, credit hours, and degree programs. The new-for-profit providers focus instead on outputs, on measuring student learning and the competency achieved by particular programs, forms of pedagogy, and faculty. They have set aside the factory model of student credit hours, seat time, and degree programs long preferred by the higher education establishment and are moving, instead, to anytime, anyplace, any length, anyone flexibility, customized to the needs of the learner and verifiable in terms of its effectiveness.66
Distance learning allowed these new enterprises to disaggregate offerings, thereby going after “low-hanging fruit,” offering classes in great demand and that could be standardized, but presented increasingly in highly media-flavored, modular forms attractive to students. In the process, they challenged the vertically integrated organization of colleges and universities. This unbundling of teaching replicated the trend already under way in many other informationbased industries, including the media industries, and many in the entertainment sector of the economy.67 Among those professors dealing with the issue of distance learning, the potential threat of that kind of teaching coming from outside the academy hardly caught their attention, but it did of some administrators, and most emphatically of observers of higher education.68 By the 1990s, it had become a major example of the destructive gales of technological innovation that economists had spoken of for decades. While control over who is allowed by government to grant degrees constrained growth in the number of competitors threatening traditional higher education, nonetheless, the digital hand made it possible to offer programs in business, technology, accounting, and so forth that did not have to be taken at a campus or necessarily at set times. Combined with the slow response of higher education to provide more flexible programs at a time when demand was increasing for such services (1980s–1990s), computing’s capabilities encouraged new rivals, indeed, over 1,000 around the world with over a million students by the end of the century, in addition to the some 1,600 corporate training facilities in the United States that did the same.69 The emblem of this new movement was the University of Phoenix, which had over 100 centers in thirtytwo states and aimed its offerings at young adults aspiring to acquire master’s degrees in various professional areas, such as accounting and IT. In 2002, it had over 100,000 students, of whom 18 percent were taking distance learning classes. Its student body grew by about 20 percent annually in the early 2000s.70 So far, the largest response to this rapidly growing new segment of the market was university extension programs that began offering Internet-based instructions,
an initiative driven more by administrators rather than by professors, and that began in the 1980s and picked up momentum in the 1990s. An example is the National Technological University, established in the 1980s by Colorado State University, which brought together various institutions to offer courses in engineering to students already working in business. As of about 2001, it had awarded some 1,400 M.S. degrees in engineering and management.71 Experts on higher education, conducting an analysis of the industry for the American Council on Education, reported in 2002 that a commodity market for education was emerging made possible by the capabilities of distance learning. The implications they saw for how professors taught fell in line with the nature of changes in work evident in other industries: “Rather than spending most of his or her time developing content and transmitting it in a classroom environment, a faculty member might instead have to manage a learning process in which students use an educational commodity (e.g., the Microsoft Virtual Biology Course). Clearly, this would require a shift from the skills of intellectual analysis and classroom presentation, to those of motivation, consultation, and inspiration.”72 Competition for an American university could come from anywhere in the world given the availability of the Internet and a growing amount of distance learning classes. Meanwhile, deployment of distance learning (also now called e-learning) continued. By the early years of the new century, distance learning had become a major topic of conversation and initiatives. Interest grew slowly on the part of faculty to participate in the development and use of distance learning. Those that had experience with it were largely pleased with the results.73 Increasing numbers of institutions had added distance learning to their offerings in the 1990s, such that by the early years of the new century, the majority had at least dabbled with the new approach.74 Yet even as late as 2004—a decade after distance learning had become a major topic of discussion—publications in the Higher Education Industry still were publishing articles on the virtues of this form of teaching and the potential positive effects it could have on pedagogy. One frustrated advocate declared that year that “the lecture method is literally unchanged from its introduction centuries ago, and many technology innovations remain in limited use.”75 Extent of use remained a function of how far faculty and their institutions were willing to move intellectually, financially, and operationally. Faculty and students were equipped extensively with the necessary technology and wired campuses, yet both groups still used it largely in support of classcentered teaching and learning (such as e-mail and for research).76 As one senior administrator wrote in 2004: “I’m struck by many faculty members’ resistance to the obvious benefits of the maturing technologies,” acknowledging problems faced by faculty: “shortage of time, money, and energy.” His conclusion: “the problem is that the academic culture and the IT culture simply do not mix together well.”77 Yet optimistic, this dean proposed that what might work—and certainly did not yet—is having “tools that anyone can pick up, that can be customized, that are quick and adaptable, and that are less expensive in money, time, and commitment,” all characteristics of many digital tools then in wide use, such as PCs, laptops, word processing software, and cheap Internet access.78
Role of Computing in Academic Research

While this chapter demonstrates that computers affected the tasks (work) done by faculty, students, and administrators, and less so the content and culture of higher education, the role of computing in research proved to be a profoundly different experience. What researchers studied, how they did that, and the resulting consequences were by any measure revolutionary, profoundly important, and transformative for all modern societies. Research was also the one area in academic life most dramatically changed by the use of computers. The results of that research can be classified as one of American higher education’s “finest hours,” indeed, possibly the nation’s greatest contribution to humankind. One is hard pressed to find observers who would castigate the overall results achieved in research. When criticized, academics were accused of sometimes picking trivial topics for research, but as an aggregate contribution to society, more new knowledge was created by academics in the United States during the last half century than in all time by any other society. Using the list of Nobel Prize winners as a surrogate measure of results, out of a total of 758 people so honored through 2005, 269 were American professors, and in some categories the Americans dominated even more. The proportion of American winners during the second half of the twentieth century actually increased, with multiple U.S. scholars recognized routinely each year.79 It is nothing less than a remarkable accomplishment that dwarfs any petulant discussion we might have about the role of computing in administration or teaching in higher education. For many academic fields enjoying this historic achievement of creating new and useful knowledge, the digital hand played a central role. Furthermore, it was a role taken up at the moment computers were first built. The computer was nearly an ideal new tool for scientists and engineers, in particular, and later for social scientists, economists, and still later for scholars in the humanities. It allowed them to take the myriad idiosyncratic research issues that they had and to augment their existing methods of research at a perfect time in the evolution of investigation, particularly in the physical sciences, mathematics, and engineering. Since the seventeenth century, mathematics both as a field and as a research methodology had expanded, along with a large complement of various mathematical tools, such as slide rules and later mechanical calculators.80 By World War I, mathematics as a discipline had expanded enormously over the prior century, while uses of mechanical aids to calculation had already started to stimulate broader, more data-intensive research agendas in such fields as mathematics, astronomy, physics, and chemistry. The years following World War I saw the arrival of more sophisticated IT equipment useful to research, such as the deployment of punched-card equipment at Columbia University with the support of IBM, and the use of analog calculators built by Vannevar Bush at MIT.81 Thus, as early as the dawn of the twentieth century, scientific research had become highly mathematized and quantitatively oriented, two trends that continued to intensify during the early decades of the new century. During the late 1930s and all through the 1940s, computers were “invented,” largely funded by the federal government in support of new applications needed
for World War II and the Cold War, with the result that in the late 1940s and early 1950s, there existed a community of scientists and engineers who collaborated in the development and use of computers in support of government research and their own.82 For example, the mathematician John von Neumann worked on government computing projects during World War II (advising on the construction of the ENIAC and EDVAC) and subsequently built his own system.83 Other scientists and engineers did the same at several dozen universities in the same period. By the early 1950s, the computer was already a useful scientific instrument. Von Neumann’s IAS Computer, for example, was used in the early to mid1950s to solve scientific and mathematical problems, as well as to learn about computers as they were evolving. One list of scientific problems worked on using the IAS had nearly fifty items, ranging from blast wave calculations for atomic bombs, to calculations in physics and chemistry, to analysis of weather data and X-ray diffractions.84 But why did computers become such an instant attraction to scientists and engineers? Bert F. Green, Jr., a psychologist by training but who spent the 1950s and 1960s as a computer scientist at MIT and at Carnegie Institute of Technology pointed out that the computer was extremely useful for doing complex mathematics and statistical calculations, and it could also manipulate large quantities of data. In discussing the effects of computing on behavioral scientists, he could just as easily have been representing the experiences of physics, chemists, mathematicians, and engineers: “Scientists are often faced with the necessity of performing many tiresome calculations in order to determine the statistical relationships among the variables they are studying. Often the sheer volume of required calculations is so overwhelming as to preclude doing the analysis by hand. Before computers were available for doing the work, many researchers were forced to settle for half a loaf.”85 In short, thorough work could not be done. But, as he noted after arrival of the digital hand, “A factor analysis that would have taken over a month to do by hand can be done in minutes on a computer.”86 By the early 1950s, computers could already compute faster than people using electromechanical calculators, by a factor of 10,000, and could manipulate far larger volumes of data—a capability that expanded by orders of magnitude throughout the second half of the century, along with similar increases in the speed of calculation.87 But the attraction of the digital hand was not limited to its ability simply to do things faster. The quality of the work itself could be improved, because researchers could more thoroughly investigate an issue and experiment with multiple paths of research rather than bypass options as being too time consuming. One chemist who worked with computers in the 1950s explained the weaknesses of scientific work prior to the arrival of the computer which included a neglect or a very inexact estimation of secondary correction factors, failure to quote precise limits of reliability, and unjustified partiality to certain items of data. Often published papers are merely summaries of the author’s conclusions illustrated with tables and diagrams that are of little value to the researcher
who may wish to check the calculations and measured quantities or to use the data to develop more sophisticated models or theories. Concise, the mathematically exact statements of the models tested and the raw data, including all limits of reliability, would enable the reader to quickly ascertain the usefulness of further investigations, and the unnecessary duplication of tedious experiments that would be eliminated.88
Even computers of the early 1950s made his desire far more possible than before. One astronomical example involving plotting the movements of five planets called for mathematical calculations using known techniques, but which had not resulted in a highly accurate set of answers. It illustrates specifically the kind of challenge one had in solving problems manually as opposed to using a computer, which did the work easily: “The mathematical problem is that of solving a set of simultaneous non-linear differential equations of the 30th order with an accuracy of fourteen decimals, enough to insure the desired accuracy at the beginning and end of a 400-year interval, over which the accumulation of rounding error results in the loss of five figures.”89 In short, problems requiring extensive calculations and the iterative manipulation of growing bodies of data became practical to solve. Use by scientists spread in the 1950s and 1960s from institutions and departments that were building their own machines, or systems for the government, to other departments and institutions that either shared systems or acquired their own.90 Federal funding helped speed up deployment in the 1940s and 1950s as well, particularly to solve problems in the physical sciences, to help develop the hydrogen bomb, to conduct real-time simulations, and to resolve myriad problems and issues in chemistry, high-energy physics, engineering, and even in biology and geology. During the period 1945 to about 1975, the physical sciences relied more extensively on the digital hand for research than other disciplines. However, close behind the physical sciences were researchers doing work in cognitive sciences, and by many in what eventually became known as artificial intelligence, and sometimes, cybernetics. Each involved the merger of experimentation and analysis of how brains and biology worked in systems, using theoretical and mechanical means to explore such concepts as neural feedback and response. Subsequently, in the 1960s and 1970s, biologists became extensive users of computers, particularly in enhancing numerical taxonomy, which could only be done once computers had the ability to store vast quantities of data. The use of computing so spread in this discipline that by the 1980s use led to the creation of a new name for computational biology, called bioinformatics. Meanwhile, the field of genetics came into its own. Also beginning in the 1950s, but coming into wide use in the 1970s and beyond, was deployment of computers to assist in a wide range of studies in medicine, from how diseases worked and could be cured, to the management of clinical trials, and even for administrative operations of a hospital. One student of the history of computing as a scientific instrument described the situation: “by 1975, the computer was deeply entrenched in the physical, biological, medical, and cognitive sciences.”91
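The planetary-motion problem quoted above was, at its core, the numerical integration of a system of differential equations over a very long interval, with careful attention to accumulating rounding error. The toy sketch below shows the general pattern for a single body in a circular orbit; the method, step size, and units are arbitrary choices for illustration and bear no relation to the original five-planet computation.

# Toy sketch of numerically integrating an orbit, in the spirit of the
# planetary-motion calculations described above. Units and step size are
# arbitrary; this is an illustration, not the original computation.
import math

GM = 1.0                      # gravitational parameter (arbitrary units)
x, y = 1.0, 0.0               # initial position on a circular orbit
vx, vy = 0.0, 1.0             # initial velocity for that orbit
dt = 0.001                    # fixed time step
steps = 20000                 # roughly three orbital periods

def accel(px, py):
    r3 = (px * px + py * py) ** 1.5
    return -GM * px / r3, -GM * py / r3

for _ in range(steps):        # leapfrog (velocity Verlet) integration
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    x += dt * vx
    y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay

# Energy should stay nearly constant; its drift measures accumulated error.
energy = 0.5 * (vx * vx + vy * vy) - GM / math.hypot(x, y)
print(f"energy after {steps} steps: {energy:.12f} (exact value is -0.5)")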
Thus, in a short period of time, scientists had learned how to harness the great calculating and data manipulation features of computers to solve problems that required intense calculations not possible with pencil, paper, or even calculators; and to control instruments, collect data, and perform myriad data reduction and analysis exercises. The cost of computing continued to drop while capacity and reliability improved, a trend that extended right into the twenty-first century, making it possible for an ever increasing number of scientists, engineers, and researchers in other fields to gain access to what in the early years was a very expensive class of technology.92 As that availability increased, researchers were able to find more complex uses of computers, and perhaps the most important involved simulations. Unlike calculations of a complex nature to solve some mathematical or engineering problem, with simulations one creates alternative realities, or systems, and that requires a combination of data and mathematical algorithms that cause the data to interact in predictable or even unpredictable ways. That exercise allows people in all fields to study real and hypothetical situations, and is often also called modeling, although, of course, the two are not quite the same. It would be difficult to overestimate the popularity of such a capability among all researchers and even among all users of computers. The “what if” analysis one does on a PC using spreadsheet software is a relatively simple example of computerized simulation. Most serious research in the physical or social sciences today involves some modeling. Almost all forecasting of weather, economics, and military options consists of digital exercises in simulation. If one had to pick the single most profound change brought about by the computer in science, engineering, medicine, war, and economics, it would be the ability to use simulations to learn new things and on which to base important decisions. The strengths and weaknesses of the approach were understood in the 1950s and 1960s, and that understanding remained useful for decades. One group of experts in the 1960s pointed out that while all models were artificial and computers were often inflexible and required much work to model anything, at the same time a computer simulation was “completely repeatable,” “ideal for the collection and processing of quantitative data,” and “free from the physical limitations on the system being studied.”93 The technology itself was not, however, enough to get the scientific community to rely so extensively on computers. Early financial support for computing extended to higher education, along with the recruiting of faculty to develop the technology, provided the essential catalysts that jumpstarted the use of computers by academics in the United States. While the role of various agencies of the DoD and the federal government in funding research of all kinds has been discussed elsewhere in this book, it is important to note that for many decades, public officials channeled funds to researchers through several primary conduits. The most important was the National Science Foundation (NSF), which established a process by which researchers applied competitively for grants in support of their work, submitting proposals on topics of broad general interest to the NSF.94 The National Institutes of Health (NIH)95 and, at the Pentagon, DARPA represented other sources of funding for research. What is particularly unique
about their model is that funds were generally granted to researchers, making them personally responsible for spending the grants on research that they had defined. This approach stood in sharp contrast to the model of awarding contracts in exchange for research requested by the funding organization. The former approach gave researchers an enormous amount of flexibility to select what topics to pursue, and when combined with a peer review assessment of proposals, gave the process enough competition for funds to encourage quality, relevant research. The approach also led scientists to shift from operating alone or in very small groups—as had been the case before World War II—to take on larger projects, some of which required hundreds of workers, creation of substantial laboratories, and often required multiyear financial commitments, leading to what in time became known as “Big Science,” a model relatively unique to the United States, one that had only been partially used by the British although extensively by the Soviets.96 That feature—the ability to scale up—provided a path to new knowledge that researchers could use to acquire access to substantial amounts of computing power, particularly before 1968, and thus proved to be crucial in the world of American academic research. After that date, federal funding for R&D began a long slow decline as it failed to keep up with national economic output or inflation. Nonetheless, the sums remained quite high, running annually into hundreds of millions of dollars. It was not uncommon, for example, for large research universities to receive over $100 million each from federal sources in the 1980s and 1990s, and many other institutions to obtain tens of millions of dollars.97 The relative decline came as two other trends were unfolding: first, the cost of computers kept declining, and second, the private sector and foundations increased their contributions to academic research. Tables 9.4 and 9.5 document the total amount of funding from all sources in higher education, including from the federal government, for a half century. By any measure, it was substantial.

Table 9.4 R&D Expenditures by American Higher Education, 1953–2003 (millions of dollars)

Fiscal Year   Total Expenditures   For Basic Research   For Applied Research
1953                 255                  110                   145
1963               1,081                  814                   267
1973               2,884                2,053                   831
1983               7,882                5,303                 2,579
1993              19,951               13,303                 6,648
2003              40,077               29,981                10,097

Source: Table 2, pp. 8–9, National Science Foundation statistics, http://www.nsf.gov/statistics/nsf05320/pdf/tables.pdf (last accessed 9/1/2006).

Table 9.5 Federally Funded R&D at American Higher Education, 1953–2003 (millions of dollars)

Fiscal Year   Total Expenditures   For Basic Research   For Applied Research
1953                 138                   NA                    NA
1963                 760                   NA                    NA
1973               1,985                1,454                   531
1983               4,989                3,547                 1,442
1993              11,957                8,398                 3,559
2003              24,734               19,500                 5,234

Source: Table 3, pp. 10–11, National Science Foundation statistics, http://www.nsf.gov/statistics/nsf05320/pdf/tables.pdf (last accessed 9/1/2006).

The enormous role played by research activities, and government funding, clearly strengthened scientific inquiry at American universities and colleges,
more so than, we could argue, computers did. However, it was already evident by the 1960s that use of computers was having a profound effect on the outcomes of the research, results that were already making their way through the economy in a broad mix of ways, ranging from healthcare to agriculture, from weapons to aircraft and automotive performance, to the now fundamentally changing nature of how diseases, biology, and genetics were being studied and findings applied. Two academic commentators on higher education argued that the alliance between higher education and government “has made the United States the world’s leading source of fundamental scientific knowledge.”98 While much of the ground work for changing the nature of scientific research was laid in place by the mid-1960s, new discoveries and insights kept appearing as the technology transformed. The first was cultural—to big science—in part facilitated by use of computers and other complex scientific instrumentation and large projects (such as those in support of space travel and missiles for the military). “Big Science” meant scientists working in laboratories and less by themselves, with teams becoming the norm on ever larger problems and projects. Teams increasingly used data derived from iterative studies, often collaborating across departments, institutions, also with nonacademic organizations, and around the world, constantly applying new techniques. During the last quarter of the twentieth century, use of computing in academic research spread slowly beyond scientific and engineering disciplines, spilling over into business administration (especially in operations research in marketing and actuarial studies), into sociology (relying on large databases created by such federal agencies as the Census Bureau), criminology, and demographics. The next wave of adoption was, thus, largely among economists and social scientists. In the case of economics, the field had become so mathematized by the early 1960s that one could argue computers simply reinforced this trend to such an extent that in the early years of the new century, distinguished members of the discipline began arguing that perhaps economists had gone too
far in relying on mathematical approaches.99 But clearly by the late 1970s or early 1980s, social scientists also had begun using computers for the same reasons as their physical science cohorts. Finally, by the 1990s, researchers in the humanities did too, such as in languages, fine arts, and history. And like their scientific cohorts of the 1950s and 1960s, historians, for instance, had to deal with the value of the Internet and how best to use digital data.100 By the end of the century, all academic fields had researchers who relied directly on computers with which to conduct research. While humanists used computers to write their articles and books, some to teach, and others to send e-mail, the amount of research funds allocated to their work, in general, remained quite low, regardless of computers. In fact, one report published in the early years of the new century put the amount of research dollars invested outside of scientific and engineering disciplines at about 3.3 percent of the total.101 In other words, some 96.7 percent went toward scientific and engineering communities, even though the federal government and foundations were beginning to fund historical research, for example, requiring use of computers.102 One conclusion we can reach is that the whole notion of team-based “Big Science” approaches as a way to conduct research had yet to take wide hold in much of the social sciences, and even less so in the humanities where historians, for example, overwhelmingly researched alone with only part-time help from student assistants. Meanwhile, technology continued to evolve. Just as in the early decades the ability of machines to perform mathematics fast and to collect and analyze well vast quantities of data were points of attraction for computing, additional capabilities proved useful later. Most obvious, the increased ability of telecommunications to transmit large volumes of data in the 1980s, and especially in the 1990s, through broadband and enhanced Internet functions was of enormous value, making it practical for researchers from multiple institutions to collaborate, a practice that now is widely evident in the social sciences and the humanities. Second, myriad advances were made in the 1980s and 1990s in what computer experts called visualization, that is, giving researchers the ability to present data in graphical form on CRTs, often in motion-like video, to print out what otherwise could not be seen, such as molecules and subatomic particles or processes at work. In the 1980s, for example, those studying how tornados functioned created full-color visual models that moved like motion pictures but as representations of how the physical matter was supposed to function unseen by the human eye, based on actual data collected from tornados to illustrate their birth and evolution. Biologists studying genes or chemical compounds could also present their data in pictorial form, not just as charts and graphs as they had been able to do since the late 1950s. In fact, one could argue that today it is impossible to do the kind of research being conducted about DNA and genetics without visualization. As reliance on ever increasing amounts of computing pushed forward, the requirement to fund large data centers did too, much along the lines of what occurred in the late 1940s and 1950s, when federal agencies had to spend extraordinary amounts of money to support earlier computer installations.
During the second half of the twentieth century, the federal government either ran large data centers to which professors were given access, or it funded such facilities located at major national laboratories and at other agencies and organizations, sometimes physically located on some university campus. In the 1980s, the NSF expanded its funding for supercomputing in order to support large scientific and engineering research projects. With these systems they also added telecommunications, using, for example, its NSFNet to link these together. Some of the projects run at these data centers involved modeling of galaxies, weather, and proteins. During the Clinton administration, supercomputing became a national priority, and thus the NSF obtained funding to maintain these civilian data centers around the country.103 As this chapter was being written, once again computing was in the process of bringing about yet another clear revolution in scientific knowledge, this one involving biology and medicine. The increased computing power, the role of more sophisticated databases, and visualization among other things, are moving research through the use of simulation to new levels of sophistication. While others are describing that phenomenon, to summarize, medical assessments are moving rapidly toward greater use of scanners (such as MRI), while the basic science of disease treatment is shifting from chemically based approaches of the past to biologically based strategies, involving, for instance, genetic treatments and programming. The result in the latter case, for example, is that researchers model thousands, indeed even millions, of potential mixtures of compounds to determine what might be an ideal recipe for a medicine, while biologists model and visualize genes and changes to them. So, the image of people pouring liquids from one test tube to another or simply growing something in a petri dish overnight is fiction. Massive databases and extensive use of programmed algorithms have made modern medical and biological research highly computerized, creating in the process whole new fields of study.104
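The compound-screening work described above is, computationally, a large search: generate or enumerate candidates, score each with a model, and keep the most promising few for laboratory follow-up. The sketch below is a deliberately crude illustration of that pattern; the feature encoding and scoring function are invented stand-ins for the far more elaborate chemistry and biology models actually used.

# Deliberately simplified sketch of in-silico screening: generate many
# candidate "compounds," score each with a model, keep the best few.
# The random encoding and scoring function are invented placeholders.
import random

random.seed(42)

def candidate():
    # A "compound" reduced to a vector of feature values.
    return [random.uniform(0.0, 1.0) for _ in range(5)]

def predicted_activity(features):
    # Stand-in for a real structure-activity model.
    return sum(w * f for w, f in zip((0.9, -0.4, 0.7, 0.1, -0.2), features))

library = [candidate() for _ in range(100_000)]   # millions, in practice
scored = sorted(library, key=predicted_activity, reverse=True)

print("top 3 candidates by predicted activity:")
for features in scored[:3]:
    print([round(f, 2) for f in features], round(predicted_activity(features), 3))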
Digital Hand in the Library

Libraries in the United States have played important and visible roles in American society for some two centuries. In American colleges and universities, they have historically served as the visible center of academic life, the physical and intellectual heart of a campus. In American society, there have long existed different types of libraries with varying purposes: a thick network of public libraries to serve the needs of local communities; corporate libraries to aid researchers, marketing specialists, and employees to do their work; government libraries that did the same or functioned as major research centers for scientists and engineers; and libraries in higher education that served students, faculty, and administrators, often maintaining very large collections of books, periodicals, and archival materials in support of scholarship, teaching, and even in providing a physical space for students to study, socialize, and, yes, even to take the occasional nap. Regardless of what kind of library they worked in, librarians have
been organized nationally as a "community of practice"105 for over a century, largely under the auspices of the American Library Association (ALA). They acted in much the same ways as members of any commercial industry's national association: they collectively established positions on policy issues (such as free speech), set standards of performance for certification, described how materials should be catalogued, managed, acquired, or disposed of, trained members, and published reports and other works. Contrary to their stereotypical American image as quiet, shy public servants, librarians have long embraced every new form of information technology that came along. When three-by-five cards, file folders, and typewriters appeared in the last quarter of the nineteenth century, librarians quickly recognized their value, becoming, in the parlance of economics, "early adopters." By the very early 1900s, they were already moving on to new generations of cataloguing systems (Dewey, Library of Congress) that depended on these innovative tools. The very largest libraries used punched-card tabulating equipment between the two world wars to handle many backroom processing activities, such as book ordering and inventory control. As a profession and community, they collaborated on many projects over the first half of the century, for example, to create more efficient catalogs and to standardize the information they collected. By the early 1950s, they thus had a long history of discussing the sorts of issues regarding information that would consume so much of the time of the computer scientists who created programming languages, applications, and ultimately, by the late 1960s, database management systems.

The largest academic libraries, particularly those at major research universities, often were the first to try using some form of IT, such as computers, in a new way. The Library of Congress played much the same role for libraries as other federal agencies did in other sectors of society, setting standards and advocating for federal funding, support, and library applications, much as occurred at the FBI and the U.S. Department of Justice for law enforcement, or at the U.S. Department of Education for K–12 schools. While below I discuss only the role of computing in colleges and universities, many of the applications described also made their way into other types of libraries, such as public libraries, either later, in smaller forms (such as on minis and PCs), or through shared networks. Much of the pioneering work in the use of IT started at some of the 100 or so largest American research universities, such as the various University of California campuses, MIT, Harvard, and numerous state universities in the Midwest, but it quickly seeped into smaller universities and colleges, either through dedicated library systems or as applications housed on a college's or university's central mainframe computer. Because librarians were well organized as a professional community, innovations at one library became widely known to colleagues through conferences, publications, and training programs for future librarians, often taught at state universities. These practices had long been in place: the ALA was founded in 1876, while many state-based schools of library "science" had existed since the dawn of the twentieth century. The one at the University of Wisconsin in Madison, for example, was
established in 1906, and a century later this professional school graduated scores of librarians annually, trained as much in the use of the Internet as in managing libraries stocked with books and archives, and to standards of performance set by the ALA.106

At many large libraries, precursors of the computer, namely punched-card equipment, were familiar objects by the 1950s. Their use had been codified to such an extent that one could think of them as widely deployed best practices; as early as 1952, the ALA had published its first guide to such uses.107 Of particular use to libraries was the clerical work these machines could do in support of ordering and acquiring materials; in scheduling and tracking binding; in cataloguing; and in monitoring movement of publications in and out of libraries, more commonly known as circulation. The latter is of particular interest because, unlike many libraries around the world, American libraries had for many decades loaned materials to readers, who could take them out of the library for specified periods of time to use at home or work. Libraries used IT to manage that substantial process since it was a core function of most of them, and examples of such applications could be found not just in higher education libraries but also in large public libraries.108 Of all the early punched-card applications, this one probably did more to expose librarians in the 1940s and 1950s to notions of how data could be collected, sorted, and reported on using mechanical means than most other uses of IT on a campus or in a local government. These were crucial concepts, because without an appreciation of the power of such early forms of IT, it would have been more difficult for library administrators to understand the potential of computers when these first became accessible to them in the late 1950s.

Before describing the role of computers, we should keep in mind the operating environment of libraries in higher education. The period from the 1960s to the end of the twentieth century saw the largest increase in the number of students, faculty, colleges, universities, research projects, and publications in American history. The number of students, for example, more than doubled. To be sure, budgets for all manner of services in higher education expanded, including those of libraries. The nation also went through periods of both economic expansion and recession. As in other parts of the public sector, librarians experienced budgetary expansion in the 1960s, contraction in the 1970s and 1980s, renewed expansion in the 1990s, and then another period of constraint. As at all other public institutions, library budgets were committed largely to salaries and to preservation of existing infrastructure and stock of materials, leaving little discretionary money, essentially funds for the acquisition of new materials such as books, periodicals and journals, and later digital media such as CDs, and that portion was highly volatile. Obtaining sufficient funding for the development of IT systems, the acquisition of PCs and other computing, and later subscriptions to databases therefore proved challenging, much as it did for police departments and K–12 schools. In addition to these operational realities, during the last three decades of the century librarians witnessed the revolution currently under way
in the development and use of a vast array of digitally based media and information: PCs, networks, CDs, DVDs, databases (and information providers), MP3s, video, and so forth. Indeed, by the end of the century, some observers (including librarians) were pointing out that more information was being recorded in digital forms than on paper. The fact that libraries entered the last three decades operating largely in a paper-based world, yet having to add (not substitute) digital media, presented substantial operational, managerial, and technical challenges that went far beyond budgetary concerns.109 In short, academic libraries represented a microcosm of many of the challenges faced by modern American society as they dealt with growing numbers of digital tools and quantities of data at a time when many of their activities and attitudes were still rooted in a predigital age.

Throughout the second half of the century, library administrators were animated by several needs. To be sure, reducing operating costs through automation and mechanization always remained an objective, but in reality they were less driven by that requirement than any other part of the public sector. In fact, librarians paid less attention to cost avoidance and to the justification of IT than any other community discussed in this book, with the possible exception of the military in the development of weapons. Rather, library administrators wanted to reduce the amount of clerical work required to order, catalog, and manage their inventories of books and other materials so that they could handle growing volumes of routine work and improve services. By the late 1960s, for example, global output of printed works was growing at between 8 and 10 percent a year, which translated into roughly 450,000 books, some 200,000 periodicals, and a similar number of technical reports just in that decade.110 Budgets for university libraries were therefore growing fast: 10 percent per year in the late 1960s was not unusual, a rate that could not be sustained. Often the process of acquiring materials and getting them catalogued and onto shelves could consume as much as a third of a library's budget. While computers had not developed sufficiently by the early 1960s to make acquisition applications as cost effective as librarians wanted, libraries were experimenting with the technology so as to keep up with growing volumes of materials. The benefits of online catalogs were already understood intellectually, but as William N. Locke, director of MIT's libraries at the time, pointed out, "We cannot afford on-line catalogs. And anybody who talks about storing any number of books, even off-line, is off his head."111 Early uses of computers were thus constrained by the costs and capabilities of the technology, which goes far to explain why the great take-off in everyday use began in the late 1960s, after these problems began to wane.112

Walk into an academic library in 2007 and you will see a building with terminals scattered about on most floors, which patrons use to consult online catalogs. In areas that used to have rows of large reading tables, one now sees cubicles or tables with PCs, devices used to find materials located inside the library and around the world. Patrons download full texts to their PCs, which they can print out or store for future use. Libraries are interconnected via the Internet, as are national catalogs that tell a patron where copies of books and journals are located.
At the University of Wisconsin, for example, patrons can
access some 650 online databases from private firms, publishers, commercial and nonprofit information providers, and government agencies. Its collection of electronic media is massive, and librarians will tell you that students today go first to the Internet for information before reaching out to librarians. This university is typical of many academic settings: it has multiple libraries scattered across its campus and millions of books and periodicals, even though floor space is increasingly being devoted to terminals.113 Students, faculty, and indeed almost anyone can access the catalog of this and most colleges and universities from the comfort of a dorm room, home, or office, often instantly downloading growing amounts of material to wherever they are working. Now contrast this scene with the circumstances of the late 1950s. We have the recollections of a director of the New York Public Library, who had worked at or run libraries at the University of Pennsylvania and Harvard, to call up an earlier reality:

A library's stock in trade consisted of books, journals, newspapers, and manuscript materials, and the only means of access was the library's card catalog. The only machines in use were typewriters, Photostat machines, and some microcards and microfilm containing early printed books, newspapers, and doctoral dissertations. Those who wanted access to library materials had to come to the library and either use them in the building or borrow them for home use. If a library did not have what the patron wanted, he either had to send for it on interlibrary loan, which took up to three months, or he had to find out where it was and go to that library. Copying was done by the user in longhand or on a typewriter.114
Obviously, much had changed over the past half century.
Figure 9.2. Typical reading room at a modern U.S. university, in this case the University of Wisconsin, 2006. (Courtesy of the author)
In the 1960s, almost every library at a research university experimented with various forms of mechanization and, later, automation of its work. These projects concentrated on lowering the amount of labor required to acquire materials (since it was often difficult to find enough qualified staff), computerizing the acquisition and management of serials (subscriptions to magazines and journals), creating punched-card and later online catalogs that displayed the same information available in paper-based card catalogs, improving data retrieval techniques, and supporting faster, less labor-intensive circulation processes.115 These initiatives can largely be characterized as experiments to learn how best to use computers, with digitally supported operations coming on-stream only toward the end of the decade and during the early years of the 1970s. One survey conducted in 1967 reported that 638 libraries out of a total of 24,000 actually used data processing equipment, although another 1,130 reported that they would within a couple of years. In short, computers were in limited use in the 1960s in any kind of library, and where used they tended to be in large universities, big public libraries (such as New York City's), and government libraries (such as the Library of Congress, national laboratories, and federal departments).116 In that same period, librarians used data processing mostly for circulation (165 libraries), managing serials (209), and creating and maintaining accession lists (170). To be sure, accounting and budgeting were widely used applications (reported by 235 libraries). Acquisitions (102), cataloguing functions (135), and preparation of union lists (133) were also of interest.117 Yet as late as 1970, a librarian at the Library of Congress, home to many early computing projects, still reported slow progress: "Nowadays one can hardly throw a stone at any gathering of librarians without hitting someone who is planning, programming, or operating some kind of computer-based system. To be quite honest, the current odds are probably 100 to 1 that the stone will strike a planner rather than an operator; but, nonetheless, progress has been made."118

The same librarian reminded her readers that librarians had a long heritage of networking socially and professionally, with the result that digital projects involved "cooperation and standardization," so that "the field advances as a whole—that is, major changes are made only when a significant number of librarians are ready to accept them and to deal with them on the basis of the network of libraries rather than in terms of the individual library alone."119 This heritage goes far to explain why, from the 1970s forward, groups of libraries created shared networks of publications, cataloguing systems, and other digital applications, nowhere as much in the early years as in cataloguing operations, such as the development of standard catalog information, for example, the Library of Congress's Machine-Readable Cataloging (MARC) format.120 In short, collaboration characterized this community far more than some simple desire to leverage economies of scale to dampen expenditures. The MARC standard alone made it possible for libraries and publishers to print standard sets of catalog cards for publications, which could be sent to libraries without librarians having to recreate these labor-intensive cards.121 Bibliographic services appeared as well, most notably the Online Computer
Library Center (OCLC). Established in 1967, this nonprofit center provided a group of libraries in Ohio with various computerized services. It became the earliest and most widely used bibliographic service, beginning in 1970 with the publication of bibliographic cards for its members. Libraries contributed citations, and over the years OCLC expanded its cataloguing services, while the number of libraries of all types from across the United States joining it increased as well.122

During the 1970s, experimental, independent uses of computing gave way to more widely deployed, networked systems involving bibliographic guides, cataloguing, management of serials, interlibrary loan functions, and so forth. These were housed on campus mainframe computers that librarians shared, usually with administrative departments. In the same decade, libraries in higher education began implementing online versions of their earlier (often first) computer-based systems, such as card catalogs. By the end of the decade, some of these systems were accessible to users of a library's services, the card catalog again being the most widely known. Online retrieval and updating of a library's inventory of publications appeared at most university libraries and at many four-year colleges. MARC formats spread across the American library community, while networked communications for interlibrary loans came into their own by the end of the decade. Circulation control, in which a book being checked out or returned was scanned while data were keyed into a terminal, also proved a popular form of mechanization, one that spread to public libraries as well. As online systems expanded, so too did direct access to bibliographic files created originally by the U.S. Library of Congress, by various commercial bibliographic services, and by librarians managing special collections. New methods and software for conducting searches also made it possible for a library's patrons to begin doing their own searches with minimal, or no, help from librarians. The speed of adoption of digital tools continued to be governed by the declining costs of hardware, the availability of software tools, how well librarians understood computing and could articulate to programmers what they wanted, and such governance issues as who allocated budgets or managed computer facilities.123 Despite substantial deployment of IT in the 1970s, at the end of the decade two librarians studying how extensively libraries used computers still concluded that "libraries are far more labor intensive than machine intensive. There also seems to be considerable fear among librarians about the increasing use of computers."124

Librarians embraced networks in the 1970s and expanded their use of many networked systems in the 1980s. Beginning with OCLC, which went online in 1971, by the early 1980s other networks were also in wide use: the Research Libraries Information Network (RLIN), the Washington Library Network (WLN), the University of Toronto Library Automation System (UTLAS), and regional networks around the country, to mention a few key systems. Of particular interest to higher education were RLIN and the regional networks, but all networks were fed with a continuous flow of data from OCLC, accessed through terminals. The combination
of networks and growing electronic files brought higher education libraries into the world of databases, making this new form of digitized file an important part of a library's inventory of information by the end of the 1980s. Online access proved a highly attractive and convenient tool for librarians and patrons, and shared networks meant shared expenses, hence more affordable systems. In fact, by the end of the decade, there was hardly an academic library that did not use some form of networking or provide access to online databases.125

Libraries embraced PCs in the 1980s in much the same way as the rest of higher education. Librarians used these devices for word processing and to manage budgets and projects with spreadsheets.126 During the 1980s and 1990s, a great deal of information became available on CDs, and libraries quickly added these to their collections as a way to control the costs of paper files while making increasing amounts of material available to patrons. As with the rest of higher education, by the end of the decade librarians were networking their PCs, using them as terminals and as tools for downloading large files to work with offline. One librarian called this practice "a return to local systems," by which one could either access national and regional networks or work "offline."127 But PCs were subsumed into the larger trend of using shared networks. As one library administrator recalled: "The era of localized library automation has effectively come to an end. Experience has shown that it is not economically feasible for any but the very largest libraries to afford the heavy costs of developing, maintaining, and operating complex localized computer-based systems. Many libraries are quietly abandoning this approach in favor of joining networks such as OCLC."128 In effect, libraries were reducing their dependency on in-house IT staffs and computers, with the exception of PCs, their printers, and some on-staff PC wizardry, of course.129

Before discussing the role of the Internet, the next major technology introduced into libraries, it is important to understand the role of digital libraries, because they are so closely linked to use of the Internet. The term "digital libraries" was born in the 1990s, but its roots date back to the development of early information retrieval systems in the 1960s, which were later enhanced by hypertext systems in the 1980s and, from the 1970s onward, by various telecommunications systems. Also known as "electronic libraries," these became substantial sources of digitized information, largely in the 1980s and 1990s, by which time the cost of computing, storage, and networking had been dropping at annual rates approaching 20 percent, while hardware and software tools had been spreading across the entire academic community. As academics, online journals, commercial information providers and publishers, and others created online content, the notion of the digital library gained wide currency. The emergence of the World Wide Web (WWW) in the early 1990s, coupled with a massive expansion of Internet access across all of education in the second half of the decade, made use of rapidly growing digital libraries possible and, indeed, a reality. Users and librarians quickly embraced a widely shared vision of such databases as ubiquitous and openly accessible, with shared information available
anywhere at any time. Librarians viewed digital libraries as "libraries without walls,"130 extensions of what they had long done, which was to acquire, organize, and make available information using whatever information technologies were at hand. In short, digital libraries were the next evolutionary step in what they did for a living. Experiments at Carnegie Mellon University in the late 1980s and early 1990s taught many librarians what would be needed in an academic setting, while engineers and scientists had earlier, and quietly, built smaller digital libraries over the Internet in the 1970s and 1980s to support work they were doing for the U.S. Department of Defense. Thus, by the time the Internet had
Figure 9.3. Libraries used online systems to catalog books; here we see a librarian at the University of Wisconsin, 1983. (Courtesy University of Wisconsin Archives)
become widely deployed, librarians had over two decades of experience with such digital systems.131

During the 1990s, libraries' internal online systems were frequently ported over to intranet and Internet sites, making them accessible both to the academic community and, indeed, to the public at large around the world. It seemed that every library in higher education was "on the Web" by the end of the century. Librarians and professors began to complain that students never used physical libraries, turning instead to the Web for whatever information they wanted. The costs of creating collections and making them available on the Internet kept rising all through the decade, becoming major line items in an academic library's budget. Librarians wondered what role they would play in organizing ever growing bodies of information and making them available digitally. The University of California system announced at the end of the century that it would create a tenth, entirely digital library as a way of organizing support and librarianship for this form of information.132 Web-based catalog systems, laptop connectivity, and even online classes had to be interconnected. College and university administrators, professors, and students now often looked to their librarians for leadership and support, raising questions in the early years of the new century about what the future role of librarians should be.133 One survey conducted in 2002 illustrated how critical the situation had become for higher education. It reported that 60 percent of American faculty were comfortable using online research tools and expected to use them to a greater extent in the future. An even higher percentage valued their school's online catalogs (70 percent), yet nearly half still needed to use paper-based collections, putting the librarians